An Ars reader says ChatGPT is leaking passwords from users’ private chats

Getty Images

ChatGPT is leaking private conversations that contain login credentials and other personal details of unrelated users, screenshots provided Monday by an Ars reader show.

Of the seven reader-submitted screenshots, two stood out in particular. Both contained multiple pairs of usernames and passwords that appeared to be connected to a support system used by employees of a pharmacy prescription drug portal. An employee appeared to be using the AI chatbot to troubleshoot problems that arose while using the portal.

“Horrible, Horrible, Horrible”

“THIS is so insane, horrible, horrible, horrible, I can’t believe how poorly this was built in the first place and the obstruction put in front of me that prevents it from getting better,” the user wrote. “I would fire [redacted name of software] for the sheer absurdity of it, if it were my choice. This is wrong.”

In addition to the candid language and the credentials, the leaked conversation includes the name of the application the employee was troubleshooting and the store number where the problem occurred.

The full conversation goes well beyond what is shown in the redacted screenshot above. A link included by Ars reader Chase Whiteside showed the conversation in its entirety; the URL exposed additional credential pairs.

The conversations appeared Monday morning, shortly after Whiteside had used ChatGPT for an unrelated query.

“I went to make a query (in this case, help coming up with clever names for the colors in a palette), and when I returned to log in a few minutes later, I noticed the additional conversations,” Whiteside said in an email. “They weren’t there when I used ChatGPT last night (I’m a pretty heavy user). No queries were made; they just appeared in my history, and they were definitely not from me (and I don’t think they were from the same user either).”

Other conversations leaked to Whiteside included the title of a presentation someone was working on, details of an unpublished research proposal, and a script using the PHP programming language. The users of each leaked conversation appeared to be different and unrelated to one another. The conversation involving the prescription portal included the year 2020; dates did not appear in the other conversations.

The episode, and others like it, underscores the wisdom of stripping personal details from queries to ChatGPT and other AI services whenever possible. Last March, ChatGPT maker OpenAI took the AI chatbot offline after a bug caused the site to show titles from one active user’s chat history to unrelated users.

In November, researchers published a paper reporting how they used queries to prompt ChatGPT into divulging email addresses, phone and fax numbers, physical addresses, and other private data that was included in the material used to train the ChatGPT large language model.

Concerned about the possibility of proprietary or private information leaking, companies including Apple have restricted their employees from using ChatGPT and similar sites.

Such leaks are as old as the Internet itself, as an article in December noted when many people discovered that Ubiquiti’s UniFi devices were broadcasting private video belonging to unrelated users. As the article explained:

The precise root causes of this type of error vary from incident to incident, but they often involve “middlebox” devices that sit between front-end and back-end systems. To improve performance, middleboxes cache certain data, including the credentials of users who have recently logged in. When mismatches occur, credentials for one account can be mapped to a different account.
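To make that failure mode concrete, here is a minimal, hypothetical sketch of the class of bug being described: a caching layer that keys stored responses on the request path alone, ignoring which logged-in user made the request. Everything in it (the FlawedCachingProxy class, the token names) is invented for illustration and is not the code behind any real incident.

    # A deliberately flawed caching proxy, for illustration only.
    # The bug: the cache key omits the user's identity, so a response
    # generated for one logged-in user can be served to another.

    class FlawedCachingProxy:
        def __init__(self, backend):
            self.backend = backend  # function: (path, session_token) -> response
            self.cache = {}         # BUG: keyed by path alone, not (path, user)

        def handle(self, path, session_token):
            if path in self.cache:
                # Cache hit: the stored response may belong to a different user.
                return self.cache[path]
            response = self.backend(path, session_token)
            self.cache[path] = response
            return response

    def backend(path, session_token):
        # Stand-in for a back end that returns per-user data, e.g. a chat history.
        return f"chat history for session {session_token}"

    proxy = FlawedCachingProxy(backend)
    print(proxy.handle("/history", "alice-token"))  # Alice's data is cached
    print(proxy.handle("/history", "bob-token"))    # Bob is served Alice's data

The usual fixes are to include the user’s identity in the cache key (for example, self.cache[(path, session_token)]) or to mark per-user responses as uncacheable, which for HTTP middleboxes means sending Cache-Control: private.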

An OpenAI representative said the company is investigating the report.
