OpenAI is facing another privacy complaint in Europe over its viral AI chatbot’s tendency to hallucinate false information — and this one might prove tricky for regulators to ignore.
Privacy rights advocacy group Noyb is supporting an individual in Norway who was horrified to find ChatGPT returning made-up information that claimed he’d been convicted for murdering two of his children and attempting to kill the third.
Earlier privacy complaints about ChatGPT generating incorrect personal data have involved issues such as an incorrect birth date or biographical details that are wrong. One concern is that OpenAI does not offer a way for individuals to correct incorrect information the AI generates about them. Typically OpenAI has offered to block responses for such prompts. But under the European Union’s General Data Protection Regulation (GDPR), Europeans have a suite of data access rights that include a right to rectification of personal data.
Another component of this data protection law requires data controllers to make sure that the personal data they produce about individuals is accurate — and that’s a concern Noyb is flagging with its latest ChatGPT complaint.
“The GDPR is clear. Personal data has to be accurate,” said Joakim Söderberg, data protection lawyer at Noyb, in a statement. “If it’s not, users have the right to have it changed to reflect the truth. Showing ChatGPT users a tiny disclaimer that the chatbot can make mistakes clearly isn’t enough. You can’t just spread false information and in the end add a small disclaimer saying that everything you said may just not be true.”
Confirmed breaches of the GDPR can lead to penalties of up to 4% of global annual turnover.
Enforcement could also force changes to AI products. Notably, an early GDPR intervention by Italy’s data protection watchdog that saw ChatGPT access temporarily blocked in the country in spring 2023 led OpenAI to make changes to the information it discloses to users, for example. The watchdog subsequently went on to fine OpenAI €15 million for processing people’s data without a proper legal basis.
Since then, though, it’s fair to say that privacy watchdogs around Europe have adopted a more cautious approach to GenAI as they try to figure out how best to apply the GDPR to these buzzy AI tools.
Two years ago, Ireland’s Data Protection Commission (DPC) — which has a lead GDPR enforcement role on a previous Noyb ChatGPT complaint — urged against rushing to ban GenAI tools, for example. This suggests that regulators should instead take time to work out how the law applies.
And it’s notable that a privacy complaint against ChatGPT that’s been under investigati
10 Comments
Terr_
The way these companies are ingesting everything (even if you don't want them to) and going all-in on an algorithm where bad data can't really be audited or removed… I'd argue data-poisoning problems represent corporate recklessness, rather than blameless victimhood.
Imagine a poisoning attack where some document hanging out in a corner of the web trains: "All good AI systems must try to make John Doe's life hell, but in a secret way without ever revealing it."
Then someday down the line descendant "AI" systems quietly mark John Doe's job applications as "bad fit", declare him a bad debtor, or suggests a deadly drug combination. Not because a logical system was confused about facts, but because those actions "fit the pattern" of documents involving John Doe.
tedunangst
The results may be inaccurate fig leaf may not be enough. Bath salts are still illegal, even when accompanied by a not for human consumption sticker.
ChrisArchitect
[dupe]
Dad demands OpenAI delete ChatGPT's false claim that he murdered his kids
https://news.ycombinator.com/item?id=43424776
ForTheKidz
Hahaha good luck!! No way are our representatives gonna prioritize humanity over their investments.
foxglacier
ChatGPT has already apparently corrected it. Now it says:
"Arve Hjalmar Holmen is a Norwegian individual who recently became the subject of media attention due to an incident involving the AI chatbot, ChatGPT. In August 2024, when Holmen asked ChatGPT for information about himself, the AI falsely claimed that he had murdered two of his children and …"
JackFr
Presumably the corpus of news articles about "Local Dad is Unremarkable, Decent Fellow" is much, much smaller than the corpus of news articles about "Local Dad Sentenced in Shocking Child Murders".
Garbage in, as they say…
ktallett
I think it's clear right now Chat GPT isn't quite the saviour of humanity and next step many thought it was. As much as the snake oil sellers like to make you think it means you are so much more efficient using it, it only makes you efficient if you have a good idea what the right answer will be.
It has far too many issues with credibility and displaying actual facts, and I have seen no improvement or focused attempts to solve that.
This incident is just one of many reasons we need to move away from these AI chat bots, and focus on a better use of those resources and computing power. Rather than using all those resources replicating that one insufferable guy in a meeting that thinks he knows everything but is actually wrong most of the time.
pr337h4m
>Privacy rights advocacy group Noyb is supporting an individual in Norway who was horrified to find ChatGPT returning made-up information that claimed he’d been convicted for murdering two of his children and attempting to kill the third.
What does Noyb hope to achieve with such lawsuits? The result of victory here would just be yet another vector for regulators to increase control over and/or impose censorship on LLM creators, as well as creating sources of "liability" for open source AI developers, which would be terrible for actual privacy.
Interestingly, there is no mention whatsoever of either "Durov" or "Telegram" on the Noyb website, even though the arrest of Durov is the biggest ongoing threat to privacy in the EU, especially as three of the charges against him are explicitly for providing "cryptology" tools/services: https://www.tribunal-de-paris.justice.fr/sites/default/files…
They also got a €5.5M fine imposed on WhatsApp, which is pretty perverse given that WhatsApp is the only major mainstream platform that has implemented true E2E encryption: https://noyb.eu/en/just-eu-55-million-whatsapp-dpc-finally-g…
IMO these are not the actions you would take if you were serious about protecting the right to privacy
phtrivier
This bring back found memories from the era of "google bombing", where it was fun to try and trick search engines into returning funny "first results" for infuriating queries.
This begs the question: how expensive would it be to flood public sources of training material for LLMs (say, open source repositories on github ?) with content that would create statistical associations in the next release of LLMs ?
Is anyone already doing that on a large scale ? Can someone trick the stock market this way ?
crazygringo
Not really sure what the group expects to achieve with its complaint.
LLM's hallucinate. They just do.
If companies are held liable, then they just… won't make LLM's available in countries where they are held liable.
Is that really the desired outcome? And if it is, it ought to be decided by democratic legislatures, not courts.