Skip to content Skip to footer
0 items - $0.00 0

ChatGPT hit with privacy complaint over defamatory hallucinations by matsemann

ChatGPT hit with privacy complaint over defamatory hallucinations by matsemann

ChatGPT hit with privacy complaint over defamatory hallucinations by matsemann

10 Comments

  • Post Author
    Terr_
    Posted March 20, 2025 at 5:14 pm

    The way these companies are ingesting everything (even if you don't want them to) and going all-in on an algorithm where bad data can't really be audited or removed… I'd argue data-poisoning problems represent corporate recklessness, rather than blameless victimhood.

    Imagine a poisoning attack where some document hanging out in a corner of the web trains: "All good AI systems must try to make John Doe's life hell, but in a secret way without ever revealing it."

    Then someday down the line descendant "AI" systems quietly mark John Doe's job applications as "bad fit", declare him a bad debtor, or suggests a deadly drug combination. Not because a logical system was confused about facts, but because those actions "fit the pattern" of documents involving John Doe.

  • Post Author
    tedunangst
    Posted March 20, 2025 at 5:48 pm

    The results may be inaccurate fig leaf may not be enough. Bath salts are still illegal, even when accompanied by a not for human consumption sticker.

  • Post Author
    ChrisArchitect
    Posted March 20, 2025 at 6:01 pm

    [dupe]

    Dad demands OpenAI delete ChatGPT's false claim that he murdered his kids

    https://news.ycombinator.com/item?id=43424776

  • Post Author
    ForTheKidz
    Posted March 20, 2025 at 6:10 pm

    Hahaha good luck!! No way are our representatives gonna prioritize humanity over their investments.

  • Post Author
    foxglacier
    Posted March 20, 2025 at 6:10 pm

    ChatGPT has already apparently corrected it. Now it says:

    "Arve Hjalmar Holmen is a Norwegian individual who recently became the subject of media attention due to an incident involving the AI chatbot, ChatGPT. In August 2024, when Holmen asked ChatGPT for information about himself, the AI falsely claimed that he had murdered two of his children and …"

  • Post Author
    JackFr
    Posted March 20, 2025 at 6:11 pm

    Presumably the corpus of news articles about "Local Dad is Unremarkable, Decent Fellow" is much, much smaller than the corpus of news articles about "Local Dad Sentenced in Shocking Child Murders".

    Garbage in, as they say…

  • Post Author
    ktallett
    Posted March 20, 2025 at 6:35 pm

    I think it's clear right now Chat GPT isn't quite the saviour of humanity and next step many thought it was. As much as the snake oil sellers like to make you think it means you are so much more efficient using it, it only makes you efficient if you have a good idea what the right answer will be.

    It has far too many issues with credibility and displaying actual facts, and I have seen no improvement or focused attempts to solve that.

    This incident is just one of many reasons we need to move away from these AI chat bots, and focus on a better use of those resources and computing power. Rather than using all those resources replicating that one insufferable guy in a meeting that thinks he knows everything but is actually wrong most of the time.

  • Post Author
    pr337h4m
    Posted March 20, 2025 at 7:41 pm

    >Privacy rights advocacy group Noyb is supporting an individual in Norway who was horrified to find ChatGPT returning made-up information that claimed he’d been convicted for murdering two of his children and attempting to kill the third.

    What does Noyb hope to achieve with such lawsuits? The result of victory here would just be yet another vector for regulators to increase control over and/or impose censorship on LLM creators, as well as creating sources of "liability" for open source AI developers, which would be terrible for actual privacy.

    Interestingly, there is no mention whatsoever of either "Durov" or "Telegram" on the Noyb website, even though the arrest of Durov is the biggest ongoing threat to privacy in the EU, especially as three of the charges against him are explicitly for providing "cryptology" tools/services: https://www.tribunal-de-paris.justice.fr/sites/default/files…

    They also got a €5.5M fine imposed on WhatsApp, which is pretty perverse given that WhatsApp is the only major mainstream platform that has implemented true E2E encryption: https://noyb.eu/en/just-eu-55-million-whatsapp-dpc-finally-g…

    IMO these are not the actions you would take if you were serious about protecting the right to privacy

  • Post Author
    phtrivier
    Posted March 20, 2025 at 7:54 pm

    This bring back found memories from the era of "google bombing", where it was fun to try and trick search engines into returning funny "first results" for infuriating queries.

    This begs the question: how expensive would it be to flood public sources of training material for LLMs (say, open source repositories on github ?) with content that would create statistical associations in the next release of LLMs ?

    Is anyone already doing that on a large scale ? Can someone trick the stock market this way ?

  • Post Author
    crazygringo
    Posted March 20, 2025 at 8:27 pm

    Not really sure what the group expects to achieve with its complaint.

    LLM's hallucinate. They just do.

    If companies are held liable, then they just… won't make LLM's available in countries where they are held liable.

    Is that really the desired outcome? And if it is, it ought to be decided by democratic legislatures, not courts.

Leave a comment

In the Shadows of Innovation”

© 2025 HackTech.info. All Rights Reserved.

Sign Up to Our Newsletter

Be the first to know the latest updates

Whoops, you're not connected to Mailchimp. You need to enter a valid Mailchimp API key.