
AI-Generated Voice Evidence Poses Dangers in Court by hn_acker

16 Comments

  • fhd2
    Posted March 11, 2025 at 4:07 pm

    Sounds like reasonable changes.

    Generally speaking, I think evidence tampering is not a new problem, and even though it's easy in some cases, I don't think it's _that_ widespread. Just like it's possible to lie on the stand, but people usually think twice before they do it, because _if_ they are found to have lied, they're in trouble.

    My main concern is rather that legit evidence can now easily be called into question. That seems to me like a much higher risk than fake evidence, considering the overall dynamics.

    But ultimately: Humanity has coped without photo, audio or video evidence for most of its existence. I suppose it will cope again.

  • dghlsakjg
    Posted March 11, 2025 at 4:26 pm

    How is AI voice faking any different than any other type of faking? How is it different than a manipulated recording, or a recording where someone is imitating another?

    It is just as easy to fake many paper documents, and we have accepted documents as evidence for centuries.

    Photos can be faked, video can be edited or faked, witnesses lie or misremember.

    Is this just about telling lawyers that unvetted audio recordings can be unreliable? Because that shouldn't be news.

    Edit: this is a good-faith question; I'm legitimately just curious. Splicing and editing have been around since recording was invented, so I'm curious why voice recordings would be given extra evidential weight when manipulating recordings is a known possibility.

  • carra
    Posted March 11, 2025 at 4:41 pm

    Wait some more time and photos or even video recordings will be deemed just as dangerous. And then what? Even if there is real evidence, it will have to be discarded unless it can be sufficiently validated. It will get very hard to prove anything.

  • nottorp
    Posted March 11, 2025 at 5:05 pm

    So… those talking head "influencers" who leave multiple hours of voice and video samples on social networking for anyone to download and clone are the most at risk for an attack like this?

  • treetalker
    Posted March 11, 2025 at 5:09 pm

    I question the wisdom of setting the judge up as a superjury / gatekeeper for this kind of situation. This seems like a reliability / weight of the evidence scenario, not a reliability / qualification of the witness scenario (as with an expert witness).

    Why would the judge be better qualified to determine whether the voice was authentic, as opposed to the witness? And why should the judge effectively determine the witness's credibility or ability to discern, when that's what juries are for?

    All that said, emulated voices do pose big problems for litigation.

  • exe34
    Posted March 11, 2025 at 5:27 pm

    On a related note, why oh why does Lloyds Bank insist on grabbing my voice for login every time I call them? I have to keep saying "no, fcuk off!" a dozen times until it gives up.

  • Lammy
    Posted March 11, 2025 at 5:37 pm

    In the future this stuff will get so good that the public will beg to be surveilled at all times because it will be the only way to prove what you didn't do. You will learn to love Total Information Awareness. Consent status: manufactured :)

  • tgv
    Posted March 11, 2025 at 5:43 pm

    I'll say it again, even though it is rather unpopular here: there has never been a need to develop these tools, nor to make them easy to deploy, nor to make them easy to use. Yet all this has happened, and now someone may be acquitted because AI-generated media is so good that the evidence might be artificial. If that happens, and the suspect commits another crime, it's on the conscience of the people who contributed to this. You cannot create something and pretend its use has nothing to do with you.

    The tools aren't perfect yet, so it's not too late to stop. Stop the ridiculous image and audio generation tools before it's too late. Nothing of value is lost when these models are made private again, and research is simply halted.

  • recursive
    Posted March 11, 2025 at 5:58 pm

    Here's my plan.

    1. Cryptographically hash each piece of media when it's recorded.

    2. Submit the hash to a "trusted" authority.

    3. It will add a timestamp and sign the result.

    4. Now, as long as you keep the original (without re-compressing) and you trust the authority, you have some evidence that the media existed at or before that timestamp.

    This doesn't prove authenticity, but in many cases, establishing a timestamp would be enough. Forgeries probably wouldn't be created until later, after the shit hit the fan.
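
    A minimal sketch of steps 1–3 in Python (the "authority" here is just a stub showing what a signed receipt might contain; a real service, e.g. an RFC 3161 timestamping authority, would sign the digest, and the file path is only illustrative):

      import hashlib, json, time

      def fingerprint(path):
          """Hash the original media file byte-for-byte (no re-encoding)."""
          h = hashlib.sha256()
          with open(path, "rb") as f:
              for chunk in iter(lambda: f.read(1 << 20), b""):
                  h.update(chunk)
          return h.hexdigest()

      def notarize(digest):
          """Stub for the trusted authority: it would timestamp and sign."""
          return {"digest": digest,
                  "timestamp": int(time.time()),
                  "signature": "<authority's signature over digest||timestamp>"}

      receipt = notarize(fingerprint("recording.wav"))  # path is illustrative
      print(json.dumps(receipt, indent=2))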

    Or maybe this doesn't work at all.

  • tiahura
    Posted March 11, 2025 at 6:03 pm

    Why can't the objecting party ask to voir dire the witness and find out whether they can distinguish between AI and real recordings?

  • gortok
    Posted March 11, 2025 at 7:47 pm

    What shocks (and irritates!) me is that Charles Schwab keeps wanting me to set up voice ID. Why would I want to set up a voice ID for something that is now trivially spoofed?

  • belter
    Posted March 11, 2025 at 7:48 pm

    To avoid scenarios like the one described in the court case, set up a family password or keyword. The bot will not know it.

  • TeeMassive
    Posted March 11, 2025 at 8:24 pm

    I think we're going to see C2PA (https://c2pa.org/) become mandatory for cellphones. At least once there's at least one implementation and all digital cameras have an integrated TPM.

  • TriangleEdge
    Posted March 11, 2025 at 9:36 pm

    AI threatens all digital perceptions, not just voice. Images, videos, recordings, … I think soon enough proving things in court beyond a reasonable doubt when the evidence is digital media will be difficult/impossible.

  • PolieBotics
    Posted March 11, 2025 at 10:58 pm

    I've developed a novel approach to creating tamper-evident video via cryptographic feedback loops between projectors and cameras. The process works as follows:

    1. A projector displays a challenge pattern (Perlin noise derived from a hash)
    2. A camera captures this projection
    3. The system hashes the captured image concatenated with the previous hash and uses it to derive the next projection
    4. This chain demonstrates true temporal sequentiality that's difficult to forge

    By incorporating random noise derived from Byzantine Fault Tolerant networks and using these networks as timestamping servers, the proofs inherit the network's decentralization properties. ML then confirms that the feature distributions in projection-photograph pairs match expected patterns from the training dataset.
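
    Roughly, steps 1–4 reduce to a hash chain over captured frames. A toy Python sketch (capture(), the Perlin-noise rendering, and the "genesis" beacon are stand-ins for the real camera, projector, and BFT-network randomness):

      import hashlib, os

      def next_challenge(prev_hash):
          """Derive the next projection pattern from the running hash
          (the real system renders Perlin noise from this seed)."""
          return hashlib.sha256(b"challenge:" + prev_hash).digest()

      def capture(pattern):
          """Stand-in for the camera photographing the projected pattern."""
          return pattern + os.urandom(16)  # projection + scene/sensor noise

      chain = hashlib.sha256(b"genesis").digest()  # e.g. a BFT-network beacon value
      for frame in range(4):
          pattern = next_challenge(chain)
          photo = capture(pattern)
          # Each link commits to this photo and the whole history before it,
          # so frames can't be re-ordered or regenerated after the fact.
          chain = hashlib.sha256(photo + chain).digest()

      print(chain.hex())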

    Demo video and GitHub repo available here:
    https://www.reddit.com/r/PoliePals/comments/1j8qm2j/truth_be…
