We just hit the deepfake-denial singularity
Claim: "That video evidence of me saying something bad was a deepfake!" I've been predicting this for years. But I never predicted Elon Musk would do it.
Deepfake panic — the fear that AI-generated video will falsely implicate notable people by showing them doing or saying something they didn’t do or say — is less of a threat than reverse-deepfake panic: The claim that true video evidence is just a deepfake and therefore can be dismissed.
I’ve been predicting for years that when politicians, celebrities and business tycoons get caught on video doing something bad, the false claim that it’s an AI-generated deepfake smear job will become a normal defense.
I’ve been waiting for the “deepfake-denial singularity” — the first high-profile case of a famous person boldly claiming that factual video evidence is fake, thereby making the “deepfake defense” socially acceptable.
Well, it happened. Elon Musk did it. In court.
During a wrongful death lawsuit involving Tesla’s Autopilot system, Tesla’s lawyers claimed that recordings of Musk’s many lies — or, at least, delusional and self-serving predictions — about the progress of Tesla’s self-driving technology should be dismissed as evidence because they might be deepfakes.
The good news is that the judge isn’t buying it.
Here’s what the judge said and why I believe “deepfake denial” is the lie of the future.
Mike’s List of Shameless Self Promotions
Why remote work is the opportunity of the century for cities
Amazon’s Sidewalk could be a big boon to business
Can’t hire? Can’t get hired? How to avoid the “Great Mismatch” trap!
This Week in Tech: “Three Spiders a Night”
Check out my travels, observations and ideas on my ELGAN.COM blog!