How to prepare for our fake future
We were promised jetpacks and food pills. Instead technology is bringing us a world where anything can be faked.
“I accept!” And with those two words, former president Donald Trump demonstrated a looming crisis — the technology-driven world where the fake hits like the real.
The post was a retweet on Trump’s ironically named “Truth Social” showing two real photographs of one woman wearing a “Swifties for Trump” t-shirt, plus many fake images: 1) Taylor Swift as Uncle Sam calling on the public to vote for Trump; and 2) a smattering of AI-generated pictures showing large numbers of young women wearing “Swifties for Trump” t-shirts.
Of course, Swift has not endorsed Trump and there is no groundswell of Trump support among Swift fans. It’s a lie. But the nature of the lie has been misunderstood by the public. While the “claim” is obviously false, and the AI images are obviously false, the post creates an overall impression on millions of Trump fans who want to believe in a groundswell of support for their candidate.
Also: When cornered on the lie, Trump can say he was kidding, or merely retweeting what he thought was something real. It doesn’t matter. Trump has built his entire business and political career on the art of lying, cheating and stealing with plausible deniability. AI is a huge benefit for propagandists seeking to create “impressions” while never explicitly claiming alternative facts that can be proved wrong. (Trump likes to have it both ways: He allows followers to believe AI-generated fakes are real, but claims that real video of Kamala Harris’s crowds is an AI-generated fake.)
With text-to-image generative AI tools, anything can be conjured out of nothing, and any lie can be made “true” through “photographic” evidence.
Even now, with AI-generated images nearly always identifiable as such, a huge number of people believe them to be true.
And, of course, the quality of AI fakes will only rise, and videos will become more believable, too.
The public is not ready for this.
A survey of 1,000 Americans recently revealed that two-thirds of respondents could not distinguish between AI-generated voices and real human voices, often misidentifying authentic human voices as AI.
The state of the art was on display during the Paris Olympics this summer. An AI version of Al Michaels gave Peacock viewers daily recaps. Michaels, who is still alive, was not involved in the recaps, which used advanced text-to-speech AI.
Voice-cloning technology will get better, too. And the percentage of people who can’t tell the difference will approach 100%.
Instagram recently introduced a new feature called AI Studio, which enables influencers to create AI versions of themselves that interact with fans who may think they’re interacting with a person. It’s a new world of AI-driven parasocial relationships. (A parasocial relationship is one where a person — usually a superfan — invests time and energy into a sometimes obsessive relationship with another person — usually a celebrity — who doesn’t know they exist. Interactive AI avatar-and-chatbot tools give one person a longstanding, involved history of interaction, while the other person has nothing at all to do with it.)
Fans may not know they’re not personally interacting with an influencer. Or worse: They may know but not care.
Fighting fire with fire
At this moment, we are exiting an era where most deepfake videos and AI-generated content can be discerned by a knowledgeable person and entering an era where only AI can tell the real from the fake.
McAfee this week announced what it calls the world’s first automatic, AI-powered deepfake detector, available exclusively on select new Lenovo AI PCs. The tool leverages the Neural Processing Unit (NPU) in those PCs to perform the identification process locally on the device.
For the 99.99% of us who don’t have a Lenovo AI PC, there are other tools available. Some of the most prominent are:
Deepware Scanner: This tool is designed to detect deepfake videos and images, helping users identify manipulated content.
Sensity AI: Formerly known as Deeptrace, Sensity AI offers deepfake detection technology that can be used to analyze videos and images for signs of manipulation.
Microsoft Video Authenticator: This tool analyzes photos and videos to provide a percentage chance, or confidence score, that the media has been artificially manipulated.
Amber Authenticate: This software continuously verifies the authenticity of videos and images, ensuring that they have not been tampered with.
Reality Defender: A comprehensive tool that uses machine learning to detect deepfakes and other types of manipulated media.
These tools are unknown to most people. And even people who know about them won’t use them.
If AI-based detection tools are to make any impact, they have to be built into the tools we use for consuming content — social networks and browsers, mainly.
Also: I suspect that the era of AI tools that detect fakes is also temporary. Future technology will likely be able to create fake content indistinguishable from real content, even by AI tools.
Ultimately, there’s no saving us from ourselves
An actual video recently circulating on social media featured Neil deGrasse Tyson explaining his take on the difference between scientists and conspiracy theorists.
I’m paraphrasing wildly, but essentially he says that when a scientist is confronted with evidence that contradicts an existing belief, the scientist changes the belief.
When a conspiracy theorist is confronted with evidence that contradicts an existing belief, the conspiracy theorist claims the evidence is fake and fabricated through a conspiracy.
Most people lean closer to conspiracy theorists than scientists. We believe what we want to believe, and seek out “evidence” to support that existing belief. If that “evidence” is AI-generated fakes, well, that’s OK. The important thing is that the cherished belief has been reinforced.
Confirmation bias is a bitch.
We can dispense with any confidence that, in the future, we’ll be able to tell the difference between real pictures, videos and audio and the fake ones. AI-based detection tools will help, no doubt.
But what we really need more than artificial intelligence is natural intelligence. We need to train ourselves (and, ideally, all children in school) to think like a scientist, or even a good journalist.
Shameless Self-Promotion
Humanoid robots are a bad idea
AI and AR can supercharge ‘ambient computing’
Where are my AR glasses?
Tell the truth: Do you want AI lie detectors in the workplace?
More from Elgan Media!
My Location: Oaxaca, Mexico
(Why Mike is always traveling.)