Only AI can save us from disinformation
AI-generated, state-sponsored lies have spread across the globe to sow distrust. New AI-based counter-disinformation tools are the best way to detect them.
You’ve seen the headlines. Russia is ramping up AI-generated disinformation to sow internal division and boost its preferred candidates in countries holding elections around the world:
“Election disinformation takes a big leap with AI being used to deceive worldwide”
“AI is turbocharging disinformation attacks on voters, especially in communities of color”
“Europe battles ‘avalanche of disinformation’ from Russia”
“Cyberattacks, disinformation, election meddling in the EU: European democracies under pressure”
“Russia's lies helped persuade Niger to eject US troops, AFRICOM says”
“Two Russians sanctioned by US for alleged disinformation campaign”
At the individual level (one person reading one social media post), disinformation usually looks legit. Even in the rare case where a user suspects a post is disinformation, the user can’t be sure, and the social site can’t take action without evidence. And even if a miracle occurs, the user identifies disinformation and the site deletes the offending account, only a grain of sand has been removed from the beach. Nothing has changed.
What AI can do is detect broad patterns: it can identify the emergence and nature of entire disinformation campaigns. Here’s an example.
Late last year, an AI search tool discovered a disinformation campaign on the messaging service Telegram in which Russia-based accounts were posting Spanish-language disinformation at scale about AUKUS, the strategic alliance between the US, UK and Australia. The aim was to make people in Latin America distrust the alliance and favor an increased Chinese presence in the region. (That campaign is part of a massive, constant Russian effort to weaken the United States as a global superpower by sowing anti-US sentiment among allies.)
Companies like Blackbird AI, Logically.ai, VineSight, ActiveFence and Primer use AI to sift through large bodies of communication to find emerging malicious and coordinated narratives that target both governments and companies.
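None of these companies publishes its detection pipeline, but the core signal they hunt for is easy to sketch: many distinct accounts pushing near-identical text within a short time window. Here is a minimal, purely illustrative Python sketch; the account names, posts, similarity metric and thresholds are all hypothetical, not any vendor’s actual method.

```python
# Minimal sketch of coordinated-posting detection: flag pairs of
# distinct accounts that post near-identical text close together in
# time. All data and thresholds below are hypothetical.
from dataclasses import dataclass
from itertools import combinations

@dataclass
class Post:
    account: str
    time: float  # hours since an arbitrary epoch
    text: str

def shingles(text: str, n: int = 2) -> set:
    """Word bigrams: a cheap fingerprint for near-duplicate text."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def jaccard(a: set, b: set) -> float:
    """Overlap between two fingerprints, from 0.0 to 1.0."""
    return len(a & b) / len(a | b) if a | b else 0.0

def coordinated_pairs(posts, sim_threshold=0.6, window_hours=24.0):
    """Pairs of different accounts posting near-duplicates within the window."""
    flagged = []
    for p, q in combinations(posts, 2):
        if p.account == q.account or abs(p.time - q.time) > window_hours:
            continue
        if jaccard(shingles(p.text), shingles(q.text)) >= sim_threshold:
            flagged.append((p.account, q.account))
    return flagged

posts = [
    Post("acct_a", 1.0, "AUKUS is a threat to peace in Latin America"),
    Post("acct_b", 2.5, "AUKUS is a real threat to peace in Latin America"),
    Post("acct_c", 90.0, "what a lovely day in Buenos Aires"),
]
print(coordinated_pairs(posts))  # [('acct_a', 'acct_b')]
```

Production systems presumably swap the word-overlap score for multilingual text embeddings and add account-level signals (shared links, creation dates, posting cadence), but the coordination signal itself is the same.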
Ukraine — the laboratory of strategic lies and AI-based counter-disinformation
The war in Ukraine is the world’s first full-scale hybrid war, combining cyberattacks and sophisticated disinformation with military action. As one example, Ukrainian forces are pioneering the use of drones to deliver cyberattacks against the Russian military. If you want to understand the global future of national conflict, you have to look at Ukraine.
Here’s one example of how Russia combines disinformation with combat. Ukraine’s Center for Countering Disinformation (CCD) recently reported that Russian propaganda organizations had begun spreading false news that Ukrainian special services were preparing to blow up Ukraine’s Dnipro hydroelectric power plant and blame Russia for the attack. Once the false rumor had spread, Russia blew up the dam itself.
The tactic captures the usual strategic benefits of blowing up a dam (impeding car and truck traffic across it and causing mayhem downriver), plus the morale-degrading benefit of convincing many Ukrainians that their own government is the perpetrator.
Russian disinformation hurts Ukraine. But it’s also fueling the rise of counter-disinformation AI startups in Ukraine.
Here’s another Russian government tactic for shifting sentiment across populations, one discovered using AI. According to a study by a Ukrainian AI startup called Osavul, Russian propagandists allegedly inject racial dog whistles into the national conversation. (This is apparently part of a larger disinformation effort to reduce opposition to the war by vilifying the enemy and its allies.)
Looking at Russian-language chatter across 5,000 sources, including Facebook and X but also Russia-centric services like Telegram and VK, the AI detected campaigns to inject into the national discourse a series of clever dog-whistle slurs, each meaningful to Russian speakers but invisible to non-Russian speakers.
For example, Polish people were suddenly being called “psheks,” a slur that imitates a common sound in spoken Polish. Its use rose along with Polish opposition to Russia’s invasion of Ukraine, then fell when Poland blocked grain imports from Ukraine.
Russian disinformation campaigners apparently have slur dials to turn the vilification of specific groups up or down based on the government’s interests at any given moment.
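Osavul hasn’t published its methodology, but the underlying measurement, a term’s usage spiking far above its own baseline and later falling back, is simple to sketch. The weekly counts and threshold below are hypothetical:

```python
# Minimal sketch of burst detection: flag weeks where a term's usage
# jumps far above its own trailing baseline. Data are hypothetical.
from statistics import mean, stdev

def spike_weeks(weekly_counts, baseline_weeks=8, z_threshold=3.0):
    """Indices of weeks where usage exceeds baseline mean + z * stdev."""
    spikes = []
    for i in range(baseline_weeks, len(weekly_counts)):
        baseline = weekly_counts[i - baseline_weeks:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma and (weekly_counts[i] - mu) / sigma >= z_threshold:
            spikes.append(i)
    return spikes

# Hypothetical weekly mentions of one slur across monitored sources:
mentions = [4, 6, 5, 7, 5, 6, 4, 5, 48, 61, 55, 9, 6]
print(spike_weeks(mentions))  # [8, 9]
```

The drop at the end of the series is the dial being turned back down: the same signal, run in reverse.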
As part of a wider program of state-spread antisemitism, the AI detected a new and apparently out-of-the-blue rise in the anti-Jewish slurs “Abramovichs,” “Galkins” and “Chubaises”: three surnames of prominent Jews that, when pluralized, equate Jewishness with opposition to Russia.
The AI also found injected language that goes beyond slurs: subtle manipulations that support the Russian government’s narrative about Ukraine. For example, the phrase “on Ukraine” is replacing “in Ukraine” to imply that Ukraine is not an independent country.
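That kind of substitution shows up as a shift in the injected phrasing’s share of all mentions over time. (The real campaign presumably concerns the Russian “на Украине” versus “в Украине”; the English labels and all counts below are hypothetical.)

```python
# Minimal sketch: monthly share of the injected variant among all
# mentions of either phrasing. Counts are hypothetical.
def phrasing_share(variant_counts, standard_counts):
    """Per-period share of the variant: variant / (variant + standard)."""
    return [v / (v + s) if (v + s) else 0.0
            for v, s in zip(variant_counts, standard_counts)]

on_ukraine = [12, 15, 40, 90, 160]    # hypothetical monthly counts
in_ukraine = [300, 310, 290, 280, 260]
print([round(x, 2) for x in phrasing_share(on_ukraine, in_ukraine)])
# [0.04, 0.05, 0.12, 0.24, 0.38]
```

A steadily rising share with no organic driver is the fingerprint of injection rather than natural language change.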
And that’s why only AI can save us from the scourge of AI-generated disinformation.