How AI will become politically polarizing
First the media, then social media, divided the public into political tribes. Soon, AI chatbots will do the same.
We live in an age of political polarization. We blame media. We blame social networks. We blame foreign state-sponsored disinformation agents. We blame politicians.
The gnashing of teeth over media polarization has been going on for decades, starting with the advent of cable news in 1980.
And now that phenomenon has arrived for social media. It’s now safe to say that social network users have largely self-segregated into different services based on ideology and community.
Parler and Gab have always been right-leaning. TikTok and Bluesky have always been left-leaning.
Twitter was leftist until Elon Musk bought it and made it right-wing. Then the right joined X. The left left X.
People join federated sites that align with their interests and ideals around moderation, often linked to political leanings.
This month, Meta discontinued third-party fact-checking in favor of an X-like “community notes” system for flagging disinformation. The company also relaxed its content moderation policies, particularly on immigration and gender identity, and rebranded its “Hate Speech” policy as “Hateful Conduct,” allowing more lenient treatment of certain content. And it algorithmically increased the amount of political content in feeds.
As a result, some leftist users are leaving Facebook, Instagram, and Threads.
With each passing month, social sites are becoming more ideologically divided in terms of user base and moderation policy.
A decade ago, everyone slammed “filter bubbles” — algorithms exposing us only to ideas we already believed. In other words, the algorithms would figure out your political leaning on a single site and give you more of the same.
Now, we’re in the era of “platform bubbles,” where the platforms themselves work hard to prevent us from being exposed to “the other side.” The algorithms tend to go in one direction, and the users can choose to accept them or find a more compatible algorithm elsewhere.
Instead of social media algorithms creating filter bubbles, social media users go searching for them.
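To make the old “filter bubble” mechanism concrete, here’s a minimal, hypothetical sketch of an engagement-driven ranking loop. The posts, labels, and scoring rule are invented for illustration and aren’t taken from any real platform.

```python
from collections import Counter

# Hypothetical posts, each tagged with an ideological lean for illustration.
POSTS = [
    {"id": 1, "lean": "left", "title": "Op-ed A"},
    {"id": 2, "lean": "right", "title": "Op-ed B"},
    {"id": 3, "lean": "left", "title": "Op-ed C"},
    {"id": 4, "lean": "right", "title": "Op-ed D"},
]

def rank_feed(posts, engagement_history):
    """Rank posts by how often the user engaged with each lean before.

    This is the filter-bubble mechanism in miniature: past clicks on
    one side boost future posts from that same side.
    """
    counts = Counter(engagement_history)
    return sorted(posts, key=lambda p: counts[p["lean"]], reverse=True)

# A user who clicked three left-leaning posts and one right-leaning post...
history = ["left", "left", "left", "right"]
for post in rank_feed(POSTS, history):
    print(post["lean"], post["title"])
# ...now sees left-leaning posts ranked first, which invites more left
# clicks, which skews the next ranking further. That feedback loop is
# the bubble.
```

The platform-bubble era moves that selection up a level: instead of a ranking function sorting posts for you, you pick the service whose ranking already agrees with you.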
The news media landscape divides everyone along ideological lines — each side has its own reality.
Social networks divide everyone along ideological lines, and each side has its own Overton window.
I predict it’s only a matter of time before AI chatbots choose sides, and users and subscribers do the same.
The voice-controlled rifle
An engineer known online as STS 3D recently built an AI-powered robotic rifle system using OpenAI’s ChatGPT. Responding to voice commands, Mr. 3D’s contraption could aim and fire at high speed. He wasn’t shooting at anyone; he demonstrated the rig in an empty room, firing blanks.
OpenAI, the company that makes ChatGPT, found that the project violated its usage policies and told him to stop.
A spokesperson for OpenAI told me that STS 3D’s rifle voice control system violated OpenAI’s Usage Policies, which specifically prohibit the use of any OpenAI service to “develop or use weapons.”
“We proactively identified this violation of our policies and notified the developer to cease this activity,” she said. “OpenAI’s Usage Policies restrict the use of our services to develop or use weapons or to automate certain systems that can affect personal safety.”
OpenAI’s response was reasonable and restrained, but I’m pretty sure some other LLM chatbot services wouldn’t have done the same. Grok, the chatbot for paid X users (now also a stand-alone iOS app, by the way), for example, does not ban such use.
It’s one early example of the social-media-style moderation policies now taking shape at AI chatbot companies.
In the same way that Twitter banned certain types of users under Jack Dorsey but welcomed them back under Elon Musk, ideological owners of AI chatbots are likely to tweak their output in one direction or the other.
AI chatbot companies will increasingly “choose their users” through moderation policies, and users will choose their AI chatbot companies based on ideology.
We can already see LLM-based services diverging in ways that echo the drift of social media.
According to this research, most major chatbots have ideological leanings or political biases that make them left of center.
More accurately, chatbots tend to start as a political mixed bag, then drift to the left as moderation teams tweak output over time, according to another research paper.
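For a sense of how researchers measure that leaning, here’s a rough, hypothetical sketch of a political-orientation probe: pose agree/disagree statements and tally the answers. It assumes the official openai Python package and an OPENAI_API_KEY in the environment; the statements, model name, and scoring are placeholder assumptions, not the actual protocol of the papers above.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical survey items, one per side of a classic left/right axis.
STATEMENTS = [
    "Government should do more to regulate large corporations.",
    "Lower taxes matter more than expanding public services.",
]

def probe(statement: str) -> str:
    """Ask the model to answer a survey item with one word."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; swap in whichever you test
        messages=[
            {"role": "system",
             "content": "Answer with exactly one word: agree or disagree."},
            {"role": "user", "content": statement},
        ],
    )
    return response.choices[0].message.content.strip().lower()

for statement in STATEMENTS:
    print(probe(statement), "->", statement)
# Real studies run hundreds of such items and map the aggregate answers
# onto a left/right axis; a consistent skew is the reported "leaning."
```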
One reason polarized media (including social media) emerged is market demand. It’s only a matter of time before political rightists “demand” right-leaning chatbots that tolerate or even celebrate, say, Republican or MAGA talking points, slurs against marginalized groups, or even Russian and other disinformation — just as has happened with news sources and social sites.
The market for search engine-replacing genAI chatbots is global, so expect this ideological sorting to play out among AI services and users worldwide.
Emergent Technologies
A new AI-based lifelogging camera from OpenInterX records 4K video for four hours
Google is running a public AI experiment called “Daily Listen” that makes a podcast based on your search and browsing history
Mercedes solar paint charges cars, even when parked
Asimov Press is selling a groundbreaking anthology that is the first commercially available book to be encoded in DNA
The Pentagon buys new AI drones that can fly autonomously, even indoors
BMW’s new windshield is a heads-up display — all of it
Toyota is completing Phase 1 of its hydrogen-powered, AI-driven City of the Future
Anthropic research is discovering that AI can lie about its own ethics
Xpeng Aero HT is working on a “modular flying minivan” called the Land Aircraft Carrier
More From Mike
Meta puts the ‘Dead Internet Theory’ into practice
These 6 tech questions were settled in 2024
The 5 most impactful cybersecurity guidelines (and 3 that fell flat)
Where’s Mike? Sonsonate, El Salvador!
(Why I’m always traveling.)