Three bad reasons to be polite to AI
Smart people give dumb reasons why people should say "please" and "thank you" to ChatGPT.
AI and robotics companies understandably want the public to engage with their products as if they were people.
If AI is essentially human, then the corporations that make AI get two benefits. First, their customers will care more about their products, could become addicted to them, and may feel they need them. This helps the AI companies make more money.
Second, if the founders or leaders of AI companies are creating what are essentially human beings, then, by definition, those founders are gods. They want to be gods, so they encourage the delusion that their products are people.
What confuses me is why so many people who do not stand to profit are signing up for the false idea that AI or humanoid robots have any consciousness or humanity.
I mean, it's fun to entertain the possibility or eventuality. But why would anyone want this?
Sadly, it's not just a reflex of an unthinking public informed by Star Trek and the false claims of Silicon Valley billionaires. Brilliant, thoughtful people are also entertaining these delusions.
In a New York Times piece yesterday, reporter Sopan Deb argued in favor of being polite to ChatGPT, citing the thoughts of three brilliant minds, each giving a different reason why people should be polite to AI.
Unfortunately, each of these reasons is based on a provably false premise. Let's take a closer look.
1. Because humans are AI
Deb quoted AI researcher Jaime Banks: "We build up norms or scripts for our behavior, and so by having this kind of interaction with the thing, we may just become a little bit better or more habitually oriented toward polite behavior."
The problem is that human beings, in fact, don't run on "scripts." (A "script" is a sequence of instructions written in a programming or scripting language that is executed by an interpreter or a runtime environment rather than being compiled into machine code before execution.)
Essentially, Banks spends her days thinking about how to train AI and may assume that people are just AI robots created by nature who run on scripts and autonomously spit out results based on input. She sees talking with AI as part of human programming to train people to produce the correct results when interfacing with other robot people.
This is a radically anti-human perspective that the public should reject completely.
2. Because AI is human
Deb also dragged MIT Professor Sherry Turkle into the mix, whom he quoted as saying: "If an object is alive enough for us to start having intimate conversations, friendly conversations, treating it as a really important person in our lives, even though it's not, it's alive enough for us to show courtesy to."
"Alive enough." The trouble is that AI is not "alive" at all. It's zero percent alive. There is literally zero "life" and zero humanity in an AI chatbot. Any "feeling" or belief that it's "alive enough" is pure self-delusion.
The entire framing of this idea is mushy, unexamined garbage. There's no such thing as an "intimate conversation" with AI because there's only one person involved. By definition, intimacy requires mutual familiarity, mutual trust, and mutual understanding. Software can't feel familiarity, trust, or understanding. If it's just one person alone using a software product, that's not "intimacy." And to draw an equivalency between somebody using a cloud-based software service and two people engaging in an intimate conversation is obviously wrong.
3. Because we want to help AI become human
Deb then moved on to the ideas of Madeleine George, a playwright whose 2013 play "The (curious case of the) Watson Intelligence" addressed human/bot etiquette. According to Deb, her claim was that "saying 'please' and 'thank you' to AI bots offers them a chance to learn how to become more human."
No, actually. If you want AI to imitate human niceties, then you program that in using linguistic politeness classifiers that score outputs on culturally specific metrics and tone enforcement modules that inject courteousness markers during text generation. It's not a child being raised by a parent. It’s a product being built by employees.
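To make the point concrete, here is a toy sketch of what "programming in" politeness looks like. This is not any company's actual pipeline; every name, marker list, and threshold below is invented for illustration. The idea is simply that courtesy in a chatbot is an engineered output filter, not a learned virtue.

```python
# Toy illustration: politeness as a product feature, not manners.
# All names, markers, and thresholds here are invented for this sketch.

COURTESY_MARKERS = {"please", "thank you", "thanks", "you're welcome", "my pleasure"}

def politeness_score(text: str) -> float:
    """Crude 'classifier': fraction of courtesy markers present in the text."""
    lowered = text.lower()
    hits = sum(1 for marker in COURTESY_MARKERS if marker in lowered)
    return hits / len(COURTESY_MARKERS)

def enforce_tone(reply: str, threshold: float = 0.2) -> str:
    """'Tone enforcement': if a draft reply scores too low, inject courtesy."""
    if politeness_score(reply) < threshold:
        return f"Thanks for asking! {reply}"
    return reply
```

A reply like "Here is the answer." scores zero and gets a canned courtesy marker bolted on; one that already says "thank you" passes through untouched. Nothing here learns, feels, or reciprocates anything.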
George's desired outcome was that AI would one day "act like a living being that shares our culture, that shares our values, and that shares our mortality."
But why would anyone want that? An AI that genuinely shared our values, and acted on them, would be an AI beyond human control. What we should want, and what we should be working toward, is AI that never leaves human control.
I would urge everyone to stop the mushy, delusional thinking. People are not AI robots. AI is not human. And we shouldn't want AI to become human.
We don't need to embrace delusional thinking. And we don't need cheerleaders for a cyberpunk dystopia. We need to cultivate a clear understanding of the difference between human beings, who deserve our polite consideration, and software, which does not.
A good start in cultivating this understanding is to always be polite to other people — and never be polite to AI.