Apple is lying. Robots don't feel emotions.
Who will tell us the truth about robots when even researchers who know robots better than anyone flat-out lie?
Apple built an expressive lamp.
Researchers in Apple's Machine Learning Research division recently published a paper on their ELEGNT (pronounced "elegant") project, which explores robotic expressiveness without humanlike voice or features.
Basically, they gave a lamp body language.
Inspired by Pixar's animated lamp logo, the ELEGNT lamp uses cameras, microphones, and even touch and torque sensors (to detect when the user touches it, and how) to gather data about what's happening around it. AI and other programming then process that information and respond with movement: nodding or shaking its head; adjusting speed, pauses, and acceleration; positioning itself to indicate attention; approaching or avoiding objects; using its arm for "tail wagging" or "sitting down" gestures; and communicating through gaze direction, head tilts, leaning, body stretches, dance-like motions, and other movements.
The lamp’s movements are explicitly engineered to be "readable" by users as humanlike body language. The researchers studied whether, and how much, people enjoyed or disliked interacting with an expressive lamp. (People generally enjoyed it unless they were trying to get the lamp to do something specific and practical, in which case they found its personality annoying.)
Study participants said expressiveness made the lamp more engaging, lively, and "fun to watch," while robots with only functional movements were considered "boring" or "machine-like."
We can reasonably assume that ELEGNT research will be applied to Apple's desktop robot, expected to sell for about $1,000 and to arrive within two years. This robot will look like an iPad at the end of a robotic arm that will move, face the user, and essentially perform like a voice-driven personal assistant. Code-named J595, the personal assistant appliance will provide an interface for home automation and security products, do Siri-like stuff, and facilitate FaceTime calls.
All this is fine. We all might actually enjoy using such a product and have fun with an expressive appliance that feigns "personality."
But why do researchers find it necessary to lie in their research papers?
In the ELEGNT paper, researchers claimed: "The framework integrates function-driven and expression-driven utilities, where the former focuses on finding an optimal path to achieve a physical goal state, and the latter motivates the robot to take paths that convey its internal states—such as intention, attention, attitude, and emotion—during human-robot interactions."
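To make that framework description concrete, here is a minimal sketch of what "integrating function-driven and expression-driven utilities" could look like in practice: candidate motion paths get scored on a task term and an expressiveness term, and the robot picks the path with the best weighted sum. This is my own illustration of the idea, not Apple's code; every name, field, and weight below is assumed.

```python
# Illustrative sketch only (assumed names and weights, not Apple's implementation):
# score candidate paths by combining a function-driven utility (reach the goal
# efficiently) with an expression-driven utility (move in ways that read as
# expressive), then pick the highest-scoring path.

from dataclasses import dataclass

@dataclass
class Path:
    waypoints: list          # sequence of joint configurations
    goal_error: float        # how far the final pose is from the task goal
    effort: float            # total motion cost (time/energy)
    expressiveness: float    # how legible the motion is as body language (0..1)

def function_utility(path: Path) -> float:
    # Reward reaching the goal with minimal effort.
    return -(path.goal_error + 0.1 * path.effort)

def expression_utility(path: Path) -> float:
    # Reward motion that signals a (simulated) internal state.
    return path.expressiveness

def total_utility(path: Path, w_expr: float = 0.5) -> float:
    # "Integrate" the two utilities as a weighted sum; w_expr trades
    # task efficiency against expressiveness.
    return function_utility(path) + w_expr * expression_utility(path)

def pick_path(candidates: list[Path], w_expr: float = 0.5) -> Path:
    return max(candidates, key=lambda p: total_utility(p, w_expr))
```

With the expressiveness weight at zero, such a lamp would behave like a plain task robot; turn it up and it starts "wagging" on its way to the goal. Nothing in that arithmetic requires, or produces, an emotion.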
There's so much wrong here. But let's focus on the big one: The researchers claim that the robot's actions convey an internal state of "emotion." They’re not claiming simulation. They’re claiming the lamp is expressing its feelings (i.e. “conveying its internal states, such as… emotion”).
Elsewhere in the paper, they say that "a robot might use light, bouncy movements to convey happiness, slow movements to suggest a relaxed state, lower its head to indicate sadness, or employ sudden, jerky motions to signal fear or other negative emotions."
Let's pause to remember that if Apple researchers had created a robot capable of feeling emotion or having "the internal state of emotion," this would be the biggest computer science event in human history. Such a singularity would upend philosophy, biology, religion, and law.
So why lie about it in a formal research paper?
Note that elsewhere in the paper, the authors begrudgingly admit the lie. They write: "While robots do not experience emotions as humans do, their ability to simulate emotional expressions is crucial for creating intuitive, engaging interactions."
What is the purpose of the phrase "as humans do"? No, Apple. The sentence should simply end with the word "emotions," as in: "Robots do not experience emotions." Period.
A scientific research paper is a place for precise language and not a place for evidence-free claims. If the researchers weren't either deluded or deliberately misleading, they would use the word "simulate" instead of "indicate" or "signal," as in: "lower its head to simulate sadness, or employ sudden, jerky motions to simulate fear or other negative emotions."
I'm making an example of Apple here, but this kind of misleading communication about AI and robotics is shockingly common in the industry.
Amen!