Laura Michet's Blog

Thoughts about interfaces acting like people

Here's a photo of a tipped-over Bird scooter:

IMG_0912

"Please help me up!" it says on the underside of the scooter's main platform.

I've been thinking a lot about the ambiguity of human oversight and control in the "delivery robot" industry in west LA, where a lot of those devices are being tested. Living near delivery robots - or, rather, live human labor that is being marketed as autonomous and robot-like - is new for me. Usually, I deal with objects pretending to be people. They pretend either superficially, like the scooter above, or robustly and insidiously, like voice chat assistants.

Companies depicting their live-ops services as people is, I think, the most harmful form of "pretend person." Voice UIs in particular hijack the human instinct to understand words as thought, and to treat thinking beings with sympathy and trust. After spending a lot of time making two small Alexa games at a pair of game jams years and years ago, I grew to understand that even a relatively simple human-like voice UI can be extraordinarily manipulative.

Now that people can "speak to" an LLM, I think the issue is a lot more urgent. My strongest crank conviction is that we should regulate the use of software 'personalities' as disruptively as possible. I believe that it should be compulsory to disclose directly to the user - live, during a user session - when a voice assistant or audio UI is not a real person. After the seemingly-endless series of stories we got this past summer and fall about LLM-related deaths, I believe that text personalities like LLM chatbots also deserve extremely disruptive warnings, disclosures, and reminders.

We're very, very unlikely to ever get that kind of regulation. I'll probably always remain convinced that we needed it, though. Anything that can break the social spell of a conversational interface is beneficial to the humans who use it.

The other side of this earnestly-held crank opinion is that I don't have a problem when people are extremely rude to voice assistants. I think that when you have the full context on what they are, who made them, why, and for whom, you must grow to think of them as essentially tech company owners and investor boards wearing a little mask. Alexa is Jeff Bezos wearing a mask. You shouldn't feel the need to be polite to his little mask.

The trick is that I do reserve the right to judge people on the why and how of their rudeness. I don't think you should call the Alexa voice UI a cunt. I do think you should feel free to snap "Shut the fuck up, Jeff," at any Alexa product you're annoyed with. (There are certainly people who are rude to female voice assistants because they fantasize that they are able to berate, control, and demand service from a woman.) It would be self-protective for us all to be more distrustful of, and ruder to, tech company voice UIs. It should feel preposterous to extend Jeff's little mask the same courtesy you extend a real person.

And finally, to return to the delivery "robots" I wrote about yesterday: in a world where we need to protect one another from tech products harmfully pretending to be people, and where aggression and rudeness to voice UIs can be a perfectly good way of training yourself to distrust them... well, in that situation, it's even more insidious for companies to pretend their people are "robots". I believe that we have no hope of government regulation here, and that unions are probably going to be the last line along which human transparency might be defended. If your live human services are being sold to a customer as "autonomous", and if you can organize, I think you should demand that the company make your presence known to its customers.

The difficulty of unionizing teleoperation services at companies with more money than some nations is its own can of worms... but alongside an effort to break these companies up some more, I think it could have an impact. (I gotta keep telling myself that, anyway, because the alternative is extremely depressing!!)


These days, when I watch movies with voice interfaces or "AI" assistants in them, I find myself pretty surprised by how many fictional worlds seem to be full of people who never experience any ambiguity about whether they're talking to a person or to software. Everyone in a sci-fi setting has usually fully internalized the rules about what is or isn't "real" in their conversational world, and they usually all have a social script for how they're "supposed" to treat AI voice assistants.

Characters in modern film and TV are almost never rude or cruel to voice assistants except in scenes where they're being misunderstood by voice recognition. People in stories like these rarely get confused about whether something is a human or an AI unless that's, like, the entire point of the story. But in real life, we're constantly forced to interact with unwanted voice UIs, or with phone scammer voices pretending to be real. I have found myself really missing moments like these in movies, where humans express any material awareness of the false voices they interact with. Who made their voice assistant? How do they feel about that company or person? Are they ever tricked by a voice that is false when they expected it to be a real, live human?

I've also found myself increasingly frustrated by movies which use AI as a metaphor for real live human marginalization. The future is here, and "AI" is Sam Altman wearing a little mask; it is not a marginalized person. (It's possible that your delivery "robot" is actually a marginalized person, though!)

The reason I wrote this post at all was because I saw the new Running Man movie last week, and it contained a scene where the protagonist behaved toward an AI interface in a shockingly neutral way. It was so neutral that I was surprised, in the moment, that the script hadn't used this interaction to do a little more storytelling about the protagonist or about AI assistants in the world he lives in.

We all learned the forms of our classic stories too well over the last few decades. There is still too much urgency to understand every voice UI as a person, as a Data or a Pinocchio. Or we understand it as Majel Barrett-Roddenberry's voice on the Enterprise - a voice with no obvious material or social history, just existing to lubricate a scene or a plot.

I still haven't seen much media that reflects the way I actually feel about conversational interfaces in the real world - frustrated, tricked, manipulated, and inconvenienced. And I haven't seen any media at all recently about the equally insidious trend of real human labor being marketed as if it is an autonomous system.

I hope we don't have to wait too long to see some fiction that reflects the future-as-it-actually-arrived!

#so-called-ai