"The Enrichment Center reminds you that the Weighted Companion Cube cannot speak. In the event that the Weighted Companion Cube does speak, the Enrichment Center urges you to disregard its advice." -- GLaDOS, Portal
On February 12th, 2019, NASA reported that its Opportunity rover mission to Mars was no longer functioning. After fifteen years of service, a considerable extension of the original 90-day mission it was designed for, the rover had been trapped in a Martian dust storm in June of 2018 and entered a special hibernation mode, hoping to recover when natural winds blew the dust off its solar panels. That salvation never came, and after mission control's repeated attempts to raise a signal from the power-drained rover, NASA declared Opportunity dead.
So far, so typical. Opportunity was only the fifth of six rovers sent to Mars, each one more capable than the last as technology progressed. The 1971 Soviet Mars 3 mission stopped communicating only 20 seconds after landing. Most missions would count themselves amazingly lucky to get a functioning planetary explorer that lasted fifteen years.
But then a funny thing happened…
The Internet reacted, not just as if a great scientific mission were finished, but as if it had lost a beloved friend. The Opportunity rover was now nicknamed "Oppy," and heartfelt Twitter condolences poured in. Salon commented that a robot's "death" brought humanity together.
OK, but surely we're speaking metaphorically? No: people composed poems, and others responded with genuine sadness. They made memes about it going to robot heaven. The final transmission sent to Opportunity was the Billie Holiday song "I'll Be Seeing You." The final transmission from Opportunity, a mere reading of its gauges, was poetically rendered as "My battery is low and it's getting dark," which became its own meme all over Twitter.
Yes, this is the same Internet that will wish death in a fire of cancer spiders upon you if you don't conform to mainstream tastes, but a robot goes kaput and everybody dissolves into a gooey puddle of sentimental mush. What is going on here? Hey, we have just the answer for you, and it's going to get SCARY!
Meet the Eliza Effect
The Eliza Effect is a bug in human behavior whereby we attribute human-like, or at least life-like, characteristics to inanimate objects, especially computers. That's not quite the whole story of the effect we're seeing with Opportunity, but it's a good starting point. Let's back up a bit…
The Turing test is the gold standard of AI goals. The mission: create an artificial intelligence so good that, in a blind conversation, a human could not tell they were talking to a robot. It's the cross-entity version of "passing." The Turing test is, on one hand, still out of reach of our best technology today; on the other hand, it reveals a deep foible of human nature: it's shockingly easy to get humans to treat a robot as a human, even when they "know" the robot is a robot.
Eliza, you see, is a chatterbot, one of the earliest ever made. The program is the simplest tinkertoy you could build that still carries on a conversation: it mimics a Rogerian psychotherapist, asking probing questions based on your previous statements while following a very tight script. You can talk to Eliza right here, and if you happen to run the Emacs editor, you have a Lisp-coded version of Eliza at your fingertips via "M-x doctor".
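In fact, the whole trick is small enough to sketch right here. What follows is a toy of our own in Python, not Weizenbaum's original code: a handful of regular-expression patterns, a table of canned responses, and a pronoun-swapping step that Mad-Libs your own words back at you.

import random
import re

# Rogerian-style rules: a regex pattern and some canned responses.
# "{0}" gets filled with the captured fragment of the user's words.
RULES = [
    (r"i need (.*)", ["Why do you need {0}?", "Would it really help you to get {0}?"]),
    (r"i am (.*)", ["How long have you been {0}?", "Why do you think you are {0}?"]),
    (r"because (.*)", ["Is that the real reason?"]),
    (r"(.*) mother(.*)", ["Tell me more about your family."]),
    (r"(.*)", ["Please go on.", "How does that make you feel?"]),  # fallback
]

# Swap pronouns so "my job" comes back as "your job".
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are", "your": "my"}

def reflect(fragment):
    return " ".join(REFLECTIONS.get(word, word) for word in fragment.split())

def respond(statement):
    # Find the first matching rule and fill in a canned response.
    cleaned = statement.lower().strip(" .!?")
    for pattern, responses in RULES:
        match = re.match(pattern, cleaned)
        if match:
            groups = [reflect(g) for g in match.groups()]
            return random.choice(responses).format(*groups)

if __name__ == "__main__":
    print("Eliza: What is on your mind?")
    while True:  # Ctrl-C to end your therapy session
        print("Eliza:", respond(input("You: ")))

That's the entire magic act: the program understands nothing. It reflects your own words back at you with a question mark attached.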
Despite Eliza being so simple, its creator, computer scientist Joseph Weizenbaum, was alarmed to discover that nearly every human he sat down in front of Eliza for a demo chat soon grew attached to it. Users started attributing understanding and feelings to the program that simply were not there.
No matter how much Weizenbaum pointed out that it was just a stupid script with canned responses built around a Mad-Libs syntax engine, users kept insisting that Eliza actually comprehended their conversations and helped them sort out their issues.
This extended right to Weizenbaum's own secretary, who…
"...thought the machine was a 'real' therapist and spent hours revealing [her] personal problems to the program. When Weizenbaum informed his secretary that he, of course, had access to the logs of all the conversations, she reacted with outrage at this invasion of her privacy. Weizenbaum was shocked by this and similar incidents to find that such a simple program could so easily deceive a naive user into revealing personal information." -- Richard S. Wallace, "From Eliza to A.L.I.C.E."
So it turns out the Turing test should be pointed the other way. Our goal should be to find a human who is too smart to be fooled by the conversational equivalent of a cheap calculator.
If you thought that was silly…
The Eliza Effect is so well-known among programmers that they've exploited it for yet another tool: "rubber-duck debugging." To debug a program - or indeed to solve many other frustrating problems - you explain your problem to a rubber duck. Quite often, in speaking the problem out loud, the light bulb goes off and a flash of epiphany leads you straight to your solution. Socrates arguably stumbled upon a version of this method millennia ago.
It works shockingly often! In fact, so does explaining your problem to any third party, even a fully comprehending but silent human. An amused toddler or a complacent cat works wonderfully too. The nut of the matter is that your brain works two different ways when conceiving a structure: one way for internal thinking, and another for putting abstract concepts into words. What you're really doing is explaining the problem to yourself. Your brain may keep the problem as a scrambled, non-linear mess in your head, but when you're forced to describe it, it comes out in logical order, and you suddenly see the piece you were missing.
But aside from providing helpful phantom input for debugging life's little messes, humans are hard-wired to be social. Nearly every child goes through a stage of making up imaginary friends. We've all won a few arguments in the shower. Heck, humans are so desperate for an audience that they'll make one up inside their own heads.
Now, don't be misled: on an intellectual level, we "know" that we're talking to an inanimate object. But humans love to be fooled in this regard. Sometimes we embrace the delusion so hard that it becomes reality.
When A Human Loves A Robot…
Getting back to MIT, which is really getting a reputation for playing fast and loose with human-robot interaction these days: one scientist at the lab found herself becoming attached to the robot that was her current experiment. So much so that she's taken to speaking out against the "seductive and potentially dangerous powers" of making robots a little too friendly. Not that that's stopped them from creating Kismet, yet another A.I. designed to learn social intelligence the same way a human baby does - by interacting with adults.
Yeah, let's have humans sit with a robot and interact with it like it was a baby, right down to cuddly toy props. They even taught it to say "I love you." That isn't getting creepy yet, is it?
It doesn't even take a cutting-edge MIT experiment; lots of folks simply get attached to their Furbies. For example, this couple reports that a version of Furby known as Shelby was eerily adept at getting humans to interact with it - to the point that they felt mean for not talking to it, and felt grief when it was shut away in a box. They go on to describe similar interactions with their Roomba, which they named "Ricky," as they found themselves talking to it. You know what's coming next, don't you?
Humans will anthropomorphize just about anything, from cars to chairs to the moon, thanks to bugs in the human brain like pareidolia. Even when we "know" there's no face there, our imaginations plaster human anatomy over any three points on a shape, call it two eyes and a mouth, and then we can't unsee it. Just as we know that Mars rover isn't alive, yet we form an emotional attachment to it anyway.
Let's Make A Fake Friend Right Now
Come on, let's have some fun! In this age of fake news, catfishing, false profiles, astroturfing, and artificial intelligence playing with our minds, let's see how easy it is to fool ourselves right in the middle of this very article.
Because it's been done a lot recently. Take the case of Rachel Brewson, who stormed the Internet in 2015 with her tale of a rocky romance across political barriers, and who was later revealed to be completely fake - a phony persona drummed up by a P.R. marketing firm that does this on a regular basis. The haunting quote in that article: "The funny thing is we do this all the time. You guys just found one."
And it's not that hard to do. This 2014 article at The Atlantic shows step-by-step how to create a fake profile that's totally convincing. In 2017, this Daily Dot article retraces almost the same steps. In each case, those were false profiles meant to conceal a real person's identity as a privacy measure. But we're more interested in the human aspect: how fast can we form an emotional bond with an entity we literally created seconds ago? And we have some new tools for the task since even those articles were written.
First, we have the almighty This Person Does Not Exist generator. This handy-dandy A.I. has been making the rounds lately; you'd have to be dead not to have seen it already. Long story short, it creates convincing human faces that belong to no actual human anywhere. I hit refresh on it a couple of times and out popped this:
Well, she looks like the fetching sort. Strangely familiar, in fact, as if her features were borrowed from several actresses we know. Just to double-check, I uploaded the image to Google's reverse-image search and got results confirming there are way more Caucasian women with brown hair out there than I could shake a stick at:
When we look at all these other people, it makes this one photo stand apart. She does have unique qualities after all.
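Incidentally, you don't even need a browser for this step. Here's a minimal Python sketch, assuming the site still serves a fresh JPEG at its root URL on every request, as it did at the time of writing:

import urllib.request

# Ask thispersondoesnotexist.com for a face; each request returns a
# brand-new, A.I.-generated JPEG. Some servers refuse Python's default
# user agent, so we offer a browser-ish one.
request = urllib.request.Request(
    "https://thispersondoesnotexist.com/",
    headers={"User-Agent": "Mozilla/5.0"},
)

with urllib.request.urlopen(request) as response:
    face = response.read()

with open("fake_face.jpg", "wb") as f:
    f.write(face)

print(f"Saved {len(face)} bytes of a person who has never existed.")

Run it twice and you get two different strangers, neither of whom has ever drawn a breath.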
Now we head over to a tried-and-tested toy like the Fake Person Generator. This is another privacy tool, used to fill in forms with realistic but fake profiles. The ZIP code will match the city, the Social Security number follows standard numbering conventions, even the astrology sign matches the birthdate - but rest assured, we're not compromising anybody's privacy here. Our photo looks like a woman of around age 24, so we set the generator for those parameters.
And we have discovered that this person's name is Laurie B. Andrews. Now try it again with some stats attached:
Well, obviously Laurie B. Andrews could go on Tinder right now and get at least a few horndogs swiping right!
For further depth, Laurie B. Andrews's favorite sports are rowing and basketball. She watches Dancing With the Stars and Law & Order: SVU. Her most recent book read is The Time Traveler's Wife by Audrey Niffenegger. Her interests include bird-watching and genealogy. She voted for Democrats in the last election. Her turn-ons include men with bushy mustaches like Sam Elliott's, fast designer cars, and men who can make her laugh.
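If you'd rather mint your own Laurie at home, Python's faker package will happily oblige. To be clear, this is our stand-in for the sketch, not the engine behind the Fake Person Generator site:

from faker import Faker

fake = Faker("en_US")
Faker.seed(24)  # seed it so the same fake person pops out every run

# Assemble a dossier from standard faker providers. The SSN follows
# real numbering conventions but maps to nobody, just like the site's.
profile = {
    "name": fake.name_female(),
    "birthdate": fake.date_of_birth(minimum_age=24, maximum_age=24),
    "address": fake.address().replace("\n", ", "),
    "ssn": fake.ssn(),
    "job": fake.job(),
}

for field, value in profile.items():
    print(f"{field}: {value}")

A dozen lines and out comes a name, a hometown, and a paper trail. Add a face from the previous step and she's ready for Tinder.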
She's starting to sound familiar, isn't she? But we're running out of room here.
Look once more into the mellow brown eyes of Laurie Andrews, the Minnesota office worker who would happily watch a basketball game with you as long as you drive a sporty car. Who will go bird-watching with you in the wilds of Minnesota, a great state for it when it's not snowing. Behind her challenging, intelligent half-grin is a personality just waiting to burst into laughter if you make the right joke, and then she's tamed and all yours.
It's a shame to say goodbye to her after we just got to know her. But we have to get on with our day now.
"We do this all the time. You guys just found one."