OK, I don’t hate AI movies. I enjoyed Age of Ultron and I, Robot quite a bit, for instance. I hate AI movies that deal with the question of the humanity of AIs. (No, I have not watched Steven Spielberg’s A.I. although I know quite a bit about it, and probably never will.)
Why don’t I like them? You might guess it’s because it makes us consider uncomfortable questions. And you’d be right. But there’s a lot here on which I can elaborate.
What bothers me is the fact that they're trying to make the AIs more human. In Blade Runner, the Tyrell Corporation, which manufactured the AIs, had a motto something like "more human than human". That's disturbing. Blade Runner was probably the first big movie to explore these issues (if there were others that came before it, well then, since I'm no AI aficionado, I don't much care to know).
I had had a passing interest in watching it, just because it is a classic and considered one of the best sci-fi flicks ever. With the sequel just hitting theaters, I decided to watch the original (the Final Cut) first, and I can't say I loved it that much. In case any Blade Runner lovers are reading this, I will say, though, that I thought both the original and the sequel were good movies, and the sequel actually helped me appreciate the original more. But this isn't a review of either of them.
Watching Blade Runner 2049, I realized that the movie finally helped me put my finger on why I hate AI stories. There is a line in the movie by Wallace, Jared Leto's character. I'm paraphrasing: "Humanity has lost its appetite for slaves. But we forget that all civilizations were built on the backs of slaves." (In the context of the movie, this is important because, off-screen, humans are building off-world colonies that require slave labor.) That's exactly why I find Amazon Echo creepy. No, I don't want a slave called Alexa, thank you very much. I also rarely use Siri, and never use Cortana, though I have access to both. But why do we want Alexa and Siri to sound more human? Do we want to simulate the experience of owning slaves? Sure, maybe you treat your Alexa or Siri like a friend, but it's a friend you are constantly using, and the more advanced these technologies get, the further we'll go down the slippery slope of these AI movies.
Blade Runner is fundamentally a different flavor of AI movie than Age of Ultron or I, Robot. Those are the "danger of AIs" type of stories – the ones Elon Musk warns us against, in which AIs try to solve problems but end up taking over the world. Blade Runner (and Steven Spielberg's A.I.) deal with complex moral questions. The AIs become so close to human that we have to grapple with whether we should now consider them human. And if we're going to think about that, then we might as well wonder: shouldn't we think about how we are discriminating against them? Should we be treating them like slaves, as less than us? Should they have equal rights? Should they be given citizenship and voting rights? Should they have representation in the government? Should they be allowed to marry humans? And I am not overthinking this, because these movies are going down the path of raising these questions.
Why were the AIs created at all? It was for the benefit of humans, so we can use them to accomplish things we can't ourselves. They are supposed to be machines without consciousness – tools that humans use. But what happens when the machines develop consciousness and personalities just like us? In Blade Runner, they are slaves who are treated horribly. In Her, they are like friends or companions, but those AIs don't have physical form, so they don't face the same issues. Or consider something like Marissa Meyer's young adult Lunar Chronicles novels, where one of the principal characters is an android with a distinct personality, whom other humans consider a friend, and the author has been flirting with the idea of human/android romances.
And although the Planet of the Apes movies (especially since the most recent reboot) aren't about AIs, they actually deal with a similar question – should we accept the apes as equal to us and integrate, or do we keep viewing them as lesser beings and try to keep our species separate? Personally, I am not comfortable with the idea of a world where humans and apes live side by side with equal rights and equal voices, where apes can have relationships with humans and marry them (although they most likely won't produce viable offspring). Disturbingly, I imagine this is analogous to how some races have historically felt about others (yes, let's not get specific here, but you know exactly what I'm talking about). So just like bigoted people looked down on other races as lesser, am I being bigoted when I don't want to think of apes or androids as equal? When I don't want to integrate with apes and androids?
I don’t want to think about these questions. But these stories are designed to make me think about them.
Let’s take the discussion further. I’m sure many groups, especially religious ones, might claim that making androids human-like is immoral. Messing with God’s creations is immoral.
I’ll take a step back. From what I understand, scientists haven’t been able to create life. As in, they have tried to replicate conditions in which life could arise, and they didn’t manage to make it happen. Should they be trying to create life in the first place? Lots of people will say that’s immoral and they shouldn’t. It’s like Ian Malcolm’s line from Jurassic Park about hubris: the scientists were so engrossed in whether they could, that they never stopped to consider whether they should.
There are sci-fi books and movies out there, I'm sure, that deal with the implications of creating life, though I can't think of a famous one off the top of my head. But maybe creating life is something that's just never going to happen, and maybe that's why we don't have too many stories like this. What if, instead of creating life, we create consciousness? If AIs have consciousness and personalities in the same way that we do, then are they human even though they're not technically alive?
[I will insert a reference to Ex Machina here because I couldn't fit it in anywhere else: that movie would probably also lie in the realm of an android with AI being created as a vanity project of a tech genius, where he and an innocent bystander pay the ultimate price for it.]
So yes, AI stories about the humanity of AIs bother me, because why would humans try to create a situation which introduces more bigotry and discrimination into the world? If you need slave labor, why not just create mindless machines to do it? Make them "smart" mindless machines, but machines without consciousness nonetheless. As these stories raise questions about the rights of the AIs, what are they trying to prove? That humans are fundamentally bigoted? We might get over bigotry related to race, gender, sexuality, ability, etc., but can we get over bigotry related to species? Why are we even going there?
Coming back to Blade Runner, what makes the "replicants" so disturbing is that they don't seem enhanced at all. They are not engineered to perform tasks humans can't. They are made to be exactly like humans, simply so that they can be used as slaves or obedient servants. [In Blade Runner 2049, you do have characters like Joi who are similar to the AIs in Her, in that they function largely as companions. These are less disturbing simply because they don't occupy the same physical space as humans and they don't present as many avenues of abuse.]
[Also, maybe a reference to Westworld is warranted. I stopped watching the show after ~4 episodes because it is not my cup of tea. But the premise is the exploitation of extremely life-like androids for human entertainment. It's kind of sick, when you think about the extent of the exploitation. In that world, all the humans know in the back of their minds that these are just machines. But if you're trying to simulate the experience of exploitation, what does that say about you? And maybe those androids do have consciousness after all?]
Ultimately, I think the message in Blade Runner and similar AI/android stories is that humans are awful. And that’s a bummer.
UPDATE (May 2018): Just discovered that one of the episodes of one of my favorite podcasts was extremely relevant to everything I talked about here. The episode was released before Blade Runner 2049 came out, otherwise I imagine this movie would have been part of the discussion.
Can Robots Teach Us What It Means To Be Human? Hidden Brain.