We’re Using the Wrong Analogies to Think About AI and Sentience

Nebulasaurus
Jun 22, 2022
[Image: Toy space boy, robot, and monkeys]

If you follow the field of AI, you’ve probably heard about the Google engineer who claimed that one of Google’s latest AIs, called “LaMDA”, was sentient. The Washington Post published a story about it, titled ‘The Google engineer who thinks the company’s AI has come to life’. And the engineer, Blake Lemoine, has also written about it on Medium.

In Lemoine’s latest Medium post, ‘Scientific Data and Religious Opinions’, he explains his reasoning in further detail. It’s an interesting read, and I think he’s taken a mostly rational approach. But I think his argument has an essential flaw: as he says in the article, he gives a lot of credence to a philosophical school known as ‘Functionalism’, treating it as his favored theory of mind.

To give just a brief summary, Functionalism posits that specific mental states correspond to specific functional roles. So, for example, if a robot and a human both say ‘Ouch!’ and flinch after being damaged slightly, Functionalism presumes that the robot and the human must be experiencing similar mental states, e.g. a feeling of pain.

But I don’t think this makes sense.

To be clear, I understand that it’s difficult to choose a useful theory of mind when there is no way to test directly for consciousness or sentience. And indeed, we currently have no direct test for sentience.

But in the absence of direct tests, I think it’s quite clear that the best tools we have are good analogies. After all, the only basis any individual human has for presuming consciousness in any other human is ultimately analogy. And it’s also by analogy that most people presume that their favorite animals and pets are sentient.

But our analogies are not always equally relevant. For instance, a child may presume sentience in their favorite action figure or doll, because, after all, it has arms, legs, and a face that look human. But most adults will not consider that a very relevant analogy.

So what analogies do (and should) we use, if not the outward form or shape of something? I suspect many people’s minds will jump to ‘intelligence’ as the most relevant analogy (or, more specifically, the ability to communicate or demonstrate what we perceive to be ‘intelligence’).

But I actually think there’s a stronger analogy — one that we might call ‘provenance’. Because when an adult sees a toy, they know that it was simply made, by people, out of plastic. Whereas when an adult sees another person or animal, they know that they share a similar provenance. Just like themselves, all of Earth’s humans and animals are born, and have a similar ancestry.

And it turns out that a similar ancestry and provenance is a very relevant analogy, because it implies a lot. It implies similar biological processes, even at a microscopic level. And that’s something you just can’t get at by observing a system through the lens of outwardly displayed ‘intelligence’.

Because even if an AI is able to perform actions that look like intelligence to us at an external, macro level, what’s going on under the hood at a micro level is incredibly different from what’s going on in us.

And that’s really important to recognize. Because even if there’s some sort of low-level “hum” of sentience that pervades all things and all systems, the likelihood that we can predict the witnessed experience of that sentience, in a system that doesn’t closely resemble our own even down to the microscopic level, is very small. In fact, it’s vanishingly small, unless we have some independent theory of which microscopic interactions give rise to specific conscious experiences, and we don’t currently have one.

And so the presumption that we can know anything about how a computer ‘feels’ based on its typing certain words, even very interesting words, seems like a very bad bet: one whose probability of being accurate is basically zero.

To be clear, I don’t think the analogy of ‘provenance’ is the only relevant analogy with respect to sentience. After all, we usually presume animals such as insects and snails to be less sentient than humans, despite our shared provenance as creatures of Earth, and that presumption is based primarily on the analogy that they don’t demonstrate the same level of ‘intelligence’ as we do.

But I do think it's clear that, at this point in time, intelligence and external forms and behaviors are not the most relevant analogies for presuming consciousness. And ultimately I’d just like to see us start picking our analogies with more care and deliberation.

