Nebulasaurus
2 min read · Jul 24, 2022


I think your argument is spot on, and very well articulated.

I'd like to add one idea: when we're forced to make a logical jump, as in believing in other people's sentience, we ultimately make it using analogies.

For example, other people seem, in almost every way, just like us; that is to say, analogous to us. And so most of us are comfortable making the jump away from solipsism and believing that we are all sentient.

Likewise, most people attribute sentience to mammals as well. But we usually don't attribute as much sentience to them, presumably because they are not as analogous to us as other humans are.

When it comes to an AI, some people see an analogy there and therefore want to attribute sentience to it. But to me, it seems like a false analogy, because what's going on under the hood is so different from what's going on in us. And presumably, what's going on under the hood is highly relevant to how anything may or may not ultimately feel.

For me, I guess I'd say that if you want to presume sentience in an AI, you first need an analogy that runs from us all the way down to rocks and atoms, and then back up again to the AI. You can't just jump laterally from a human to an AI because the AI can put similar words on a screen. You need a longer chain of analogies in between.

I've also written an article introducing this idea, here: https://nebulasaurus.medium.com/were-using-the-wrong-analogies-to-think-about-ai-and-sentience-a54f732a6f3f

Curious about your thoughts.
