I think you've mostly approached this rationally, except that I just don't think functionalism is ultimately a good theory of mind.
I understand that, without any way to directly test for consciousness, it's difficult to choose a useful theory of mind. But in the absence of direct tests, the best we can do is look for good analogies. After all, the only way any individual human can presume consciousness in any other human is ultimately by analogy. And it's also by analogy that most people presume their favorite animals and pets are sentient.
But not all analogies are equally relevant. A child, for instance, may presume sentience in their favorite action figure or doll because it has arms, legs, and a face that look like a human's. But most adults will not consider that a very relevant analogy.
So what analogies do (and should) we use, if not the outward form or shape of something? I suspect many people's minds will jump to 'intelligence' as the most relevant analogy (or, more specifically, the ability to communicate or demonstrate what we perceive to be 'intelligence').
But I actually think there's a stronger analogy - one we might call 'provenance'. When an adult sees a toy, they know it was just made, by people, out of plastic. When they see another person or an animal, they know it was born, and that they share a similar ancestry.
And that similar ancestry and provenance is very important, because it implies a lot. It implies similar biological processes, even at a microscopic level. And that's something you just can't get at by observing something through the lens of outwardly displayed 'intelligence'.
Because even if an AI can perform actions that look like intelligence to us at an external, macro level, what's going on under the hood at a micro level is just incredibly different from what's going on in us.
And that's really important to recognize. Even if there's some sort of low-level hum of sentience that pervades all things and all systems, the likelihood that we can predict the felt experience of that sentience in a system that doesn't closely resemble our own, even down to the microscopic level, is very small. In fact, it's vanishingly small - unless we have independent predictions about which microscopic interactions give rise to which conscious experiences, and we don't currently have any.
And so the presumption that we can know anything about how a computer 'feels' based on its typing out certain words - even if those words are very interesting - seems like a very bad bet, one whose probability of being accurate is basically zero.