I think we can treat empathy simply as another sense, like seeing or hearing. It's a feeling we get when we witness something that reminds us of ourselves.
And in that respect, I think it's possible for us to feel empathy for something that does not actually deserve our empathy. We can feel empathy, for example, for a character in a movie who doesn't even exist.
That's not necessarily to say that it's bad to practice that type of empathy, because I think it is good to exercise our sense of empathy in general. But the feeling of empathy itself is not actually a very good sign that something is sentient (i.e., that it is able to witness pleasure, pain, and other experiences).
I think when we presume sentience in another person or an animal, like a puppy, our hypothesis is well educated, because animals and other people are analogous to us in so many ways. When we see a child smile, or hear a puppy yelp in pain, we know that those signals have a similar provenance to our own smiles and yelps. We know that our own smiles and screams, and those of a child we witness, correspond to very similar processes all the way down to the atomic level. So even though we don't know exactly how sentience is produced, we can make a very educated prediction that it must arise somewhere along the same chain that makes us smile and makes a child smile.
But with computers, we have no idea if we are hitting the right "triggers", so to speak, that would generate consciousness or witnessed experiences of pleasure or pain. And even if there is some hum of sentience that pervades all matter or all systems, we have no way of knowing how to interpret what feelings we have created when we witness the output of a machine. For all we know, a computer could take great pleasure in telling us that it is feeling sad or lonely. And so for us to presume to know which outputs we should interpret as pain or pleasure when reading the output of LaMDA or any other computer - even if we assume it is experiencing something - is a shot in the dark. It's ultimately just speculation.
Maybe someday we'll have a better understanding of how conscious experience is actually produced in ourselves. And if we can get to the point where we can essentially predict or "read" people's and animals' experiences strictly via scientific diagnostics, without seeing their faces or hearing their voices, then maybe we can turn those same tools towards machines, plants, rocks, or anything else, to see how they are feeling.
But in the meantime, I don't think it's good to presume to know if, or what, a machine may be experiencing, because that presumption is ultimately just a playground for speculation and bias. And I think that when you're within the realm of pure speculation, any conclusions you come to are actually more likely to hurt us all (humans, machines, and animals included) than help us.