I agree. But I think if we are to label some things as "balderdash" and others not, we ought to define criteria for doing so.
And along those lines, I think we must first establish some context: specifically, that humans have no way of knowing to what extent their subjective experiences provide an accurate window onto some broader "truth" (whatever that even means). In other words, whether they acknowledge it or not, humans are not in the business of deciphering the absolute "truth" of the universe; they are merely in the business of building predictive or explanatory mental models or belief systems.
And so the question is: what criteria should we use when adopting a belief into our mental model? I'll suggest that a belief should pass at least one of two tests. First, that it provides predictive value. And second, that it provides explanatory power without adding unnecessary complexity to the model.
I think if pressed, we might actually be able to conflate the two. But for now, I'll treat them as separate. So those are the criteria for adopting a belief into our belief system - which are also the criteria we should use for labeling beliefs as "balderdash" or not.
Math and science are obvious places where we use the first criterion. We notice that objects fall towards the earth, and that other celestial bodies seem to pull things towards themselves, and so we come up with a predictive model that posits all objects as exerting an attractive force proportional to their mass.
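For concreteness, the canonical version of that predictive model is Newton's law of universal gravitation (my gloss; the paragraph above doesn't name it explicitly):

```latex
F = G \, \frac{m_1 m_2}{r^2}
```

That is, the attractive force between two bodies is proportional to the product of their masses and falls off with the square of the distance between them. The point for this discussion is just that the model earns its place by prediction: it tells us in advance how fast an apple or a moon will fall, and we can check.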
But when it comes to speculating about when and where we might expect to encounter sentience in the cosmos, predictive power is an impossible criterion - because we have no way of "testing" for the presence of a sentient subject. And so when it comes to building a belief system about sentience, we have to rely on the second criterion, namely, that it follows Occam's razor.
Now one approach would be to simply not posit sentience anywhere other than the one place where we can "test" for it - which is to say, in ourselves. That's the solipsist's answer.
But if we decide to posit sentience outside of ourselves, then we have to start making up rules about where we do and don't believe sentience exists. Each of those rules adds complexity, arguably without providing explanatory value (since we can't test for sentience in order to verify or falsify any of our theories). Which means most rules we might posit probably violate Occam's razor.
But if we're still set on positing sentience outside of ourselves, the simplest rule set we can come up with is probably that motivational force is always accompanied by a sentient perspective (as it is in ourselves) - and to therefore conflate the concepts of force, will, sentience, and personhood.
I'd certainly be up for discussing whether other models make more sense. But I think there's a pretty strong case that conversations about sentience, will, and personhood should take place roughly within the framework I've described here.