The only consciousness we experience is our own. In the extreme, we can no more say a chair suffers than a human being other than ourselves. Of course, we take the pragmatic approach of deciding that things that look like suffering to us are suffering.
I strongly disagree. The human brain, in its connectome, encompasses enough computational complexity to contain consciousness and suffering. A chair does not, at least not based on any model I've yet seen proposed.
I believe they meant that the wood composing the chair came from a tree with enough complexity to be reasonably thought capable of consciousness (on some basic tree level).
Although a chair does subsume human projection, similar to a Buddhist realization of artifice.
A chair serves, regardless of its consciousness, but the human (or mammal stand-in) may project a consciousness of its utility onto the object, thus elevating its stature in a shared consciousness.
The closest example I can think of is a friend establishing rules around an heirloom coffee table: coasters required, no resting feet on it.
But I think that more organic models should have that assumption baked into their more humanistic appropriation.
Like we kill a cow for sustenance, but I respect that purchase by eating it.
A table serves me infinitely. And thus it carries the consciousness of the human.
Destruction of a utility reflects the human qualities, and so an inanimate object might adopt those qualities independent of time.
And on what basis do you claim that computational complexity is a necessary condition for consciousness and suffering? There is only one datum you have for consciousness. Other people can tell you that they experience consciousness, but so can a text-to-speech program.
> Of course we take the pragmatic approach of deciding things that look like suffering to us is suffering.
Hence why I said this. There is nothing you can use to believe a human's claims over the text-to-speech's claims, so we choose what looks like suffering to us. However, my point is that it is an arbitrary choice.
> There is nothing you can use to believe a human's claims over the text-to-speech's claims
Of course there is. The human facing me looks similar to me, so I can interpolate their claims with my own experience of being a human.
I think it's also for the same reason that we have varying degrees of empathy for animals: the more an animal resembles us (in size, number of limbs, physical appearance, capacity for effective communication…), the more empathy, on average, we have for it.
This can go to the extent that people commonly feel "something" about their cars, given their human-face-like designs combined with their sheer ability to move.
Yes, but this is arbitrary. That is my point. There's no reason to believe that you can extrapolate your own subjective experience to others based on their similarity to you.
But you are choosing to use the Turing test, which tests a form of intelligence, as a proxy to determine whether to believe the claim. A Turing test does not preclude philosophical zombies. It does not demonstrate anything about consciousness.