This is a crucial question. Is there a Turing test for consciousness? While qualia can’t be measured, they do affect behavior — especially when they become a shared dataset in a social context of fellow qualia-experiencing entities.
In my other comment I write about this a bit, but basically it doesn’t seem like non-conscious entities would be able to accurately predict the behavior of conscious entities, because they lack a shared meta-dataset of qualia. At best, they could find patterns of behavior and build a representation of qualia. But that isn’t the same as actually having the same data. It’s the difference between representing a state that causes another agent to cry, scream, and writhe, and knowing the precise state of pain itself. The former, a representation, doesn’t generalize past training data, especially when confronted with a multitude of qualia in varying combinations. The latter, direct and concrete data, might still suffer from inaccuracy (even knowing the precise potential states of another agent doesn’t mean we can infer which state that agent is in), but it’s better than the alternative: a guess built upon a guess.
I find the philosophical zombie to be a great thought experiment for this, along with the prisoner’s dilemma. Two conscious entities have a shared dataset that enables communication without words: spooky-action-at-a-distance via qualia. Two friends with great loyalty to one another can solve the dilemma through their knowledge of what love and betrayal are. A p-zombie would understand that, given past behavior, its prisoner counterpart might not choose betrayal. But qualia-experiencing agents know what is happening in one another’s minds in a way a non-qualia-experiencing entity never can. The p-zombie would lack all empathy. It would always be logical, and choose the Nash equilibrium. It would never mourn the dead. It would never commit suicide. It would never sacrifice its life for love, or for an ideal, because it would have neither.
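To make the “choose the Nash equilibrium” point concrete, here is a minimal sketch of my own (using the standard textbook payoffs, which are not from the original comment) showing that mutual defection is the only Nash equilibrium of the one-shot prisoner’s dilemma, even though mutual cooperation pays both players more:

```python
from itertools import product

ACTIONS = ["cooperate", "defect"]

# Standard illustrative payoffs: PAYOFFS[(a, b)] = (player 1's payoff, player 2's payoff).
# Higher is better (think years of freedom saved).
PAYOFFS = {
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "defect"):    (0, 5),
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),
}

def is_nash(a, b):
    """A profile is a Nash equilibrium if neither player gains by deviating alone."""
    p1, p2 = PAYOFFS[(a, b)]
    best_for_1 = all(PAYOFFS[(alt, b)][0] <= p1 for alt in ACTIONS)
    best_for_2 = all(PAYOFFS[(a, alt)][1] <= p2 for alt in ACTIONS)
    return best_for_1 and best_for_2

for a, b in product(ACTIONS, repeat=2):
    tag = "  <- Nash equilibrium" if is_nash(a, b) else ""
    print(f"{a:>9} / {b:<9} {PAYOFFS[(a, b)]}{tag}")
```

Only (defect, defect) survives the check, which is exactly why two loyal friends who trust each other’s inner states can do better than the purely “logical” play.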
> it doesn’t seem like non-conscious entities would be able to accurately predict the behavior of conscious entities
Or consciousness is just an observer and doesn't control anything; the sense of control is just an illusion where it feels like it made the choices.
> It would never mourn the dead. It would never commit suicide. It would never sacrifice its life for love, or for an ideal, because it would have neither.
All of those can be explained by optimizing the curve for other cases. Mourning the dead happens because there is a conflict between quickly killing off feelings for things and keeping the feelings for important things. Self-sacrifice happens for any p-zombie that values something else more highly than itself. Love is just highly valuing something, combined with momentum that makes it hard to stop valuing it; hate is the same but in the opposite direction.
Now maybe there is consciousness with effects etc, but it isn't necessarily needed.
Some theories of consciousness do claim that it is somewhat of an accidental side-show with no effect on anything except the pitiable entities who are just along for the ride.
But I don’t find these convincing, because in the animal kingdom mammals display very different behaviors from other types of animals and are the only ones to have a neocortex. And among mammals, the species with the largest, most developed neocortex also exemplifies and amplifies the very behavior that sets mammals apart. That unique behavior is flexibly adaptive social activity.
Your claim is that a p-zombie would act as if it were conscious. But the evidence points the opposite way: all those organisms which display conscious behavior are also conscious, and no non-conscious organisms display conscious behavior. The only argument for consciousness as theatre is that although conscious behavior always coexists with conscious experience, the correlation is accidentally perfect: a weird but necessary artifact whereby a brain capable of conscious behavior must always produce non-affective conscious experience as a byproduct.
Let’s say that position is true. That conscious experience is just a non-affective but inevitable side-effect of the kind of brain capable of conscious behavior. In that case p-zombies are still impossible, since under this assumption conscious behavior is always accompanied by the illusion of conscious experience.
So in either case my main point still holds: if conscious reasoning is AGI, and conscious reasoning follows from conscious behavior, then the path to AGI is to train for those peculiarly unique conscious behaviors that are most distinguished from non-conscious behaviors. It’s impossible to train directly for qualia, so whether qualia exist as affective components of conscious behavior or not is somewhat irrelevant. Conscious experience will always be a “hard problem”. But what matters is finding the right conscious behavior that enables future growth toward conscious reasoning.
The most uniquely conscious behavior (so unique it is built into us with mammalian milk production) is “parental care”. It is also the simplest concrete behavior that humans share with other animals yet have amplified the most: we share breathing air too, but we haven’t amplified breathing at all.
If we want to train agents to achieve conscious behavior, I believe this makes parental care the best option. Fortunately, unlike biological evolution which has to contend with a range of variables that may or may not include parental care (plenty of species succeed without it), an artificial training environment can be entirely focused on optimizing for this one variable — success can hinge entirely on parental care.
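As a purely hypothetical sketch of what “success can hinge entirely on parental care” could look like as a training objective (the environment, names, and numbers below are mine, invented only for illustration, not an existing benchmark):

```python
import random

def run_episode(policy, n_offspring=4, steps=100):
    """One episode in a toy world where fitness depends only on offspring survival."""
    alive = [True] * n_offspring
    for _ in range(steps):
        action = policy()  # e.g. "care" (feed/shelter offspring) or "ignore"
        p_survive = 0.999 if action == "care" else 0.98
        alive = [a and (random.random() < p_survive) for a in alive]
    # The entire fitness signal: fraction of offspring alive at the end.
    # The parent's own resources, safety, or score contribute nothing.
    return sum(alive) / n_offspring

# A policy that always cares scores near 1.0; one that never does scores far lower.
print(run_episode(lambda: "care"), run_episode(lambda: "ignore"))
```

Unlike biological fitness, nothing else leaks into the objective, so selection pressure lands entirely on the caring behavior itself.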