Basically, while the structure of the network has been known for decades, the researchers don't yet have the technology to figure out how the neurons behave (the weights of the neurons) or how they learn (changes in those weights), so they can't "upload" data from a real worm.
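To make that distinction concrete, here's a toy sketch (not OpenWorm's actual code; the neuron names, random weights, and rate-based update rule are just for illustration). The wiring list is the part we know; the numbers in W are the part we don't:

    import numpy as np

    # Connectome: which neuron synapses onto which. For C. elegans this
    # wiring diagram (~302 neurons) has been mapped, so treat it as known.
    connections = [("AVA", "VA01"), ("AVB", "VB02"), ("VA01", "VB02")]
    neurons = sorted({n for pair in connections for n in pair})
    idx = {name: i for i, name in enumerate(neurons)}

    # Synaptic weights: how strongly each connection excites or inhibits.
    # These can't yet be measured in the real worm, so a simulator has to
    # guess or fit them; random placeholders here.
    W = np.zeros((len(neurons), len(neurons)))
    for pre, post in connections:
        W[idx[post], idx[pre]] = np.random.uniform(-1.0, 1.0)

    def step(activity, dt=0.1):
        # Toy rate-based update: behavior depends entirely on W, which is
        # exactly the part we can't read out of a real worm, and learning
        # would mean changing W over time.
        return activity + dt * (np.tanh(W @ activity) - activity)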
I'm not an expert in anything relevant to this, so this is ill-informed speculation.
But isn't referring to not knowing "the weights" (as the article and researchers do) making a rather large assumption about the nature of the problem? It sounds more like the researchers don't know "why the neurons fire when they do", which would leave open the question of whether "weights" is even a sufficient model.
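To sketch what I mean (made-up toy models, not anything the researchers actually use): even with identical wiring and identical weights, a model where a static weight fully describes each synapse behaves differently from one where neurons carry internal state, so "find the weights" may be the wrong framing.

    import numpy as np

    def weights_only(activity, W):
        # Model A: a static weight fully describes each synapse.
        return np.tanh(W @ activity)

    def with_internal_state(activity, voltage, W, tau=0.5, dt=0.1):
        # Model B: same wiring and weights, plus per-neuron membrane
        # dynamics (a leaky integrator), so firing depends on the
        # neuron's history, not just on the weights.
        voltage = voltage + dt * (-voltage / tau + W @ activity)
        return np.tanh(voltage), voltage

    W = np.array([[0.0, 1.0], [-0.5, 0.0]])
    a_static = np.array([1.0, 0.0])
    a_dyn, v = np.array([1.0, 0.0]), np.zeros(2)
    for _ in range(10):
        a_static = weights_only(a_static, W)
        a_dyn, v = with_internal_state(a_dyn, v, W)
    # a_static and a_dyn diverge even though W is identical.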
Oh I think it's an awesome project, too. It's just that it sort of makes concrete what were previously hypothetical arguments about simulating living beings.
I can't bring myself to care much about a C. elegans, but let's say we scale this up enough to simulate something the size of a dog plus its environment. Presumably it has subjective experience, like a real dog? What are the ethics of writing code that would result in its "suffering"?
> What are the ethics of writing code that would result in its "suffering"?
Do you have the same concerns about other, perhaps much simpler but also virtual and pseudointelligent entities, such as computer-controlled players in games?
It does raise some deeply philosophical questions about life and what it means in general.
Nah, I'm not particularly worried about that, because those are all currently based on models too simple to conceivably have anything like consciousness.
On the other hand, once we can simulate a human and its environment at OpenWorm's targeted level of abstraction, presumably we would have a conscious being there?
The only consciousness we experience is our own. In the extreme, we can no more say a chair suffers than we can say a human being other than ourselves does. Of course we take the pragmatic approach of deciding that things that look like suffering to us are suffering.
I strongly disagree. The human brain, in its connectome, encompasses enough computational complexity to contain consciousness and suffering. A chair does not, at least not based on any model I've yet seen proposed.
I believe they meant that the wood composing the chair came from a tree with enough complexity to be reasonably thought capable of consciousness (on some basic tree level).
A chair does, though, subsume human projection, similar to a Buddhist realization of artifice.
A chair serves regardless of its consciousness, but the human (or mammal stand-in) may project a consciousness of its utility upon the object, thus elevating its stature in a shared consciousness.
The closest example I can think of is a friend establishing rules around an heirloom coffee table: coasters required, no feet on it.
But I think that more organic models should have that assumption baked into their more humanistic appropriation.
Like how we kill a cow for sustenance, but I respect that purchase by eating it.
A table serves me indefinitely, and thus it carries the consciousness of the human.
Destruction of a utility reflects the human qualities, and so an inanimate object might adopt those qualities independent of time.
And on what basis do you claim that computational complexity is a necessary condition for consciousness and suffering? There is only one datum you have for consciousness: your own. Other people can tell you that they experience consciousness, but so can a text-to-speech program.
> Of course we take the pragmatic approach of deciding that things that look like suffering to us are suffering.
Hence why I said this. There is nothing you can use to believe a human's claims over the text-to-speech's claims, so we choose what looks like suffering to us. However, my point is that it is an arbitrary choice.
> There is nothing you can use to believe a human's claims over the text-to-speech's claims
Of course there is. The human facing me looks similar to me, so I can interpolate its claims with my own experience of being a human.
I think it's also for the same reason that we have varying degrees of empathy toward animals: the more an animal looks like us (whether in size, number of limbs, overall physical appearance, or capacity for effective communication…), the more empathy, on average, we have for it.
This goes so far that people commonly feel "something" about their cars, with their human-face-like designs combined with their mere ability to move.
Yes, but this is arbitrary. That is my point. There's no reason to believe that you can extrapolate your own subjective experience to others based on their similarity to you.
But you are choosing to use the Turing test, which tests a form of intelligence, as a proxy to determine whether to believe the claim. A Turing test does not preclude philosophical zombies. It does not demonstrate anything about consciousness.
"What are the ethics of writing code that would result in its "suffering"? "
I doubt that code can suffer. We do not understand feelings well enough, and this might eventually help us understand them - but for now I see no reason to assume that they can be recreated digitally.
And I am much more worried about the many real-life beings that suffer in our food chain and elsewhere.
Why would we assume subjective experience can't be recreated digitally?
Inert "code" can't suffer, of course. But assuming we accept the materialistic view, what reason do we have to believe that a mind in a simulated body has any less subjective experience than a mind in a "real" body?
"what reason do we have to believe that a mind in a simulated body has any less subjective experience than a mind in a "real" body? "
That reality is seemingly indefinitely complex, while any simulation we have is a drastic simplification. I am pretty sure that emotions and the like require a rather more sophisticated simulation - if they can be achieved digitally at all.
This project is simulating an entire organism, including its neurology. To what extent does running this code create a "real" worm?
Sure, it's just C. elegans for now. But over time, we'll gradually see additional projects to simulate larger organisms...