
Does this freak anyone else out just a little?

This project is simulating an entire organism, including its neurology. To what extent does running this code create a "real" worm?

Sure, it's just C. elegans for now. But over time, we'll gradually see additional projects to simulate larger organisms...



Don't worry, it doesn't work.


In what sense does it not work?


The state of the art is described in this blog post (shared by moyix above): https://www.lesswrong.com/posts/mHqQxwKuzZS69CXX5/whole-brai...

Basically, while the structure of the network has been known for decades, they don't yet have the technology to figure out how the neurons behave (the synaptic weights) or how they learn (how those weights change), so they can't "upload" data from a real worm.
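To make concrete why knowing the wiring diagram alone isn't enough, here's a toy rate-based model (purely my own illustration, not OpenWorm's actual simulation; the network, weights, and parameters are all made up): the same three-neuron "connectome" produces qualitatively different behavior depending on the weight assignment.

```python
import math

def simulate(adjacency, weights, steps=50):
    """Toy rate-based model: each neuron's next activity is the tanh of the
    weighted sum of its inputs. The adjacency matrix plays the role of the
    known connectome; the weights are the unknown part."""
    n = len(adjacency)
    state = [0.1] * n  # small, uniform initial activity
    for _ in range(steps):
        state = [
            math.tanh(sum(weights[j][i] * state[j]
                          for j in range(n) if adjacency[j][i]))
            for i in range(n)
        ]
    return state

# One fixed 3-neuron "connectome": a ring, 0 -> 1 -> 2 -> 0.
adj = [[0, 1, 0],
       [0, 0, 1],
       [1, 0, 0]]

# Two arbitrary weight assignments over the identical wiring.
all_excitatory = [[0, 2.0, 0], [0, 0, 2.0], [2.0, 0, 0]]
one_inhibitory = [[0, 2.0, 0], [0, 0, -2.0], [2.0, 0, 0]]

print(simulate(adj, all_excitatory))  # converges to a steady pattern
print(simulate(adj, one_inhibitory))  # qualitatively different dynamics
```

Same wiring, different weights, different worm: that's the gap between having the connectome and having a working simulation.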


I'm not an expert in anything relevant to this, so this is ill-informed speculation.

But isn't referring to not knowing "the weights" (as the article and researchers do) making a rather large assumption about the nature of the problem? It sounds more like the researchers don't know "why the neurons fire when they do", which would leave open the question of whether "weights" is even a sufficient model.


This is why I roll my eyes when people say we understand the human brain. We don’t even fully understand what individual neurons are doing.


It's not that freaky for me. I think it's awesome!


Reminds me of this short fiction: https://qntm.org/mmacevedo


If you liked that, you might enjoy the Bobiverse series by Dennis E. Taylor.

http://dennisetaylor.org/

https://www.goodreads.com/book/show/32109569-we-are-legion-w...


Or the novel, "Permutation City", by Greg Egan. https://www.goodreads.com/book/show/156784.Permutation_City


The Bobiverse is great.


I hadn't run across that story before, so thanks for sharing it.


Oooh, that was awesome.


Oh I think it's an awesome project, too. It's just that it sort of makes concrete what were previously hypothetical arguments about simulating living beings.

I can't bring myself to care much about a C. elegans, but let's say we scale this up enough to simulate something the size of a dog plus its environment. Presumably it has subjective experience, like a real dog? What are the ethics of writing code that would result in its "suffering"?


> What are the ethics of writing code that would result in its "suffering"?

Do you have the same concerns about other, perhaps much simpler but also virtual and pseudointelligent entities, such as computer-controlled players in games?

It does raise some deeply philosophical questions about life and what it means in general.


> computer-controlled players in games?

Nah, I'm not particularly worried about that. Because those are all currently based on models too simple to conceivably have anything like consciousness.

On the other hand, once we can simulate a human and its environment at OpenWorm's targeted level of abstraction, presumably we would have a conscious being there?


The only consciousness we experience is our own. In the extreme, we can no more say a chair suffers than a human being other than ourselves. Of course we take the pragmatic approach of deciding things that look like suffering to us is suffering.


I strongly disagree. The human brain, in its connectome, encompasses enough computational complexity to contain consciousness and suffering. A chair does not, at least not based on any model I've yet seen proposed.


I think a chair does, in its full life cycle.


How, exactly?


I believe they meant that the wood composing the chair came from a tree, with enough complexity to be reasonably thought capable of consciousness (on some basic tree level).


Yiss. Tree has a different consciousness. Maybe a little bit more abstract.


Although a chair does subsume human projection, similar to a Buddhist realization of artifice.

A chair serves, regardless of its consciousness, but the human (or mammal stand-in) may project a consciousness of its utility upon the object, thus elevating its stature in a shared conscious.

The closest I can think of is a friend establishing rules around an heirloom coffee table that enforced coasters and denied foot rests.

But I think that more organic models should have that assumption baked into their more humanistic appropriation.

Like we kill a cow for sustenance, but I respect that in purchase by eating it.

A table serves me infinitely. And thus it carries the consciousness of the human.

Destruction of a utility reflects the human qualities, and so an inanimate object might adopt those qualities independent of time.


And on what basis do you claim that computational complexity is a necessary condition for consciousness and suffering? There is only one datum you have for consciousness. Other people can tell you that they experience consciousness, but so can a text-to-speech program.


Would you believe that a text-to-speech program was suffering if it told you so?


> Of course we take the pragmatic approach of deciding things that look like suffering to us is suffering.

Hence why I said this. There is nothing you can use to believe a human's claims over the text-to-speech's claims, so we choose what looks like suffering to us. However, my point is that it is an arbitrary choice.


> There is nothing you can use to believe a human's claims over the text-to-speech's claims

Of course there is. The human facing me looks similar to me, so I can interpolate their claims with my own experience of being a human.

I think it's also why we have varying degrees of empathy toward animals: the more an animal resembles us (in size, number of limbs, physical appearance, effective communication…), the more empathy, on average, we have for it.

This can go to the extent that people commonly feel "something" about their cars, with their human-face-like designs combined with their mere ability to move.


Yes, but this is arbitrary. That is my point. There's no reason to believe that you can extrapolate your own subjective experience to others based on their similarity to you.


> However, my point is that it is an arbitrary choice.

It does not have to be arbitrary though. We could go by the Turing test. If some text-to-speech program could clear it, I would believe what it said.


But you are choosing to use the Turing test, which tests a form of intelligence, as a proxy to determine whether to believe the claim. A Turing test does not preclude philosophical zombies. It does not demonstrate anything about consciousness.

https://en.wikipedia.org/wiki/Philosophical_zombie


> What are the ethics of writing code that would result in its "suffering"?

I doubt that code can suffer. We do not understand feelings well enough, and this project might eventually help us understand them - but for now I see no reason to assume they can be recreated digitally.

And I am much more worried about the many real-life beings that suffer in our food chain.


Why would we assume subjective experience can't be recreated digitally?

Inert "code" can't suffer, of course. But assuming we accept the materialistic view, what reason do we have to believe that a mind in a simulated body has any less subjective experience than a mind in a "real" body?


> what reason do we have to believe that a mind in a simulated body has any less subjective experience than a mind in a "real" body?

That reality is seemingly infinitely complex, while any simulation we have is a vast simplification. I am pretty sure emotions and the like require a far more sophisticated simulation - if they can be achieved digitally at all.


Thanks for spurring this thread. This has been pretty great.



