
That's an assumption. Perhaps if we were able to understand the brain's inner workings, we could see that 'the experience of red' is precisely 'these 3 neurons firing every 0.0112 seconds at an intensity of X while receiving 0.001 micrograms of serotonin' (completely made up, obviously).

Until we start understanding how the brain encodes and 'computes' thought, we can't really claim to know if it is or isn't simply a computer.



> Perhaps if we were able to understand the brain's inner workings, we could see that 'the experience of red' is precisely 'these 3 neurons firing every 0.0112 seconds at an intensity of X while receiving 0.001 micrograms of serotonin' (completely made up, obviously).

Even if we knew that a person saw red when such and such neurons fired, the neurons firing would still just be a material correlate. It would be in no way equivalent to or explain anything about the sensation itself.


You are thinking of something at the level of today's neuroscience and brain imaging, where indeed we can only establish correlations.

But I am talking about a much more in-depth understanding of the workings of the brain, similar to the level of understanding we have of a microprocessor all the way from transistors to the algorithms running on it. If we could understand human thought at a similar level, we MIGHT find out that "the feeling of red" is not fundamentally different from "the understanding that 1 + 1 = 2", and we could come up with quantifications of it in different ways, from the physical representation in the brain to a certain "bit pattern" in the abstract model of the human brain computer.

Note that the argument for qualia is not one that proves the existence of qualia - it is essentially only a definition. We have no reason to believe that the thing which the term qualia describes actually exists in the world, beyond our own personal experience, which is circular in a way. The argument goes "I feel like this thing I'm experiencing is a quale, therefore I assume that things similar to me also have qualia", which sounds logical enough. But then, "things similar to me" is actually defined in such a way that it basically assumes qualia exist, since an AGI whose internal state we could probe precisely enough to prove that qualia do not exist for it is then assumed to fall outside of "things similar to me".


> the level of understanding we have of a microprocessor all the way from transistors to the algorithms running on it

Good example, because the vast majority of people don't understand that. I tried and I still don't, never mind someone who doesn't even care.

I mean, I know the theory, I know the individual parts, but I can't quite fully understand how a complete processor works.

If someone from as early as 1920 found an advanced robot that is a combination of some Boston Dynamics model and an offline/autonomous Google Assistant (so it could walk, listen/talk/reply, and maybe pick stuff up), they would not be able to figure out how its "brain" works. At best they'd have a general idea/theory.

Same thing with our brains and our current understanding of them. I believe it is possible to reverse engineer them completely, but not with today's tools.


> If we could understand human thought at a similar level, we MIGHT find out that "the feeling of red" is not fundamentally different than "the understanding that 1 + 1 = 2", and we could come up with quantifications of it in different ways, from the physical representation in the brain to a certain "bit pattern" in the abstract model of the human brain computer.

I guess the idea is that an abstract concept like "the understanding that 1 + 1 = 2" would be easier to "quantify" in the relevant sense than "the feeling of red", but I don't think that's true.

The very concept of a representation presumes an intellect in which that representation is mapped to the underlying concept. No particular physical state objectively signifies some abstract concept any more than the word "dog" objectively signifies that particular type of animal. But our mental states must be able to do so, because denying this would be denying our ability to engage in coherent reasoning and therefore self-defeating. So those mental states can't be "implemented" solely using physical states.

This argument was actually proposed by the late philosopher James Ross and developed in greater detail by Edward Feser. [1] A similar argument -- though he didn't take it as far -- was made by John Searle (of Chinese Room fame). [2]

But in any event, I would reject the notion that any representation of "the feeling of red" is equivalent to the sensation itself.

> Note that the argument for qualia is not one that proves the existence of qualia - it is essentially only a definition. We have no reason to believe that the thing which the term qualia describes actually exists in the world, beyond our own personal experience, which is circular in a way.

Well, I think it is self-evident that qualia exist for me, and that those same qualia demonstrate that there are physical correlates of qualia. I also think there is good reason to think that qualia exist in others because we share the same physical correlates.

Can I completely prove or disprove that others have qualia? No -- not you, not a rock, not an AGI. But I still have the physical correlates, which gives me some basis to draw conclusions.

[1] http://edwardfeser.blogspot.com/2017/01/revisiting-ross-on-i...

[2] https://philosophy.as.uky.edu/sites/default/files/Is%20the%2...


> A similar argument -- though he didn't take it as far -- was made by John Searle (of Chinese Room fame). [2]

I have read the entire paper - thank you for the link! - and I find it either false or trivial (to use a style of observation from Chomsky). Searle is asserting that computers don't do anything without homunculi to observe their computation, which is patently false. If I create a robot with an optical camera that detects whether there is a large object nearby and uses an arm to open a door if so, the system works (or doesn't work) regardless of any meaning that is ascribed to its computations by an observer. It is true that the computation isn't "physical" in the sense that there isn't a particle of 0 or 1 that could be measured, but it is also impossible to describe the behavior of the system without ultimately referring to the computation it performs.

So, if Searle is claiming that such a system only works (opens the door) in relation to some observer, then he is obviously wrong. If he is claiming that the physical processes that occur inside the microprocessor and actuators are the real explanation for how the system behaves, not the computational model, then he is in some sense right, but that is trivially true and no one would really contest it.
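To make that concrete, here is a minimal sketch of the kind of system I mean (the sensor and actuator are stubbed with made-up classes, not any real hardware API, and the threshold is arbitrary):

    # Door-opening robot sketch: the control loop runs, and the door opens or
    # stays shut, whether or not anyone ascribes meaning to the computation.

    AREA_THRESHOLD = 5000  # arbitrary "large object" cutoff, in pixels

    class StubCamera:
        """Stand-in for the optical sensor: reports the detected object's area."""
        def detect_object_area(self) -> int:
            return 6200  # pretend a large object is in view

    class StubArm:
        """Stand-in for the actuator that opens the door."""
        def open_door(self) -> None:
            print("door opened")

    def control_step(camera: StubCamera, arm: StubArm) -> bool:
        area = camera.detect_object_area()
        if area > AREA_THRESHOLD:
            arm.open_door()
            return True
        return False

    control_step(StubCamera(), StubArm())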

Furthermore, there is likely no way to give an accurate, formal physical model of this entire system that does not also include some kind of computational model of the algorithm it performs: to interpret the photons hitting the sensor as an image, to detect the object, to determine whether the object is large enough that the door should be opened, to control the actuator that opens the door, etc.

Basically, you can look at human beings as black boxes that take in inputs from the environment and produce outputs. Searle and I both agree that there exists some formal mathematical model that describes how the output a human being gives is related to the input it receives (including all past inputs and possibly the entire evolutionary history). However, he seems to somehow believe that computation is not necessary as a part of this formal model, which I find perplexing.

His claim that cognitivists believe that if they successfully create a computer mimicking some aspect of human capacity, then the computer IS that human capacity, seems completely foreign to me; I have never seen anyone truly claim something this absurd. At most, I have seen claims that successfully creating a computer system mimicking a human capacity constitutes proof against mind/body dualism, at least for that particular capacity, which I think is relatively correct, though more formally it should be called evidence against the need for mind/body dualism rather than actual proof.

> because denying this would be denying our ability to engage in coherent reasoning and therefore self-defeating. So those mental states can't be "implemented" solely using physical states.

I don't think this holds water. A computer (the theoretical model) is, by definition, something that can perform coherent reasoning without any special internal state. A physical realization of a Turing machine can "think about" any kind of computational problem and come up with the same answer that a human would come up with, at least in the Chinese room sense. Yet we know that the Turing machine doesn't have any qualia, so why should we then believe that qualia are fundamental to reason itself?
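To see how little is going on inside, here is a toy Turing machine simulator (the binary-increment machine and its transition table are made up for the example); nothing in it is anything over and above table lookups and tape writes:

    def run_tm(tape: dict, head: int, state: str, delta: dict, halt: str):
        while state != halt:
            symbol = tape.get(head, "_")                 # "_" is the blank symbol
            write, move, state = delta[(state, symbol)]  # pure table lookup
            tape[head] = write
            head += {"L": -1, "R": 1, "N": 0}[move]
        return tape

    # Increment machine: propagate the carry leftward (1 -> 0), then stop (0/_ -> 1).
    delta = {
        ("carry", "1"): ("0", "L", "carry"),
        ("carry", "0"): ("1", "N", "done"),
        ("carry", "_"): ("1", "N", "done"),
    }

    tape = dict(enumerate("1011"))  # binary 11; head starts on the rightmost bit
    result = run_tm(tape, head=3, state="carry", delta=delta, halt="done")
    print("".join(result[i] for i in sorted(result)))  # -> 1100 (binary 12)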

To me, computer science has taken all the wind out of the sails of any kind of qualia-based representation of the human mind.

> But in any event, I would reject the notion that any representation of "the feeling of red" is equivalent to the sensation itself.

This I agree with in some sense - the map is not the territory. Let's assume for a moment that we have an AGI which uses regular RAM to store its internal state. Let's also assume that the AGI claims that it is currently experiencing the feeling of seeing red. We could take a snapshot of its RAM and analyze it, and even show it to another AGI, which could recognize that some particular bit pattern is the representation of the first AGI's feeling of red. Still, that second AGI would not be feeling "I am seeing red" when analyzing this bit pattern. It could, though, feel "I am seeing red" if it copied the bit pattern into the relevant part of its own memory, even if its optical sensors were in no way receiving red light.
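As a sketch of that distinction (with an entirely hypothetical Agent class and a made-up bit pattern): recognizing the pattern in someone else's snapshot is not the same as loading it into your own memory:

    SEEING_RED = bytes([0xF0, 0x0D])  # made-up bit pattern for "seeing red"

    class Agent:
        def __init__(self):
            self.state = bytearray(2)  # the agent's "RAM"

        def snapshot(self) -> bytes:
            return bytes(self.state)   # passive copy for external analysis

        def recognize(self, blob: bytes) -> bool:
            # A judgment *about* a state, without being in that state.
            return blob == SEEING_RED

        def load(self, blob: bytes) -> None:
            # Copying the pattern into one's own memory: now the agent
            # *is* in that state, no red light required.
            self.state[:] = blob

    a, b = Agent(), Agent()
    a.load(SEEING_RED)           # agent A "sees red"
    dump = a.snapshot()
    print(b.recognize(dump))     # True: B recognizes the pattern in A's dump
    print(b.state == a.state)    # False: B is not itself in that state
    b.load(dump)
    print(b.state == a.state)    # True: now B "sees red" too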


> If I create a robot with an optical camera that detects whether there is a large object nearby and uses an arm to open a door if so, the system works (or doesn't work) regardless of any meaning that is ascribed to its computations by an observer.

Whether the system "works" or "doesn't work" is dependent on what the machine was designed to do, which is not an objective physical fact about the machine. Perhaps the machine was not meant to open the door when an object is detected, but to close it instead, or to do something else entirely; only the designer would be able to tell you one way or the other.

The same is true for all computation, and that is Searle's point.

> A computer (the theoretical model) is, by definition, something that can perform coherent reasoning without any special internal state.

Computers don't actually engage in reasoning, though, for the same reason. A machine is just a physical process, and physical processes do not have determinate semantic content.

Ross and Feser then argue that because thoughts do have determinate semantic content, they are necessarily immaterial, and I think they are correct.

(This argument is unrelated to qualia; I don't think qualia are fundamental to reason itself.)


The machine does the same thing regardless of whether you ascribe meaning to it or not. In this sense it is like the thermostat from Searle's example, which he claimed computers are not.

This property of determinacy seems ill-defined as well. It's basically defined from the assumption that the human mind is immaterial. If a machine and a human both arrive at the same result when posed a question (say, they both produce some sound that you interpret as meaning '42'), by what measure can you claim that one had semantic meaning and the other did not?

The idea of cognitivism is that there is no fundamental difference (even though of course it is very likely that the process by which this particular machine arrived at that result is different from the process by which the human did).

If I stand by a door and open it when big objects come into my field of view, how is that different from a machine doing the same?

And then, if I had a machine that could converse and act just like a human (including describing its feelings and internal sensations) while doing nothing fundamentally different from our current PCs, by what measure would you say that this machine is 'simulating' a mind and is not in fact a mind in itself (though of course it would be a different mind than a human's)?


I don't agree, but I've reached my personal limit for philosophical discussion for the day, so I'll let you have the last word.

Thanks for the discussion! :)



