
What could 'meaning' possibly be other than probabilistic associations of sounds and concepts?


Your observation is so vague and general as to be rather meaningless. Almost every physical theory is described by an underlying mapping between inputs and outputs.

The interesting point is the expressive power of your model. To take an example I am somewhat familiar with, current large-vocabulary speech recognizers have millions of parameters. They work relatively well, but they are very difficult to interpret, and it is hard to see how they help us understand how speech recognition actually works in our brain.

To make a somewhat flawed analogy, every Turing-complete language is equivalent, but the machine code of a very large project is not very interesting if you want to understand the project, though it is mostly enough if you just want to use it.


Do you have any reason to suspect this isn't how the brain works? Maybe language isn't a small set of high-level rules. Why should we suspect it to be? The probabilistic models seem to be very similar to how real people actually learn informal language. Formal languages, of course, have high-level rules, and these are well modelled algorithmically.


I don't particularly have any reason to believe one way or the other. Certainly, the probabilistic models for language are created "out of the blue", without any attempt to model how humans learn languages.


That is EXACTLY the stance that Chomsky is challenging here. Don't get me wrong, I think the probabilistic view has truth value and is a powerful predictive tool. However, I don't think it captures the phenomenological properties of the act of language-based cognition. See my cousin reply referencing Heidegger.


Chomsky claims language is built-in, so probabilistic associations are the exact opposite of his claims. Interestingly, there have been some very good baby studies showing that babies inherently know the statistics needed to learn probabilistic associations. You can show them balls of a couple of different colors going into a box, then take out a bunch of the rare color that went in, and they will register more surprise, long before they can even talk. Things like that. Chomsky's assertions are, well, what you would expect from someone that old. They didn't understand neurons and biology and genetics so well then, so yay, magic things are possible!
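The logic behind those surprise measurements can be sketched with standard probability; this is just an illustrative toy (the parameter choices and function names are mine, not from any of the actual studies). A sample dominated by the rare color has low probability under the box's composition, hence high surprisal:

```python
# Toy sketch of the "surprise" measure implied by the ball-and-box studies:
# a box holds mostly red balls and a few white ones; a sample that is
# mostly white is far less likely than a representative sample, and
# surprisal (-log2 p) quantifies how unexpected each outcome is.
from math import comb, log2

def sample_probability(red_in_box, white_in_box, red_drawn, white_drawn):
    """Hypergeometric probability of drawing this sample without replacement."""
    total = red_in_box + white_in_box
    drawn = red_drawn + white_drawn
    return (comb(red_in_box, red_drawn) * comb(white_in_box, white_drawn)
            / comb(total, drawn))

def surprise(p):
    """Shannon surprisal in bits: rarer outcomes are more surprising."""
    return -log2(p)

# Box: 70 red, 5 white. Compare a representative sample (4 red, 1 white)
# with one dominated by the rare color (1 red, 4 white).
likely = sample_probability(70, 5, 4, 1)
unlikely = sample_probability(70, 5, 1, 4)
assert surprise(unlikely) > surprise(likely)
```

The claim in the studies is essentially that infants' looking time tracks something like this surprisal ordering before they can speak.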


> Chomsky claims language is built-in.

In general he does, but in this article he doesn't talk about that; he talks about approaches to AI.

> there have been some very good baby studies that show babies inherently know statistics needed to learn probabilistic associations.

Very good. Can we identify how that works and then build a robot with the same mechanism, more efficiently than simply simulating a brain at the molecular level? That is his argument here.

> They didn't understand neurons and biology and genetics so well then, so yay, magic things are possible!

So where were you 4-5 decades ago, when he proposed his theory, to propose a better one?


The idea that meaning consists of associations is extremely primitive. It works for concrete nouns and verbs, but it quickly fails as things get more complex. Language is used to refer to abstract things, imaginary things, counterfactual situations, etc. And even if you do arrive at a series of concepts using associations, you still have to understand how they are supposed to combine, even in completely novel sentences. In all these cases, there's arguably nothing there to associate with. I can't answer your question (I think no one can), but we can conclude that meaning is more than associations.


Isn't that Chomsky's argument here (in this article, not in his approach to linguistics in general)? -- That it is a good idea to try to find a better understanding of how the internal mechanisms work so we can build or simulate it better. You do that with carefully constructed experiments not with just observing inputs and outputs and training a neural network or a Markov model with it.


Please argue that there is nothing to associate with.

Why is a real observation from your senses more privileged inside your brain than a random well-formed value from a (hypothetical) random-number-generator neuron?


I argued that if you only have associations with previous experiences, you won't be able to deal with novel input. Ergo, you need more than just associations (synthesis, imagination, counterfactual reasoning, etc.).

As to your second question, I don't see how it relates to my argument, but I'll answer anyway. If you're comparing an observation to a random number, you're looking at the observation qua value, in which case it has the same status. If, however, you look at the level of interpretation (what it means in your brain), the observation has a complex set of relations with the rest of your brain and gives rise to a perception, whereas the random number is just noise that has to be tolerated by the brain.


Saying everything is just probabilistic associations is like saying everything is made from quanta of energy and thus every higher level concept or model is useless -- just simulate the quarks and you are set. Not only that -- simulate it by recording and observing the energy patterns going in and coming out of a black box.

Yes, you can get some things to work, and some to work well, but the idea is that perhaps there is a better model that describes the mechanism or the encoding of meaning. That's what Chomsky is trying to say in this particular article. Stopping at a brute-force approach is a fine engineering choice, but that doesn't mean everyone should stop there; it is still worth trying to find a better model, if only to gain an understanding.



