Hacker News

I don’t know much about AI but this:

> That’s possible but doesn’t look likely at this point.

Why doesn’t it seem likely?

We’ve been trying to do great things with AI for decades. We don’t seem to have a firm grasp of why certain things work well, or of whether the strategies we’re using can ultimately yield much greater intelligence.

My impression is that we really don’t have much control over progress, and we could very well hit walls and stall for many years without meaningful progress. What am I missing?



I can think of two situations that might lead to an AI winter:

1. We are wildly underestimating the computation requirements.

2. There are theoretical roadblocks coming up such that even a very large number of smart people being paid to solve the problem won’t find a key sequence of ideas. Think Riemann Hypothesis, or Fermat’s Last Theorem, etc.

The counter-argument to (1) is that the computational resources on hand are already enormous, given the billions of dollars being invested. The one system we know to possess human-parity intelligence (the human brain) runs on about 12 watts and is not exactly a data center.
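The power gap behind that point can be made concrete with a back-of-envelope calculation. The cluster figures below are illustrative assumptions (a hypothetical 10,000-accelerator cluster at ~700 W each, roughly an H100's rated TDP), not sourced measurements; only the 12 W brain figure comes from the comment above:

```python
# Back-of-envelope power comparison: human brain vs. a training cluster.
# Cluster size and per-accelerator draw are assumed for illustration.

BRAIN_WATTS = 12        # figure cited in the comment above
GPU_WATTS = 700         # assumed per-accelerator draw (~H100 SXM TDP)
NUM_GPUS = 10_000       # assumed cluster size

cluster_watts = GPU_WATTS * NUM_GPUS        # total cluster draw in watts
ratio = cluster_watts / BRAIN_WATTS         # how many "brains" of power

print(f"Cluster draws {cluster_watts / 1e6:.1f} MW, "
      f"about {ratio:,.0f}x the brain's power budget")
```

Under these assumptions the cluster draws on the order of megawatts, several hundred thousand times the brain's budget, which is the sense in which raw compute availability is not obviously the bottleneck.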

The counter-argument to (2) is that we’ve made faster-than-expected progress since the introduction of transformers, and we seem to be quite close already given the capabilities of GPT-4. Of course you don’t know that you’ve hit a roadblock until you hit it, but so far it’s been smooth sailing.



