Hacker News | LZ_Khan's comments

The thing is... this is more akin to testing a blind person's performance on a driving test than testing their intelligence.

I would imagine if you simply encoded the game in textual format and asked an LLM to come up with a series of moves, it would beat humans.

The problem here is more around perception than anything.
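To make the claim concrete, here is a minimal sketch of what "encoding the game in textual format" might look like: serializing a grid-based puzzle state as plain text and wrapping it in a prompt asking for a sequence of moves. The function names, grid encoding, and move vocabulary are all illustrative assumptions, not any benchmark's actual format.

```python
# Hypothetical sketch: render a 2D grid puzzle as plain text so an LLM
# can be prompted for a move sequence. All names/formats are illustrative.

def grid_to_text(grid):
    """Render a 2D grid of integers as rows of space-separated cells."""
    return "\n".join(" ".join(str(cell) for cell in row) for row in grid)

def make_prompt(grid, goal):
    """Wrap the serialized board in a simple instruction prompt."""
    return (
        "Here is the current board:\n"
        f"{grid_to_text(grid)}\n"
        f"Goal: {goal}\n"
        "List the sequence of moves (up/down/left/right) to reach the goal."
    )

board = [[0, 0, 1],
         [0, 2, 0],
         [1, 0, 0]]
print(make_prompt(board, "move the 2 to the top-left corner"))
```

The point of the experiment would be that the model now reasons over tokens rather than pixels, isolating the logic from the perception problem.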


I had the same theory back when ARC-AGI-2 came out, and surprisingly, encoding it into text didn't help much. LLMs just have a huge blind spot around spatial reasoning, in addition to being bad at vision. The sorts of logic and transformations involved here just don't show up much in the training data (yet).

I still agree that this is like declaring blind people lack human intelligence, of course.


Is open source really open source if it can be bought by big companies and manipulated freely? Technically, one day it could just be pulled off GitHub.

What makes you think it hasn’t already been pulled/used in training from GitHub?

I'm just confused how such a mediocre project came out of such a big budget.

Damn, the narrative was just at "we are entering RSI," and this week all of a sudden it changed to "Transformers hit a wall; AI winter is coming."

Very suspicious.


Gemini does it but not in a sensationalized way.

More like "Would you like to know more about XYZ, or circumstances that led to situation XYZ?"


How come all the departed researchers are Chinese nationals?


This is simply not true. Igor Babuschkin and Christian Szegedy left as well. Only 10 of the 12 remain at this point.


I don't know. Elon Musk personally founded xAI, and these were his hand-selected co-founders.


Because xAI = Jian-Yang x N.

I'm kidding... I think.


After seeing the type of people he hired for DOGE... yikes.


Was DOGE ever anything more than a "get root, grab the data, and run" operation?


Maybe, but destroying USAID was an unforgivable sin. Short of nukes, rapidly turning off direct medical and food aid that people in critical need have relied on for years is objectively one of the fastest ways to kill millions of people.


Don't forget the destruction of USAID and of countless projects that had the word "diversity" in their work.


I think more important than that was shutting down all investigations into Musk's companies.


It's pretty obvious now.


It was obvious at the time too.


Karpathy worked for Elon for, what, 5 years? How did he do it, if Elon is Ivan the Terrible?


Mate, wouldn’t it make sense that these rules are applied via hierarchy? If Elon respects Karpathy, he almost certainly gave him a longer leash, and Karpathy’s output was strong enough to not warrant intervention. It’s clear he did not want to stay long term, so I’m not sure this is a strong line of thinking.


It's possible. I don't know. My tone comes off as supporting Elon, and I do not, at all. I've seen first-hand almost all of these tactics while I was at <Elon Company>. I'm observing that some people seem to do OK at Elon's companies, for many years, and never seem to get the boot or be abused in other ways. Therefore, Elon is probably not quite as bad a manager as he is made out to be. That is all I am saying. Since I have firsthand knowledge, I believe my opinion has value. Those who disagree? Show me your Source of Truth. Thank you.


I don’t believe Elon is even remotely a people manager. He’s a stakeholder and operator, which are different skill sets. He finds folks who will manage to bring the empathy he tends to lose in the pursuit of his next project. I believe your evidence may be anecdotally valuable, but let’s be clear about the dynamics of a founder/CEO.


Karpathy makes great educational content. It's not clear what industry (or academic) research he did even now, five years later.


AI comments are certainly bad for discourse on HN. But who's to be the judge of AI or human? Are you reading humanity's Jeff Dean or computerized Elon Musk? It's certainly a tricky situation to be in!


It's fine, except for their argument that it makes people less safe. If they want to disallow encryption, they don't need to lie to people while they're at it.


OpenAI's just trading equity for GPU credits at this point?

