I had the same theory back when ARC-AGI-2 came out, and surprisingly, encoding it into text didn't help much - LLMs just have a huge blind spot around spatial reasoning, in addition to being bad at vision. The sorts of logic and transformations involved here just don't show up much in the training data (yet).
I still agree that this is like declaring blind people lack human intelligence, of course.
Maybe, but destroying USAID was an unforgivable sin. Short of nukes, rapidly turning off direct medical and food aid that people in critical need have relied on for years is objectively one of the fastest ways to kill millions of people.
Mate, wouldn't it make sense that these rules are applied via hierarchy? If Elon respects Karpathy, he almost certainly gave him a longer leash, and Karpathy's output was strong enough not to warrant intervention. It's clear he did not want to stay long term, so I'm not sure this is a strong line of thinking.
It's possible. I don't know. My tone comes off as supporting Elon, and I do not, at all. I've seen first-hand almost all of these tactics while I was at <Elon Company>. I'm observing that some people seem to do OK at Elon's companies, and for many years, and never seem to get the boot or be abused in other ways. Therefore, Elon is probably not quite as bad a manager as he is made out to be. This is all I am saying. Since I have firsthand knowledge, I believe my opinion has value. Those that disagree? Show me your Source of Truth. Thank you.
I don't believe Elon is even remotely a people manager. He's a stakeholder and operator, which require different skill sets. He finds folks who will do the managing and bring the empathy he tends to lose in the pursuit of his next project. I believe your evidence may be anecdotally valuable, but let's be clear about the dynamic of a founder/CEO.
AI comments are certainly bad for discourse on HN. But who's to judge what's AI and what's human? Are you reading humanity's Jeff Dean or a computerized Elon Musk? It's certainly a tricky situation to be in!
It's fine, except for their argument that it makes people less safe. If they want to disallow encryption, they don't need to lie to people while they're at it.
I would imagine if you simply encoded the game in textual format and asked an LLM to come up with a series of moves, it would beat humans.
The problem here is more around perception than anything.