Hey, thanks for responding. You're a very evocative writer!

I do want to push back on some things:

> We treat "cognitive primitives" like object constancy and causality as if they are mystical, hardwired biological modules, but they are essentially just

I don't feel like I treated them as mystical - I cite several studies that define what they are and correlate them with specific brain structures that developed millennia ago. I agree that ultimately they are "just" fitting to patterns in data, but the patterns they fit are genuinely useful, and they were fundamental to human intelligence.

My point is that these cognitive primitives are very useful for reasoning, and especially for the sort of reasoning that would allow us to call an intelligence general in any meaningful way.

> This "all-at-once" calculation of relationships is fundamentally more powerful than the biological need to loop signals until they stabilize into a "thought."

The argument I cite is from complexity theory. It's a proof that fixed-depth feed-forward networks are mathematically incapable of representing certain kinds of algorithms.
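
To give that result some concrete flavor, here is a toy instance - the encoding and task are my own illustration, not taken from the paper I cite. Composing a long chain of permutations of five elements is the word problem for S5, which Barrington's theorem makes NC^1-complete, and NC^1 is widely believed to be strictly larger than TC^0, the class a single constant-depth forward pass falls into. A sequential loop solves it trivially; a fixed-depth circuit can't for arbitrary input lengths.

    # Toy illustration only (encoding and task are mine, not from the
    # cited paper). Composing permutations of 5 elements is the word
    # problem for S5: NC^1-complete by Barrington's theorem, and believed
    # to lie outside TC^0, where a single constant-depth pass lives.
    from functools import reduce

    def compose(p, q):
        # (p . q)[i] = p[q[i]]: apply q first, then p.
        return tuple(p[q[i]] for i in range(len(p)))

    def is_identity_product(perms):
        # Decide whether the product of the whole sequence is the identity.
        identity = tuple(range(5))
        return reduce(compose, perms, identity) == identity

    three_cycle = (1, 2, 0, 3, 4)   # 0 -> 1 -> 2 -> 0
    its_inverse = (2, 0, 1, 3, 4)
    print(is_identity_product([three_cycle, its_inverse]))   # True
    print(is_identity_product([three_cycle, three_cycle]))   # False

The point of the loop is the point of the proof: each step depends on the result of the previous one, and there is no fixed depth that absorbs an unbounded number of such steps.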

> Furthermore, the obsession with "fragility"—where a model solves quantum mechanics but fails a child’s riddle—is a red herring.

AGI can solve quantum mechanics problems, but verifying that those solutions are correct still (currently) falls to humans. For the time being, we are the only ones whose reasoning is robust enough to rely on, and it is exactly because of this that fragility matters!


> The argument I cite is from complexity theory. It's a proof that fixed-depth feed-forward networks are mathematically incapable of representing certain kinds of algorithms.

Claiming FFNs are mathematically incapable of certain algorithms misses the fact that an LLM in production isn't a static circuit, but a dynamic system. Once you factor in autoregression and a scratchpad (CoT), the context window effectively functions as a Turing tape, which sidesteps the TC0 complexity limits of a single forward pass.
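
Here's a minimal sketch of that claim, with a toy step function standing in for a forward pass - the names and the unary-counting task are invented for this example. Each pass does a bounded amount of work, but because every output is written back into the context, the loop can run for as many steps as the task needs. That unbounded read/write workspace is what the Turing-tape analogy points at.

    # Illustrative sketch only: autoregressive decoding as a read/write
    # tape. step() stands in for one fixed-depth forward pass; the names
    # and the counting task are invented for this example.

    def step(context):
        # One bounded pass: read the tape, emit a single token.
        n = int(context[0])
        ticks = context.count("tick")
        return "tick" if ticks < n else "<halt>"

    def generate(prompt):
        context = [prompt]              # the prompt seeds the tape
        while context[-1] != "<halt>":
            token = step(context)       # constant-depth computation
            context.append(token)       # write back onto the tape
        return context

    print(generate("3"))  # ['3', 'tick', 'tick', 'tick', '<halt>']

Formal versions of this argument exist in the expressivity literature; the cartoon above only captures their shape, but the shape is the point - the depth limit applies per pass, not to the loop.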

> AGI can solve quantum mechanics problems, but verifying that those solutions are correct still (currently) falls to humans. For the time being, we are the only ones whose reasoning is robust enough to rely on, and it is exactly because of this that fragility matters!

We haven't "sensed" or directly verified things like quantum mechanics or deep space for over a century; we rely entirely on a chain of cognitive tools and instruments to bridge that gap. LLMs are just the next layer of epistemic mediation. If a solution is logically consistent and converges with experimental data, the "robustness" comes from the system's internal logic.



