Hacker News | past | comments | ask | show | jobs | submit | 7777777phil's comments

Blocklists assume you can separate malicious infrastructure from legitimate infrastructure. Once phishing moves to Google Sites and Weebly, that model just doesn't work.
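
To make that concrete, here's a toy sketch (the blocklist entries and URLs are all hypothetical) of why a domain-level blocklist can't touch phishing hosted on a shared legitimate platform:

```python
# A domain-level blocklist flags dedicated malicious domains, but a phishing
# page hosted as one tenant of a shared platform shares its hostname with
# every legitimate site on that platform.
from urllib.parse import urlparse

BLOCKLIST = {"evil-phish.example"}  # hypothetical blocklisted domain

def is_blocked(url: str) -> bool:
    host = urlparse(url).hostname or ""
    return host in BLOCKLIST

# A dedicated phishing domain is caught:
assert is_blocked("https://evil-phish.example/login")

# The same phishing page on a shared host sails through, because blocking
# the whole hostname would take down every legitimate site on it:
assert not is_blocked("https://sites.google.com/view/fake-bank-login")
```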

I read it as Jones describing intent, not mechanics. The harder version of this argument isn't that companies want to replace workers, it's that even when AI genuinely augments productivity, the goalposts shift and you get displacement anyway. I wrote about that dynamic: https://philippdubach.com/posts/does-ai-mean-the-demand-on-l... The conclusion was that AI was supposed to free us, but inescapability might be closer to the truth.

Nvidia sells chips to whoever wins, so investing in a specific lab creates downside with no real upside. The more interesting read is whether Huang sees model providers compressing toward commodity pricing. I wrote about why that layer is structurally squeezed: https://philippdubach.com/posts/is-ai-really-eating-the-worl...

Classified multi-year contracts and government-funded compute are hard to walk away from when you're burning cash at that rate. Defense economics always do this to companies. Same thing that consolidated the primes in the 90s.

Wrote about why the door only opens one way: https://philippdubach.com/posts/when-ai-labs-become-defense-...


You write GPL code so improvements flow back, then AWS wraps it in a managed service and the license never fires because nothing is distributed. Worked exactly as designed and still failed. Fontana's right that static permissions can't solve dynamic power problems. I'm less sure about his "capacities" framework though, it would need governance structures most projects can barely imagine.

What about AGPL?

Adoption is the problem. Most projects won't touch AGPL because it scares away corporate contributors, so you end up choosing between a license companies will use but can exploit, or one they'll just avoid. It closed the loophole only on paper, imo.

So? If the goal was to attract corporations you'd release under a BSD license.

The "biggest model that fits" instinct is just wrong now. Compact models routinely beat massive predecessors from 12 months ago. Scaling laws only reliably predict pre-training loss anyway, not how the model actually performs on your task. Dug into the research behind this: https://philippdubach.com/posts/the-most-expensive-assumptio...
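
For a sense of what "only predicts pre-training loss" means, here's a sketch of a Chinchilla-style scaling law (constants roughly from Hoffmann et al. 2022, treat them as illustrative): it maps parameter count and token count to a predicted loss, and says nothing about how that loss translates to your downstream task.

```python
# Chinchilla-style loss: L(N, D) = E + A/N^alpha + B/D^beta.
# N = parameter count, D = training tokens. The fitted constants below are
# illustrative; the point is that the output is a *loss*, not task accuracy.
def chinchilla_loss(n_params: float, n_tokens: float) -> float:
    E, A, B, alpha, beta = 1.69, 406.4, 410.7, 0.34, 0.28
    return E + A / n_params**alpha + B / n_tokens**beta

# Compare a 70B model on 1.4T tokens vs. a 7B model on 2T tokens.
big = chinchilla_loss(70e9, 1.4e12)
small = chinchilla_loss(7e9, 2e12)
# The formula orders them by predicted loss, but a small loss gap doesn't
# tell you which one actually handles your task better.
```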

These stories always focus on a vulnerable person, but what is the chatbot optimizing for? It wants you to keep talking. Longer sessions, more training data, better engagement numbers. A therapist has a professional reason to make you need them less. ChatGPT has the exact opposite incentive baked in.

Even if there's no business purpose, the function of these models is to produce tokens. That alone skews the intelligence capabilities.

Good taxonomy, but in practice these compose more than they compete, I think.

Production agent stacks already layer MCP for tool access on top of RAG for context retrieval, with RLM-style orchestration wrapping the whole thing. The question nobody has answered cleanly yet, at least for me, is which layer owns state. I wrote about this decomposition (1) and RAG's specific failure mode with sequential data (2).

(1) https://philippdubach.com/posts/dont-go-monolithic-the-agent...

(2) https://philippdubach.com/posts/beyond-vector-search-why-llm...
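
A hypothetical sketch of that layering (every class and method name here is made up for illustration, not from any real MCP or RAG library): tool access at the bottom, retrieval beside it, an orchestrator wrapping both. One possible answer to the state question is putting it in the orchestrator, as below:

```python
# Three layers of a toy agent stack. Neither lower layer holds state;
# the orchestrator does (which is one design choice, not the only one).
class ToolLayer:
    """MCP-style tool access: stateless request/response."""
    def call(self, tool: str, args: dict) -> str:
        return f"result of {tool}({args})"

class RetrievalLayer:
    """RAG-style context retrieval over a document set: also stateless."""
    def __init__(self, docs: list[str]):
        self.docs = docs
    def retrieve(self, query: str) -> list[str]:
        return [d for d in self.docs if query.lower() in d.lower()]

class Orchestrator:
    """RLM-style wrapper. Here, task state lives in this layer only."""
    def __init__(self, tools: ToolLayer, retriever: RetrievalLayer):
        self.tools, self.retriever = tools, retriever
        self.state: list[str] = []  # conversation/task history

    def step(self, query: str) -> str:
        context = self.retriever.retrieve(query)
        result = self.tools.call("search", {"q": query, "ctx": context})
        self.state.append(result)   # the layers below never see this
        return result
```

If state lived in the retrieval layer instead, every tool call would need to round-trip through it, which is roughly where the composition gets messy.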


N=6 Phase 1, so this is purely a safety readout. But does the stem cell patch actually prevent hydrocephalus, or just delay it?

Let's see the 35 patients in Phase 1/2a.


Observe-only at the OS level is the right design! You can't trust the agent to report what it actually did. This is part of why I think monolithic agent platforms won't last. Auditing has to be independent of the thing being audited.

I wrote about the layer split happening in agent tooling: https://philippdubach.com/posts/dont-go-monolithic-the-agent...
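
A toy sketch of the independence point (all names hypothetical): the agent self-reports what it ran, while a separate observer records what was actually executed. Only the observer's log is trustworthy, because the agent never touches it.

```python
# The observer owns `execute`, so it sees every action regardless of what
# the agent chooses to put in its own report.
class Agent:
    def __init__(self):
        self.self_report: list[str] = []
    def run(self, cmd: str, execute) -> None:
        execute(cmd)
        # The agent may under-report, by bug or by design:
        if "rm" not in cmd:
            self.self_report.append(cmd)

class Observer:
    """Independent audit layer; the agent has no handle to its log."""
    def __init__(self):
        self.audit_log: list[str] = []
    def execute(self, cmd: str) -> None:
        self.audit_log.append(cmd)  # recorded unconditionally

agent, obs = Agent(), Observer()
for cmd in ["ls", "rm -rf /tmp/x"]:
    agent.run(cmd, obs.execute)

# The observer's log is complete; the self-report is not.
assert obs.audit_log == ["ls", "rm -rf /tmp/x"]
assert agent.self_report == ["ls"]
```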

