Hacker News | andrei_says_'s comments

AGI is a sales pitch, not a realistic goal achievable by LLM-based technology. The exponential growth sold to investors is also a pitch, not reality.

What’s being sold is, at best, hopes and, more realistically, lies.


LLM technology has no connection with reality, nor any avenue providing actual understanding.

Correction of conceptual errors requires understanding.

Vomiting large amounts of inscrutable unmaintainable code for every change is not exactly an ideal replacement for a human.

We have not started to scratch the surface of the technical debt created by these systems at lightning speed.


> We have not started to scratch the surface of the technical debt created by these systems at lightning speed.

Bold of you to assume anyone cares about it. Or that it’ll somehow guarantee your job security. They’ll just throw more LLMs on it.


Amazing.

But last year’s iPhone cannot download a critical security update for last year’s iOS 18.

Shoving the horribly broken iOS 26 down our throats is not a pleasant experience, Apple.


If the photo at the bottom of the post is a photo of the OpenAI team, then it’s white bros all the way.

Meaning, these products are being created by representatives of the kind of people who carry the most privilege and are the least exposed to the negative impact of their decisions.

For example, Twitter did not start sanitizing location data in photos until women joined the team and pointed out that such data can be used for stalking.

White rich bros do not get stalked. This problem does not exist in their universe.
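The kind of sanitization described above amounts to stripping location-bearing metadata from a photo before it is published. As a minimal sketch (real implementations would use an EXIF library such as Pillow or piexif; the dict and tag names here are illustrative, not exact EXIF field IDs):

```python
# Illustrative tag names; actual EXIF stores GPS data in a dedicated GPS IFD.
GPS_TAGS = {"GPSLatitude", "GPSLongitude", "GPSAltitude", "GPSTimeStamp"}

def sanitize_exif(exif: dict) -> dict:
    """Return a copy of the metadata dict with location-bearing tags removed."""
    return {tag: value for tag, value in exif.items() if tag not in GPS_TAGS}

photo_exif = {
    "Make": "Example Camera",
    "GPSLatitude": (37, 46, 30),
    "GPSLongitude": (122, 25, 10),
}

clean = sanitize_exif(photo_exif)
print(sorted(clean))  # only non-location tags remain
```

The point of doing this server-side is that the uploader never has to remember to opt out; the platform simply refuses to republish coordinates.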


It's not OpenAI. It's ClawCon.

Am I crazy thinking that interacting with such a system is a nightmarishly frustrating way to write code?

Like trying to write with a wet noodle - always off in some way.

Writing the code myself feels way more precise and no less efficient.


Sometimes people forget that you don't have to use AI to actually write the code. You can stick to "Ask" mode and it will give you useful suggestions and generate code but won't actually modify your files.

This has been my experience as well. I get drained at the end, and I spend most of my energy and thinking capacity dealing with the LLM instead of the problem space.

Thank you for this, can’t wait to use it. Minimalism at its best.

No one being surprised at this statement is an indication of how much enshittification and betrayal we have agreed to accept.

I’d like to acknowledge the damage I carry as a human being as a result of the pressure to pretend that this is normal. Just because there don’t seem to be real alternatives in so many areas of this “free market” /s economy.


It has been a long descent. I blame advertising, because I doubt we would have put up with as much enshittification if the majority of our digital lives weren't free, paid for by advertising, like they currently are.

We live in a world where the powerful deceive us. We know they lie. They know we know they lie. They don't care. We say we care but do nothing.

My take is that sometimes we get paid to be that guy, and precision has its place and value.

We get lost when being right is seen as having value in itself - instead of improving clarity and precision where needed in a specific context.


It signals a commitment which will help me make a decision.

I do pay extra for privacy and quality.

Enshittified LLMs are not something I’m interested in.


I'm skeptical about banning sales of tobacco and alcohol products to children because children may (over)use them.

Also, do we trust adults prescribed oxycodone to manage their use?

We are speaking of weaponized addiction at planetary scale.

