Hacker News | dannersy's comments

Yeah, it is wild seeing with my own eyes how bad these tools are in a lot of cases. We do have some vibe coders on our team, but they are basically banned from my current project because they completely ruin the design and nuke throughput. HN would have me believe I'm a Luddite who shouldn't be writing code, however. I truly do not understand how to reconcile this experience, and many times it is too complicated a topic to explain to someone who isn't an engineer. AI is the ultimate Dunning-Kruger machine. You cannot fix what you do not know, because you do not know that you did not know.

As you say, I think things are just going to fall apart and we're just going to have to learn the hard way.


No, these tools are really great in a lot of cases. But they still don't have general intelligence or true understanding of anything, so if people use them wrong and rely on their output because it looks good and not because they verified it, then that is on the people using them.

I mean, that is fine, but then it seems like people at large are not using them "right". I think you'll find that because these tools are convenient and produce a lot of code, measured in lines, verification goes out the window. Due diligence was hard enough before these tools existed.

This is insane.

I feel like a crazy person, especially when I read HN. Half or more of the comments on this thread are saying how the game is over for even writing code. Then at my job, I see people break things at a rate I can't personally keep up with. Worse, I hear more and more colleagues talk about mandated AI tooling usage and massive regression rates. My company isn't there yet, but I feel it is around the corner.

> People say OpenAI is burning money and is on the verge of collapse. The same people will say OpenAI building an ads business on ChatGPT is "enshittification". These people are quite insufferable, no offense to the many who are exactly as I described.

I guess I should ignore the evidence of my own eyes? If it provided the value everyone says it does, then charging the amount you would otherwise generate in ad revenue doesn't seem like a huge ask. But that's not the objective, is it? All the players want to become the de facto AI provider, and they know bait-and-switch tactics are all they have.

This sentiment comes off as an abusive relationship with the tech industry: rewarding new ways to define a race to the bottom. We never demand or expect better, just gladly roll over and throw money at our new keeper. It's sad.


> If it provided the value everyone says it does, then charging the amount of what you would generate for ad revenue doesn't seem like a huge ask.
The vast majority of YouTube viewers do not pay for Premium. No one pays for a Google Search premium. No one pays for Instagram, Facebook, or WhatsApp.

There is a certain class of services that works best with an ads-driven business model. ChatGPT is one of them.

If Google and all the other search engines locked search behind a subscription, it would do a great disservice to the world, since it would mean the poor can't use it.


Except that this product isn't comparable to YouTube at all. Contrary to your point, whole businesses are popping up because people are paying for search engines, feeling that Google's results are insufficient for serious search. I'm not sure this is a proper comparison.

The blog isn't even necessarily anti-AI yet the majority of responses here are defending it like the author kicked their dog.

The sentiment that developers shouldn't be writing code anymore means I cannot take you seriously. I see these tools fail on a daily basis and it is sad that everyone is willing to concede their agency.


They don't care about good code, but they do pay people a lot of money to care about good code. If the people you hired didn't care, our software quality would be worse than it is. And since people are caring less in the face of AI, it is getting worse.

This is about to get substantially worse as companies introduce more AI into their workflows.

Because if you don't know the language or the problem space, there are footguns in there that you can't find; you won't know what to look for. Only when you try to actually use this in a production environment will the issues become evident. At that point, you'll have to either know how to read and diagnose the code, or keep prompting until you fix it, which may introduce another footgun that you didn't know that you didn't know.
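The comment doesn't name a specific footgun, so here is a hypothetical illustration of the kind of thing meant: Python's mutable-default-argument trap, a bug that passes a quick "it works" test but silently shares state across calls in production. The function names here are made up for the sketch.

```python
# Looks fine and passes a one-off test, but the default list is created
# ONCE at function definition time and shared by every call.
def append_event(event, log=[]):  # buggy: one shared list for all calls
    log.append(event)
    return log

a = append_event("start")
b = append_event("stop")
# b is ["start", "stop"] -- and so is a, since both names point at the
# same shared default list.

# The idiomatic fix: default to None and create a fresh list per call.
def append_event_fixed(event, log=None):
    if log is None:
        log = []
    log.append(event)
    return log
```

If you don't already know Python, nothing about the buggy version looks wrong, which is exactly the point: you can't review for a footgun you've never heard of.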

This is what gets me. The tools can be powerful, but my job has become a thankless effort in pointing out people's ignorance. Time and again, someone prompts something in a language or problem space they don't understand, it "works", and then it hits a snag because the AI muddled over a very important detail. Then we're back to the drawing board, because that snag turned out to be an architectural blunder that didn't scale past "it worked in my very controlled, perfect-circumstances test run."

It is getting really frustrating seeing this happen on repeat. Instead of realizing they need to get their hands dirty, people just keep prompting more and more slop, making my job more tedious. I am basically at the point where I'm looking for new avenues for work. I say let the industry run rampant with these tools; I suspect I'll be getting a lot of job offers a few years from now, as everything falls apart and $10k a day of prompting fixes one bug only to cause multiple regressions elsewhere. I hope you're all keeping your skills sharp for the energy crisis.


Before LLMs, I watched in horror as colleagues copy-paste-ran Stack Overflow solutions in the terminal without even reading them.

LLM agents are basically the same, except now everyone is doing it. They copy-paste-run lots of code without meaningfully reviewing it.

My fear is that some colleagues are getting more skilled at prompting but less skilled at coding and writing. And the prompting skills may not generalize much outside of certain LLMs.


You're cherry-picking. Open-world games aren't as compelling anymore since the novelty is wearing off. I can cherry-pick, too: Starfield, in all its grandeur, is pretty boring.

And users may not care about code directly, but they definitely do indirectly. Less optimized, more off-the-shelf solutions have brought a stark decrease in performance, while allowing game development to be more approachable.

LLMs saving engineers and developers time is an unfounded claim, because immediate results do not mean a net positive. Actually, I'd argue that any software engineer worth their salt knows intimately that more immediate results usually come at the expense of long-term sustainability.


Starfield is boring because of the bad writing, and because they made a space exploration game with loading screens between the planet and space, where you don't actually explore space.

They fundamentally misunderstood what they were promising; it's the same as making a pirate game where you never steer the ship or drop anchor.

You can prove people are not bored with the concept: new gamers still start playing Fallout: New Vegas or Skyrim today, despite them being old and janky.


This is why Sid Meier's Pirates [0] remains such a great game.

It was really a combination of mini-games:

- you got to steer a ship (or fleet of ships) around the Caribbean

- ship to ship combat

- fencing

- dancing (with the Governors' daughters)

- trading (from port to port, or with captured goods)

- side quests

Each time I played it with my oldest, it felt like a brand new game.

https://en.wikipedia.org/wiki/Sid_Meier%27s_Pirates!


Played this as a kid, genuinely great gameplay loop and felt very immersive at the time.

I think my point stands. Procedural generation is a tool that usually works best when it is supplementary. What makes New Vegas an amazing game is all the hand built narratives and intricate storylines. So yeah, I agree, Starfield is boring because of the story. But if the procedural vastness was interesting enough to not be boring, then we wouldn't be talking about this to begin with.


Starfield wasn't procedural vastness, though; No Man's Sky is. What Starfield was is handmade content, then a loading screen, then a minigame, then a loading screen, then a small procedural "instance"/"dungeon", not a vast seamless world to explore.

I'm inclined to say that if Bethesda used LLMs for stories based on known best-selling books, it would be better than the garbage created by so-called "modern script writers".

The same could be said about Hollywood movies and series.

When agenda is more important than fun, books, movies, and games are not a labour of love but neglect.


Yeah, I mean, I think procgen is cool tech, but there's a reason we don't talk about Daggerfall the same way we talk about Morrowind.


Agreed.


> Starfield in all its grandeur is pretty boring.

And yet "No Man's Sky" is massively popular.

> any software engineer worth their salt knows intimately that more immediate results is usually at the expense of long term sustainability.

And any software engineer worth their salt realizes there are hundreds if not thousands of problems to be solved, and trying to paint a broad picture of development is naive. You have only seen 1% (at best) of the current software development field, and yet you're confidently saying that a tool used by a large part of it isn't actually useful. You'd have to have a massive ego to categorically tell thousands of other people that what they're doing is both wrong and not useful, and that the things they are seeing aren't actually true.


No Man's Sky got better as they were more intentional with their content. The game has more substance and a lot of that had to be added by hand. It is dropped in procedurally but they had to touch it up, manually, to make it interesting. Let's not revise history.

I don't think it has anything to do with ego. There are studies on the topic of AI and productivity, and I assume we have a way to go before we can say anything concrete. You're putting words in my mouth; I said nothing about what people are doing being wrong or not useful. I said the claim that generative AI is making engineers more productive is an unfounded one. The code you shit out isn't where the work starts or ends. Taking expedient solutions and facing potentially more work later isn't even a claim specific to software; I could make that claim about life.

You need to evaluate what you read rather than putting your own twist on what I've said.


You said:

> LLMs saving engineers and developers time is an unfounded claim

By whom exactly? If I say it saves me time, and another developer says the same, and so on, then it is categorically not unfounded. In fact, it's the opposite.

You've completely missed the point if you don't understand how it comes across to tell other people that their own experience in such a large field is "unfounded" simply because it doesn't line up with yours.

> we have a way to go before we can say anything concretely

No, YOU do. It's quite apparent to me how it can save time in the myriad of things I need to perform (and have been performing) as a software developer.


Anecdotal evidence; how scientific of you. When I say it's unfounded, I'm saying it hasn't been proven with actual research and data. So when you ask "by whom?", that's exactly my point: it is unfounded. That's what the word means; no one has made a claim, backed by data, that AI is making significant waves in productivity. I don't think I've missed the point at all, but it seems I hit an emotional nerve with you, so the conversation is over.

Do I have to explain to another adult (presumably) what the word "unfounded" means? Are you purposely ignoring the hundreds of articles popping up on this site demonstrating the capabilities of these tools? Are they all lying?

Let us not pretend that they won't be used for war eventually. If they cave immediately under pressure, then this is an inevitability.

