
Only if you accept the premise that the code generated by LLMs is identical to the developer's output in quality, just higher in volume. In my lived professional experience, that's not the case.

It seems to me that prompting agents and reviewing the output just doesn't... trigger the same neural pathways for people? I constantly see people submit agent-generated code with mistakes they would never have made themselves when "handwriting" code.

Until now, the average PR had one author and a couple of reviewers. From now on, most PRs will have no authors and only reviewers. We simply have no data about how this will impact both code quality AND people's cognitive abilities over time. If my intuition is correct, it will affect both negatively. It remains to be seen. It's definitely not something the AI hyperenthusiasts think about at all.




> In my lived professional experience, that's not the case.

In mine it is the case. Anecdata.

But for me, this was over two decades in an underpaid job at an S&P 500 company writing government software, so maybe you had better peers.


I stated plainly: "we have no data about this". Vibes are all we have.

It's not just me, though. Loads of people subjectively perceive a decrease in the quality of engineering when relying on agents. You'll find thousands of examples on this site alone.


I have yet to find an agent that writes as succinctly as I do. That said, I have found agents more than capable of doing something.


