
I've now gotten responses on PRs of the form: "I don't know either, this is what Copilot told me."

If you don't even understand your own PR, I'm not sure why you'd expect other people to.

I have used LLMs myself, mostly for boilerplate and one-off stuff, and I think they can be quite helpful. But as soon as you stop understanding the code they generate, you will create subtle bugs everywhere that will cost you dearly in the long run.
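
A hypothetical illustration (mine, not something that actually came from a PR): Python's mutable-default-argument trap is exactly the kind of subtle bug that reads fine at a glance and bites later if nobody actually understands the suggestion:

  def add_tag(item, tags=[]):
      # the default list is created once at definition time
      # and shared across every call that omits `tags`
      tags.append(item)
      return tags

  print(add_tag("a"))  # ['a']
  print(add_tag("b"))  # ['a', 'b'] -- state leaks across calls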

I have a strong feeling that if LLMs really do outsmart us to the degree some AI gung-ho types believe, the old Kernighan quote will take on a new meaning:

"Debugging is twice as hard as writing the code in the first place. Therefore, if you write the code as cleverly as possible, you are, by definition, not smart enough to debug it."

We'll be left with code nobody can debug because it was handed to us by a super-smart AI that only hallucinates sometimes. We'll take another AI's word that the code works. And then we'll hope for the best.

Copilot is best for me when it's letting me hit tab to auto-complete something I was going to write anyways.

Copilot does hit that sweet spot sometimes.