I wish I could upvote this comment more than once. There does appear to be a prejudice among more senior programmers, arguing why it cannot work, how these tools just cause more trouble, and other various complaints. The tools today are not perfect, but they still amaze me with what they can accomplish; even a 10% gain is incredible for something that costs $10/month. I believe progress will be made in the space, and the tooling in 5 years will be even better.
The prejudice comes down to whether they want to talk the LLM into the right solution or just apply what they already know. If you already know your way around, there's no need to go through the LLM. I think senior devs tend to be more task focused, so the idea of outsourcing the thinking to an LLM feels like one more step to take on.
I find Claude good at helping me find how to do things that I know are possible but I don’t have the right nomenclature for. This is an area where Google fails you, as you’re hoping someone else on the internet used similar terms as you when describing the problem. Once it spits out some sort of jargon I can latch onto, then I can Google and find docs to help. I prefer to use multiple sources vs just LLMs, partially because of hallucination, but also to keep amassing my own personal context. LLMs are excellent as librarians.
The trouble is that they seem to be getting worse. Some time ago I was able to write an entire small application by simply providing some guidance around function names and data structures, with an LLM filling in all of the rest of the code. It worked fantastically and really showed how these tools can be a boon.
I want to taste that same thrill again, but these days I'm lucky if I can get something out of it that will even compile, never mind the logical correctness. Maybe I'm just getting worse at using the tools.
As a senior, I find that trying to use Copilot only gives me gains maybe half the time; the other half, it leads me in the wrong direction. Googling tends to give me a better result because I can actually move through the results more quickly. My belief is that this is because when I need help, I'm doing something uncommon or hard, whereas juniors need help doing routine stuff that has plenty of examples in the training data. I don't need help with that.
It certainly has its uses - it's awesome at mocking and filling in the boilerplate unit tests.
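For illustration (my sketch, not the commenter's code): the kind of repetitive, mock-heavy unit test an LLM can reliably fill in, written in Python with the standard library's unittest.mock. The function and names here are hypothetical.

```python
import unittest
from unittest.mock import Mock

# Hypothetical function under test: looks up a user via an injected client.
def get_username(client, user_id):
    user = client.fetch_user(user_id)
    return user["name"] if user else None

class TestGetUsername(unittest.TestCase):
    def test_returns_name_when_user_exists(self):
        # Mock replaces the real client; no database or network needed.
        client = Mock()
        client.fetch_user.return_value = {"name": "alice"}
        self.assertEqual(get_username(client, 1), "alice")
        client.fetch_user.assert_called_once_with(1)

    def test_returns_none_when_user_missing(self):
        client = Mock()
        client.fetch_user.return_value = None
        self.assertIsNone(get_username(client, 2))
```

Tests like these are almost pure pattern: set up a mock, set a return value, assert. That's exactly the well-trodden territory where autocomplete-style tools shine.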
I find their value depends a lot on what I'm doing. On anything easy I get insane leverage; no exaggeration, I'll slap that shit together 25x faster. It's seen likely billions of lines of simple CRUD endpoints, so yeah, it'll write those flawlessly for you.
Anything difficult or complex, and it's really a coin flip whether it's even an advantage; most of the time it's just distracting, giving irrelevant suggestions or bad textbook-style implementations intended to demonstrate a principle but with god-awful performance. Likely because there's simply not enough training data for these types of tasks.
With this in mind, I don't think it's strange that junior devs would be gushing over this and senior devs would be raising a skeptical eyebrow. Both may be correct, depending on what you work on.
I think for me, I'm still learning how to make these tools operate effectively. But even only a few months in, it has removed almost all the annoying work and lets me concentrate on the stuff that I like. At this point, I'll often give it some context, tell it what to make, and it spits out something relatively close. I look it over, call out like 10 things, each time it says "you're right to question...", and we do an iteration. After we're through that, I tell it to write a comprehensive set of unit tests, it does that, most of them fail, it fixes them, and then we usually have something pretty solid. Once we have that base pattern, I can have it extend variants from that first solid bit of code: "Using this pattern for style and approach, make one that does XYZ instead."
But what I really appreciate is that I don't have to do the plug-and-chug stuff. Those patterns are well defined; I'm more than happy to let the LLM do that while I concentrate on steering, checking whether it's making wise conceptual and architectural choices. It really seems to act like a higher abstraction layer. But I think how the engineer uses the tool matters too.
As a senior, you know the real problem is actually finishing a project. That's the moment all those bad decisions made by a junior need to be fixed. This also means that an 80%-done project is more like 20% done, because in its current state it cannot be finished: you fix one thing and break two more.
I am seeing that a lot - juniors who can put out a lot of code but when they get stuck they can't unstick themselves, and it's hard for me to unstick them because they have a hard time walking me through what they are doing.
I've gotten responses on PRs now of the form: "I don't know either, this is what Copilot told me."
If you don't even understand your own PR, I'm not sure why you expect other people can.
I have used LLMs myself, but mostly for boilerplate and one-off stuff. I think it can be quite helpful. But as soon as you stop understanding the code it generates you will create subtle bugs everywhere that will cost you dearly in the long run.
I have the strong feeling that if LLMs really outsmart us to the degree that some AI gung-ho types believe, the old Kernighan quote will get a new meaning:
"Debugging is twice as hard as writing the code in the first place. Therefore, if you write the code as cleverly as possible, you are, by definition, not smart enough to debug it."
We'll be left with code nobody can debug because it was handed to us by our super smart AI that only hallucinates sometimes. We'll take the word of another AI that the code works. And then we'll hope for the best.
Is that so new? We used to complain when someone blindly copied and pasted from stack overflow. Before that, from experts-exchange.
Coding is still a skill that takes years to acquire. We need to stamp out the behavior of not understanding what people take from Copilot, but the behavior itself is not new.
You're right, it isn't entirely new, but I think it's still different. You still had to figure out how that Stack Overflow snippet applied to your code; it wouldn't reshape itself to fit your existing code the way an LLM does.
Juniors don't know enough to know what problems the AI code might be introducing. It might work, and the tests might pass, but it might be very fragile, full of duplicated code, unnecessary side effects, etc., which will make future maintenance and debugging difficult. But I guess we'll be using AI for that too, so hopefully the AI can clean up the messes it made.
Now some junior dev can quickly make something new and fully functional in days, without knowing in detail what they are doing, as opposed to the weeks it originally took a senior.
Personally, I think that senior devs might fear a conflict within their identity. Hence they play the "you and the AI have no clue" card.
I have mostly seen senior programmers argue why AI tools don't work. Juniors just use them without prejudice.