
It matters what you measure. The studies only looked at Copilot usage.

I’m an experienced engineer. Copilot is worse than useless for me. I spend most of my time understanding the problem space, understanding the constraints and affordances of the environment I’m in, and thinking about the code I’m going to write. When I start typing code, I know what I’m going to write, so a “helpful” Copilot autocomplete is just a distraction for me. It makes my workflow much, much worse.

On the other hand, AI is incredibly useful for all of those steps I do before actually coding. And sometimes getting the first draft of something is as simple as a well-crafted prompt (informed by all the thinking I’ve done prior to starting). After that, pairing with an LLM to get quick answers for all the little unexpected things that come up is extremely helpful.

So, contrary to this report, I think that if experienced developers use AI well, they could benefit MORE than inexperienced developers.



Copilot isn't particularly useful. At best it comes up with small snippets that may or may not be correct, and rarely can I get larger chunks of code that work out of the gate.

But Claude Sonnet 3.5 w/ Cursor or Continue.dev is a dramatic improvement. When you have direct control over the context (i.e. being able to select 6-7 files to inject), combined with the superior ability of Claude, it is an absolute game changer.

Easy 2-5x speedup depending on what you're doing. In an hour you can craft a production-ready 100-LOC solution, with a full complement of tests, for something that might otherwise take half a day.

I say this as someone with 26 yoe, having worked in principal/staff/lead roles since 2012. I wouldn't expect nearly the same boost coming at less than senior exp. though, as you have to be quite detailed at what you actually want, and often take the initial solution - which is usually working code - and refine it a half dozen times into something that you feel is ideal and well factored.


> I wouldn't expect nearly the same boost coming at less than senior exp. though, as you have to be quite detailed at what you actually want, and often take the initial solution - which is usually working code - and refine it a half dozen times into something that you feel is ideal and well factored.

Agreed. I feel like coding with AI is distilling the process back to the CS fundamentals of data structures and algorithms. Even though most of those DS&As are very simple it takes experience to know how to express the solution using the language of CS.

I've been using Cursor Composer to implement code after writing some function signatures and types, which has been a dream. If you give it some guardrails in the context, it performs a lot better.
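A minimal sketch of that signature-first workflow (in Python, with hypothetical names, not from the thread): you write the typed stubs as guardrails, and the tool drafts the bodies.

```python
from dataclasses import dataclass

@dataclass
class Order:
    id: str
    total_cents: int
    status: str  # e.g. "pending", "paid", "cancelled"

# You write only the signature and docstring as guardrails...
def outstanding_balance(orders: list[Order]) -> int:
    """Sum of totals (in cents) for orders that are still pending."""
    # ...and the assistant fills in a body like this one:
    return sum(o.total_cents for o in orders if o.status == "pending")
```

With the types and docstring pinned down up front, a wrong completion is usually obvious at a glance, which is what makes the review step fast.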


The one thing I'm a little concerned about is my ability as an engineer.

I don't know if I'm losing or improving my skillset. This exercise of development has become almost entirely one of design and architecture, and reading more than writing code.

Maybe this doesn't matter if this is the way software is developed moving forward, and I'm certainly not complaining while working on a two-person startup.


Which do you prefer Cursor or Continue.dev?


Honestly haven't tried out Cursor yet, it looks impressive but I've heard it has some teething issues to work out. For my use case I'd end up using it very similar to how I use Continue.dev and probably pay for Claude API usage separately, which has been working out to about $12-$15 a month.


Human + ai writing tests >> human writing tests


For me, AI is like a documentation/Googlefu accelerant. There are so many little things that I know exactly what I want to do, but can't remember the syntax or usage.

For example, writing IaC especially for AWS, I have to look up tons of stuff. Asking AI gets me answers and examples extremely fast. If I'm learning the IaC for a new service I'll look over the AWS docs, but if I just need a quick answer/refresher, AI is much faster than going and looking it up.


This is exactly how I think of it as well.

Search is awful when you can't remember the exact term with your language/framework/technology - but highlighting code and asking AI helps out a ton.

Before, I'd search over and over fine-tuning my search until I get what I want. Tools like copilot make that fine-tuning process much shorter.


I find that for AWS IaC specifically, with a high pace of releases and a ton of versions dating back more than a decade, the AI answers are a great springboard but require a bit of care to avoid mixing APIs.


My experience with IaC output is that it's so broken as to be not only unhelpful but actively harmful.


Contrarian take: I feel that copilot rewards me for writing patterns that it can then use to write an entire function given a method signature.

The more you lean into functional patterns: design some monads, don’t do I/O except at the boundaries, use fluent programming, then it’s highly effective.

This is all in Java, for what it’s worth. Though, I’ll admit, I’m 3.5y into Java, and rely heavily on Java 8+ features. Also, heavy generic usage in my library code gives a lot of leash to the LLM to consistently make the right choice.

I don’t see these gains as much when using quicker/sloppier designs.

Would love to hear more from true FP users (Haskell, OCaml, F#, Scala).
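The pattern described above, pure functions with I/O pushed to the boundaries, might be sketched like this (in Python rather than Java, for brevity; the functions are illustrative, not from the thread):

```python
# Pure core: no I/O, deterministic, easy for an assistant to
# pattern-match against and extend consistently.
def normalize(words: list[str]) -> list[str]:
    """Strip whitespace, lowercase, and drop empty entries."""
    return [w.strip().lower() for w in words if w.strip()]

def count_unique(words: list[str]) -> int:
    """Number of distinct words."""
    return len(set(words))

# I/O happens only at the boundary; the core stays trivially testable.
def main() -> None:
    raw = input().split(",")
    print(count_unique(normalize(raw)))
```

Because each pure function has one visible input and one visible output, both a human reviewer and a completion model have much less hidden state to get wrong.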


I used the Copilot trial. I found myself waiting to see what it would come up with, analyzing it, and more often than not throwing it away for my own implementation. I quickly realized how much of a waste of time it was. I did find use for it in writing unit tests, and especially table-driven testing boilerplate, but that's not enough to maintain a paid subscription.
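Table-driven testing boilerplate of the kind mentioned above can be sketched like this (Python here; the function and cases are illustrative, not from the thread):

```python
def slugify(title: str) -> str:
    """Lowercase a title and join its words with hyphens."""
    return "-".join(title.lower().split())

# The "table": input/expected pairs, exactly the kind of repetitive
# boilerplate an assistant can churn out quickly for review.
cases = [
    ("Hello World", "hello-world"),
    ("  spaced   out  ", "spaced-out"),
    ("already-slugged", "already-slugged"),
]

for title, expected in cases:
    assert slugify(title) == expected, (title, expected)
```

The appeal is that each added case is a one-line diff that is trivial to verify, so generated rows cost almost nothing to review.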


Copilot isn't a worthwhile example.


I think my experience mirrors your own. We have access at my job but I’ve turned it off recently as it was becoming too noisy for my focus.

I found the tool to be extremely valuable when working in unfamiliar languages, or when doing rote tasks (where it was easy for me to identify if the generated code was up to snuff or not).

Where I think it falters for me is when I have a very clear idea of what I want to do, and it's _similar_ to a bog-standard implementation, but I'm doing something a bit more novel. This tends to happen in “reduce”s or other more nebulous procedures.

As I’m a platform engineer though, I’m in a lot of different spaces: Bash, Python, browser, vanilla JS, TS, Node, GitHub actions, Jenkins Java workflows, Docker, and probably a few more. It gives my brain a break while I’m context switching and lets me warm up a bit when I move from area to area.


> (where it was easy for me to identify if the generated code was up to snuff or not).

I think you have nailed it with this comment. I find copilot very useful for boilerplate - stuff that I can quickly validate.

For stuff that is even slightly complicated, like simple if-then-else, I have wasted hours tracking down a subtle bug introduced by copilot (and me not checking it properly)

For hard stuff it is faster and more reliable for me to write the code than to validate copilots code.


The fact that Copilot hallucinates methods/variables/classes that do not exist, in compiled languages where it could know they do not exist, is just unbelievable to me.

It really feels like the people building the product do not care about the UX.


> So, contrary to this report, I think that if experienced developers use AI well, they could benefit MORE than inexperienced developers.

A psychology professor I know says this holds in general. For any new tool, who will be able to get the most benefit out of it? Someone with a lot of skill already, or someone with less? With less skill, there is even a chance that the tool has a negative effect.


I only use Copilot and Claude to do all the boilerplate and, honestly, just the mechanical part of writing code. But I don't use it to come up with solutions. I'll do my thing understanding the problem, figuring out a solution, etc., and once I've done everything to ensure I know what needs to be written, I use AI to do most of that. It saves a hell of a lot of time and typing.


Yeah, Copilot is meh. Aider-chat for things with GPT-4 earlier this year was a huge step up.

But recently using Claude Sonnet + Haiku through OpenRouter also with aider, and it is like a new dimension of programming.

Working on new projects in Rust and a separate SPA frontend, it just ... implements whatever you ask like magic. Gets it about 90-95% right at the first prompt. Since I am pretty new to Rust, there are a lot of idiomatic things to learn, and lots of std convenience functions I don't yet know about, but the AI does. Figuring out the best prompt and context for it to be effective is now the biggest task.

It will be crazy to see where things go over the next few years... do all junior programmers just disappear? Do all programmers become prompt engineers first?


That's the point. The less experienced you are, the more gains you see, and vice versa.

The issue in the first case is that you have no idea if it tells you good stuff or garbage.

Also, in simple projects it shines; when the project is more complex, it becomes mostly useless.


I think everything you said was wrong. Cursor is amazing now with the large context windows it's capable of handling with Claude, especially in the hands of an expert programmer.

A junior writing simple code is the exact recipe for disaster when it comes to these tools.


Isn't that the opposite? The more experienced you are, the higher the gains, since you are able to see what it outputs and immediately tell if it is what you expected. You also know how to provide the best input for it to autocomplete the rest while you are already planning next steps in your head. I feel like a superhuman with it, honestly.


One thing I've been doing more of lately with Copilot is using prompts directly in a // comment. Although I distinguish this from writing a detailed comment doc about a function and then letting Copilot write the function. There's “inline prompting” and “function prompting”.
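The two styles might look like this (Python, so # instead of //; both functions are hypothetical examples, not from the thread):

```python
# "Inline prompting": a one-off comment right where you want code, e.g.
# parse "HH:MM" into total minutes
def to_minutes(stamp: str) -> int:
    hours, minutes = stamp.split(":")
    return int(hours) * 60 + int(minutes)

# "Function prompting": a fuller comment/docstring describing the whole
# function up front, after which the assistant drafts the entire body.
def clamp(value: int, low: int, high: int) -> int:
    """Return value limited to the inclusive range [low, high]."""
    return max(low, min(value, high))
```

Inline prompts tend to work for a line or two of completion; function prompts give the model enough intent to draft a whole body worth reviewing.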


I've noticed that AI is a bit like having a junior dev in a time capsule. They won't solve your problem, but they can Google, find stuff, and write simple things, all of which you'd otherwise be forced to do yourself. And they do it in minutes rather than weeks or months.


Just for clarity: are you saying that going back and forth with ChatGPT is more useful than Copilot? The reason I ask is I have both, and 95% of the benefit is ChatGPT.


I like to use copilot when writing tests. It's not always perfect but makes things less tedious for me.


I recently switched to Cursor, and am in the process of wrangling an inherited codebase that had effectively no tests. Cursor has saved me _hours_. It's generally terrible at any actual code refactoring, but it has saved me a great deal of typing while adding the missing test coverage.


Excuse my ignorance, I have avoided Copilot until now...

Does it have (some of) the other files of the project in its context when you use it in a test file?


Copilot has your open tabs in context.

Cursor has that plus whatever files you want to specifically add. Or it has a mode where you can feed it the entire project and it searches to decide which files to add.


Thanks!


Canceled copilot, using cursor now


I wish I worked at a place where it'd be enough for me to “understand the problem space” as I pull down seven figures. But those bastards also want me to code, and Copilot at least helps with the boilerplate.


But despite the theory/wish that "if experienced developers use AI well" they would gain more, at present, inexperienced developers are benefiting more, which is what the study found.



