> no one is going to care about anyone’s bespoke manual keyboard entry of code if it takes 10 times as long to produce the same functionality with imperceptibly less error.
No one is going to care about anyone’s painstaking avoidance of chlorofluorocarbons if it takes ten times as long to style your hair with imperceptibly less ozone hole damage.
This is a non-argument. All of the cloud LLMs are going to move to things like micronuclear, and the scientific advances AI might enable may also help avoid downstream problems from the carbon footprint.
> It's clear that doing it by hand would mostly be because you enjoy the process.
This is gaslighting. We're only a few years into coding agents being a thing. Look at the history of human innovation and tell me that I'm unreasonable for suspecting that there is an iceberg worth of unmitigated externalities lurking beneath the surface that haven't yet been brought to light. In time they might be, like PFAS, ozone holes, and global warming.
> Claude and GPT regularly write programs that are way better than what I would’ve written
Is that really true? Like, if you took the time to plan it carefully, dot every i, cross every t?
The way I think of LLMs is as "median targeters" -- they reliably produce output at the centre of the bell curve from their training set. So if you're working in a language that you're unfamiliar with -- let's say I wanted to make a todo list in COBOL -- then LLMs can be a great help, because the median COBOL developer is better than I am. But for languages I'm actually versed in, the median is significantly worse than what I could produce.
So when I hear people say things like "the clanker produces better programs than me", what I hear is that you're worse than the median developer at producing programs by hand.
A lot of computer users are domain experts in something like chemistry or physics or material science. Computing to them is just a tool in their field, e.g. simulating molecular dynamics, or radiation transfer. They dot every i and cross every t _in_their_competency_domain_, but the underlying code may be a horrible FORTRAN mess. LLMs potentially can help them write modern code using modern libraries and tooling.
My go-to analogy is assembly language programming: it used to be an essential skill, but now is essentially delegated to compilers outside of some limited specialized cases. I think LLMs will be seen as the compiler technology of the next wave of computing.
The difference is that compilers involve rules we can enumerate, adjust, etc.
Consider calculators: their consistency and adherence to requirements was necessary for adoption. Nobody would be using them if they gave unpredictably wrong answers, or if calculations involving 420 and 69 somehow kept yielding 5318008. (To be read upside down, of course.)
But that's the point: an LLM is a vastly different object to a calculator. It's a new type of tool, for better or worse, based on probabilities and distributions.
If you can internalise that fact and look at it as giving a probable answer rather than an exact answer, it makes sense.
Calculators can't have a stab at writing an entire C compiler. A lot of people can't either, or it takes a lot of iteration; no one one-shotted complicated code before LLMs either.
I feel the discussion shouldn't treat how they work as the fundamental objection, but rather focus on the costs and impacts they have.
It can certainly be true for several reasons. Even in domains I'm familiar with, often making a change is costly in terms of coding effort.
For example, just recently I updated a component in one of our modules. The work was fairly rote (in this project we are not allowed to use LLMs). While the update was absolutely necessary here, it would also have been beneficial everywhere else. I didn't do it in those other places because I couldn't justify spending the effort.
There are two sides to this - with LLMs, housekeeping becomes easy and effortless, but you often err on the side of verbosity because it costs nothing to write.
But much less thought goes into every line of code, and I am often kind of amazed at how compact and rudimentary the (hand-written) logic is behind some of our stuff that I thought would be some sort of magnum opus.
When in fact the opposite should be the case - every piece of functionality you don't need right now, will be trivial to generate in the future, so the principle of YAGNI applies even more.
I can agree with that. So essentially: "Claude and GPT regularly write programs that are way better than what I would’ve written given the amount of time I was willing to spend."
How much time and effort are you willing to spend on maintaining that code, though? The AI can't do it on its own, and the code quality is bad enough that maintenance matters.
Have you tried the latest models at best settings?
I've been writing software for 20 years, Rust for 10 of those. I don't consider myself a median coder, but quite above average.
For the last two years or so, I've been trying out changes with AI models every couple of months, and they have been consistently disappointing. Sure, with edits and many prompts I could get something useful out of them, but often I would have spent the same amount of time or more than coding manually.
So yes, while I love technology, I'd been an LLM skeptic for a long time, and for good reason: the models just hadn't been good. While many of my colleagues used AI, I didn't see the appeal. It would take more time and I would still have to think just as much, while it made so many mistakes everywhere that I had to constantly ask it to correct things.
Then, five months or so ago, this changed, as the models actually figured it out. The February releases of the models sealed things for me.
The models are still making mistakes, but their number and severity are lower, and the output fits the specific coding patterns in that file or area. It won't import a random library but will use the one that was already imported. If I ask it not to do something, it complies (earlier iterations just ignored me, which was frustrating).
At least for the software development areas I'm touching (writing databases in Rust), LLMs have turned into a genuinely useful tool where I am now able to use the fundamental advantages the technology offers, i.e. writing 500 lines of code in 10 minutes, reducing something that would have taken me two to three days before to half a day (since of course I still need to review it and fix the mistakes/wrong choices the tool made).
Of course this doesn't mean that I am now 6x faster at all coding tasks, because sometimes I need to figure out the best design or such.
I am talking about Opus 4.6 and Codex 5.3 here, at high+ effort settings, and not about the tab autocompletion or the quick-edit features of the IDEs, but the agentic feature, where the IDE can actually spend some effort thinking about what I, the user, meant with my less specific prompt.
I feel like we're talking about different things. You seem to be describing a mode of working that produces output that's good enough to warrant the token cost. That's fine, and I have use cases where I do the same. My gripe was with the parent poster's quote:
> Claude and GPT regularly write programs that are way better than what I would’ve written
What you're describing doesn't sound "way better" than what you would have written by hand, except possibly in terms of the speed that it was written.
Yeah, it writing stuff that's way better than mine is not the case for me, at least for areas I'm familiar with. In areas I'm not familiar with, it's way better than what I could have produced.
> I am talking about Opus 4.6 and Codex 5.3 here, at high+ effort settings
So you have to burn tokens at the highest available settings to even have a chance of ending up with code that's not completely terrible (and then only in very specific domains), but of course you then have to review it all and fix all the mistakes it made. So where's the gain exactly? The proper goal is for those 500 lines to be almost always truly comparable to what a human would've written, and not turn into an unmaintainable mess. And AIs aren't there yet.
No. I'm a pretty skilled programmer, and I definitely have to intervene and fix an architectural problem here and there, or gently chastise the LLM for doing something dumb. But there are also many cases where the LLM has seen something that I completely missed, or has just hammered away at a problem enough to get a correct solution that I would have given up on earlier.
The clanker can produce better programs than me because it will just try shit that I would never have tried, and it can fail more times than I can in a given period of time. It has specific advantages over me.
"Ah, now here's the thing about that. I wrote this custom 'surveillance countermeasures' tool a few years ago and have been using it ever since. It constantly emits false data about me to confuse these sorts of data collection services. Funny that it thinks I would like prunes."
> Also, I would like to point out that almost all women have had more than 0 sexual partners before wedding. Hence your statement would actually be kinda correct if you remove the "recklessly".
Having premarital sex is not everyone's definition of "promiscuous".
I agree, which itself is also contributing to falling birth rates. I think everyone in this thread is imagining me as a bitter incel being outraged by not getting the attention I supposedly deserve, which couldn't be further from the truth.
I'm merely observing a lot of factors which in aggregate can unquestionably be seen as causing this.
The reality is that traditional gender roles were very positive in the context of reproduction, which was literally the first sentence of my first comment.
It is not a judgement on whether we should aim to revert to them; it's just factual.
Arguing against that is basically at a level of arguing that water isn't wet.
Now to link this back to the discussion at hand: a significant chunk of society would consider premarital sex with people whom they aren't planning to marry to be promiscuous. And those people are part of the population which would've become families in a different age.
You keep claiming the things you're saying are unarguable and as obvious as water being wet, in a thread of folks repeatedly talking about the nuances and differences.
Birth rates going down seems to be a thing. That's about all I agree are facts here. I struggle to even meet you at "traditional gender roles" like that's some universal constant - is that Protestant America? Catholic Ireland? Is that one of the Chinese dynasties? Sub-Saharan African tribal society?
I think, like most things, it's unlikely you've found the "as obvious as water being wet" single smoking gun to a broader solution.
Social pressure to marry young and breed will clearly have an effect on birth rates. I'd be surprised if anyone would disagree there, all other things being equal. It feels ridiculous to assert that is the only possible influence and even more ridiculous to assert one particular set of social norms is the only way back. I know so many people that don't fit this incredibly narrow view, including everything from "traditional" couples not wanting kids (for lots of different reasons from money to global stability to being jaded to genuinely not caring) to very very not "traditional" people who ARE having kids.
If this is worth talking about I think it's worth taking in more info than just blaming resentment over women being more empowered over their own lives (or more slutty or more undesirable or however you want to frame it).
You're saying it's de facto selfish to not have kids? What if someone can't have kids?
In reality everyone who's thinking about having kids exists on a spectrum of what's possible: either it's going to be really easy for you (because you're Elon Musk and you don't give a fuck) or it's going to be borderline impossible (because you're infertile, or you're broke, or whatever).
Just because someone looked at the odds and said "you know what, maybe this isn't a great idea" doesn't make them selfish. Meanwhile you're the one imposing your worldview on them...
Not to put words in their mouth, but I think part of the poster's point is that inability is a more complex equation than simple biological capacity. A couple who judges it economically risky or otherwise irresponsible to start a family (which describes a wide swath of the population) could, for example, consider themselves unable.
No, this is a pretty typical conversation on the Internet these days: someone takes a relatively well-defined stance on an issue, and then someone else wildly misinterprets or misrepresents it, just to get in a dig at the original person for... Unclear reasons.
It's either terrible reading comprehension, an inability to understand nuance, or just plain trolling. None of these lead to productive conversations.
For me, it's tabs-vs-spaces, but doesn't every codebase have its own peeing-against-the-wind patterns that are necessary for some historical reason or another? What's the way to mitigate this trend towards the center, other than throwing up my hands and admitting defeat?
Absolutely not. Every codebase has some nuances, but the majority follows very similar rules and patterns. There is nothing to mitigate; just don't expect that something trained on A would suddenly be good at B.