Considering LLMs are models of language, investing in the clarity of the written word pays off in spades.
I don't know whether "literate programming" per se is required. Good names, docstrings, type signatures, strategic comments re: "why", a good README, and thoughtfully-designed abstractions are enough to establish a solid pattern.
Going full "literate programming" may not be necessary. I'd maybe reframe it as a focus on communication. Notebooks, examples, scripts and such can go a long way to reinforcing the patterns.
Ultimately that's what it's about: establishing patterns for both your human readers and your LLMs to follow.
Yeah, I think what is needed is somewhere between docstrings+strategic comments, and literate programming.
Basically, it's incredibly helpful to document the higher-level structure of the code, almost like extensive docstrings at the file level and subdirectory level and project level.
The problem is that major architectural concepts and decisions are often cross-cutting across files and directories, so those aren't always the right places. And there's also the question of what properly belongs in code files, vs. what belongs in design documents, and how to ensure they are kept in sync.
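To make the "establishing patterns" idea concrete, here's a toy sketch (TypeScript; every name, type, and "why" comment is invented for illustration) of the kind of file that sets a pattern both humans and LLMs can follow:

```typescript
/**
 * Pricing utilities.
 *
 * Why this module exists: discounts are applied in several services,
 * and we want one place that defines the rounding rules (inconsistent
 * rounding caused refund mismatches before; see the project README).
 */

/** A price in integer cents, to avoid floating-point drift. */
type Cents = number;

/**
 * Apply a percentage discount to a price.
 * Rounds half-up so totals match the invoicing system.
 */
function applyDiscount(price: Cents, percent: number): Cents {
  // Math.round rounds half-up for positive values, which is the
  // behavior the (hypothetical) invoicing system expects.
  return Math.round(price * (1 - percent / 100));
}

console.log(applyDiscount(1000, 15)); // 850
```

Nothing here is "literate programming", but the name, the type alias, and the "why" comments are exactly the pattern-setting the comments above describe.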
The question being - are LLMs 'good' at interpreting and making choices/decisions about data structures and relationships?
I do not write code for a living but I studied comp sci. My impression was always that the good software engineers did not worry about the code, not nearly as much as the data structures and so on.
The only use of code is to process data, aka information. And any knowledge worker knows that the success of processing information mostly depends on how it's organized (try operating a library without an index).
Most of the time is spent researching what data is available and learning what data should be returned after the processing. Then you spend a bit of brain power to connect the two. The code is always trivial. I don't remember ever discussing code in the workplace since I started my career. It was always about plans (hypotheses), information (data inquiry), and specifications (especially when collaborating).
If the code is worrying you, it would be better to buy a book on whatever technology you're using and refresh your knowledge. I keep bookmarks in my web browser and have a few books on my shelf that I occasionally page through.
Wow, the world is getting much faster at exploiting CVEs
> 67.2% of exploited CVEs in 2026 are zero-days, up from 16.1% in 2018
But the exploit rate (the pct of all published CVEs that are actually exploited in the wild) has dropped from a high of 2.11% in 2021 to 0.64% in 2026. Meaning we're either getting worse at exploitation (not likely) or reporting more obscure, pragmatically not-really-an-issue issues that can't be replicated IRL.
So we're in a weird situation:
The vast majority (99.4%) of CVEs will never see the light of day as an actual attack. Lots of noise, and getting noisier.
But those that do will happen with increasing speed! So there are increased consequences for missing the signal.
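The arithmetic behind those two claims, using only the percentages quoted above:

```typescript
// Share of published CVEs actually exploited in the wild (from the
// figures quoted in this thread).
const exploitRate2021 = 2.11; // % — the high point
const exploitRate2026 = 0.64; // % — now

// The exploit rate has fallen by roughly this factor:
const dropFactor = exploitRate2021 / exploitRate2026;
console.log(dropFactor.toFixed(1)); // "3.3"

// So the share of CVEs that never become a real-world attack:
const neverExploited = 100 - exploitRate2026;
console.log(neverExploited.toFixed(1)); // "99.4"
```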
The entire zeitgeist of software technology revolves around the assumption that making things efficient, easy, and quick is inherently good. Most people who are "sitting in front of rectangles, moving tiny rectangles" have sometimes grandiose notions of their work's importance; we're making X work better for the good of Y to enable Z. Abstract shit like that.
No man, you're just making X easier. If the world needs more X, fine. If not, whoops.
The detachment from reality makes it all too easy to deceive yourself into thinking "hey this actually helps people".
> Most people who are "sitting in front of rectangles, moving tiny rectangles"
Hey dude these are my emotional support rectangles!
Truth is, anything can be meaningful. We make our own meaning and almost anything will do as long as you believe in it. If optimizing rectangles on the screen makes you happy, that’s great. If it doesn’t, find something else to do.
It’s really just because those of us choosing this profession are also very good at optimizing chosen metrics. But we don’t always ask whether they are good metrics and whether they become counterproductive past some point.
This is one of the reasons why I'm so disgusted by the mainstream voices around AI. As if I'm going to be "left behind" because my only priority isn't increasing shareholder value or building a saas that makes the world a worse place.
Requirements handed down - never seen it in 25 years. The requirements are always fluid, by definition. At best, you get a wish list which needs to be amended with reality. If you have completely static requirements, you don't need an engineer! You just do it. Engineering IS refining the requirements according to empirical data.
Once you have requirements that are correct (for all well-defined definitions of "correct"), the code implementation is so trivial that an LLM can do it :-)
Doing things "faster" and "easier" is an interesting way to put it. It places all of the value on one's personal experience of using the AI, and completely ignores the quality of the thing produced. Which explains why most stuff produced by LLMs is throwaway garbage! It only reinforces the parent comment - there is virtually no emphasis on making things "better".
There is a funny, deep observation made by The Good Place character Michael (a non-human) that has stuck with me since. He says that humans took ice cream, which was perfect, and "ruined it a little" to invent frozen yogurt, just so they could have more of it. There's supposedly a 'guilt' angle there somewhere but I never felt guilty for eating "too much" ice cream so can't relate.
Still, this "making something worse so you can have more of it" shows up pretty much everywhere in human experience. Sometimes it's depressing, other times amazing to see what was achieved with that mentality, and it seems AI is just accelerating it.
There won't even be a quality conversation if a thing isn't built in the first place, which is the tendency when the going is slow and hard. AI makes the highly improbable very probable.
I agree. I think this is the LLM superpower: making quick prototypes that allow us to speak concretely about technical tradeoffs.
My comment was pointed at people who use AI specifically with the goal of making anything easier and faster. Doesn't matter what it is. "Faster and easier is better," as though doing more of the same shit were a primary goal in itself.
If you're using AI to explore better technical decisions, you're doing it right! AI can be a catalyst for engineering and science. But not if we treat it like a mere productivity tool. The quality of the thing enabled by the AI very much matters.
Doing things faster/easier means I now do most of these things whereas I didn't before.
Because I have limited time and energy. Take learning as an example:
I couldn't afford to spend a weekend learning the tradeoffs made by the top 5 WebGL JavaScript game engines AND generating the same demos for all of them to compare DX and performance on my phone. And as I had more questions about their implementation I would have to scavenge their code again, for each question.
A sample of the questions I had (and as I asked, it would suggest new questions for things I didn't know I should ask):
- Do they perform sorting or is their drawing immediate? Sort on z? z and y? z/y and layers? Immediate-ish + layers? Frustum culling supported? What's their implementation for it, if any?
- What are their GPU atlas strategies? Fixed size? Multiple, grouped by drawing frequency to reduce atlas switching? 2048? 4096? How many atlases? Does it build the atlas at boot or does it support progressive atlas sprite loading? How does it deal with fragmentation? What does it use for a packing algo? Skyline or something more advanced? What are their batch-splitting behaviour and performance characteristics?
- Does it help with ECS? How is their hierarchical entity DX, if any? Does it do matrix math for transformations or something simpler? Shader support? Do they use an uber-shader for most things? And what about polygons? Also, how do they help with texture bleeding? What's their camera implementation? Do they support spatial audio?
...and so on. Multiply the number of questions by at least 10.
And I asked the LLM to show me the code for each answer, on all 5 engines.
This kind of learning just wasn't feasible for me before with my busy life.
So when I say "easier" it often means "made possible".
Finally, let's not forget most of us on HN are incredibly privileged and can afford to learn frivolous things on the weekend. But for a great part of the less privileged population, having access to easier learning is LIFE CHANGING.
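For concreteness, the "sort on z? z and y? z/y and layers?" question above boils down to a comparator like this (a hypothetical TypeScript sketch, not any engine's actual code; all field names invented):

```typescript
// A sprite in a 2D scene; field names are made up for illustration.
interface Sprite {
  layer: number; // explicit draw layer (e.g. UI above world)
  z: number;     // depth within a layer
  y: number;     // screen y, for top-down "painter's" ordering
}

// Sort by layer first, then z, then y, so that within the same depth
// sprites lower on the screen draw over sprites above them.
function drawOrder(a: Sprite, b: Sprite): number {
  return (a.layer - b.layer) || (a.z - b.z) || (a.y - b.y);
}

const sprites: Sprite[] = [
  { layer: 0, z: 1, y: 10 },
  { layer: 0, z: 0, y: 50 },
  { layer: 1, z: 0, y: 0 },
];
sprites.sort(drawOrder); // back-to-front draw order
```

Each engine answers this differently (some skip sorting entirely for immediate drawing), which is exactly why the comparison took so many questions.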
I agree with the sentiment, that most non-decisions are really implicit decisions in disguise. They have implications whether you thought about them up front or not. And if you need to revisit those non-decisions, it will cost you.
But I don't like calling this tech debt. The tech debt concept is about taking on debt explicitly, as in choosing the sub-optimal path on purpose to meet a deadline then promising a "payment plan" to remove the debt in the future. Tech debt implies that you've actually done your homework but picked door number 2 instead. A very explicit choice, and one where decision makers must have skin in the game.
A hurried, implicit choice has none of those characteristics - it's ignorance leading (inevitably?) to novel problems. That doesn't fit the debt metaphor at all. We need to distinguish tech debt from plain old sloppy decision making. Maybe management can even start taking responsibility for decisions instead of shrugging and saying "Tech debt, what can you do, amirite?"
> Succinctness, functionality and popularity of the language are now much more important factors.
Not my experience at all. The most important factor is simplicity and clarity. If an LLM can find the pattern, it can replicate that pattern.
Language matters to the extent it encourages/forces clear patterns. A language with more examples, shorter tokens, popularity, etc. doesn't matter at all if the codebase is a mess.
Functional languages like Elixir make it very easy to build highly structured applications. Each fn takes in a thing and returns another. Side effects? What side effects? LLMs can follow this function composition pattern all day long. There's less complexity, objectively.
But take languages that are less disciplined. Throw in arbitrary side effects and hidden control flow and mutable state ... the LLM will fail to find an obviously correct pattern and guess wildly. In practice, this makes logical bugs much more likely. Millions of examples don't help if your codebase is a swamp. And languages without said discipline often end up in a swamp.
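A toy illustration of the contrast (in TypeScript rather than Elixir; all names invented): in the first half, each step takes a value in and returns a new one, so the composition pattern is obvious; the mutable-state version below it forces the reader (or the LLM) to track hidden history.

```typescript
// Pattern an LLM can follow: each step takes data in, returns data out.
const trim = (s: string): string => s.trim();
const lower = (s: string): string => s.toLowerCase();
const slugify = (s: string): string => s.replace(/\s+/g, "-");

// The whole pipeline is just composition; no step can surprise another.
const toSlug = (s: string): string => slugify(lower(trim(s)));
console.log(toSlug("  Hello World  ")); // "hello-world"

// The swampy alternative: shared mutable state plus side effects.
// Any function anywhere might change `current`, so the "pattern" to
// infer is the entire program's execution history, not a local shape.
let current = "";
function setInput(s: string) { current = s; }
function normalize() { current = current.trim().toLowerCase(); }

setInput("  Hello World  ");
normalize();
console.log(current); // "hello world"
```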
This is the hidden super power of LLM - prototyping without attachment to the outcome.
Ten years ago, if you wanted to explore a major architectural decision, you would be bogged down for weeks in meetings convincing others, then a few more weeks making it happen. Then if it didn't work out, it felt like failure and everyone got frustrated.
Now it's assumed you can make it work fast - so do it four different ways and test it empirically. LLMs bring us closer to doing actual science, so we can do away with all the voodoo agile rituals and high emotional attachment that used to dominate the decision process.
I basically just _accidentally_ added a major new feature to one of my projects this week.
In the sense that, I was trying to explain what I wanted to do to a coworker and my manager, and we kept going back and forth trying to understand the shape of it and what value it would add and how much time it would be worth spending and what priority we should put on it.
And I was like -- let me just spend like an hour putting together a partially working prototype for you, and claude got _so close_ to just completely one-shotting the entire feature in my first prompt, that I ended up spending 3 hours just putting the finishing touches on it and we shipped it before we even wrote a user story. We did all that work after it was already done. Claude even mocked up a fully interactive UI for our UI designer to work from.
It's literally easier and faster to just tell claude to do something than to explain why you want to do it to a coworker.
That's only because no one understood agile or XP and they've become a "no one actually does that stuff" joke to many. I have first hand experience with prototyping full features in a day or two and throwing the result away. It comes with the added benefit of getting your hands dirty and being able to make more informed decisions when doing the actual implementation. It has always been possible, just most people didn't want to do it.
> Even the crotchetiest and most out-of-touch people I know basically accept that the Earth is warming now
Same. Empirical evidence is just too hard to ignore.
It's quite amazing watching the "climate change isn't real" folks transition to "climate change is no big deal", then to "climate change is too hard/expensive to deal with".
> It's quite amazing watching the "climate change isn't real" folks transition to "climate change is no big deal", then to "climate change is too hard/expensive to deal with".
At the top level (of government and corporate entities) those people always knew it was real, the messaging just changed as it became harder to keep a straight face while parroting the previous message in the face of overwhelming empirical evidence.
Exxon's (internal) research in the 1970s has been very accurate to the observed reality since then.
They just didn't care that it was real because they valued profits/power/etc in the moment over some difficult to quantify (but certainly not good) future calamity.
You would think they would care at least in the cases where they had children and grandchildren who will someday have to really reckon with the outcome, but you'd be wrong, they (still) don't give a shit.
Except it's the opposite - empirical evidence is very easy to ignore. Between herding, the replication crisis, and the overall insularity of academia, trust in "studies" has never been lower.
But people still respond very well to demonstrative or pragmatic evidence. Empirically there's nothing special about a keto diet. But demonstratively the effects are very convincing.
People just lived through a crisis in which public health officials were telling them to avoid a deadly virus by using glory holes[0]. Skepticism of institutions is at an all time high for good reason.
Thanks for that reminder of some cultural differences (!) between us and our friends across the pond. Hopefully it goes without saying, that rather colorful example is a few steps removed from the replication crisis, although the point about governing institutions spending their credibility in poor ways is taken.
The US had a version of this as well. At the height of lockdowns and social distancing a lot of health officials were saying protesting racial injustice was more important than Covid 19, which we closed a lot of businesses for.
I've got to admit that I'm unclear on the relationship between the US's attempts to juggle public health priorities with the constitutional right to freedom of assembly and... the UK glory hole thing. But I'm wondering if those George Floyd protests were a lot more fun than I always suspected.
That didn't happen.
And if it did, it wasn't that bad.
And if it was, that's not a big deal.
And if it is, that's not my fault.
And if it was, I didn't mean it.
And if I did, you deserved it.
Narcissism is America’s greatest vice, imo. Not surprising to see it take center stage on what may be the nation’s greatest challenge: ensuring our future in the face of climate change.
Historically, you'd get your polyphenols from your garden or from wild gathering. But we know that industrial crops (even organically grown) have extremely low polyphenol content compared to their wild counterparts. So coffee remains one of the few strong sources you can buy in a grocery store.
Hypothesis: Polyphenols from other sources would be just as protective as coffee.
Hypothesis 2: 2-3 coffees a day is a symptom of a normal life
You get that kind of issue coming up a lot in this sort of research. Like, people who don't drink at all are probably more likely to drop dead in the next year than moderate drinkers. Not because drink protects but because critically ill people tend not to drink.
Sorry it wasn't clear, I was not advocating for anything here, certainly not that people should smoke to prevent dementia.
Large quantities of nicotine are poisonous (just like caffeine) and it is addictive (more so than caffeine).
Regarding cardiovascular health, I'm not sure. As far as I know, nicotine itself is safe unless overdosed, but smoking and vaping of course is unhealthy.
I like coffee, and drinking too much of it is also unhealthy.
However, I love to remind myself of all the pop-sci articles saying that 2-3 cups are healthy when I'm making myself my 5th or 6th cup for the day.
Oh I did not think you were advocating anything. I was just thinking aloud.
I used to drink lots of coffee earlier, but now my caffeine metabolism seems to have ground to a halt. Anything more than two mugs and the night's sleep is history. Even that infrequent second mug pushes it a lot.
In all fairness, my mug is around 2 to 3 espresso shots.
Have to find myself some good local decaf. That's not easy in India.
Oh I see, and tbh it's the same for me with sleep. I should be more disciplined about caffeine given I have trouble sleeping anyway. Still, I often make the mistake of brewing a coffee in the late afternoon because I love the ritual and it gets me off my desk.
It also has replaced cigarette breaks for me.
Drinking local coffee is admirable, but not an option in Germany :) I often buy "fair" brands in the hope that it does something, but only when I can get them at OK prices...
Since you mention decaf, mixing 50/50 decaf/regular is also a good option to reduce caffeine intake for me.