Anyone who's seen an enterprise deal close or dealt with enterprise customer requests knows this: the build-vs-buy calculus has always been there, yet companies still buy. Until you can get AI to the point where it's equivalent to a 20-person engineering team, people are not going to build their own Snowflake, Salesforce, Slack, or ATS. Maybe that day is 3 years away, but when it comes the world will be very different.
Companies make build-vs-buy decisions on everything, not just software. Cleaning services are not expensive, yet companies contract them out instead of hiring staff.
This is called transaction cost economics, if anyone’s interested.
We’ve also got to consider the fourth dimension: what happens over time.
Salesforce is getting LLM superpowers at the same time the enterprise is, so customizing, maintaining, and extending Salesforce are all getting cheaper, better, and easier for customers, consultants, and Salesforce in parallel.
Unless the LLMs are managing the entire process, there’s still a value proposition around liability, focus, feature updates, integrations, etc. Over time that tech should make Salesforce way cheaper, or start helping them upsell bigger and badder Sales things that are harder to recreate.
And, big picture, the LLMs are well trained on Salesforce API code. Homegrown “free” versus industry-standard with clear billing; whatever we know versus man-decades of learning at a vendor; months of effort and all the risk and liability versus turnkey with built-in scapegoats… at some point you’re paying money not to own, not to learn, not to be distracted, and to have jerks to sue if something goes bad.
I agree generally, but some of these enterprise contracts are eye-watering. If the choice is $2M/year with a 3-year minimum contract, or rolling your own, I think the calculus really has shifted.
With that said, the entire business world does not understand that software is more than just code. Even if you could write code instantly, making enterprise software would still take time, because there are simply so many high-stakes decisions to make, and so much fractal detail.
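To put a rough number on the contract mentioned above: only the $2M/year and 3-year minimum come from the comment; the engineer cost and team size below are illustrative assumptions, and they ignore exactly the hidden decision-making and maintenance costs this thread is arguing about.

```python
# Back-of-envelope build-vs-buy arithmetic.
# Only ANNUAL_LICENSE and CONTRACT_YEARS come from the thread;
# the rest are assumed figures for illustration.
ANNUAL_LICENSE = 2_000_000   # USD/year, from the quoted contract
CONTRACT_YEARS = 3           # minimum term, from the quoted contract
ENGINEER_COST = 250_000      # assumed fully loaded cost per engineer-year
TEAM_SIZE = 5                # assumed in-house team to build and maintain

buy_total = ANNUAL_LICENSE * CONTRACT_YEARS
build_total = ENGINEER_COST * TEAM_SIZE * CONTRACT_YEARS

print(f"buy:   ${buy_total:,}")    # $6,000,000
print(f"build: ${build_total:,}")  # $3,750,000
```

Under these (generous) assumptions the build option looks cheaper on paper, which is the point of the comment above: the gap is large enough that the hidden costs of building now have real headroom to fit inside.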
> If the choice is $2M/year with a 3-year minimum contract, or rolling your own, I think the calculus really has shifted.
But why? It was always dramatically cheaper for enterprises to build rather than buy. They stopped doing that because they did it in the 90s and ended up with legacy codebases they didn't know how to maintain. I can't see AI helping with that.
This might be the biggest benefit of AI coding. If I have a large legacy code base I can use AI to ask questions and find out where certain things are happening. This benefit is huge even if I choose not to vibe code anything. It ends up feeling a lot like the engineer who wrote the code is still with you, or documented everything very well. In the real world there is a risk that documentation is wrong or that the engineer misremembers some detail, so even the occasional hallucination is not a particularly big risk.
> This might be the biggest benefit of AI coding. If I have a large legacy code base I can use AI to ask questions and find out where certain things are happening. This benefit is huge even if I choose not to vibe code anything.
If you consider total cost of ownership including long-term maintenance costs, it means building has not always been cheaper than buying. I think what's changing is that it's now becoming dramatically cheaper to build AND operate AND maintain "good enough" bespoke software for a lot of use cases, at least in theory, below a certain threshold of complexity and criticality. Which seems likely to include a sizeable chunk of the existing SAAS market.
I can't believe I'm saying this, but I guess you don't even really need to maintain software if it's just a tool you hacked together in a week. You can build v2 in another week. You'll probably want to change it anyways as your users and your org evolve. It's a big question for me how you maintain quality in this model but again, if your quality standard is "good enough", we're already there.
Oh I don't know, I think AI is very helpful at maintaining and modernizing legacy codebases. And in the old days, the "build" option was often not really cheaper once you factored in four salaries for developers to maintain the product. But now…
Exactly. I was building an app to track bike part usage. It was an okay app, but then I just started using AI with the database directly. Much more flexible, and I can get anything I need right then. AI will kill a lot of companies, but it won't be the software it develops, it will be the agent itself.
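For what it's worth, the "AI against the database directly" pattern can be this thin. A toy sketch of the bike-part idea, with an invented schema (the table and column names are illustrative, not from the comment): a natural-language question just becomes one SQL query the model writes on the fly, with no app layer in between.

```python
import sqlite3

# Invented schema for the bike-part-usage anecdote above.
db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE parts (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE usage (part_id INTEGER, km REAL, ride_date TEXT);
""")
db.execute("INSERT INTO parts VALUES (1, 'chain')")
db.executemany("INSERT INTO usage VALUES (1, ?, ?)",
               [(42.0, "2024-05-01"), (58.0, "2024-05-03")])

# "How many km are on the chain?" -- the kind of query an agent
# can emit directly instead of your app exposing a feature for it.
(total,) = db.execute("""
    SELECT SUM(u.km) FROM usage u
    JOIN parts p ON p.id = u.part_id
    WHERE p.name = 'chain'
""").fetchone()
print(total)  # 100.0
```

Every new question is just another query, which is why the commenter found it more flexible than maintaining app features for each one.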
"This entire stack could give you computing power equivalent to a 25k euro/month AWS bill for the cost of electricity (same electricity cost as running a few fridges 24/7) plus about 50k euros one-time to set it up (about 4 Mac Studios). And yes, it's redundant, scalable, and even faster (in terms of per-request latency) than standard AWS/GCP cloud bloat. Not only is it cheaper and you own everything, but your app will work faster because all services are local (DB, Redis cache, SSD, etc.) without any VM overhead, shared cores, or noisy neighbours."
Makes me think there will be these prompts like "convert this app to suit a new stack for my hardware for locally-optimized runtime."
How are people building the best local stacks? Will save people a ton of money if done well.
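Taking the quoted figures at face value (25k EUR/month on AWS, 50k EUR one-time for the hardware, electricity treated as negligible per the "few fridges" claim), the payback math is short:

```python
# Break-even math using only the figures from the quote above.
aws_monthly = 25_000     # EUR/month, claimed equivalent AWS bill
hardware_once = 50_000   # EUR one-time (~4 Mac Studios)

months_to_break_even = hardware_once / aws_monthly
first_year_saving = 12 * aws_monthly - hardware_once

print(months_to_break_even)  # 2.0 months
print(first_year_saving)     # 250000 EUR
```

Of course the quote's equivalence claim is doing all the work here; the sketch only shows that *if* the numbers hold, the hardware pays for itself in two months.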
Yep, we'll evolve patterns which facilitate system to system interaction better than the ones we had built for human in the loop by humans. That's inevitable. CRUD apps with a frontend will be considered legacy etc. They'll be replaced by more efficient means we haven't even considered. We live in an exciting time.
If an AI agent ever became as productive at writing code as a well-organized 20 person engineering team you'd still need to run it for a year or more to replicate any nontrivial SaaS product.
And the thing about many of these products isn't their feature set, it's their stability. It's their uptime. It's how they handle scaling invisibly and with no effort on your part. These are things you can't just write down from whole cloth; they are properties that emerge over time by adapting to the reality of scale. Coding isn't the whole deal, and your 20x clanker which can do nothing but re-arrange text in interesting patterns is going to have some trouble with the realities of taking that PoC to production. You'll still need experienced, capable people for that. And lots of time.
A lot of this "ermahgerd everything will change" drivel is based on some magical fundamentally new technology emerging in the near future that can do things that LLMs cannot do. But as far as anyone knows, that future may be never.
So even given a large improvement in agentic coding I'm not convinced it really changes the build vs buy equation much.
I recently browsed r/chatgptcomplaints expecting to see "You're absolutely right"-type memes and similar, but it was all farewell posts to o4 and people showing each other how to set up o4 using the API.
This is a very "distant" suggestion if you enjoyed Antimemetics, but The Unconsoled by Kazuo Ishiguro is another one of my favourites, and it too explores this idea of unreliable and inconsistent memories, although from a completely different angle.
I consider Recursion by Blake Crouch to be similar, even though I liked Antimemetics much better. I haven't read Crouch's other books, but have heard that Dark Matter is better than Recursion, though it may be less similar to Antimemetics.
I've enjoyed Peter Watts in kind of similar way I enjoy qntm. It's nerdy, explores interesting ideas, and written by a professional in a field who draws on their education, skills and interests. Premier work is probably Blindsight but the Sunflower cycle stories are likely easier to start. Like qntm, a lot of his works are online for free:
I read the original Antimemetics Division book a few times, and gifted the book to a few friends too (love his other works too :).
I pre-ordered the update, but only got a third of the way through. I'm not quite nerdy enough to do a page or sentence comparison, but it felt less "tight": not sure if the exposition is more prosaic, or there's less mystery, or just more description that wasn't strictly needed (for me). Or maybe I just reread the original too recently! Anybody else read both versions? :-)
The 2025 paid version has a more coherent ending (which is nice) and a more linear timeline for your average non-technical Joe. Which is probably a good thing.
I read it in print and thought it was awful: such an interesting idea, but explored by a rank amateur. Curious to know how different the original creepypasta was.
That comment gets econ 101 entirely the wrong way round. A landowner's decision to build a golf course over a solar farm is a decision based on competing land uses, i.e. the supply and demand for land and the potential services you could provide on that land. Which is why you don't often see solar farms or farms or power plants in the middle of cities...
Negative benchmark, isn't it? No sane lab is going to release PR stating "our newest model is best at lying". If anything the reverse may occur: if this catches on, they will make their model play Werewolf badly and claim "alignment improvements, our model no longer lies as much in Werewolf", but it lies more often in other domains.
It would be interesting to see what scores it gets when it is actually degraded per the status page. It gets degraded pretty often, so there's at least something to compare, or to learn at what point Anthropic declares degradation.
I'm not sure what the point is wrt ASML. They made good bets, they won their monopoly, and their shareholders who funded the bets get to enjoy monopoly pricing. If they start cutting R&D and lose their crown, yes, it's a shame I guess, but that's all there is. To expect a company to sell its goods cheaply when it's the only one in the world that can make them is asking too much. It's great that they and their investors took the punt on EUV all those years ago; we probably would not have the chips we have today and all the economic benefits around them.