More and more plainly, OpenAI and Anthropic are making plays to own (and lease) the "means of production" in software. OK - I'm a pretty happy renter right now.
As they gobble up previously open software stacks, how viable is it that these stacks remain open? It seems perfectly sensible to me that these providers and their users alike have an interest in further centralizing the dev lifecycle - eg, if Claude-Code or Codex are interfaces to cloud devenvs, then the models can get faster feedback cycles against build / test / etc tooling.
But when the tooling authors are employees of one provider or another, you can bet that those providers will be at least a few versions ahead of the public releases of those build tools, and will enjoy local economies of scale in their pipelines that may not be public at all.
It’s a small tool shop building a tiny part of the Python ecosystem, let’s not overstate their importance. They burned through their VC money and needed an exit and CLI tool chains are hyped now for LLMs, but this mostly sounds like an acquihire to me. Dev tools are among the hardest things to monetize with very few real winners, so good for them to get a good exit.
Just a tiny project with over 100 million downloads every month, over 4 million every day. No big deal. Just a small shop, don't overstate its importance.
Ruff is nice, but not important, uv is one of the few things making the python ecosystem bearable. Python is a language for monkeys, and if you don't give monkeys good tools, they will forever entangle themselves and you. It is all garbage wrapped in garbage. At least let me deploy it without having to manually detangle all that garbage by version.
I'm done pretending this is a "right tools for the right job" kind of thing, there's wrong people in the right job, and they only know python. If no one self-writes code anymore anyway, at least use a language that isn't a clusterfuck of bad design decisions, and has 1 trillion lines of code in the collective memory of people who don't know what a stack is.
> If no one self-writes code anymore anyway, at least use a language that isn't a clusterfuck of bad design decisions
I can get behind the idea that LLMs probably don't need a language designed for humans if humans aren't writing it, but the rest of this is just daft. Python's popularity isn't just pure luck; in fact, it's only been in recent years that the tooling has caught up to the point where it's as easy to set up as it is to write, which should really tell you something if people persevered with it anyway.
I'm sorry your favourite language doesn't have the recognition it so rightfully deserves, but reducing Python to just "stupid language for stupid people" is, well, stupid.
Keep in mind that when Graham coined that term Java and C++ were considered blub languages.
Speaking as a grey beard myself, I think it's safe to say that the grey beards among us will always deride those who didn't have to work as hard as they did.
I used to do backend development in superior languages, and sometimes do hobby frontend in superior languages, but my work is Python now. And it kind of has to be Python: we do machine learning, and I work with GDAL and PDAL and all these other weird libraries and everything has Python bindings! I search for "coherent point drift" and of course there's a Python library.
The superior languages I mentioned... perhaps they have like a library for JSON encoding and decoding. You need anything else? Great, now you're a library author and maintainer!
relax, soon you'll be rewriting the essence of all these libs into something new. Python's days are perhaps numbered too, now that many engineering decisions are cheap via LLMs.
I guess it's an individual solution, but it's a solution that basically worsens the actual problem as I see it: strict/narrow version pinning with frequent updates to latest, and minimal effort to track backwards compatibility, let alone maintain it. It just turns Python into the constant Node.js-style wrestling with package.json changes.
If you are working on one tiny project on your machine that pips in four packages, you probably think pip is OK.
Circa 2017 I was working on systems that were complex enough that pip couldn't build them, and after I got to the bottom of it I knew it was not my fault but the fault of pip.
I built a system which could build usable environments out of pre-built wheels, and sketched out the design of a system that was roughly 'uv but written in Python', but I saw two problems. (1) A Python-dependent system can be destroyed by people messing with Python environments; my experience is that my Poetry gets trashed every six months or so. (2) There was just no awareness among the 'one tiny project on your machine that pips in four packages' people that there was a correctness problem at all. Everybody else was blaming themselves for the problem and didn't have a clear understanding of what was wrong with pip, or what a correct model for managing Python dependencies is (short answer: see Maven), or that a 100% correct model was even possible rather than always settling for a 97% model. The politics looked intractable so I gave up.
Now written in rust, uv evaded the bootstrap problem and it dealt with the adoption problem by targeting 'speed' as people would see the value in that even if they didn't see the value in 'correctness'. My system would have been faster than pip because it would have kept a cache, but uv is faster still.
I still believe Rust is a red herring here. Your 'uv but written in Python' would probably have the same success as uv does now, had you focused on speed over correctness. And I've yet to hear about pipx or Poetry getting trashed, but if it is a problem, I don't think it's any less solvable in Python than in Rust.
> The politics looked intractable so I gave up.
So yeah, this is your actual problem. (Don’t worry, I’m in the same camp here.)
As much as I'm a Python fan I strongly disagree here that rust is a red herring.
Having a static binary makes distribution way simpler. There are a bunch of ways you could try to achieve something like that in Python, but it would be significantly larger.
Performance-wise, writing it in Python would mean heavy startup overhead, and it wouldn't be able to get close to the same level of performance.
Obviously you could achieve the same thing in many other languages, but Rust ends up being a really good fit for making a small static binary for this workload: network-heavy, IO-bound, async/threading-friendly, with the occasional bit of CPU-heavy work.
> everybody else was blaming themselves for a problem and didn't have a clear understanding of what was wrong with pip or what a correct model for managing python dependencies is (short answer: see maven)
I always looked down on the Java ecosystem but if it turns out Maven had a better story all along and we all overlooked it, that's wild.
How is uv awesome and Poetry so bad? They do basically the same things, except Astral reinvents the wheel, but only partway, instead of just relying on existing tools. uv is fast. As far as I can tell, there's hardly any difference in functionality except that it also replaces pyenv, which I never use anyway.
uv assuming your local Python is busted to hell and back helps a lot with isolation.
Poetry's CLI would often, for me, just fall over and crash. Crashing a lot is not a fundamental problem in the sense you can fix the bugs, but hey I'm not hitting uv crashes.
pipenv was even worse in terms of just hanging during package resolution. Tools that hang are not tools you want in a CI pipeline!
The end result: `uv run` I expect to work. `pipenv` or `poetry` calls I have to assume don't work, have to put retries into CI pipelines and things like that.
uv has a lot of sensible defaults that prevent clueless developers from shooting themselves in the foot. `uv sync`, for example, will uninstall packages not listed in pyproject.toml.
i kind of disagree with this. uv run is clunky, i don't want that. i want to keep the activate-the-venv-and-do-shit model. i hate uv run as a primitive.
I've used python for roughly 15 years, and 10 of those years I was paid to primarily write and maintain projects written in Python.
Things got bearable with virtualenv/virtualenvwrapper, but it was never what I would call great. Pip was always painful, and slow. I never looked forward to using them, and every time I worked on a new system, the amount of finagling I had to do to avoid problems, and the time I spent supporting other people who had problems, was significant.
The day I first used uv is about as memorable to me as the day I first started using Python (roughly 2004) - everything changed.
I've used uv pretty much every single day since then and the joy has never left. Every operation is twitch-fast. There has never once been an issue. Combined with direnv, I can create projects/venvs on the fly so quickly that I don't even bother using its various affordances to run projects without a venv.
To put it succinctly - uv gives me two things.
One - zero messing around with virtualenvwrappers and friends. For whatever reason, I've never once run into an error like "virtualenvwrapper.sh: There was a problem running the initialization hooks."
Two - fast. It may be the fastest software I've ever used. Everything is instant - so you never experience any type of cognitive distraction when creating a python project and diving into anything - you think it - and it's done. I genuinely look forward to uv pip install - even when it's not already in cache - the parallel download is epically fast - always a joy.
Everything “just works” and is fast - and that’s basically it.
You can run a script with a one liner and it will automatically get you the same python and venv and everything as whoever distributed the python code, in milliseconds if the packages are already cached on your local computer.
Very easy to get going without even knowing what a venv or pypi or anything is.
If you are already an expert you get “faster simpler tooling” and if you are a complete beginner it’s “easy peasy lemon squeezy”.
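For anyone who hasn't tried that flow: the one-liner works because of PEP 723 inline script metadata, which uv reads to provision the environment on the fly. A minimal sketch (the commented-out dependency is illustrative):

```python
# /// script
# requires-python = ">=3.9"
# dependencies = []  # e.g. ["requests>=2.31"]; `uv run` installs these automatically
# ///
# Running `uv run hello.py` reads the header above, provisions a matching
# Python and environment (cached after the first run), then executes the
# script -- no manual venv creation or activation needed.
import json

payload = {"tool": "uv", "script_metadata": "PEP 723"}
print(json.dumps(payload))
```

Plain `python hello.py` still works too; the header is just comments as far as the interpreter is concerned.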
For one, it's one tool that does the job of all three.
it just works. i'm not sure how else to describe it other than less faffing about. it just does the right thing, every time. there's a tiny learning curve (mostly unlearning bad or redundant habits), but once you know how to wield it, it's a one stop shop.
uv is nice, but not irreplaceable. An open source, maintenance mode fork would work just as fine. And even if all of uv disappeared today, I’d go just back to Poetry. Slower? Sure, a bit.
...and then I’ve read the rest of your comment. Please do go read the HN guidelines.
I was using poetry pretty happily before uv came along. I’d probably go back.
Note that uv is fast because — yes, Rust, but also because it doesn’t have to handle a lot of legacy that pip does[1], and some smart language independent design choices.
If uv became unavailable, it’d suck but the world would move on.
Like, the whole point of open source is that this thread is not a thing. The whole point is "if this software is taken on by a malevolent dictator for life, we'll just fork it and keep going with our own thing." Or like if I'm evaluating whether to open-source stuff at a startup, the question is "if this startup fails to get funding and we have to close up shop, do I want the team to still have access to these tools at my next gig?" -- there are other reasons it might be in the company's interests, like getting free feature development or hiring better devs, but that's the main reason it'd be in the employees' best interests to want to contribute to an open-source legacy rather than keep everything proprietary.
The leadership and product direction work are at least as hard as the code work. Astral/uv has absolutely proven this, otherwise Python wouldn't be a boneyard for build tools.
Projects - including forks - fail all the time because the leadership/product direction on a project goes missing despite the tech still being viable, which is why people are concerned about these people being locked up inside OpenAI. Successfully forking is much easier said than done.
I had a lot of trouble convincing people that a correct Python package manager was even possible. uv proved it was possible and won people over with speed.
I had a sketched-out design for a correct package manager in 2018, but when I talked to people about it I couldn't get any interest. I think the brilliant idea that uv had, which I missed, is that it can't be written in Python, because if it is written in Python, developers are going to corrupt its environment sooner or later and you lose your correctness.
I think that now that people are used to uv it won't be that hard to develop a competitor and get people to switch.
You seem to be underestimating the laziness of people, and overestimating their resolve. Angry forks usually don't last; angst doesn't prevent maintenance burnout.
You underestimate the value that something like uv and company bring to the ecosystem. Given enough time I could have seen it replacing some core utilities; now that it's owned by OpenAI I don't see that happening, unless OpenAI "donates" the project but keeps the devs on a payroll.
You are aware that ty has only recently entered beta status?
Ruff isn’t stable yet either, and it has evolved into the de facto standard for new projects. It has more than double the number of rules Pylint does. It was also downloaded more than three times as often as Pylint in the past month.
Pylint has some advantages, sure, but Ruff’s adoption speaks for itself. Pylint is 25 years old; you’d hope it does some things better.
Saying that uv is their only winner is a hilarious take.
> I would stare longingly into the void, wondering if I can ever work another python project after having experienced uv, ruff, and ty.
You think you're disagreeing with me, but you're agreeing. To wit: The original post is silly, because ty is beta quality and Ruff isn't stable yet either. Your words.
These are just tools, Pylint included. Use them, don't use them, make them your whole personality to the point that you feel compelled to defend them when someone on the Internet points out their flaws. Whatever churns your butter.
> Saying that uv is their only winner is a hilarious take.
Nah, this news is good enough reason to move from Ruff back to Black and stay the course; I won't use anything else from Astral. I will use uv, but only until pip 2/++ gets its shit together and catches up. Hopefully then, as a community, we jump back on board and keep using pip even if it's not as good - it's free in the freedom sense.
I think the main problem was that people didn't believe pip was broken, or didn't think there was any value in a 100% correct package manager over a 97% correct one (e.g., a misreading of "worse is better").
I had the problem basically understood in 2018 and I am still pissed that everybody wanted to keep taking their chances with pip just like they like to gamble with agent coders today.
Now that people know a decent package manager is possible in Python I think there is going to be no problem getting people to maintain one.
Idk how anyone could sustain the impression that pip was not broken unless they had basically never used anything else (including Linux package managers) long enough to have even a basic understanding of it.
And that's a big part of what's so frustrating about Python generally: it seems to be a language used by lots of people who've never used anything else and have an attitude like "why would I ever try anything else"?
Python has a culture where nominal values of user-friendliness, pragmatism, and simplicity often turn into plain old philistinism.
That makes zero sense to me. Developing something like Ruff from scratch takes a lot of things happening: someone having the idea, the time to develop it in their free time or the money to do it as a job, and perhaps the need to find collaborators if it's too large a project for one person. But now Ruff is there, and there's no need to build it from scratch. If I wanted to build a Python linter or formatter, I would simply fork Ruff and build on top of it. As others have said in this subthread, that's the whole point of open source!
> the time to develop it [not] from scratch in their free time, or the money...
How do you think the magic of open source resolves this issue? Think it through for a moment and this should make some sense.
> I would simply fork
The only simple part here is pressing the "fork" button, which only gives you exactly the same code that already exists, without user awareness or distribution
Can we not, at some point, consider the tool "done"? I mean, what is there to constantly change and improve? Genuinely curious. It sounds like a tool that can be finished. Can it not be?
Don't understate its importance. I've been using Python for more than 30 years. They solved a problem that a lot of smart people didn't solve. Python developer experience improved by an order of magnitude.
The “requests” package gets downloaded one billion times every month; should that be a multi-billion-dollar VC company as well? uv is a package manager and other neat tooling. It's great, but it's hardly the essence of what makes Python awesome; it's one of the many things that make this ecosystem flourish. If OpenAI enshittified it, people would just fork it or move on. That's all I'm saying: it's not in any way a single point of failure for the Python ecosystem.
This is not the point of uv or any good package manager. The point is that it keeps Python from sucking. For a long time, package management in Python was horrible compared to what you could see in other languages.
I mean, these sorts of numbers speak to the mind-bogglingly inefficient CI workflows we as an industry have built. I’d be surprised if there were 4 million people in the world who actually know what ‘uv’ is.
I am still not sure why everyone jumped on uv. Sure, it's quicker than pip, but an installation rarely takes so long as to become annoying. Anyway, pip is still there, so whatever impact they have made can be rolled back if they try to pull the rug
Maybe there needs to be some nonprofit watchdog that helps identify those cases in their early stages and helps bootstrap open forks. I'd contribute to a sort of open-capture-protection savings account if I believed it would help ensure continuity of support for the things I rely on.
Right. If anything, this "tiny part" has pretty much taken over Python and turned it from OSS BDFL language into a company-backed one (like Erlang, Scala, C#).
In the 2024 Python developer survey, 18% of the ecosystem used Poetry. When I opened this manifold question[0], I'm pretty sure uv was about half of Poetry downloads.
Estimating from these numbers, probably about 30% of the ecosystem is using `uv` now. We'll get better numbers when the 2025 Python developer survey is published.
Same. It's game-changing - leaps and bounds above every previous attempt to make Python's packaging, dependency management, and dev workflow easy. I don't know anyone who has tried uv and not immediately thrown every other tool out the window.
I use uv here and there but have a bunch of projects using regular pip with pip-tools to do a requirements.in -> requirements.txt as a lockfile workflow that I've never seen enough value in converting over. uv is clearly much faster but that's a pretty minor consideration unless I were for some reason changing project dependencies all day long.
Perhaps it never grabbed me as much because I've been running basically everything in Docker for years now, which takes care of Python versioning issues and caches the dependency install steps, so they only take a long time if they've changed. I also like containers for all of the other project setup and environment scaffolding stuff they roll up, e.g. having a consistently working GDAL environment available instantly for a project I haven't worked on in a long time.
Two things: First, you can (and should) replace your `pip install` with `uv pip install` for an instant speed boost. This matters even for Docker builds.
Second, you can use uv to build and install to a separate venv in a Docker container and then, thanks to the wonders of multistage Docker builds, copy that venv to a new container and have a fully working minimal image in no time, with almost no effort.
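A minimal sketch of that multistage pattern (image tags, paths, and the `myapp` module name are illustrative; check uv's Docker integration docs for current recommendations):

```dockerfile
# Build stage: uv resolves and installs everything into a self-contained venv
FROM python:3.12-slim AS build
COPY --from=ghcr.io/astral-sh/uv:latest /uv /usr/local/bin/uv
WORKDIR /app
COPY pyproject.toml uv.lock ./
RUN uv sync --frozen --no-dev
COPY . .

# Runtime stage: only the venv and source survive; uv and build tooling do not
FROM python:3.12-slim
WORKDIR /app
COPY --from=build /app /app
ENV PATH="/app/.venv/bin:$PATH"
CMD ["python", "-m", "myapp"]
```

The `--frozen` flag makes the build fail loudly if uv.lock is out of date, which is exactly what you want in CI.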
been in the python game a long time and i've seen so many tools in this space come and go over the years. i still rely on good ol pip and have had no issues. that said, we utilize mypy and ruff, and have moved to pyproject etc to remotely keep up with the times.
uv solved it, it will be the only tool people use in 2 more years. if you’re a python shop / expert then you can do pip etc but uv turned incidental python + deps from a huge PITA for the rest of us, to It Just Works simplicity on the same level or better than Golang.
You're welcome to live in the 90s dark ages, I feel this attitude and the shape of the old linux distros like Debian that laboriously re-package years-old software have been one of the biggest failures of open source and squandered untold hours of human effort. It's a model that works okay for generic infrastructure but requires far too much labor and moves far too slowly with quite a poor experience for end users and developers. Why else would all modern software development (going back to perl's cpan package manager in 1995) route around it?
If not, do you develop software with source dependencies (go, java, node, rust, python)? If so, how do you handle acquiring those dependencies—by hand or using a tool?
Mostly no, sometimes I give up and still use pip as a separate user.
> If not, do you develop software with source dependencies (go, java, node, rust, python)? If so, how do you handle acquiring those dependencies—by hand or using a tool
I haven't felt the need to use Go, and the only Java software I use is in the OS repo. I don't want to use JS software for other reasons. This is one of the reasons why I don't like Rust rewrites. Python dependencies are very often in the OS repo. If there is anything else, I compile it from source, and I curse when software doesn't use or adhere to the standards of the GNU build system.
Thanks for explaining your workflow. It seems predictable, but like it really locks you into one of the few (albeit popular) programming languages that has many/most of its development libraries repackaged by your OS. There are plenty of very popular languages that don't offer that at all.
Go and Rust, specifically, seem a bit odd to be allergic to. Their "package managers" are largely downloading sources into your code repository, not downloading/installing truly arbitrary stuff. How is that different from your (presumably "wget the file into my repo or include path") workflow for depending on a header-only C library from the internet which your OS doesn't repackage?
I understand if your resistance to those platforms is because of how much source code things download, but that still seems qualitatively different to me from "npm install can do god-knows-what to my workstation" or "pip install can install packages that shadow system-wide trusted ones".
In general I agree with you. But not for software dev packages.
The package manager I use, apt on Debian, does not package many Python development repos. They've got the big ones, e.g. requests, but not e.g. uuid6. And I wouldn't want it to - I like the limited Debian dev effort to be put towards the user experience and let the Python dev devs worry about packaging Python dev dependencies.
What’s the point of constraining oneself to what is in the OS package manager? I like to keep my dependencies up to date. The versions in the OS package manager are much older.
And let’s say you constrain yourself to your OS package manager. What about the people on different distros? Their package managers are unlikely to have the exact same versions of your deps that your OS has.
> What’s the point of constraining oneself to what is in the OS package manager? I like to keep my dependencies up to date. The versions in the OS package manager are much older.
I favor stability and the stripping of unwanted features (e.g. telemetry) by my OS vendor over cutting-edge software. If I really need that, I install it into /usr/local; that is what it is for, after all.
> And let’s say you constrain yourself to your OS package manager. What about the people on different distros? Their package managers are unlikely to have the exact same versions of your deps that your OS has.
This is a reason to select the OS. Software shouldn't require exact versions, but should stick to stable interfaces.
Geospatial tends to be the Achilles heel of Python projects for me. Fiona is a wily beast of a package, and GDAL too. Conda helped some but was always so slow. Pip almost uniformly fails in this area for me.
Finally, someone competent to answer the crucial question. Taking into account the enormous amount of excellent work you did, and the fact that dev tools are hard to monetize, what was your strategy?
Are you going to join codex team as well? I am curious about how the codex code base will evolve after you guys joined. It is going to affect Python/Rust toolchains tremendously.
> Dev tools are among the hardest things to monetize with very few real winners, so good for them to get a good exit.
I'm on the fence about cancelling my JetBrains subscription I've had for nearly 10 years now. I just don't use it much. Zed and Claude Code cover all my needs, the only thing I need is a serious DataGrip alternative, but I might just sit down with Claude and build one for myself.
That was my feeling - more than 'owning' uv etc., I could see this as being about getting people on board who had a proven track record of delivering developer tooling loved enough to get wide adoption.
They were hyped here without any pushback. Maybe OpenAI thinks the Astral folks will now evangelize and foist Codex and ChatGPT onto the open source "community".
People need to be very careful about resisting. OpenAI wants to make everyone unemployed, works with the Pentagon, steals IP, and copyright whistleblowers end up getting killed under mysterious circumstances.
Every uv-related post here had a few people going "i don't want to use VC funded stuff!! what about rug pull!!", although perhaps they were drowned out eventually all the people (like me) going "uv is fantastic and solves 15 years of python packaging hell"
Not to mention their language server + type checker `ty` is incredible. We moved our extremely large python codebase over from MyPy and it's an absolute game changer.
It's so fast in fact that we just added `ty check` to our pre-commit hooks where MyPy previously had runtimes of 150+ seconds _and_ a mess of bugs around their caching.
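For anyone wanting to try the same, a local pre-commit hook along these lines works (a sketch; the hook id/name are arbitrary, and `uvx` fetches and runs ty on demand):

```yaml
# .pre-commit-config.yaml -- run ty as a local hook via uvx
repos:
  - repo: local
    hooks:
      - id: ty-check
        name: ty check
        entry: uvx ty check   # runs ty without a permanent install
        language: system
        types: [python]
        pass_filenames: false # ty checks the whole project, not just staged files
```

Pinning a version (e.g. `uvx ty@<version> check`) is worth considering so CI and local runs agree while ty is still pre-1.0.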
This is so dystopian… they built something that worked and now are being “acquihired” into oblivion, and we’re supposed to be happy about it? I’m glad a few of the early people just got rich I guess, but it seems like a terrible system overall.
It's not any different from the launch of the FSF. There's a simple solution. If you don't want your lunch eaten by a private equity firm, make sure whatever tool you use is GPL licensed.
"Clean room" is doing a lot of heavy lifting. Having the entire corpus of knowledge for humanity and how LLMs work, how can you honestly argue in court that this is purely clean room implementation?
This is right up there with Meta lawyers claiming that when they torrent it's totally legal but when a single person torrents it's copyright infringement.
Far too many people treat AI as a way to launder copyright, it seems likely that a lot of the current state of outright plagiarism won't stand up in court
These cases will be settled out of court long before they ever reach a jury. Anthropic has agreed to pay $1.5bn in a class action suit [0]. Others will follow.
> If AI "clean-room" re-implementations are allow to bypass copyright/licenses, the GPL won't protect you.
Isn't that the same for the obligations under BSD/MIT/Apache? The problem they're trying to address is a different one from the problem of AI copyright washing. It's fair to avoid introducing additional problems while debunking another point.
Maybe I'm reading wrong here, but what's the implication of the clean room re-implementations? Someone else is cloning with a changed license, but if I'm still on the GPL licensed tool, how am I "not protected"?
3a. Usually here BigCo should continue to develop Project One as GPLv3, or stop working on it, and the community would fork it and continue working on it as GPLv3
3b. BigCo does a "clean-room" reimplementation of Project One and releases it under proprietary licence. Community can still fork the older version and work on it, but BigCo can continue to develop and sell their "original" version.
2. BigCo owns ProjectOne now
3a. Bigco is now free to release version N+1 as closed source only.
3b. Community can still fork the older version and work on it, but BigCo can continue to develop and sell their original version.
Well no, (clean-room) reimplementations of APIs have been done since time immemorial. Copyright applies to the work itself: if you implement the functionality of X, software copyright protects both works.
patents protect ideas, copyright protects artistic expressions of ideas
While the license is important, it's the community that plays the key role for me. VC-funded open source is not the same as community-developed open source. The first can very quickly disappear because of something like an acquihire; the second has more resilience and tends to either survive and evolve, or peter out as the context changes.
I'm careful to not rely too heavily on VC funded open source whenever I can avoid it.
The biggest scam the mega-clouds and the Githubs ever pulled was convincing open source developers that the GPL was somehow out of vogue and BSD/MIT/Apache was better.
All so they could just vacuum it all up and resell it with impunity.
I remember a somewhat prominent dev in the DC area putting on Twitter around 2012 or so something like "I do plenty of open source coding and I don't put a fucking license on it" and it stuck with me for all these years that it was a weird stance to take.
Dan Bernstein took that attitude back in the 90s - I think his personal theory of copyright went something like "if it doesn't have a license, then it's obviously public domain", which ran counter to the mainstream position of "if it doesn't have a license, then you have to treat it as proprietary".
And, sure, djb wasn't actually likely to sue you if you went ahead and distributed modified versions of his software... but no-one else was willing to take that risk, and it ended up killing qmail, djbdns, etc stone dead. His work ended up going to waste as a result.
I doubt the lack of license was the reason DJB's projects didn't take over the world. Most of them required heavy forking to break away from hardwired assumptions about the filesystem and play nice with the OS distribution, and DJB is himself notoriously difficult to work with. Still, qmail managed to establish maildir as the standard format and kill off mbox, and for that alone I'm eternally grateful.
Well, there were always plenty of patches available - it's just that lots of them conflicted with each other, and that was a product of the licensing.
Agreed with the rest, though. I relied heavily on qmail for about a decade, and learned a lot from the experience, even if it was a little terrifying on occasion!
These days one would just most likely create a fork on github. Vim was also maintained through separate patches for a long time, but Bram was a lot more accepting about integrating and distributing those patches himself.
> his personal theory of copyright went something like "if it doesn't have a license, then it's obviously public domain"
I mean philosophically and morally, sure, one can take that position ... but copyright law does not work like that, at least not for anything published in the US after 1989 [1].
The big cloud providers are perfectly happy to use GPL'd stuff (see: Elastic, MySQL). They don't need to use embrace-and-extend, they're content with hosting.
The ones pushing for permissive licenses are rather companies like Apple, Android (and to some extent other parts of Google), Microsoft, Oracle. They want to push their proprietary stuff and one way to do that in the face of open source competition is by proprietary extensions.
Since when has GPL stopped any company from doing so? I'd genuinely be happy to hear of a single time someone was actually pressed about their GPL compliance after FSF v. Cisco, it's like some immaterial sword of Damocles, the entire weight of which is the shame of losing face, which is fleeting in a society post-appeal-to-authority.
A GPL license helps but if support for a dependency is pulled you'll likely end up needing to divert more resources to maintain it anyways. There really isn't any guarantee against this cost - you either pay someone else to maintain it and hope they do a good job, build it in house and become an "also that thing" company, or follow a popular project without financially supporting it and just hope other people pick up your slack.
Preferring GPL licensed software means that you're immune to a sudden cut off of access so it's always advisable - but it's really important to stay on top of dependencies and be willing to pay the cost if support is withdrawn. So GPL helps but it isn't a full salve.
somebody looked at Claude Code's binaries and found that Anthropic is testing its own app platform called antspace. Not sure why people are shocked; they've been cloning features from their API customers and adding them to their core products since day one. It makes sense that they'll take user data and do the same for Claude Code, copying features or buying up what developers are using so they can lock people into a stack. These are the same people who trained on every scrap of data they could get their hands on and now complain about others distilling models from their output.
Ironically this type of stuff really makes me doubt their AGI claims, why would they bother with this stuff if they were confident of having AGI within the next few years? They would be focused on replacing entire industries and not even make their models available at any price. Why bother with a PaaS if you think you are going to replace the entire software industry with AGI?
> they've been cloning features of their API customers and adding them to their core products since day 1
Is this not just the strategy of all platforms? Spy on all customers, see what works for them, and copy the most valuable business models. Amazon does this with all kinds of products.
Platforms will just grow to own all the market and hike prices and lower quality, and pay close to nothing to employees. This is why we used to have monopoly regulations before being greedy became a virtue.
It is exactly the strategy of all platforms - they get greedy to the point of screwing over their own customers. I've lost count of number of times I've seen a platform get popular and then expand to offer the same services as its customers, often even undercutting market rates.
Just wait till they offer "Developer Certification" so you have to pay them to get a shiny little badge and a certificate while they go around saying no badge = you're shit.
> this type of stuff really makes me doubt their AGI claims, why would they bother with this stuff if they were confident of having AGI within the next few years?
because AGI doesn't grow in a cage; it requires a piece of software running somewhere. someone has to build both to make that happen. that is like a high-school-level question.
> One can argue that once we achieve the singularity, it could immediately scale on its own as it decides.
even if this is true, someone needs to build the platform and the software required to get to the singularity.
one can also argue that lots of $ is required to get to the singularity, taking control of how the world builds, deploys and operates the digital world is a proven avenue to get such $.
You know, until today I dismissed all those concerns about uv being a commercial product, but now I am very concerned.
Microsoft has been a reasonable steward of GitHub and npm, all things considered, but I don't feel so good about OpenAI. This makes me reconsider my use of uv, and of Python as a whole, because uv did a lot to stop the insanity. Not least, Microsoft has been around since 1975, whereas I could picture OpenAI vanishing instantly in a fit of FOMO.
In the many darker timelines that one can extrapolate, capturing essential tech stacks is just a pre-cursor to capturing hiring.
Once we start seeing OpenAI and Anthropic getting into certifications and testing, they'll quickly become the gold standard. They won't even need to actually test anyone. People will simply consent to having their chat interactions analyzed.
The models collect more information about us than we could ever imagine because, definitionally, those features are unknown unknowns for humans. For ML, the gaps in our thinking carry far richer information about us than our actual vocabularies, topics of interest, or stylometric idiosyncrasies.
As if there will be hiring in the fullness of time.
There will come a day when you can will an entire business into existence at the press of a button. Maybe it has one or two people overseeing the business logic to make sure it doesn't go off the rails, but the point is that this is a 100x reduction in labor and a 100,000x speed up in terms of delivery.
They'll price this as a $1M button press.
Suddenly, labor capital cannot participate in the market anymore. Only financial capital can.
Suddenly, software startups are no longer viable.
This is coming.
The means of production are becoming privatized capital outlays, just like the railroads. And we will never own again.
There is nothing that says our careers must remain viable. There is nothing that says our output can remain competitive, attractive, or in demand. These are not laws.
Knowledge work may be a thing of the past in ten years' time. And the capital owners and hyperscalers will be the entirety of the market.
If we do not own these systems (and at this point is it even possible for open source to catch up?), we are fundamentally screwed.
I strongly believe that people not seeing this - downplaying this - are looking the other way while the asteroid approaches.
There could be opportunities we haven't anticipated.
What if labor organizes around human work and consumers are willing to pay the premium?
At that point, it's an arms race against the SotA models in order to deepen the resolution and harden the security mechanisms for capturing the human-affirming signals produced during work. Also, lowering the friction around verification.
In that timeline, workers would have to wear devices to monitor their GSR and record themselves on video to track their PPG. Inconvenient, and ultimately probably doomed, but it could extend or renew the horizon for certain kinds of knowledge work.
Oh are compilers going away? Or personal computers for that matter?
If the barrier to button-pressed companies goes that high up, the cost to run/consume the product also goes up. Making hand-rolled products cheaper.
Slower paced to roll out things? Sure.
That's the precarious balance these LLM providers have to strike. They can't just move on without the people feeding them data and value. The machine is not perpetual.
It's even worse than that, Astral took over python-build-standalone and uv uses its Python builds on all platforms.
That means OpenAI will be able to do whatever they want to your Python binaries, including every Python binary in your deployments, with whatever telemetry they want to instrument into the builds.
If it ever goes bad, well I hope that that’s an impetus for new open source projects to be started — and with improvements over and lessons learned from incumbent technologies, right at the v1 of said projects.
I think the issue is that LLMs are a cash problem as much as they are a technical problem. Consumer hardware is still pretty unfriendly to running models that are actually competitive, so if you want to do inference on a model that's going to reliably give you decent results, you're basically in enterprise territory. Unless you want to do it really slowly.
The issue that I see is that Nvidia etc. are incentivised to perpetuate that so the open source community gets the table scraps of distills, fine-tunes etc.
You got me thinking that what's going to happen is some GPU maker is going to offer a subsidized GPU (or RAM stick, or ...whatever) if the GPU can do calculations while your computer is idle, not unlike Folding@home. This way, the company can use the distributed fleet of customer computers to do large computations, while the customer gets a reasonably priced GPU again.
The kinds of GPUs that are in use in enterprise are 30-40k and require a ~10KW system. The challenge with lower power cards is that 30 1k cards are not as powerful, especially since usually you have a few of the enterprise cards in a single unit that can be joined efficiently via high bandwidth link. But even if someone else is paying the utility bill, what happens when the person you gave the card to just doesn’t run the software? Good luck getting your GPU back.
New Strix Halo (395+) user here. It is very liberating to be able to "just" load the larger open-weight MoEs. At this param-count class, bigger is almost always better --- my own vibe check confirms this, but obviously this is not going to be anywhere close to the leading cost-optimized closed-weight models (Flash / Sonnet).
The tradeoff with these unified LPDDR machines is compute and memory throughput. You'll have to live with the ~50 token/sec rate, and compact your prefix aggressively. That said, I'd take the effortless local model capability over outright speed any day.
I hope the popularity of these machines prompts future models to come in perfectly fitting sizes: an 80 GiB quant for a 128 GiB box, a 480 GiB quant for a 512 GiB box, etc.
The problem is that even if an OSS project had the resources (massive data centers the size of NYC packed with top-end custom GPU kits) to produce the weights, you need enormous VRAM-laden farms of GPUs to do inference on a model like Opus 4.6. Unless the very math of frontier LLMs changes, don't expect frontier OSS on par to be practical.
I feel like you're overstating the resources required by a couple orders of magnitude. You do need a GPU farm to do training, but probably only $100M, maybe $1B of GPUs. And yes, that's a lot of GPUs, but they will fit in a single datacenter, and even in dollar terms, there are many individual buildings in NYC that are cheaper.
I refer you to the data centers under construction, roughly the size of Manhattan, to do next-generation model training. Granted, they're also to house inference, but my statement wasn't hyperbole; it's based on actual reality. Participating in the next generation of frontier training is infeasible for any but the most wealthy organizations on earth. OSS weights are toys. (Mind you, I like toys.)
There's already an ecosystem of essentially undifferentiated infrastructure providers that sell cheap inference of open weights models that have pretty tight margins.
If the open weights models are good, there are people looking to sell commodity access to it, much like a cloud provider selling you compute.
Open-source models will never be _truly_ competitive as long as obtaining quality datasets and training on them remains prohibitively expensive.
Plus, most users don't want to host their own models. Most users don't care that OpenAI, Anthropic and Google have a monopoly on LLMs. ChatGPT is a household name, and most of the big businesses are forcing Copilot and/or Claude onto their employees for "real work."
This is "everyone will have an email server/web server/Diaspora node/lemmy instance/Mastodon server" all over again.
People do care about the privacy of these things though. It's one thing to talk about encryption, but users are pouring out their heart and soul to these things, and they're not all idiots.
unless they are also pirate LLMs, I don't see how any open source project could have pockets deep enough for the datacenters needed to seriously contend
If AI tools are as good as the CEOs claim, we should have no friction towards building multiple open source alternatives very quickly. Unless of course, they aren’t as good as they are being sold as, in which case, we have nothing to worry about.
What would the new open source projects do differently from the "old" ones? I don't think you can forbid model training on your code if your project is open source.
Also notable that after Anthropic’s acquisition of Bun, the vast majority of the communication and seeming effort from Jared on twitter seemed to shift to fixing issues with Claude Code.
I imagine many of these efforts benefitted the community as a whole, but it does make sense that the owners will have these orgs at least prioritize their own internal needs.
Hmm, from my perspective, an essential step to legitimize "vibecoding" in an enterprise setting is to have a clearly communicated best practice - and to have the LLM be hyper-optimized for that setting.
Like having a system prompt which takes care of the project structure, languages, libraries etc
It's pretty much the first step to replacing devs, which is their current "North Star" (to be changed to the next profession after)
Once they've nailed that, the devs become even more of a tool than they already are (from the perspective of the enterprise).
Honestly, for now they seem to be buying companies built around Open Source projects which otherwise didn't really have a good story to pay for their development long-term anyway. And it seems like the primary reason is just expertise and tooling for building their CLI tools.
As long as they keep the original projects maintained and those aren't just acqui-hires, I think this is almost as good as we can hope for.
Once you’re acquired you have to do what the boss says. That means prioritizing your work to benefit the company. That is often not compatible with true open source.
How frequently do acquired projects seriously maintain their independence? That is rare. They may have more resources but they also have obligations.
And this doesn’t even touch on the whole commodification and box out strategy that so many tech giants have employed.
> Once you’re acquired you have to do what the boss says.
Or quit, and take the (Open Source) project and community with you. Companies sometimes discover this the hard way; see, for instance, the story of how Hudson became Jenkins.
I think the good news here is that since OpenAI is a zombie company at this point, this particular acquisition shouldn't be too concerning - and from what I've seen, Anthropic has been building out in a direction of increased specialization. That said, vertical integration is as much of a problem as it always was, and it'd be excellent to see some sane merger oversight from the government.
What do you mean? MIT is essentially as open as you can get. The worst that can happen is that they will relicense, eventually, to force big users to pay, but when that happens everybody knows how it goes: some consortium of other big companies forks it and continues development as if nothing happened.
Step 1: discontinue the public repository, step 2: sell access to your GPL codebase.
The GPL (and even the AGPL) doesn't require you to make your modified source code publicly available (Debian explicitly considers licenses with this requirement non-free). The GPL only states you need to provide your customers with source code.
Sure, but it also allows your customers to modify the source code you provided, and distribute/sell it. With MIT they can simply relicense it and sell binary-only versions. The open-ness stops at that point.
But how does this work out in the long run, in the case of AGI?
If AGI becomes available, especially at the local and open-source level, shouldn't all of this be democratized, meaning that the AGI can simply roll out the tooling you need?
After all, AGI is what all these companies are chasing.
These companies are telling us software development is over. They are positioning themselves as the means of production. If you want to build anything, you do it through them. And since "software is solved," this is not a software acquisition but a user acquisition.
it never made sense to have devs all over the world doing the same task with tiny variation. Centralization was inevitable. LLMs might have been a step change but the trajectory was already set.
The exact opposite has started: every single developer with an LLM subscription now has 45 variations of any foundational tool and library, to cater to their weird use-case, because it was easier for the LLM to just modify it rather than adapting to it. Almost nobody upstreams such improvements (or they are too niche anyway).
The ecosystem will be this way for a while, if not the new normal.
I see it as: LLMs will solve the problems they create. They'll 10x your eccentric layout, modularity, monolith; they'll multiply your anti-DRY or try to make everything DRY. It's large-scale copy/paste/find/replace in a glorious crescendo of idiosyncratic programming.
Could you say the same about the Chrome browser? Google is using it to EEE the web (Embrace, Extend and Extend it till it's a monstrosity that nobody else can manage). That's pretty antagonistic. But did people change?
Sample size: 1, but I use Arc browser. It's still Chromium under the hood (and in maintenance mode now), though it's actually pretty good, and last I checked it had most of the baked-in Google stuff toggled off by default.
It's open source. Forking it is an option. And with AI, one-shotting a replacement is an option as well. Or having it make changes to your fork. Just because you can, doesn't mean you should do that of course.
The point is that the value of accumulated know-how and skill that led to things like uv isn't lost even if the worst happened to the company or the people behind it. I don't think there are many signs of that. I don't think they had much of a revenue model around providing OSS tools; that's problematic for a lot of VC-funded companies. An exit like this is as good as it gets.

OpenAI now pays them to do their thing. Investors are probably pretty happy. And we maybe get to skip the enshittification that seems inevitable with the whole IPO/hedge-fund circus that many VC-funded OSS companies end up being subjected to. Problem solved.

Congratulations to the team. They can continue doing what they love doing in a company that clearly loves all things Python. And who knows what they can do next when freed from having to worry about making investors happy?
Big companies and OSS have always had symbiotic relationships. Some of the largest contributors to open source are people working at big companies. OpenAI fits this tradition beautifully. Most big software companies actively contribute to OSS projects that are relevant or important to them - even very secretive companies like Apple, or profit-focused sharks like Oracle. Google, Meta, IBM. There are very few large software companies that aren't doing that. OSS without this very large-scale corporate sponsorship would just be a niche thing. Yes, there are a lot of small projects - I have a few of my own - but most of the big ones have some for-profit business behind them.
The real meta question is of course if we still need a lot of the people centered development tooling when AIs are starting to do essentially all of the heavy lifting in terms of coding. I think we might need very different tools soon.
This is a logical conclusion of most open source tools in a capitalist economy, it's been this way for decades.
Equivalent or better tools will pop up eventually, heck if AI is so fantastic then you could just make one of your own, be the change you want to see in the world, right?
Of course they're trying to capture existing tech stacks. The models themselves are plateauing (most advancement is coming from the non-LLM parts of the software), and they took too much VC money, so they need to make some of it back. Gobbling up wafers, software, etc. is the new plan for spending the money and trying to prevent catastrophic losses.
Ah yes, it was impossible to write software before these companies existed, and the only way to write software is via the products from these companies. They sure do control the "means of production".
That's the problem, isn't it? OpenAI has nothing to offer, and neither do many of the other AI companies - nothing that's not replaceable, at least.
They need to gobble up these other actors in an attempt to own something that someone might actually pay for. Anthropic seems successful with Claude Code and can drive sales that way.
OpenAI struggles to sell ChatGPT as a standalone offering, so they need products and services that will consume it, allowing them to push sales via that route. They also need to control those products, because LLMs are pretty much plug-able at this point, and that's no good. OpenAI needs a lock in.
As the cost of building software trends toward $0, I don't see how one can realistically own "the" means of production rather than "a" means. Any competitor can generate a similar product cheaply.
Explain to me how this is any different than Microsoft, Blackrock, Google, Oracle, Berkshire or any other giant company acquiring their way to market share?
If our corporate overlords are gonna buy up all that is good, I'd rather it have been Anthropic and not that weirdo humans-need-food-and-care-for-inference-so-LLMs-aren't-that-power-hungry Sam Altman. Man, that guy is weird.
Oh well. They’ll hopefully get options and make millions when the IPO happens. Everyone eventually sells out. Not everyone can be funded by MIT to live the GNU maximalist lifestyle.
What language is universally better than Python? I don't think Python is perfect, but it is definitely one of the best languages out there. It is elegant and it has a huge ecosystem of libraries, frameworks and tutorials. There is a lot of battle-tested software in Python that is running businesses.
It's fast enough for many use cases. That doesn't mean that there is no room for optimization, but this is far less a deciding factor these days.
> it's not type safe
You can do static analysis with Mypy and other tools.
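As a tiny illustration (a made-up `mean` function, not anything from the thread): the hints below are plain Python annotations that cost nothing at runtime, and a checker like mypy uses them to reject a bad call before the code ever runs.

```python
def mean(values: list[float]) -> float:
    # Ordinary Python; the annotations are inert at runtime,
    # but mypy reads them to verify every call site statically.
    return sum(values) / len(values)

print(mean([1.0, 2.0, 3.0]))  # -> 2.0
# mean("oops")  # would blow up at runtime, but mypy flags the
#               # str-vs-list[float] mismatch before you ever run it
```

It's opt-in rather than enforced by the language, which is the real complaint, but in practice a strict mypy or pyright config in CI gets you most of the way there.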
> it has no real concurrency.
There's different mechanisms for running things concurrently in Python. And there's an active effort to remove the GIL. I also have to ask: What is "real" concurrency?
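On the GIL point: for I/O-bound work, blocking calls release the GIL, so plain stdlib threads already overlap waits just fine. A minimal sketch, with `time.sleep` standing in for real network or disk I/O:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def fake_io(n: int) -> int:
    time.sleep(0.2)  # sleep releases the GIL, like real I/O would
    return n * n

start = time.perf_counter()
with ThreadPoolExecutor(max_workers=4) as pool:
    # Four 0.2s waits run concurrently, so this takes ~0.2s total,
    # not the 0.8s a serial loop would need.
    results = list(pool.map(fake_io, range(4)))
elapsed = time.perf_counter() - start

print(results)  # -> [0, 1, 4, 9]
```

CPU-bound parallelism is the genuinely weak spot, which is where `multiprocessing` and the free-threading work come in.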
Admittedly, the things you mention are not Python's strongest points. But they are far from being dealbreakers.
> cost of significantly better languages is essentially free
Is it? We still need meatspace humans to vet what these AI agents produce. Languages like C++ / Rust etc still require huge cognitive overhead relative to Python & that will not change anytime soon.
Unless the entire global economy can run on agents with minimal human supervision someone still has to grapple with the essential complexity of getting a computer to do useful things. At least with Python that complexity is locked away within the CPython interpreter.
Also an aside, when has a language ever gotten traction based solely on its technical merits? Popularity is driven by ease-of-use, fashion, mindshare, timing etc.
Yeah that's sort of fair today, although we have switched over most of our org to Rust and it hasn't been much of a problem. The LLM can usually explain small parts of code with high accuracy if you are unsure.
Overall the switch has been very much loved. Everything is faster and more stable, we haven't seen much of a reduction in output
Your stance is aggressive and provocative, but no less so than the challenge AI poses to software developers in general. I think what you say should be seriously entertained.
And as someone who loves Python and has written a lot of it, I tend to agree. It's increasingly clear the way to be productive with AI coding and the way to make it reliable is to make sure AI works within strong guardrails, with testsuites, etc. that combat and corral the inherent indeterminism and problems like prompt injection as much as possible.
Getting help from the language - having the static tooling be as strict and uncompromising as possible, and delegating having to deal with the pain to AI - seems the right way.
It's such a laughable take. First of all, a language never gets popular simply because it's good. Actually, the most-used languages are usually terrible.[0]
Secondly, it's non-factual: Python's market share grew in 2025[1][2][3], probably driven by AI demand.
Yes most languages are terrible except the ones that are actually performant and good like Rust.
Do you really think AI agents of the future will be coding in Python??? What advantage would that possibly give them? That's the only laughable take here
I think there are many examples throughout history of better performing options not displacing counterparts. I think, really, the only "laughable" thing here is the ignorance on display that's riding atop the arrogance.
Rust is great. But AI isn't displacing Python anytime soon.
More so, it sucks that Astral's been bought by a company with such a horrible leader at the helm.
That's an interesting take, but I'm not sure 'easy to write' is the only advantage.
There is also a really good ecosystem of libraries, especially for scientific computing. My experience has been that Claude can write good C++ code, but it's not great at optimization. So curated Python code can often be faster than an AI's reimplementation of an algorithm in C++.
I feel like this is a relatively hot take. Python has advantages beyond being easy to write. It's simple. It can do just about anything any other language can do. It's not the most performant on its own, but it's performant enough for 99% of use cases, and in the 1% you can write a new or use an existing C library instead. Its simplicity and ease of adoption make python very well represented in the training data.
If I ask an LLM or agentic AI to build something and don't specify what language to use, I'd wager that it'll choose python most of the time. Casual programmers like academics or students who ask ChatGPT to help them write a function to do X are likely to be using Python already.
I'm not a Python evangelist by any means but to suggest that AI is going to kill Python feels like a major stretch to me.
EDIT: when I say that Python can do anything any other language can do, that's with the adage in mind: Python is the second-best language for every task.
Let's see how it plays out. My current assumption is that degrees and CVs will become more important in the workplace. Things like good architecture, maintainability, coherence, they are all hard to measure. A true 10x developer without a college degree will lose to the PhD without any hard skills. And these types only speak python, so they will instruct the AI to type python. Or maybe they'll vibecode rust and elixir, I don't know. But the cynic in me strongly thinks this will make all our bullshitty jobs way more bullshitty, and impostors will profit the most.
Absolutely agree with this. I'm hoping that with the advent of agentic coding, Rust dominates in the next few years. It may even cause Wasm to become dominant as the new "applet" language.