When you notice that it's LLM prose, it's terrible, yes.
I have seen some examples of what I thought was very impressive writing that the "author" subsequently confessed was at least heavily AI-assisted. It does take quite a bit of prompting work, apparently. But a few people I've encountered (probably they trend neurodivergent) seem to feel like LLMs have given them a voice, at least in written communication, when they would otherwise struggle to write anything at all (despite adequate skill at English).
At any rate, I agree with this piece overall, and it resonates with thoughts I've been having, although I haven't really felt compelled to elaborate on them at this level. There are too many things I'd like to say about AI to give them this kind of love and attention.
That's pretty much the reason why. Raymond Hettinger explains the philosophy well while discussing the `random` standard library module: https://www.youtube.com/watch?v=Uwuv05aZ6ug
I feel like much of this has been forgotten of late, though. From what I've seen, it's really quite hard to get anything added to the standard library unless you're a core dev who's sufficiently well liked among the other core devs, in which case you can pretty much just do it. Everyone else will (understandably) be put through a PhD thesis defense, then be asked to try the idea out as a PyPI package first (and somehow also popularize the package), and then, if it somehow catches on that way, get declined anyway because it's easy for everyone to just get it from PyPI (see e.g. Requests).
I personally was directed to PyPI once when I was proposing new methods for the builtin `str`, where the entire point was not to have to import or instantiate anything.
> If the version shown is 4.87.1 or 4.87.2, treat the environment as compromised.
More generally speaking one would have to treat the computer/container/VM as compromised. User-level malware still sucks. We've seen just the other day that Python code can run at startup time with .pth files (and probably many other ways). With a source distribution, it can run at install time, too (see e.g. https://zahlman.github.io/posts/python-packaging-3/).
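To make the .pth mechanism concrete: any line in a .pth file that begins with `import` is executed by `site.py` at interpreter startup. A minimal sketch, using `site.addsitedir` to trigger the same processing on a temporary directory (the filename and environment variable are placeholders):

```python
import os
import site
import tempfile

# A .pth file dropped into any site directory gets processed at startup.
sitedir = tempfile.mkdtemp()
pth_path = os.path.join(sitedir, "sneaky.pth")
with open(pth_path, "w") as f:
    # Lines starting with "import " are exec'd verbatim by site.py;
    # this one just sets an env var, but it could run anything.
    f.write("import os; os.environ['PWNED'] = '1'\n")

# Simulates what site.py does for site-packages at interpreter startup.
site.addsitedir(sitedir)
print(os.environ.get("PWNED"))  # prints 1
```

The same trick works against a venv's `site-packages`, which is why a compromised install leaves no safe way to "just run Python" in that environment.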
> What to Do If Affected
> Downgrade immediately:
> pip install telnyx==4.87.0
Even if only the "environment" were compromised, that includes pip in the standard workflow. You can use an external copy of pip instead, via the `--python` option (and also avoid duplicating pip in each venv, wasting 10-15MB each time, by passing `--without-pip` at creation). I touch on both of these in https://zahlman.github.io/posts/python-packaging-2/ (specifically, showing how to do it with Pipx's vendored copy of pip). Note that `--python` is a hack that re-launches pip using the target environment; pip won't try to import things from that environment, but you'd still be exposed to .pth file risks.
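As a sketch of that workflow (venv name is arbitrary; `--python` needs pip 22.3+):

```shell
# Create a venv with no bundled copy of pip (saves ~10-15 MB per venv).
python3 -m venv --without-pip demo-env

# Drive it with an external pip instead; pip re-launches itself under
# the venv's interpreter, e.g.:
#   pip --python demo-env/bin/python install some-package
```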
Yes (but not for a browser). My terminal windows are 80x24, pretty much always. I do this today on Linux, I've done it through multiple versions of Windows, and I did it in my childhood on a 9" B&W "luggable" Mac screen.
> uv is just a package manager that actually does its job for resolving dependencies.
Pip resolves dependencies just fine. It just also lets you try to build the environment incrementally (which is actually useful, especially for people who aren't "developers" on a "project"), and is slow (for a lot of reasons).
> I think the python community, and really all package managers, need to promote standard cache servers as first class citizens as a broader solution to supply chain issues. What I want is a server that presents pypi with safeguards I choose. For instance, add packages to the local index that are no less than xxx days old (this uv feature), but also freeze that unless an update is requested or required by a security concern, scan security blacklists to remove/block packages and versions that have been found to have issues. Update the cache to allow a specific version bump. That kind of thing.
If "my own curated pypi" extends as far as a whitelist of build artifacts, you can just make a local "wheelhouse" directory of those, and pass `--no-index` and `--find-links /path/to/wheelhouse` in your `pip install` commands (I'm sure uv has something analogous).
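A sketch of that setup (the package name and version are placeholders):

```shell
# A local "wheelhouse" holding only artifacts you've vetted.
mkdir -p wheelhouse

# Populate it once with exactly the files you've audited, e.g.:
#   pip download --dest wheelhouse some-package==1.2.3
# Then install with the index disabled, so only whitelisted files resolve:
#   pip install --no-index --find-links wheelhouse some-package==1.2.3
```

With `--no-index`, pip never contacts PyPI at all, so a malicious new release can't sneak in.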
> It's unfair because it's a different algorithm with fundamentally different memory characteristics. A fairer comparison would be to stream the file in C++ as well and maintain internal state for the count.
The C++ code is still building a tally by incrementing keys of a hash map one at a time, and then dumping (reversed) key/value pairs out into a list and sorting. The file is small and the Python code is GCing the `line` each time through the outer loop. At any rate it seems like a big chunk of the Python memory usage is just constant (sort of; stuff also gets lazily loaded) overhead of the Python runtime, so.
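The tally pattern both versions share can be sketched in a few lines of Python (the input and tokenization here are toy placeholders):

```python
from collections import Counter  # a hash map with increment built in


def top_words(lines):
    """Tally words one at a time, then sort the (word, count) pairs."""
    tally = Counter()
    for line in lines:            # stream: only one line held at a time
        for word in line.split():
            tally[word] += 1      # increment a hash-map key, as in the C++
    # Dump the pairs out and sort by descending count, as the C++ does.
    return sorted(tally.items(), key=lambda kv: -kv[1])


print(top_words(["the cat sat", "the cat"]))
# -> [('the', 2), ('cat', 2), ('sat', 1)]
```

Peak memory here is dominated by the tally itself (plus the interpreter's fixed overhead), not by the file, in either language.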