Hacker News | zahlman's comments

When you notice that it's LLM prose, it's terrible, yes.

I have seen some examples of what I thought was very impressive writing that the "author" subsequently confessed was at least heavily AI-assisted. It does take quite a bit of prompting work, apparently. But a few people I've encountered (probably they trend neurodivergent) seem to feel like LLMs have given them a voice, at least in written communication, when they would otherwise struggle to write anything at all (despite adequate skill at English).

At any rate, overall I agree with this piece, and it resonates with thoughts I've been having — although I haven't felt compelled to elaborate on them at this level. There are too many things I'd like to say about AI to give them all this kind of love and attention.


> It's a bit cleaner.

That's pretty much the reason why. Raymond Hettinger explains the philosophy well while discussing the `random` standard library module: https://www.youtube.com/watch?v=Uwuv05aZ6ug

I feel like much of this has been forgotten of late, though. From what I've seen, it's really quite hard to get anything added to the standard library unless you're a core dev who's sufficiently well liked among other core devs, in which case you can pretty much just do it. Everyone else will (understandably) be put through a PhD thesis defense, then asked to try the idea out as a PyPI package first (and somehow also popularize the package), and then if it somehow catches on that way, get declined anyway because it's easy for everyone to just get it from PyPI (see e.g. Requests).

I personally was directed to PyPI once when I was proposing new methods for the builtin `str`. Where the entire point was not to have to import or instantiate anything.
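For illustration (this isn't the actual proposal, which I'm not reproducing here), `str.removeprefix()` shows the pattern: before it became a builtin method in Python 3.9 (PEP 616), the same logic had to live in an importable helper, which is exactly the friction the proposal wanted to avoid:

```python
# Illustrative only: the kind of helper that must be imported when a
# string operation isn't a builtin method. (str.removeprefix() itself
# was added as a builtin in Python 3.9 via PEP 616.)
def remove_prefix(s: str, prefix: str) -> str:
    # Strip `prefix` from the front of `s`, if present.
    return s[len(prefix):] if s.startswith(prefix) else s

# As a library function, every call site needs an import first; as a
# builtin method it would just be "py-package".removeprefix("py-").
print(remove_prefix("py-package", "py-"))  # → package
```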


> Perhaps the strongest argument nowadays for QA is that with AI, automated verification is a leverage maximizer.

That sure is a sequence of words.

It comes across to me like the emphasized part is arguing against what it's supposed to be arguing for.

Or is the premise that the developers somehow can't run the AI verifier?


> If the version shown is 4.87.1 or 4.87.2, treat the environment as compromised.

More generally speaking one would have to treat the computer/container/VM as compromised. User-level malware still sucks. We've seen just the other day that Python code can run at startup time with .pth files (and probably many other ways). With a source distribution, it can run at install time, too (see e.g. https://zahlman.github.io/posts/python-packaging-3/).
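To make the .pth point concrete, here's a minimal sketch (the environment-variable name is made up for the demo): `site.py` executes any line of a .pth file that starts with `import`, and calling `site.addsitedir` by hand just does explicitly what the interpreter does automatically at startup.

```python
import os
import site
import tempfile

# Write a .pth file into a fresh directory. Lines beginning with
# "import " are exec()'d verbatim by site.py when the directory
# is processed as a site dir.
d = tempfile.mkdtemp()
with open(os.path.join(d, "demo.pth"), "w") as f:
    f.write("import os; os.environ['PTH_RAN'] = '1'\n")

# Normally this happens automatically at interpreter startup for
# site-packages; we trigger it manually for the demo.
site.addsitedir(d)

print(os.environ.get("PTH_RAN"))  # → 1
```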

> What to Do If Affected

> Downgrade immediately:

> pip install telnyx==4.87.0

Even if only the "environment" were compromised, that includes pip in the standard workflow. You can use an external copy of pip instead, via the `--python` option (and also avoid duplicating pip in each venv, wasting 10-15MB each time, by passing `--without-pip` at creation). I touch on both of these in https://zahlman.github.io/posts/python-packaging-2/ (specifically, showing how to do it with Pipx's vendored copy of pip). Note that `--python` is a hack that re-launches pip using the target environment; pip won't try to import things from that environment, but you'd still be exposed to .pth file risks.
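A minimal sketch of the `--without-pip` half, using the stdlib `venv` module's equivalent (`EnvBuilder(with_pip=False)`; the directory name here is arbitrary):

```python
import pathlib
import tempfile
import venv

# Create a venv without vendoring its own pip (the programmatic
# equivalent of `python -m venv --without-pip ...`).
target = pathlib.Path(tempfile.mkdtemp()) / "demo-venv"
venv.EnvBuilder(with_pip=False).create(target)

# The venv gets a python executable but no pip of its own; an external
# pip would then operate on it via (needs network, so shown only):
#   pip --python <target>/bin/python install telnyx==4.87.0
names = {p.name for p in (target / "bin").iterdir()}
print("pip" in names)  # → False
```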


A nice thing about VMs is that it's easy to keep a daily snapshot and roll back to before the compromise event.

> Does anyone actually do this?

Yes (but not for a browser). My terminal windows are 80x24, pretty much always. I do this today on Linux, I've done it through multiple versions of Windows, and I did it in my childhood on a 9" B&W "luggable" Mac screen.

I just like it, okay?


> uv is just a package manager that actually does its job for resolving dependencies.

Pip resolves dependencies just fine. It just also lets you try to build the environment incrementally (which is actually useful, especially for people who aren't "developers" on a "project"), and is slow (for a lot of reasons).


> I think the python community, and really all package managers, need to promote standard cache servers as first class citizens as a broader solution to supply chain issues. What I want is a server that presents pypi with safeguards I choose. For instance, add packages to the local index that are no less than xxx days old (this uv feature), but also freeze that unless an update is requested or required by a security concern, scan security blacklists to remove/block packages and versions that have been found to have issues. Update the cache to allow a specific version bump. That kind of thing.

FWIW, https://pypi.org/project/bandersnatch/ is the standard tool for setting up a PyPI mirror, and https://github.com/pypi/warehouse is the codebase for PyPI itself (including the actual website, account management etc.).

If "my own curated pypi" extends as far as a whitelist of build artifacts, you can just make a local "wheelhouse" directory of those, and pass `--no-index` and `--find-links /path/to/wheelhouse` in your `pip install` commands (I'm sure uv has something analogous).


> and roughly 700,000,000,000 cubic meters of beach on Earth.

I wonder how they determine the average depth of beach sand?
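One crude way to sanity-check the quoted figure — every number below is an illustrative assumption I'm making up for the arithmetic, not sourced data:

```python
# Back-of-envelope: volume ≈ coastline length × sandy fraction × width × depth.
coastline_m = 1.0e9     # ~1,000,000 km of coastline (assumed)
sandy_fraction = 0.3    # fraction of coastline that is sandy beach (assumed)
width_m = 50.0          # typical beach width (assumed)
depth_m = 50.0          # average sand depth (assumed)

volume = coastline_m * sandy_fraction * width_m * depth_m
print(f"{volume:.1e} m^3")  # → 7.5e+11 m^3, same order as the quoted 7e11
```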


I imagine you’d go digging on a few sample beaches and then make an assumption about how representative those beaches are.

Think where we’d be without sand!


> It's unfair because it's a different algorithm with fundamentally different memory characteristics. A fairer comparison would be to stream the file in C++ as well and maintain internal state for the count.

The C++ code is still building a tally by incrementing keys of a hash map one at a time, and then dumping (reversed) key/value pairs out into a list and sorting. The file is small and the Python code is GCing the `line` each time through the outer loop. At any rate it seems like a big chunk of the Python memory usage is just constant (sort of; stuff also gets lazily loaded) overhead of the Python runtime, so.
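For reference, a minimal sketch of the streaming tally that (as I read them) both versions are doing — the stand-in input file and its contents are made up:

```python
import os
import tempfile
from collections import defaultdict

# Tiny stand-in for the benchmark's input file.
path = os.path.join(tempfile.mkdtemp(), "words.txt")
with open(path, "w") as f:
    f.write("the cat sat on the mat\nthe end\n")

# Stream line by line, bumping per-word counts; each `line` object
# becomes garbage after its iteration of the outer loop.
counts = defaultdict(int)
with open(path) as f:
    for line in f:
        for word in line.split():
            counts[word] += 1

# Dump the pairs out and sort by descending count.
ranked = sorted(counts.items(), key=lambda kv: kv[1], reverse=True)
print(ranked[0])  # → ('the', 3)
```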


Hmm? Which code are you looking at?
