Hacker News

The graves of the untold millions (billions?) of systems compromised by malware injected through statically linked zlib and libjpeg blobs grow restless...

The value of dynamically linking common system libraries isn't performance (not anymore, at least); it's that your friendly distro maintainers can do a far, far better job than you of maintaining that software as bugs are fixed over time.

Now, Go isn't subject to the severity of bugs that C is, and its package management may be slick enough to make straightforward recompilation the default deployment mode. But if that's true, it's true in spite of the drawbacks of static linkage, not because of them.



I think I have a (slightly futuristic and not-yet-common) worldview where this is ridiculous. I agree it's real in the present world though.

If you assume that:

- people stop using a full OS to deploy their apps (in a containerized world, for example), so there's only a tiny attack surface,

- that only the application's dependencies exist (no reliance on distro-provided libraries) and are fully specified (e.g. in a Gemfile.lock or equivalent),

- that a dev team is responsible for knowing about security vulns in their dependencies when they happen,

- that things like libjpeg are run in separate services which are completely locked down and don't have the ability to compromise another system,

Then this isn't an issue.

Lots of big ifs there, I'm aware.


Proxying all image decoding to a separate process-per-image isn't likely to be practical anytime soon. Browsers already struggle with too many processes even with process-per-tab.


It's not so crazy. Proxying all video decode to a separate process is, in fact, the deployed architecture on the most popular mobile OS in the world.

Sure, there are performance implications. So you come up with complicated meta-streaming APIs to put as much of the intelligence into that "mediaserver" as possible. And on the other side you come up with complicated buffer sharing architectures and APIs to make sure that the output can go straight to the screen instead of back through the app. Oh, and there are (cough) "security" concerns too (which is the whole point to doing this all in a system process), so you need to drill those bits down not just through the userspace but into the driver and out through the HDCP pipeline nonsense below the kernel (which of course needs userspace helpers in most architectures, so up it all comes again through different drill holes...).

Er, rather, it is crazy, but for different reasons than you posit. You could totally make it run fast if you had to.


Sure, but by and large there aren't pages with 1,000 videos on them. There are, however, pages with 1,000 images on them.


You seriously think that if it were driven by DRM mania, they wouldn't have stuffed a DOM parser into a system server?

Again, I'm not saying it isn't crazy, just that it's possible, and that equivalent insanity is already afoot.


I think we live in different worlds. It seems all your examples are downloadable software. I live in a cloudy/SaaS/distributed systems world.


> - that a dev team is responsible for knowing about security vulns in their dependencies when they happen,

This is the part that falls down though. It sounds sane, but in the real world the "dev team" was a contractor hired for a one-off project six years ago, and the developers themselves were laid off last year when it folded.

Or the "dev team" is an open source website that hasn't been updated in three years, but hey -- they have this nice windows binary for you to pull and use and it still installs and works fine.

Just don't.



