
I've seen countless attempts to replace "docker build" and the Dockerfile. They often want to give tighter control over the build, sometimes binding tightly to a package manager. But the Dockerfile has endured because of its flexibility. Starting from a known filesystem/distribution, copying some files in, and then running arbitrary commands within that filesystem mirrors so nicely what operations teams have been doing for a long time. And as ugly as that flexibility is, I think it will remain the dominant solution for quite a while longer.

> But the Dockerfile has continued because of its flexibility.

The flip side is that the world still hasn’t settled on a language-neutral build tool that works for all languages. Therefore we resort to running arbitrary commands to invoke language-specific package managers. In an alternate timeline where everyone uses Nix or Bazel or some such, docker build would be laughed out of the room.


As a Nix evangelist, I have to say: Nix is really not capable of replacing language-specific package managers.

> running arbitrary commands to invoke language-specific package managers.

This is exactly what we do in Nix. You see this everywhere in nixpkgs.

What sets Nix apart from Docker is not that it works well at a finer granularity, i.e. source-file level, but that it has real hermeticity and thus reliable caching. That is, we also run arbitrary commands, but they don't get to talk to the internet and thus don't get to e.g. `apt update`.

In a Dockerfile, you can `apt update` all you want, and this makes the build layer cache a very leaky abstraction. This is merely an annoyance when working on an individual container build but would be a complete dealbreaker at linux-distro-scale, which is what Nix operates at.
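A minimal sketch of the leak (base image and package chosen arbitrarily):

```dockerfile
FROM debian:bookworm
# This line is cached after the first build. Later builds replay the
# cached layer even though the mirror's package index has moved on, so
# two machines (or one machine before/after cache eviction) can get
# different package versions from the "same" Dockerfile.
RUN apt-get update && apt-get install -y curl
```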


Fundamentally speaking, the key point is really just hermeticity and reliable caching. Running arbitrary commands is never the problem anyways. What makes gcc a blessed command but the compiler for my own language an "arbitrary" command anyways?

And in languages with insufficient abstraction power like C and Go, you often need to invoke a code generation tool to generate the sources; that's an extremely arbitrary command. These are just non-problems if you have hermetic builds and reliable caching.


I mean, I guess at a theoretical level. In practice, it's just not a large problem.

Well, arbitrary granularity is possible with Nix, but the build systems of today simply do not utilise it. I've for example written an experimental C build system for Nix which handles all compiler orchestration and it works great, you get minimal recompilations and free distributed builds. It would be awesome if something like this was actually available for major languages (Rust?). Let me know if you're working on or have seen anything like this!

A problem with that is that Nix is slow.

On my nixos-rebuild, building a simple config file for /etc takes much longer than a typical gcc invocation to compile a C file. I suspect that is due to something in Nix's Linux sandbox setup being slow, or at least I remember some issue discussions around that; I think the worst part of it got improved, but it's still quite slow today.

Because of that, it's much faster to do N build steps inside 1 nix build sandbox, than the other way around.

Another issue is that some programming languages have build systems that are better than the "oneshot" compilation used by most languages (one compiler invocation per file producing one object file, e.g. `gcc -c x.c -o x.o`). For example, Haskell has `ghc --make`, which compiles the whole project in one compiler invocation, with very smart recompilation avoidance (per-function granularity; comment changes don't trigger recompilation, etc.), avoidance of repeated steps (e.g. parsing/deserialising inputs to a module's compilation only once and keeping them in memory), and avoidance of compiler startup cost.

Combining that with per-file general-purpose hermetic build systems is difficult and currently not implemented anywhere as far as I can tell.

To get something similar with Nix, the language-specific build system would have to invoke Nix in a very fine-grained way, e.g. to get "avoidance of codegen if only a comment changed", Nix would have to be invoked at each of the parser/desugar/codegen parts of the compiler.

I guess a solution to that is to make the oneshot mode much faster by better serialisation caching.


What if you set up a sandbox pool? Maybe I'm rambling, I haven't read much Nix source code, but that should allow for only a couple of milliseconds of latency on these types of builds. I have considered forking Nix to make this work, but in my testing with my experimental build system, I never experienced much latency in builds. The trick to reduce latency in development builds is to forcibly disable the network lookups which normally happen before Nix starts building a derivation:

    preferLocalBuild = true;
    allowSubstitutes = false;
Set these in each derivation. The most impactful thing you could do in a Nix fork according to my testing in this case is to build derivations preemptively while you are fetching substitutes and caches simultaneously, instead of doing it in order.

If you are interested in seeing my experiment, it's open on your favourite forge:

https://github.com/poly2it/kein



I use crane, but it does not have arbitrary granularity. The end goal would be something which handled all builds in Nix.

Reminds me of the “Electric cars in reverse” video where the guy envisions a world where all vehicles are electric and tries to make the argument for gas engines.

Link?

try searching for 'Rory Sutherland: What If Petrol Cars Were Invented In 2025'

Actual article was in the Evening Standard, but like all things Rory Sutherland, it’s worth watching him tell the story: https://youtu.be/OTOKws45kCo?si=jbTdx3YCGkZv3Akb

For those who want more of him, check out his classic TED talk from decades ago: “Lessons from an ad man”

https://www.ted.com/talks/rory_sutherland_life_lessons_from_...


There is some truth to it; however, in production it is simple: either there is a working deployment or there isn't.

Therefore I would rephrase your remarks as an upside: let others continue scratching their heads while others deploy working code to PROD.

I am glad there is a solution like Docker - with all its flaws. Nothing is flawless; there is always just another sub-optimal solution outweighing the others by a large margin.


Popularity of a technology usually isn’t perfectly correlated with how good it is.

> let others continue scratching their heads while others deploy working code to PROD.

You make it sound like when docker build arrived on the scene, a cross-language hermetic build tool was still a research project. That’s just untrue.



There are some hurdles preventing that flow from achieving reproducible builds. As the bad guys get more sophisticated, it's going to become more and more important that one party can say "we trust this build hash" and a separate party to say "us too".

That's not going to work if the two parties get different hashes when they build the image, and they will keep getting different hashes as long as file modification timestamps (and other such hazards) are part of what gets hashed.


Recent versions of BuildKit have added support for SOURCE_DATE_EPOCH. I'd been making images reproducible before that with my own tooling, using regctl image mod [1] to backdate the timestamps.

It's not just the timestamps you need to worry about. Tar needs to be consistent about numeric uid versus username, gzip compression depends on the implementation and settings, and the JSON encoding can vary by implementation.

And all this assumes the commands being run are reproducible themselves. One issue I encountered there was how alpine tracks their package install state from apk, which is a tar file that includes timestamps. There are also timestamps in logs. Not to mention installing packages needs to pin those package versions.
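To make the tar side concrete, here's a sketch using Python's stdlib `tarfile`/`gzip` that normalizes exactly those fields (timestamps, numeric uid/gid instead of names, plus entry order and the gzip header timestamp); the file names below are made up:

```python
import gzip
import io
import tarfile

def deterministic_tar_gz(files):
    """Build a .tar.gz whose bytes depend only on the file contents,
    not on mtimes, uids, usernames, entry order, or the gzip header's
    own timestamp field."""
    buf = io.BytesIO()
    # mtime=0 keeps gzip's header timestamp out of the output.
    with gzip.GzipFile(fileobj=buf, mode="wb", mtime=0) as gz:
        with tarfile.open(fileobj=gz, mode="w") as tar:
            for name in sorted(files):      # fixed entry ordering
                data = files[name]
                info = tarfile.TarInfo(name=name)
                info.size = len(data)
                info.mtime = 0              # no file timestamps
                info.uid = info.gid = 0     # numeric ids only
                info.uname = info.gname = ""  # no user/group names
                tar.addfile(info, io.BytesIO(data))
    return buf.getvalue()

a = deterministic_tar_gz({"etc/motd": b"hello\n", "bin/run": b"#!/bin/sh\n"})
b = deterministic_tar_gz({"bin/run": b"#!/bin/sh\n", "etc/motd": b"hello\n"})
assert a == b  # byte-identical regardless of insertion order
```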

All of this is hard, and the Dockerfile didn't make it easy, but it is possible. With the right tools installed, reproducing my own images has a documented process [2].

[1]: https://regclient.org/cli/regctl/image/mod/

[2]: https://regclient.org/install/#reproducible-builds


> I've been making the images reproducible before that with my own tooling

I've been doing the same, using https://github.com/reproducible-containers/repro-sources-lis... . It allows you to precisely pin the state of the distro package sources in your Docker image, using snapshot.ubuntu.com & friends, so that you can fearlessly do `apt-get update && apt-get install XYZ`.


Does any of that matter if you’re not auditing the packages you install?

I’m more concerned about sources being poisoned over the build processes. Xz is a great example of this.


Both are needed, but you get more bang for your buck focusing on build security than on audited sources. If the build is solid then it forces attackers to work in the open where all auditors can work together towards spoiling the attack.

If you flip it around and instead have magically audited sources but a shaky build, then perhaps a diligent user can protect themselves, but they do so by doing their own builds, which means they're unaware that the attack even exists. This allows the attacker to just keep trying until they compromise somebody who is less diligent.

Getting caught requires a user who analyses downloaded binaries in something like Ghidra... who does that when it's much easier to just build them from sources instead? (answer: vanishingly few people). And even once the attacker is found out, they can just hide the same payload a bit differently, and the scanners will stop finding it.

Also, "maybe the code itself is malicious" can only ever be solved the hard way, whereas we have a reasonable hope of someday providing an easy solution to the "maybe the build is malicious" problem.


I'm not sure if I was just holding it wrong, but I couldn't create images reproducibly using Docker. (I could get this working with Podman/buildah however.)

The lack of docker registry-like solutions really does seem to be the chokepoint for many alternatives.

Personally I love using mkosi, and while it has all the composability and deployment options I'd care for, it's clear not everyone wants to build starting from a blank set of OS templates.


There are lots of alternative container image registries: quay, harbor, docker's open sourced one, the cloud providers, github, etc.

Or do you mean a replacement for docker hub?


Nix is exceptionally good at making docker containers.

Yes but then you're committed to using Nix which doesn't work so well the moment you need some software not packaged by Nix.

Want to throw a requirements.txt in there? No no, why would you even ask that? Meanwhile docker says yeah sure just run pip install, why should I care?


LLMs are getting very good at packaging software using Nix.

Then you're committing to maintaining a package for that software.

Like all LLM boosters, you've ignored the fact that the largest time sink in many kinds of software is not initial development, but perpetual maintenance.


It's not materially any different from maintaining lines in a Dockerfile.

It is materially different compared to "maintaining" the line 'RUN apt-get -y install foobar'

Is it though? If the way that I’m going to edit those files is by typing the same natural language command into Claude code, and the edit operation to maintain it takes 20 seconds instead of 10, to me that seems pretty materially the same

Yes, it is

How so?

This. I wouldn't have touched Nix when you needed someone who was really good at Nix to keep it working, but agents make it viable to use in a number of places.

Packaging for nix is exceptionally easy once you learn it. And once something is packaged, it's solved for all, it's not going to randomly break.

If you care about getting it to work with minimal effort right now more than about it being sustainable later, then sure.


> Packaging for nix is exceptionally easy once you learn it

Most of the complaints I've seen about Nix are around documentation, so "once you learn it" might be the larger issue.


I don't know if I'd say it's "easy". The Python ecosystem in particular is quite hard to get working in a hermetic way (Nix or otherwise). Multiple attempts at making Python easy to package with Nix have come and gone over the years.

I use software from pretty much every language with Nix. And I package it myself too when needed. Including Python often :)

Packaging software with Nix is easier than with any other system TBH, and it just seems to be getting easier.

Nix doesn't make sense if all you're going to use it for is building Docker images. It only makes sense if you're all in in the first place. Then Docker images are free.

Does Nix do one layer per dependency? Does it run into >=128 layers issues?

In Spack [1] we do one layer per package; it's appealing, but I never checked if besides the layer limit it's actually bad for performance when doing filesystem operations.

[1] https://spack.readthedocs.io/en/latest/containers.html


This post has a great overview: https://grahamc.com/blog/nix-and-layered-docker-images/

tl;dr: it will put one package per layer as much as possible and compress everything else into the final layer. It uses the dependency graph to implement a reasonable heuristic for what stays fine-grained and what gets combined.
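For reference, the entry point in nixpkgs looks roughly like this (attribute names per the nixpkgs `dockerTools` docs; `maxLayers` is what keeps you under the ~127-layer practical limit):

```nix
pkgs.dockerTools.buildLayeredImage {
  name = "hello";
  contents = [ pkgs.hello ];
  # One store path per layer until the budget runs out; everything
  # that doesn't fit gets merged into the final layer.
  maxLayers = 100;
}
```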


That layering algorithm is also configurable, though I couldn’t really understand how to configure it and just wrote my own post processing to optimize layering for my internal use case. I believe I can open source this w/o much work.

The layer layout is just a json file so it can be post processed w/o issue before passing to the nix docker builders


Especially if you use nix2container to take control over the layer construction and caching.

I'm not sure if this is what you mean but in some ways it would be nice to have tighter coupling with a registry. Docker build is kind of like a multiplexer - pull from here or there and build locally, then tag and push somewhere else. Most of the time all pulls are from public registries, push to a single private one and the local image is never used at all.

It seems overly orthogonal for the typical use case but perhaps just not enough of an annoyance for anyone to change it.



> the Dockerfile has continued because of its flexibility

I wish we had standardized on something other than shell commands, though. Puppet or terraform or something more declarative would have been such a better alternative to “everyone cargo cults ‘RUN apt-get upgrade’ onto the top of their dockerfiles”.

Like, the layer/stage/caching behavior is fine. I just wish the actual execution parts had been standardized using something at a higher level of abstraction than shell.


> Puppet or terraform or something more declarative would have been such a better alternative

Until you need to do something that isn't covered with its DSL, and you extend it with an external command execution declaration... At which point people will just write bash scripts anyway and use your declarative language as a glorified exec.


If you have 90-95% of everyone's needs (installing packages, compiling, putting files) covered in your DSL, and it has strong consistency and declarativeness, it's not that big of a problem if you need an escape hatch from time to time. Terraform, Puppet, Ansible, SaltStack show this pretty well, and the vast majority of them that isn't bash scripts is better and more maintainable than their equivalents in pure bash would be.

The problem is, ironically, that each DSL has its own execution platform, and is not designed for testability. Bash scripts may be hard to maintain, but at least you can write tests for them.

In Azure YAML I had an odd bug because I used succeeded() instead of not(failed()) as a condition. I had no way of testing the pipeline without executing it. And each DSL has its own special set of sharp edges.

At least Bash's common edges are well known.


Docker broke out the build layer into a separate component called BuildKit (see HN discussion recently https://news.ycombinator.com/item?id=47166264).

However, Dockerfiles are so popular because they run shell commands and permit 'socially' extending someone else shell commands; tacking commands onto the end of someone else's shell script is a natural process. /bin/sh is unreasonably effective at doing anything you need to a filesystem, and if the shell exposes a feature, it has probably been used in a Dockerfile somewhere.

Every other solution, especially declarative ones, tend to come up short when _layering_ images quickly and easily. However, I agree they're good if you control the entire declarative spec.


I'd say LLB is the "standard"; Dockerfile is just one of the human-friendly frontends, but you can always make one yourself or use an alternative. For example, Dagger uses BuildKit directly for building its containers instead of going through a Dockerfile.

Declarative methods existed before Docker for years and they never caught on.

They sounded nice on paper but the work they replaced was somehow more annoying.

I moved over to Docker when it came out because it used shell.


Give https://github.com/project-dalec/dalec a look. It is more declarative. Has explicit abstractions for packages, caching, language level integrations, hermetic builds, source packages, system packages, and minimal containers.

It's a BuildKit frontend, so you still use "docker build".


The more you try and abstract from the OS, the more problems you're going to run into.

Bash is pretty darn abstracted from the OS, though. Puppet vs Bash is more about abstraction relative to the goal.

If your dockerfile says "ensure package X is installed at version Y", that's a lot clearer (and also easier to make performant/cached and deterministic) than "apt-get update; apt-get install $transitive-at-specific-version; apt-get install $the-thing-you-need-at-specific-version". I'm not thrilled at how distro-locked the shell version makes you, and how easy it is for accidental transitive changes to occur, too.

But neither of those approaches is at a particularly low abstraction level relative to the OS itself; files and system calls are more or less hidden away in both package-manager-via-bash and puppet/terraform/whatever.
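For comparison, the pinned shell version tends to look like this (package names and version strings here are hypothetical placeholders; note the pins live in the Dockerfile itself rather than in any lockfile):

```dockerfile
# Hypothetical package names/versions; pin to what your base image's
# repositories actually serve, or point apt at a snapshot mirror.
RUN apt-get update \
 && apt-get install -y --no-install-recommends \
      libfoo1=1.2.3-1 \
      the-thing-you-need=4.5.6-1 \
 && rm -rf /var/lib/apt/lists/*
```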


Dockerfile has the flexibility to do what you want though, no? Use a base image with terraform or puppet or opentofu or whatever pre-installed, then your Dockerfile can just run the right command to apply some declarative config file from the build context.

And if you want something weird that's not supported by your particular tool of choice, you have the escape hatch of running arbitrary commands in the Dockerfile.

What more do you want?


The loose integration between the declarative tools and the container build system drags down performance and creates a lot of footguns re: image size and inert declarative-build-system transitive deps left lying around, I’ve found.

Why would terraform leave transitive steps around? To my knowledge, Docker doesn't record a log of the IO syscalls performed by a RUN directive; the layer just reflects the actual changes it makes. It uses overlayfs, doesn't it? If you create a temporary file and then delete it within the same layer, there's no trace that the temporary file ever existed in overlayfs, correct?

I'd get your worry if we were talking about splitting up a terraform config and running it across multiple RUN directives, but we're not, are we?
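e.g. a quick way to convince yourself of the overlayfs behaviour (file name and size arbitrary):

```dockerfile
FROM alpine
# Creation and deletion both happen inside a single RUN, i.e. a single
# layer, so the committed layer carries no trace of /tmp/scratch;
# `docker history` should show this layer at roughly zero bytes.
RUN dd if=/dev/zero of=/tmp/scratch bs=1M count=64 && rm /tmp/scratch
```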


Transitive deps, not steps.

Random examples off the top of my head: Puppet has a ton of transitive Ruby libraries and config files/caches that it leaves around; Terraform leaves some very big provider caches on the system; plan or output files, if generated and not cleaned up, can contain secrets; even the “control group” of the status quo with RUN instructions often results in package manager indexes and caches being left in images.

Those are all technically user error (hence why I called them footguns rather than defects), but they add up and are easy mistakes to make.


Oof, not terraform please. If you use foreach and friends, dependency calculations are broken, because dependency happens before dynamic rules are processed.

I'd get much better results if I used something else to do the foreach and gave terraform only static rules.


Say more about this?

Do you mean that if you use a dynamic output in a foreach, Terraform can error? Or are you referring to "dynamic" blocks and their interactions with iterators?


You can pretty much replace "docker build" with "go build".

But as long as people want to use scripting languages (like PHP, Python, etc.), I guess Docker is the necessary evil.


>You can pretty much replace "docker build" with "go build".

I'll tell that to my CI runner, how easy is it for Go to download the Android SDK and to run Gradle? Can I also `go sonarqube` and `go run-my-pullrequest-verifications` ? Or are you also going to tell me that I can replace that with a shitty set of github actions ?

I'll also tell Microsoft they should update the C# definition to mark it down as a scripting language. And to actually give up on the whole language, why would they do anything when they could tell every developer to write if err != nil instead

Just because you have an extremely narrow view of the field doesn't mean it's the only thing that matters.


My point was that 90% of "dockerized" stuff is just scripting langs

Go is just one language, while Dockerfile gives you access to the whole universe with myriads of tools and options from early 1970s and up to the future. I don't know how you can compare or even "replace" Docker with Go; they belong to different categories.

In some situations, yes; in others, no. For instance, if you want to control memory or CPU, using a container makes sense (unless you want to use cgroups directly). Also, if running Kubernetes, a container is needed.

You have to differentiate container images, and "runtime" containers. You can have the former without the latter, and vice versa. They are entirely orthogonal things.

E.g. systemd exposes a lot of resource control as well as sandboxing options, to the point that I would argue that systemd services can be very similar to "traditional" runtime containers, without any image involved.
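A sketch of that, assuming a unit file for some service (directive names are standard systemd ones; the values are just examples):

```ini
[Service]
# cgroup-backed resource control, no container image involved
MemoryMax=512M
CPUQuota=50%
TasksMax=256
# sandboxing knobs that approximate a runtime container
PrivateTmp=yes
ProtectSystem=strict
ProtectHome=yes
NoNewPrivileges=yes
```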


Well, I did mention "or use cgroups" above.

And what I've said is that there are more options. You don't have to use cgroups directly, there are other tools abstracting over them (e.g. systemd) that aren't also container runtimes.

Wasn’t this the same argument for .jar files?

> You can pretty much replace "docker build" with "go build".

Interesting. How does go build my python app?


It obviously means you don't use a scripting language; instead use a real language with a compiler.

Calling "go" a "real language" is stretching the definition quite a bit.

Real languages don't let errors go silently unnoticed.


For any serious app you don't ignore errors, and you enforce that with the vast Go linting tooling.

Ok yeah let me just port pytorch over that should be quick

It doesn't sound like Golang is going to dominate and replace everything else, so Docker is there to stay.

At the risk of stating the obvious, there's quite a lot of languages besides just scripting languages and Go that get run in containers.

4.4% is the headline number, but there are other measures of unemployment [1] that show we are closer to 8% when you include people who are discouraged from even looking and those working part-time who would prefer a full-time job.

There's also a stagnation of salaries relative to inflation and a slow hiring market that has people locked into a job when they'd like to find something better. The K-shaped recoveries have people slipping out of the middle class. Combined with housing costs increasing faster than inflation, future generations will have a lower quality of life than their parents.

The wealthy are doing what they can to try to direct the narrative elsewhere, by controlling media sources, blaming immigrants, blaming China, and blaming the government. But we really have far too much wealth concentration to be sustainable, not unlike the ending of a game of monopoly. If a more stable solution isn't found soon, I fear things will get much worse than they already are.

[1]: https://www.bls.gov/news.release/empsit.t15.htm


You can also look at the prime-age employment-population numbers, which show ages 25 to 54.

It shows nearly 20% of that population not employed.

I think this is a better number, personally, than the 4.4% one that conveniently skips out on so many. It's always felt like an "optics" number to me, like asking how much they can possibly massage the data to look as good as possible.

I think it's meaningful to consider the amount of people who are unemployed even if they're not looking for work or can't work. It better highlights that there are societal level problems that are preventing a lot of these people from working when I imagine most of them would like to be - they just can't because of childcare needs, disability, incarceration, lack of access to opportunities, domestic abuse, etc.

https://fred.stlouisfed.org/series/LNS12300060


Reaction 1: how would this even work with embedded systems that have no UI to input this data?

Reaction 2: it's open source; make the lawmakers submit the changes.

Reaction 3: how would this ever be enforced? Would they outlaw downloading distributions, or even older versions of distributions? When there's no exchange of money, a law like this seems like it would be suppression of free speech.

Reaction 4: Someone needs to maliciously comply, in advance, on all California government systems. Shutdown the phones, the Wi-Fi, the building access systems, their Web servers, data centers, alarm systems, payroll, stop lights, everything running any operating system. Get everyone to do it on the same day as an OS boycott. And don't turn things back on until the law is repealed.


While there are some enforcement questions here, especially around non commercial OSes, most of your reactions are clearly based on the headline alone.

It defines operating system in the law. This wouldn’t apply to embedded systems and WiFi routers and traffic lights and all those things. It applies to operating systems that work with associated app stores on general purpose computers or mobile phones or game consoles. That’s it.

Enforcement applies as civil fines per-child usage. So no suppression of speech by banning distribution.

(Also it’s not age verification really, it’s just a prompt that asks for your age to share as a system API for apps from above app store, no verification required)


> It defines operating system in the law.

No, it doesn't.

It defines the following terms: "account holder", "age bracket data", "application", "child", "covered application store", "developer", "operating system provider", "signal", and "user".

> This wouldn’t apply to embedded systems and WiFi routers and traffic lights and all those things. It applies to operating systems that work with associated app stores on general purpose computers or mobile phones or game consoles.

Presumably, this is based on reading the language in the definition of "operating system provider", and then for some reason adding in "game consoles" (the actual language in both of those includes "a computer, mobile device, or any other general purpose computing device").

(I've also rarely seen such a poorly-crafted set of definitions; the definitions in the law are in several places logically inconsistent with the provisions in which they are applied, and in other places circular on their own or by way of mutual reference to other terms defined in the law, such that you cannot actually identify what the definitions include without first starting with knowledge of what they include.)


From the bill:

> "Covered application store” does not mean an online service or platform that distributes extensions, plug-ins, add-ons, or other software applications that run exclusively within a separate host application

There is a reasonable argument that a linux distribution is, itself, a host application. This is clearly an argument against their intention... but makes perfect sense to me. With this argument, the law does not apply to pretty much any environment where the applications are scheduled and run by a supervising process, at least by my reading.


No operating system (including windows, which uses a translation layer in userspace — “host application”?) provides a windows-compatible kernel API.

So I guess that excludes all windows apps and app stores.


In typical jury trials, the jury is instructed that any terms not defined in the relevant statutes are to have their common-sense, ordinary meanings as understood by the jury. The jury is usually also selected to be full of reasonable, moderate people, and folks who are overly pedantic usually get excused during voir dire.

Do you really think a pool of 12 people off the street is going to consider an embedded system, wi-fi router, or traffic light as an "operating system" under this law? Particularly since they don't even have accounts or users as a common-sense member of the public would understand them?


Not sure why you are appealing to the rule on terms that aren't defined, since the actual question is whether or not they consider the vendor of the software powering the device an "operating system vendor", which is, in fact, defined in the law. The answer there seems to hinge on whether or not they think it is a general purpose compute device, which would seem almost certain to be no for a traffic light, and likely to be no (but more debatable and potentially variable from instance to instance) in the other cases you list.

> Particularly since they don't even have accounts or users as a common-sense member of the public would understand them?

Not sure what having accounts or users "as a common-sense member of the public would understand them" is relevant to since, to the extent having a "user" is relevant in the law, it too is defined (albeit both counterintuitively and circularly) in the law, and having an "account" isn't relevant to the law at all.


The jury is selected randomly. They try to weed out obvious kooks, but there is no attempt to make it either reasonable or moderate.

The hope is that twelve of your peers will at least avoid being able to persecute you for political goals. I hope neither of us ever has to find out.


Have you ever gone through jury selection? It isn't what you think it is.

I've gone through the process a few times. It does not instill confidence in the system. And that's not including the emotional manipulation tactics that typically take place in jury trials.

It was like something out of a parks and rec episode.

Don’t let facts get in the way of righteous indignation!

MOST cases don't make it to a jury. They're more likely to be resolved via motions and countermotions and the decisions of a judge.

To dumb down "operating system" for normies, they're probably going to say something along the lines of "the software that makes your computer work.. like Windows." If it stays at that level, we'll have a specific, discrete definition in play.

A broader, equally correct definition could be "the software that makes technology work.. there's an operating system on your computer, your cell phone, your Alexa, and even your car." Then yes, some people will think of their Ring doorbell, the cash register at the coffee shop, and other embedded systems, even if they've never heard the word "embedded."

The definition that shows up will depend entirely on a) the context of the case and b) the savviness of the attorneys involved.

Not a bet I want to take.


Defendants can always opt for a judge to rule on the case.

At that point, what the law actually says matters a lot (unless the judge is corrupt, which is becoming more common in the US, but with corrupt judges, it doesn’t really matter how good or bad the law is).


Good call. What's this law's definition of "operating system"?

You'll be arrested for some weird law that doesn't make sense, but it's ok because a pool of 12 people off the street won't consider whatever random thing you did a real crime!

" It applies to operating systems that work with associated app stores on general purpose computers or mobile phones or game consoles. That’s it"

Everything is a general purpose computer. Just look at how many things have been made to run Doom. I haven't read the law specifically, but if it actually does say this, then that language is useless and covers practically everything.


Wood is edible when processed correctly, but it's not legally considered "food" because there are a bunch of nontrivial steps to get it into that state. Likewise, any reasonable interpretation of "general purpose computer" in this context by a judge would not include your microwave oven just because someone with skill and finesse could transform it into a cursed Doom arcade machine.

Laws are interpreted by people trained to fill in the blanks[1] with a best guess of the legislative body's intent. And the intent here seems pretty clear: to regulate computing devices that let end users easily install software from a centralized catalog.

[1] which we all do subconsciously in day-to-day speech, because all language is ultimately subjective


They exempt applications that run inside another “host application” though, which is ~ everything in any modern app store.

I guess Linux native games on GoG might be covered. All windows and wsl programs run in userspace compat layers. iOS might be covered. Snap, probably not (containers), AppImage? Maybe?

Nix, and brew? Probably not.


vague laws are put in place so that they can be used selectively to punish particular victims while letting friends through the nets

All laws are vague and interpreted, and in common law (as in the UK and US) interpreted based on precedent rather than the specific text of the original law.

If people with power over you want to "selectively punish you" they don't need new laws.

And if you want perfectly proscriptive, defined laws in all situations with no "human interpretation" you're in the wrong universe, and may as well be shouting at clouds. The world, and especially human society and interactions, just doesn't follow strict definitions like that.


"All laws are vague"

There are degrees of vagueness, but laws generally attempt to avoid being vague with many definitions and strict construction. If a law is sufficiently vague it may be invalidated, or it is at least required to be interpreted to the benefit of the defendant under lenity.


That’s where selective enforcement comes in.

Make it unambiguous that 100% of people are criminals, and all you have to do is control the prosecutor’s office.

This law seems to be in that category.


Vague laws are not required for selective enforcement. You can have strictly defined laws result in selective enforcement through law enforcement and prosecutorial discretion.

until you root out their friends and maliciously develop app stores for their products, then install them multiple billions of times on a docker and let them rack up charges ;) doom can run on -anything-

>doom can run on -anything-

Frotz and Zork/Tristam Island and tons of Z3 machine games can run on a pen, on an FPGA based display, and even under a PostScript file where the interpreter was done in PostScript. Heck, with Subleq and EForth some Z3 interpreter can be coded to run the games on simple hardware made with high school/advanced trade electronics kits.


But would Mark Zuckerberg have stopped there?? Nay. I think you could still weaponize it for profit if we only dream hard enough. Lol

I like the way you think.

Is a repository on a linux machine an app store? Are custom repositories app stores? Does this mean that now most automated deployments are now not automated? If they can be automated, does that mean that having the automation by default makes sense?

The law defines a user as a child running software on a general purpose computer.

> “User” means a child that is the primary user of the device.

It’s definitely more vague that necessary, but I’d imagine courts would readily find automated software deployment by an adult or corporation does not constitute a child using the device. Especially if done for servers or a fleet. Because then it’s pretty obvious that a child is not the primary user of the computer nor the software. Even if that software is a server that involves childish activities (eg game servers).

But I’d imagine that Linux package managers associated with a desktop operating system provider would fall under this law. And that raises questions about the software distributed by said package managers.


Flatpaks are fucked…

What’s going to happen when there’s no UI, just a shell, and they pacman -S <mything>? This law is unconstitutional on grounds of vagueness. If they want it to stick, they need to call out the commercial app stores of Microsoft, Apple, Google, etc., where a credit card is attached. Otherwise it’s too vague a term unless they define “store”.


This doesn't follow. There are clear technical means to achieve compliance in all of these scenarios. All those installers can, for example, check a file in /etc to determine the purported user's age. If this does need external verification, this file can be signed by a third-party identity-checking service.

If the distros ship these mechanisms enabled in their binaries, but the users install circumvention tools (e.g. a package manager without the checking mechanisms) from a third party, the distro provider should be off the hook.
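To make the idea concrete, here's a minimal sketch of such a check. Everything here is hypothetical: the path /etc/user-age, the file format, and the mechanism itself exist in no real distro, and the signature verification step is elided entirely.

```shell
# Hypothetical sketch only: no distro ships anything like this.
# A package manager could gate installs on a locally recorded,
# externally attested age before fetching a package.
AGE_FILE="${AGE_FILE:-/etc/user-age}"

allowed_to_install() {
  # Deny by default if no age record exists or it isn't readable.
  [ -r "$AGE_FILE" ] || return 1
  age=$(cat "$AGE_FILE")
  # Succeeds only if the recorded age is a number >= 18; a
  # non-numeric value makes the test fail, which also denies.
  [ "$age" -ge 18 ] 2>/dev/null
}

if allowed_to_install; then
  echo "install permitted"
else
  echo "install blocked"
fi
```

A real mechanism would of course need to verify a signature over the file rather than trust its contents, which is exactly where the third-party identity service would come in.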


Android systems use Linux as their operating system, and the law applies to operating systems.

Android has associated app stores, therefore Linux must follow this at account setup ..

(I'm mostly hoping I'm just jesting here, that they'd surely not enforce it in this way, plus, who "provides" my Linux OS?)

In any event, it does seem like a very silly overreaching law, that should be highlighted, pointed out, and laughed at.

PS I have not read the law in question. I have read a PC Gamer article though, which is surely much the same.


Linux isn’t really an operating system but more the kernel of the OS. In this case, Android would be the OS.

Do you remember this copypasta?

https://www.reddit.com/r/copypasta/s/3nonwfDeyX


I remember it when RMS was shouting it from the rooftops.

I'm not sure that ART/Linux is any more catchy than GNU/Linux, but just as GNU wasn't the OS, neither is ART.

Don't get me wrong, these are all very silly pedantic arguments in the face of such a law.


They are very non silly, because whoever is the actual OS vendor gets to implement this. It's relevant whether kernel developers or OS maintainers need to implement this.

Are you jesting? Honestly it could be. It's impossible to tell.

> (Also it’s not age verification really, it’s just a prompt that asks for your age to share as a system API for apps from above app store, no verification required)

It's not enough to adhere to the age signal:

> (3) (A) Except as provided in subparagraph (B), a developer shall treat a signal received pursuant to this title as the primary indicator of a user’s age range for purposes of determining the user’s age.

> (B) If a developer has internal clear and convincing information that a user’s age is different than the age indicated by a signal received pursuant to this title, the developer shall use that information as the primary indicator of the user’s age.

Developers are still burdened with additional liability if they have reason to believe users are underage, even if their age flag says otherwise.

The only way to mitigate this liability is to confirm your users are of age with facial and ID scans, that is why age verification systems are implemented that way: doing so minimizes liability for developers/providers and it's cheap.


> Developers are still burdened with additional liability if they have reason to believe users are underage, even if their age flag says otherwise.

This is true, but

> The only way to mitigate this liability is to confirm your users are of age with facial and ID scans,

This doesn’t follow. It says “if” the developer has clear reason, it doesn’t obligate the developer to collect additional information or build a profile.

I read this as - if you in the course of business come across evidence a user is under age, you can’t ignore it. For example - “you have to ban a user if they post comments saying they are actually underage”


That would have to be litigated in court, and the easiest and cheapest way to avoid litigation is to do what all platforms currently do: make sure the person using their system is who they say they are via face scans and ID checks.

As a developer, that is not the kind of liability I want to take on when I can just plug ID.me, or whatever, into my app and not worry if someone writes "im 12 lol" in a comment on my platform.


Post where?

> a developer has internal clear and convincing information

Internally from the perspective of the developer implies: on the platform in question.


The language in the bill says operating system “or” application store. Isn't that then implying any operating system that would download applications, even if they don’t come from a store? But IANAL.

Seems to me this would include TVs, cars, smart devices, etc. The Colorado version of this bill excludes devices used for physical purchase, so your gas pumps and POS systems would be excluded in CO. But I didn’t see that in the CA bill.

They’re both overly broad, ill-considered, frankly terrible bills that make as much sense as putting your birthday into a brewery site or Steam. Enter your birthday and we trust you. Now do that for every single one of those 100 VMs you just deployed…


Just the idea of requiring age verification to admin each VM in a fleet of VMs makes me chuckle.

> per-child usage

If the First Amendment is meant to stop a government from preventing you from speaking, shouldn’t it also stop a government from preventing you from hearing that speech?

If so, then this seems to go against the First Amendment.

Sorry, Australian here so just speculating


> Also it’s not age verification really

Not yet, but it will be one day if it passes


Servers still kinda fit.

So, all of us-west-1?


By that logic, my NAS (TOS6) falls under that category.

It would just be unenforced for all platforms except windows, apple and android.

I doubt the california legislature knows what a Linux even is.


The big three will love this. They'll implement the feature, then they get to dob in Linux and friends and get them buried in regulatory lawsuits.

All three already have identity linked accounts. Windows practically shoves it down your throat on install, for example. They'll love the excuse to finally disallow web-free accounts.

Windows servers are so back baby!


It’s only enforced by the CA Attorney General, and I’d be surprised to see a threat, let alone a lawsuit, against Linux on this. Not to say this is ideal.

> I doubt the california legislature knows what a Linux even is.

All Congress critters have staff to help write the bills and fill out the policy. You can bet your sweet bippy that there are people on staff in the California legislature who know what a Linux even is.


>I doubt the california legislature knows what a Linux even is.

they would never need to know it once they learn what SecureBoot is. Any device with 1+ Gflop must have SecureBoot, and goodbye general computing.


It’s the V-chip and Clipper chip madness all over again. While they are at it can they start requiring the rich, famous, and powerful to get age verification before interacting with people to prevent another Epstein?

It’s political theater. “See? We did something. Vote for us again.”



It’s clear you last poked your head out of a hole in the ground 30 years ago. Check out the iPhone and the Internet while you’re up here, they will blow your mind.

Exactly. This is obviously targeted at these three, and in those cases will be a massive improvement over forcing every site operator to start collecting photo ID.

Too small to be of any concern.

Continually surprised by politicians wanting an OS to do what a parent should be doing. Why not just mandate that all devices with access control capabilities implement parental controls, and then mandate that all adults enable controls before handing a device to a minor? For devices that are incapable of user access control, the same rules as a knife, chainsaw or gun apply.

>For devices that are incapable of user access control, the same rules as a knife, chainsaw or gun apply.

Well said. Parental controls are really nothing more than parents are root and kids are mere users. We shouldn't let the state or corporations be the parents, nor is it necessary.

Age verification by governments or corporations isn't a substitute for parenting. And with parenting age verification by governments or corporations isn't necessary.

They're doing this for some other reason than protecting kids, as usual.


This isn’t so heavy handed. The purpose of age signaling is so that a parent can set in one place an age, and then federal privacy protections under COPPA and state protections under the AADC kick in.

Only wealthy parents (upper middle class or better) have the time or energy to do anything other than work, put food on the table, and do basic child care.

Most parents lack the technical expertise to police digital devices.


> Reaction 3: how would this ever be enforced? Would they outlaw downloading distributions, or even older versions of distributions? When there's no exchange of money, a law like this is seems like it would be suppression of free speech.

That's not what will happen. We've already seen examples of what will happen. So let me just list them instead:

1. The Secure Boot chain for UEFI initially mandated that only OSes signed by Microsoft would be allowed to boot on PCs where SB is enabled. This was partially rolled back after public backlash.

2. iOS devices and majority of Android devices already don't allow you to install an alternate OS or distro.

3. Platform attestation proposals like Web Environment Integrity and its Android version.

4. Mandate that every developer must register with and pay an MNC to be able to release any app on their platforms.

Basically, they'll just take away your ability to control your device in any way. Don't be surprised if it turns out that these MNCs were behind such legislations. But this legislation is especially dangerous in that it will effectively kill user-controlled general-purpose computing, even from vendors like Pine64, Framework, System76, Fairphone and Purism who are willing to offer those.

Considering the amount of damage caused by these sort of legislative BS, those who propose and vote for such bills should be investigated publicly for corruption, conflict of interests and potential treason. They should be forced to divulge any relationship, directly or indirectly, with the benefactors of these bills. On the other side, rich corporations should be banned from 'lobbying' or bribery more appropriately, in matters that they have a stake in. And they should have stiff penalties for any violations. Not those couple of million dollar slaps on their wrist. At least 5% of their annual global profits, incarceration of top executives and breaking up the company. There has to be a consequence that's uncomfortable enough, for any fairness to be reestablished. This should apply even more for those professional lobbying firms and 'industry advocacy groups'.

People also need to start strongly opposing, rejecting and condemning justifications like this that rely on the cliche tropes of CSAM, terrorism, public safety, national security, etc. None of those measures are necessary or even useful in preventing any of those. Insistence on the contrary should be treated as an admission of inability and incompetence of the respective authorities in tackling the problem. In fact, why do they assume that kids, especially teens, are unimaginative and incapable of working around the problem? They should at least be starting with awareness campaigns to get the kids and the parents on their side and empower parents to enforce parental controls, instead of reaching for such despotic measure right away. This is like banning drugs before the problem of drug addiction is addressed. Black markets exist, even for cyberspace. It will just make the problem a whole lot worse.

And finally, don't let people without clearly proven vested interests anywhere near such regulations. And choose professionals or at least competent people for taking such decisions. You can't rein in this attack on ordinary people without stemming the uncontrolled corruption in the public offices that deal with it.


On the enforcement point, I suspect it won't be enforced against Linux in the abstract, but against companies with a legal and commercial presence in California

March 1st is now officially malicious compliance day.

> how would this even work with embedded systems that have no UI to input this data?

Doesn't the bill explain all this pretty clearly? https://leginfo.legislature.ca.gov/faces/billTextClient.xhtm...

>> An operating system provider shall [...] provide an accessible interface at account setup that requires an account holder to indicate the birth date, age, or both, of the user [...]

>> “Operating system provider” means a person or entity that develops, licenses, or controls the operating system software on a computer, mobile device, or any other general purpose computing device.

Your hypothetical "embedded system" almost certainly neither has an account setup process in the first place, nor is it a general-purpose computing device, a mobile phone, or a computer.

> Reaction 3: how would this ever be enforced?

Pretty easily? They enforce it against the OS vendor for not providing such a process. They aren't enforcing the correctness of the age, nor are they claiming to.

> Someone needs to maliciously comply, in advance, on all California government systems.

...what? This is a law demanding compliance from OS vendors. Whose compliance is it even demanding in government systems for them to be malicious about it?


> general-purpose computing device

This term doesn't seem defined in the law at all. How general is general?

Graphing calculators that support apps and Python? Of course, they don't usually have "accounts" either. But to a technologist it's a "general purpose computer" insofar as it can run new code that the user loads into it, it can definitely run games that it didn't come from the factory with, etc. It's a tiny multipurpose computing device.


Laws in the US aren't taken as literally as in civil law systems. The intent and precedent are what carry much more weight in the end. Graphing calculators are unlikely to be tested in court because they're irrelevant with respect to what this law is trying to accomplish.

https://en.wikipedia.org/wiki/Common_law

I often see laws discussed here and people finding some edge case and presenting this as a gotcha. The reality is that it's unlikely to matter.


Does your pocket calculator with Python have an account setup process?

I guess it's gonna have to have it, now.

What? Nowhere do they stipulate you have to add that. They just say if you do account setup, then you need to provide such an interface.

i see you're a problem solver

> Reaction 3: how would this ever be enforced? Would they outlaw downloading distributions

They can outlaw you from using those distributions and/or scare the maintainers so there won't be distributions anymore. And if you want to use a desktop computer, rent one from a hyperscaler, tied to a credit card, and access it from a tablet with age verification. I don't know if I should add /s


you're pointing out that it doesn't make sense

the point of laws like these isn't to make sense, it's to be annoying


I don't use buildkit for artifacts, but I do like to output images to an OCI Layout so that I can finish some local checks and updates before pushing the image to a registry.

But the real hidden power of buildkit is the ability to swap out the Dockerfile parser. If you want to see that in action, look at this Dockerfile (yes, that's yaml) used for one of their hardened images: https://github.com/docker-hardened-images/catalog/blob/main/...
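For reference, the frontend swap is driven by the directive on the first line of the build file; a minimal sketch, where the custom frontend image name is purely illustrative:

```dockerfile
# syntax=docker/dockerfile:1
# The first line tells BuildKit which frontend image should parse the
# rest of this file. Point it at a custom frontend image instead, e.g.
#   # syntax=example.com/my-yaml-frontend:latest   (hypothetical image)
# and the file no longer needs to be Dockerfile syntax at all.
FROM alpine:3.19
COPY app /usr/local/bin/app
CMD ["app"]
```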


I did include a repo example of how to create a custom frontend as well: https://github.com/tuananh/apkbuild

I agree on both fronts! BuildKit frontends are not very well known but can be very powerful if you know how they work and how BuildKit transforms them.

If you need the money in 2 years, I wouldn't leave it in the stock market. Find a money market or CD to avoid the gamble. You're going to get hit with capital gains taxes either now or in 2 years, so that shouldn't impact your decision.

My personal time frame is 4-5 years of emergency funds. You can adjust that for your own risk tolerance, but have a look at various past crashes to make an educated decision.

I'd only leave it invested if you don't actually need it, because college can be delayed or financed with student loans.





Two lessons for driving in snow/ice:

1. In a parking lot, clear behind the car, and just enough to get inside. Then back the car out of the space, clear the car off, clear the parking space out, put the car back in the parking space and clear everything you knocked off. If tried without pulling the car out of the space, you'd be trying not to ding up your neighbor's car, clearing in tight spaces under the car, and then doing it all again when you clear off the top of the car into that narrow gap.

2. Don't drive unless you absolutely need to. You may know what you're doing, but others almost certainly don't. But do make sure to clear out at least one car, just in case there's an emergency.


Rack scale computing, on both the software and hardware side. That means building custom network switching, power management, etc, in a turn key solution that drops in to a customer's data center. Unbox it, plugin a few connections, make a few configuration settings, and start deploying. It's the on-prem response to the cloud for companies running things at scale.

