Enough members of the National Assembly managed to bypass the military blockade, get into the building, and vote to reject martial law. (Some had to climb over the fence to get in.)
Some of the orders weren't carried out, and others were carried out loosely: armed forces were occupying the National Assembly, but they didn't actually stop members from getting into the building and voting down the martial law declaration. If we're doing the Trump comparison, an obvious difference is that Trump already knew the military wouldn't intervene to take sides on who got certified as the winner (they'd actually taken the unprecedented step of issuing a statement to that effect), and he had reason to believe some of his supporters would give it a go...
Nope. In UNIX proper, syscalls and libc overlap; that is how C and UNIX evolved side by side. In a way, one could argue UNIX is C's runtime, which is why most C deployments also expect some level of compatibility with UNIX/POSIX.

Linux is the exception, offering its guts to userspace with stability guarantees.
Funny that you would be arguing for that (unless I misunderstood the intention), given your many other posts about how C is a horrible broken unsafe language that should not be used by anyone ever. I tend to agree with that, btw, even if not so much with the "memory safety" hysteria.
Should every program, now and in the future, be forced to depend on libc, just because it's "grandfathered in"?
IMO, Linux is superior because you are in fact free to ignore libc and interface with the kernel directly. The kernel is of course also written in C, but that's still one less layer of crud. Syscalls returning error codes directly, instead of stashing them in a thread-local errno variable, is one example of that.
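To make the error-return point concrete, here's a minimal sketch (GCC/Clang inline assembly, x86-64 Linux only; raw_write is just a made-up helper name) of invoking the write syscall without the libc wrapper:

    /* Sketch: invoke the x86-64 Linux write(2) syscall directly, without
     * going through libc. The kernel reports failure inline as a negative
     * errno value; it is the libc wrapper that turns that into "-1 plus a
     * thread-local errno". */
    static long raw_write(int fd, const void *buf, unsigned long count)
    {
        long ret;
        __asm__ __volatile__ ("syscall"
                              : "=a"(ret)
                              : "a"(1L),              /* __NR_write on x86-64 */
                                "D"((long)fd), "S"(buf), "d"(count)
                              : "rcx", "r11", "memory");
        return ret;                 /* >= 0: bytes written, < 0: -errno */
    }

    int main(void)
    {
        static const char msg[] = "hello, no libc wrapper involved\n";
        return raw_write(1, msg, sizeof msg - 1) < 0;
    }

libc's own write() does exactly this and then converts a negative return into -1 plus errno; calling the kernel directly just skips that translation.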
Should a hypothetical future OS written in Rust (or Ada, ALGOL-68, BLISS, ...) implement its own libc and force userspace applications to go through it, just because that's "proper"?
In traditional UNIX, there is no libc per se; there is the stable set of OS API functions, and that's it.

When C was standardised, a subset of the UNIX API became the ISO C standard library, aka libc. When that proved not to be enough for portable C code, the remaining UNIX API surface became POSIX.
Outside UNIX, libc is indeed its own thing, because many of those OSes aren't even written in C, just like the languages in your list. On those OSes, libc ships with the C compiler, not with the OS per se, as you can check by diving into VMS documentation from before it became OpenVMS, or into IBM and Unisys systems, where libc is also irrelevant if you are using PL/I, NEWP, whatever.
Also, on Windows you are not supposed to depend on libc unless you are writing portable C code; there isn't a single libc to start with. As on the other non-UNIX OSes, each compiler ships its own C runtime library, and nowadays there is the Universal C Runtime as well, so there are plenty of libc choices.
If you are not writing portable C code, you aren't supposed to use memset(), but rather FillMemory().

The same applies to other non-UNIX OSes: you would not be calling memset(), but rather the OS API that provides that service.
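For what it's worth, a small sketch of the two spellings side by side; note that FillMemory() takes (destination, length, fill), the reverse argument order of memset(destination, fill, length):

    /* Sketch: the Win32-native spelling vs. the portable libc spelling.
     * Requires the Windows SDK headers; FillMemory is declared in
     * <windows.h> (as a macro over RtlFillMemory). */
    #include <windows.h>
    #include <string.h>

    int main(void)
    {
        BYTE a[16], b[16];

        FillMemory(a, sizeof a, 0xAA);   /* Win32 API: (dest, length, fill) */
        memset(b, 0xAA, sizeof b);       /* libc:      (dest, fill, length) */

        return memcmp(a, b, sizeof a) == 0 ? 0 : 1;
    }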
I don't think GP is arguing that's the best way to design an OS, just that interfacing with non-Linux Unixes is best done via libc, because that's the stable public interface.
On Windows, the stability guarantees are the opposite of Linux's: the kernel ABI is not guaranteed to be stable, whereas the Win32 ABI is.

And frankly, the Windows way is better. On Linux, the 'ABI' for nearly all user-mode programs is not the kernel's ABI but glibc's (plus that of assorted third-party libraries, which you need because nothing on Linux matches Win32's massive, all-in-one surface area). Now, glibc's ABI keeps moving (each release adds new versioned symbols), so linking against a newer glibc (almost certainly the 'host' glibc, because it is almost impossible to supply a different 'target' glibc without Docker) produces a program that doesn't run on an older glibc. So much for Torvalds' 'don't break userspace'.

Not so for a program compiled for 'newer' Win32; all that matters is API compatibility. If one only uses old-hat interfaces that are documented to be present on Windows 2000, one can write and compile one's code on Windows 11, and the executable will run on the former with no issues. And vice versa, actually.
It doesn't really matter if it's 'just a wrapper', because said wrapper provides an ABI. Even if the underlying Native API changes, the interface the wrapper presents to other compiled binaries won't. Those binaries bake in the calling convention, type layouts, function arguments and more for that wrapper.
Cygwin is also 'just a wrapper' for the Native API and Win32, and look how drastically it changes the ABI of applications.
Let me narrow down the scope here. I am a Rust developer, developing software that will run on my Linux server. Why would I want to use libc? Why does the Rust standard library use libc? Zig, for example, doesn't.
Because that's the stable public interface provided by pretty much every OS except Linux. On Linux, if you don't want to depend on the OS-supplied libc, you can use musl (for Rust, that's the x86_64-unknown-linux-musl target, which produces a statically linked binary by default).
If you write your software only for yourself, do whatever you want, of course. If you want to share it with other people, artificially limiting it without a very good reason will make it less useful and popular.