I think it's very good in terms of being very honest and up front about the limits of confidence provided by formal proofs (which is not just applicable to this project, but to all attempts at formal verification).
Basically, whenever you are proving things, you have to be aware that you are entering an endless chain of turtles. This proof works because the proofs about model checking worked, which worked because the proofs about abstract algebra worked, which in turn work because the proofs about sets work.
If any of these proofs does not prove what one wanted (either because it is wrong, or because it proves something you did not want, potentially in very, very subtle ways), your entire proof collapses. This is the essential danger of proving things.
A very interesting illustration can be found in the game 'Singularity'. In this game, you play an artificial intelligence, and after enough study of math and psychology, you can create a proof that no AI can ever exist. However, this proof contains a very, very, very subtle bug, which will probably stay hidden forever.
So, stay aware that your proofs stand on a boatload of assumptions and other proofs and they might (in a rare, esoteric case) collapse and everything will be fine :)
I think that's making it sound a little more fragile than it actually is.
Within a system like Isabelle, the formalisation of each piece of the proof must remain relatively consistent both with the built-in proofs about fundamental mathematical structures and with the proofs that you have written.
I say "relatively consistent" because I don't have enough experience with the internals of Isabelle to say with certainty as to exactly how consistency is enforced. I know that types are enforced in a globally consistent way, for example.
That said, I have a high degree of confidence in the correctness of machine-checked proofs.
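To make "machine-checked" concrete: in a proof assistant, the stated proposition is a type, and the checker refuses any proof term that does not inhabit that type, so a proof cannot silently drift away from the claim it is supposed to establish. A trivial sketch in Lean (a different prover from Isabelle, chosen here purely for illustration):

```lean
-- The proposition a + b = b + a is the *type* of the theorem.
-- If Nat.add_comm did not actually prove it, the kernel would
-- reject this definition at check time.
theorem add_comm_example (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b
```

The trust then reduces to the small proof-checking kernel itself, rather than to every lemma in the chain.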
So I understand that your point is that you're taking some leaps of faith, and I agree to a point, but I think they are much more reasoned and considered leaps than you suggest.
> The evolution of seL4 is continuing within NICTA, and it is the basis of ongoing research activities here at NICTA. However, it is also the subject of commercialisation activities, and thus is currently proprietary
Darn. I wanted to have a play with it.
That is a really cool achievement. Bring on bigger formally-verified projects!
I'm sure this is unanswerable for (perfectly valid) IP reasons but I wonder what the differences between L4 and seL4 are? Is there a functionality difference in either direction?
seL4 and L4Sec are capability-based and thus quite different from regular L4. I always found capability systems much easier to understand than L4's message registers and flexpages.
My assumption would be that the main difference is the formal proof, and whatever changes were necessary in order to permit formal verification (i.e. they may have changed behaviour in order to make the implementation actually meet the abstract specification).
But I agree it would be very interesting to find out what the differences are, and whether substantial changes were required in order to do this kind of formal verification.
I was SURE I saw another project which used formal proofs of kernels... though perhaps it wasn't machine-checked. In any case, I cannot remember the name and can't seem to find it.
I do find this interesting, though; L4 is an interesting microkernel (and one I've played with before).
Dick Kemmerer always talks about how he formally verified the UCLA Unix security kernel in the 1970s, together with Bruce Walker and Gerald Popek. See <a href="http://dx.doi.org/10.1145/358818.358825">their 1980 paper</a>. How is the new work different?
Most any product aiming for the old NCSC/TCSEC Class A1 security rating would necessarily have undergone the formal verification that Class A1 required.
Arithmetic overflows and exceptions: humans and mathematics usually work with numbers that can be arbitrarily big. Machines do not; they need to fit numbers into memory, usually into 32 or 64 bits of storage. Machines also raise exceptions when you attempt undefined operations like dividing by zero. In the OS, such exceptions would typically crash the machine. This does not occur in seL4.
Er, 2038? You'll get a nice overflow at that point on a 32-bit machine.
L4 isn't UNIX. It doesn't even have a POSIX userland, necessarily, and certainly wasn't designed with 70s hardware limitations in mind.
<speculation>
The L4::Pistachio kernel provides direct access to high-resolution hardware timing interrupts, though the secure variant may restrict some timer access in order to prevent covert timing channels between threads in different security contexts.
</speculation>
Defense contractors and builders of other highly critical systems (think airplanes, space shuttles, missile command) would gladly pay for and accept that level of complexity.
You're generally right about complex specification and verification being popular in the defense and aerospace industries. However, I think that you may be mistaking a secure microkernel for a real-time one.
There are hard real-time variants of L4 out there, but unless that was a primary goal of the NICTA team, I don't think you're likely to see many copies of that kernel running a ballistic missile's avionics package.
A verified secure OS kernel like this would more likely be used as a substrate for building secure servers and workstations, network routers, or mobile communications devices.
I don't think mncaudill was confusing a secure microkernel with a realtime one. Rather, I think he was (correctly) thinking in terms of the reliability requirements those industries have.