World’s first formal machine-checked proof of a kernel (nicta.com.au)
58 points by cromulent on Aug 13, 2009 | hide | past | favorite | 29 comments


Obligatory Donald Knuth: "Beware of bugs in the above code; I have only proved it correct, not tried it."


Jokes aside, there is a very good page on the project site called "What we prove and what we assume":

http://ertos.nicta.com.au/research/l4.verified/proof.pml

I think it is admirably honest and up front about the limits of confidence provided by formal proofs (which applies not just to this project, but to all attempts at formal verification).


Basically, whenever you are proving things, you have to be aware that you are entering an endless chain of turtles. This proof works because proofs about model checking work, which work because proofs about abstract algebra work, which in turn work because proofs about sets work.

If any of these proofs does not prove what one wanted (either because it is wrong, or because it proves something you did not want, potentially in very, very subtle ways), your entire proof collapses. This is the fundamental danger of proving things.

A very interesting application can be found in the game 'singularity'. In this game, you play an artificial intelligence, and after enough study of math and psychology, you can create a proof that no AI can ever exist. However, this proof contains a very, very subtle bug, which will probably stay hidden forever.

So, stay aware that your proofs stand on a boatload of assumptions and other proofs, and they might (in some rare, esoteric case) collapse, and everything will be fine :)


I think that's making it sound a little more fragile than it actually is.

Within a system like Isabelle, the formalisation of each piece of the proof must remain relatively consistent both with the built-in proofs about fundamental mathematical structures and with the proofs that you have written.

I say "relatively consistent" because I don't have enough experience with the internals of Isabelle to say with certainty exactly how consistency is enforced. I know that types are enforced in a globally consistent way, for example.

That said, I have a high degree of confidence in the correctness of machine-checked proofs.

So I understand that your point is that you're taking some leaps of faith, and I agree to a point, but I think they are much more reasoned and considered leaps than you suggest.
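(A toy illustration of what "machine-checked" means, in Lean rather than Isabelle: the proof assistant's kernel only accepts a theorem if the supplied proof term actually checks against the axioms; there is no way to merely assert it.)

```lean
-- Toy example, not from the seL4 proof: the checker accepts this
-- theorem only because the proof term type-checks against the
-- definitions and axioms behind Nat.
theorem add_comm_example (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b

-- Substituting a wrong proof term here is rejected by the kernel
-- itself, not merely flagged by a linter.
```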


And you usually have more than one proof for the more fundamental theorems.


> The evolution of seL4 is continuing within NICTA, and it is the basis of ongoing research activities here at NICTA. However, it is also the subject of commercialisation activities, and thus is currently proprietary

Darn. I wanted to have a play with it.

That is a really cool achievement. Bring on bigger formally-verified projects!


> Darn. I wanted to have a play with it.

Just in case - perhaps you will also enjoy playing with the non-verified L4?

http://l4ka.org/

http://l4ka.org/download/


I'm sure this is unanswerable for (perfectly valid) IP reasons, but I wonder what the differences between L4 and seL4 are. Is there a functionality difference in either direction?


seL4 and L4Sec are capability-based and thus quite different from regular L4. I always found capability systems much easier to understand than L4's message registers and flexpages.


My assumption would be that the main difference is the formal proof, and whatever changes were necessary in order to permit formal verification (i.e. they may have changed behaviour in order to make the implementation actually meet the abstract specification).

But I agree it would be very interesting to find out what the differences are, and whether substantial changes were required in order to do this kind of formal verification.


I'd love to see a non-commercial hobbyist license, either source or binary; that should let them commercialize it and let us play.


Sort of related, here is a formalization of the JVM in ACL2: http://www.cs.uwyo.edu/~cowles/jvm-acl2/


The actual technical paper, which will appear at the 2009 ACM Symposium on Operating Systems Principles: http://www.sigops.org/sosp/sosp09/papers/klein-sosp09.pdf


I was SURE I saw another project which used formal proofs of kernels, though perhaps it wasn't machine-checked. In any case, I cannot remember the name and can't seem to find it.

I do find this interesting though, L4 is an interesting microkernel (and one I've played with before).


http://en.wikipedia.org/wiki/Coyotos

You might be thinking of Coyotos, which seems to have been abandoned.


Shap's at MSR now, no?


Dick Kemmerer always talks about how he formally verified the UCLA Unix security kernel in the 1970s, together with Bruce Walker and Gerald Popek. See their 1980 paper: http://dx.doi.org/10.1145/358818.358825 -- How is the new work different?


I'd be surprised if the 1970s proof was both machine-checked and about the actual code (not an abstraction of it).


Unfortunately it's an ACM paper, so I guess we'll never know.


Almost any product that was aiming for the old NCSC/TCSEC Class A1 security rating would have received the formal verification that Class A1 required.


So this operating system has 'flawless operation'. I take it the C compiler has also been verified, then, not to mention the CPU microcode.


Arithmetic overflows and exceptions: humans and mathematics usually have a concept of numbers that can be arbitrarily big. Machines do not, they need to fit them into memory, usually into 32 or 64 bits worth of storage. Machines also generate exceptions when you attempt to do things that are undefined like dividing by zero. In the OS, such exceptions would typically crash the machine. This does not occur in seL4.

Er, 2038? You'll get a nice overflow at that point on a 32-bit machine.
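(For the curious, the 2038 rollover is easy to reproduce. This is an illustrative Python sketch, assuming a signed 32-bit time_t counting seconds from the 1970 UNIX epoch:)

```python
import datetime

INT32_MAX = 2**31 - 1  # largest value a signed 32-bit time_t can hold

epoch = datetime.datetime(1970, 1, 1, tzinfo=datetime.timezone.utc)

# The last representable moment before the counter runs out.
last_moment = epoch + datetime.timedelta(seconds=INT32_MAX)
print(last_moment)  # 2038-01-19 03:14:07+00:00

# One second later, the signed 32-bit counter wraps to -2**31,
# which lands back in 1901.
wrapped = epoch + datetime.timedelta(seconds=-(2**31))
print(wrapped)  # 1901-12-13 20:45:52+00:00
```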


L4 isn't UNIX. It doesn't even have a POSIX userland, necessarily, and certainly wasn't designed with 70s hardware limitations in mind.

<speculation> The L4::Pistachio kernel provides direct access to high-resolution hardware timing interrupts, though the secure variant may restrict some timer access in order to prevent covert timing channels between threads in different security contexts. </speculation>


You're assuming they use the UNIX epoch.


And 32-bit time. They may be using a representation that allows arbitrarily big numbers.


It's interesting, but it's a huge amount of work for the reliability gained: a 7,500-line C program required about 200,000 lines of proof, roughly 27 lines of proof per line of C.


Defense contractors and other builders of highly critical systems (think airplanes, space shuttles, missile command) would gladly pay for and accept that level of effort.


You're generally right about complex specification and verification being popular in the defense and aerospace industries. However, I think that you may be mistaking a secure microkernel for a real-time one.

There are hard real-time variants of L4 out there, but unless that was a primary goal of the NICTA team, I don't think you're likely to see many copies of that kernel running a ballistic missile's avionics package.

A verified secure OS kernel like this would more likely be used as a substrate for building secure servers and workstations, network routers, or mobile communications devices.


I don't think mncaudill was confusing a secure microkernel with a realtime one. Rather, I think he was (correctly) thinking in terms of the reliability requirements those industries have.



