Aussie boffins have developed an operating-system microkernel mathematically established as free of many types of errors. The development points the way toward "safety-critical software of unprecedented levels of reliability" for applications such as aircraft and cars. The software - called the secure embedded L4 (seL4) …
There is quite a bit of that technology used in industry. Extensive theorem-proving scaffolding and refactoring to manage security relationships has been applied to the Windows kernel in the last few years.
One step closer....
...to computers coding and debugging by themselves. We already have self-modifying code, of sorts. A few more years and we'll only have to ask the computer what sort of program we want, or what behaviour we desire.
"Today the Government announced that it had successfully completed the migration of all civil and military operations to the latest generation of server operating environment, programmed with the input of just 3 simple rules...."
My immediate thought was how are they modelling interrupts in their proofs? Those have caused mighty headaches to mathematically proved processors in the past - I can only imagine the same issues will arise with kernels.
Hmm, puts me in mind of a Don Knuth quote
"Beware of bugs in the above code; I have only proved it correct, not tried it"
Mathematically verified, like RSRE Viper processor?
And what happened to the RSRE Viper processor when the proof was proved to be not a proof ?
How to get your grant money and spend it too
Let me get this straight: they developed a mathematically correct kernel by analyzing C code, and then wrote the final kernel in C.
I trust I'm not the only one who sees a flaw with this....?
Will their product be part of the Obfuscated C contest? Under the "mathematically correct" category?
One of my math teachers once proved to a class that 1=2 using algebraic formula. None of us could spot the flaw (a concealed divide by zero). I hope these guys ran a subroutine to check all the divisors.
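For anyone who hasn't seen that one, the usual version runs like this - the sleight of hand is dividing both sides by (a - b), which is zero because a = b:

```latex
\begin{align*}
a &= b                  && \text{assumption}\\
a^2 &= ab               && \text{multiply both sides by } a\\
a^2 - b^2 &= ab - b^2   && \text{subtract } b^2\\
(a+b)(a-b) &= b(a-b)    && \text{factor both sides}\\
a + b &= b              && \text{divide by } (a-b) \text{ (invalid: } a-b=0\text{)}\\
2b &= b                 && \text{since } a = b\\
2 &= 1                  && \text{divide by } b
\end{align*}
```

Every step except the marked one is sound, which is why nobody spots it: the error hides in a step that looks purely mechanical.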
'C' code?
Are they bonking mad? That language is flawed by design ... there is a reason it is called 'C': that was the final grade it got when they handed it in ...
Have they tried analysing the bartender problem with that tool of theirs?
For people who don't know the bartender problem: three men walk into a bar and order a beverage.
A beverage costs 10 pence, so the bartender receives 30 pence. The innkeeper says: these are good customers, give them a 5 pence discount on their tab.
The bartender can't split the 5 pence, so he gives each customer 1 penny back and keeps 2 for himself.
From the customers' perspective they only paid 9 pence each: 3 x 9 = 27, add the two pence the bartender kept, total: 29 pence... where oh where did the thirtieth penny go ...
What about:
1. The compiler compiling this code?
2. The CPU it will be executing on?
Unfortunately a mathematical proof of correctness may prove that some set of known types of bugs don't exist and it may prove that the program actually matches the specification. What it doesn't prove is that the program is what the customer wanted (i.e. the specification is never complete and will change over time so insisting on it being complete and static is a very good way to get a disappointed customer).
Does proof of correctness result in code that is optimally able to be maintained (oops, sorry - if it starts out life correct then it never needs maintenance does it?).
More importantly, our happy user needs to use this kernel to do some real work, so they install a web server on it, along with PHP, then hire a cheap programmer who has read a book on PHP to write applications for it.
The eventual end user knows nothing about any of this and compromises the integrity of the system by writing down passwords on sticky pieces of paper, or by surfing pr0n sites that have bonus cross-site request forgeries embedded in them....
It may be nice to have a more robust Kernel but I think the money would be better spent on researching how to fix the real problems that plague computer systems.
HAL far from foolproof!
As I remember it, all you had to do was give him two contradictory orders at once at the same level of priority and he'd undergo a massive psychotic breakdown.... Using the Cretan paradox in one form or another to make a computer overload and blow up is the oldest trick in the sci-fi book.
Similarly, in the context of this story, the fact that they've mathematically proved it to be formally correct really just means that now they're half-way to proving the overall formal system's Gödel statement, and thereby demonstrating that it is either incomplete or inconsistent. Well done them! I look forward to the inevitable future moment when this proven-mathematically-correct system goes completely haywire and starts spewing out random garbage!
So, a team of clever people, over years, has recreated a light version of secure BSD *nix.
But, mathematically proven, like "the heavier-than-air machine can't fly", or "radio coils must be over a specific (big) size"...
Now we start checking the hardware BIOS, silicon logic in the CPU and eventually the firmware in the attached devices.
Paris, because even she knows when enough is enough, sort of.
"Extensive theorem-proving scaffolding and refactoring to manage security relationships has been applied to the Windows kernel in the last few years."
Oh shit. It clearly doesn't work then.
But it's not Linux
The current by-line on the front page is "Computer scientists claim to have built a Linux kernel for embedded...", but nothing here says anything about it being Linux related; indeed, it's a microkernel. It could possibly run Linux inside the proven-correct microkernel, but let's not confuse the issue.
Maybe it's just me, because I'm not usually the first to think of these sorts of things, but...
Isn't the idea of a formally proven kernel written in C nothing more than an academic exercise without a formally proven C compiler?
Not to mention the CPU and chipset as well, while we're at it.
@ Math mirth
Oh you crazy Math people!
Your life must be so full.
The olde concealed-divide-by-zero trick, eh?
An oldie but a goodie!
@The Other Steve; @Brian Miller
@The Other Steve: Nice one, Dijkstra on the other hand got angry when you suggested testing something he had proved.
@Brian Miller: Not a flaw at all. It is possible to write secure C, just as it is possible to write secure assembly (after all, any secure code must be secure machine code at the final crunch). So long as you have a secure C compiler (mathematically correct) you are OK. Exactly because C is so small, as a language, it is less hard to prove correct.
Ah.. now I get it...
Researchers "forge" secure kernel from maths proofs
IIRC VIPER hit trouble because the team assumed the implementation tools (turning the abstract register-based machine design into actual gates on chip) had no errors in them. This was incorrect. When the design was shown to have bugs, the project hit the skids.
Lesson: you need to verify the *whole* toolchain. Just because *most* users of the (commercial) design tools have no problems does not make them problem-free.
Going from memory (a v. old Byte article, early 80s): the architecture was basically a 16-bit 6502. The first implementation was a gate array using a series of finite state machines, with a 26MHz clock to get 1 MIPS. I seem to recall some fairly simple changes upped the speed quite a lot but made the verification harder (verification being the actual project focus).
I think RSRE had a buy-British policy, so went to their chums at GEC for that stuff. Who knows what would have happened if they had used VLSI instead (the company that fabbed the ARM series)?
'......a HAL-like system that's "foolproof and incapable of error".'
Yeah, 'cos an AI that cannot resolve conflicting orders is definitely foolproof, and that it then goes on to attempt to exterminate its meatsack colleagues is merely proof of its incapability of error.
What the Tux?
As far as I can make out this isn't a Linux kernel.
It depends on the model
I suppose the proof starts off a bit like this:
Suppose there is ...
and then goes on to use base assumptions to mirror physical events controlled by the kernel.
Which is OK, fine and dandy - it really is a brilliant start - but kernels tend to operate with voltage changes effected at very high frequencies, yes?
So there will always be a physical element that the proof cannot completely cater for?
So, good model = good proof.
First flaw identified in 3.....2.....1.....
Proven, for some value of "proof".
The article mentions C, but that is known to have several internal contradictions and *that* would make a mathematical proof impossible. However, one could (with some effort) go through all those inconsistencies and resolve them one way or the other and end up with a C-like language that was well-defined, and then use *that* to produce a kernel that was proven. That would be a useful exercise if it yielded new techniques. (I don't want to mock, too much.)
For example, you might write a compiler for the new language. Of course, then you'd need to prove the compiler. And you'd only be able to run the compiler on a proven operating system, which you don't have yet. And you'd only be able to run either on proven hardware, which I suspect also that you don't have yet. (OK, just a little.)
But all that is for a *mathematical* interpretation of "proven correct", which is a boolean attribute, whereas engineers recognise a continuum of...
Compiles, but I haven't tried it out yet.
I'm quite proud of this and willing to publish it.
I'm quite proud of the test suite, and willing to publish that too.
I'm quite proud of the original requirements spec, and willing to publish even that.
I have professional indemnity insurance that covers this.
(That's not mockery by the way. Outside of the maths department, no-one ever *proves* anything, but people sometimes put their money where their mouth is and you can tell a lot by just how much money they, or more accurately their insurance companies, put up.)
What no Razor
mad. A write-only portable assembler. Hardly a decent high-level language.
Occam, Modula-2, Oberon, maybe C# or C++, not C
Are you sure it's a Linux kernel?
Linux is a monolithic kernel, not a microkernel. GNU Hurd is a microkernel.
Also following the links, the only reference to Linux is that it will run on top of this kernel, not that this kernel is derived from Linux.
I wish them luck but .....
"All this heavy mathematical lifting means that the kernel ought to be immune to common types of attack, such as buffer overflows."
And the first half-hearted assault which results in a buffer overflow will render their testing model fundamentally flawed and their promises just so much hot air/commercial posturing/spin?
Go get-em !
I'm sure they didn't think of that.
Give this man some grant money.
Or maybe they did look for the completely fu&*ing obvious stuff before they went on to the more tricky stuff.
Australia's Information and Communications Technology Centre of Excellence (NICTA).
He might have fooled you into thinking 1=2, but he didn't prove it mathematically - it's the same technique used by city types to prove they're useful. Works on politicians, but not on reality.
@Don Mitchell - you need to apply it to the whole 'kernel' to be of any use, and then be prepared to fix the system when you find problems - but as that would stop a lot of it functioning usefully....
"foolproof and incapable of error"
But, as the old quote has it: "It is impossible to make anything foolproof because fools are so ingenious."
Erm...... remember that whole bumblebee thing?
Whilst I'd love to stand up and wave the flag for Linux, I get a bit worried whenever a boffin stands up and says something is "mathematically proven". Maths is just a set of modelling tools that allow us to describe real-world events in a manner some can understand, and the models often include plenty of assumptions. Of course, if your mathematical model is flawed or just wrong, then you get some fun scientific gaffes such as the so-called proof that bumblebees can't fly. Actually, if you use the standard aerodynamic models the maths says they can't, but if you use the special models for helicopter flight then it all looks much better for our fuzzy friends. It's probably a good thing bumblebees put a lot more faith in empirical testing rather than mathematical theorems, and so do I - until it is thoroughly tested in a real environment, nothing is proven.
The proofs are being done with Isabelle, which is an "interactive theorem prover". Essentially this means that the computer is doing the number crunching and humans are directing the proofs and doing the clever stuff.
The main problem there was folks making overly bold claims. The proofs themselves were valid, just not as far reaching as some thought. Things have progressed since then but there are still boundaries with every formal verification.
Your comment makes no sense. They used a theorem prover, called Isabelle, which is written in Standard ML. This supports doing proofs in higher-order logic (a formal mathematical logic). They used this logic to model the semantics of C code. This semantics is then used to verify correctness properties for the source code of the Kernel.
This is the entire point of using an interactive theorem prover. It should guarantee that there are no holes in your proofs. The main problem is making sure you've proved exactly the right thing i.e. that your models are right and that they are detailed enough.
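For a flavour of what "interactive theorem proving" means in practice, here is a toy machine-checked proof in Lean (not Isabelle, which is what NICTA actually used; this is purely illustrative). The machine checks every step; if the proof had a hole, it simply would not compile.

```lean
-- Trivially true by how Nat addition is defined; the checker accepts `rfl`.
theorem add_zero_right (n : Nat) : n + 0 = n := rfl

-- Here the human "directs" the proof (choosing induction);
-- the machine verifies each resulting goal.
theorem zero_add_left (n : Nat) : 0 + n = n := by
  induction n with
  | zero => rfl
  | succ k ih => rw [Nat.add_succ, ih]
```

The kernel proofs are the same idea scaled up enormously: humans pick the strategy, the prover guarantees no step is fudged.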
"Computer scientists claim to have built a Linux kernel for embedded mission-critical systems that maths proves is free of many types of errors"
Eh, L4 isn't Linux, it's Mach.
How do you model faulty hardware, or suffering a power cut?
Sounds promising, but that is where it ends...
... the comment on interrupts is principally one I would agree with. However, it implies far more than the author realised. Theoretically, you would have to model an interrupt state for every single assembly language instruction generated by the compiler (you would have to know timings, registers, etc) - and I do not see how simple C source gives you enough information to do this. You would need to do this with the COMPILED output. And even that isn't enough, as I will explain below.
Of course, what nobody has mentioned so far is that if you use a different compiler or even a different round of optimisations, your mathematical proof goes up in a cloud of smoke. How can you prove the code, if you have not proven the compiler? Sometimes, you need to KNOW how the thing will be compiled before proving it to be correct. We've already seen one Linux flaw that could easily (in some cases) have been caused by excessive compiler optimisation. The situation is slightly more complicated by endianness, but what will bring some higher-level people out in a cold sweat is the concept of branch delay slots on some architectures (e.g. SPARC.) Sometimes, it is NOT safe to assume that code beyond a branch will not be executed, once a branch has taken place. You need to look at the specifics for EACH platform. And, of course, the compiler implementation.
To be honest, I do not see how this can be done accurately at C-level. It is simply not precise enough, or low-level enough. As also mentioned above, it is too prone to interpretation by compiler implementation / optimisation level / both. Some compilers (e.g. gcc) even compile a program that does not work as intended WITHOUT explicit optimisations, and I have personally proved this by compiling the same source with another compiler - and then testing it successfully! (Richard Stallman makes good open source licenses, but I think his C compiler is a steaming pile of turd. I do not use it anymore: I stick with the Sun Studio compiler, which works under Linux and Solaris. Or, failing all else, the Intel compiler - if I want Windows and Visual Studio support.)
The fact of the matter is, by studying C, you muddy the waters by trying to analyse the product of the behaviour of the compiler (which is certainly guaranteed to change in the future), and the intrinsics of the code. If you study one compiled version, then you can at least reduce the burden of proof to just proving that your code works successfully. However, I cannot see open-source advocates being happy with that idea; therefore, assembler would be better in this case, as you at least have the possibility of studying source code which assembles, predictably, 1:1 (except macros), to machine code - which, in the end, is what matters. You do not, after all, run C on your microprocessor. You run machine code.
I admit to having an already-biased (and low) opinion of people who take a high-brow attitude to proving code mathematically, who then fail to take into account the physical environment their code runs in, or how their code is actually executed by the machine. My fiancée works in such a department, and they do not even have machines that run reliably all the time. Most of the people there also only know very high-level languages (such as Java), and do not even know enough to write code in C (even worse, they see it as unimportant.) To me, they are like plumbers who do not understand how water flows in a pipe (and no, before you ask, it is not quite so straightforward as you might think.)
The only projects I know of that have really gotten proofs like this down to the fine detail are the military, who wrote code for guidance systems in assembler - and they wrote it so that they knew exactly how long, in clock cycles, a particular algorithm would take to complete. That is the sort of code you can then try to pin down and analyse with mathematics. Anything else is a moving target.
To err is human. To screw up completely requires a computer.
Remember the old adage: in theory, theory and practice are the same, but in practice they are not.
If you work in this game long enough you realise that a real-world OS running on real-world hardware is not deterministic. The theory says that it is and if everything runs well, it is; but nothing runs well all the time and, alas inevitably, red faces all round when lots of great mathematics go up in a puff of "alpha particle induced memory error" smoke (or something similar).
Cast back to LTCM Ltd. and the summer of 1998. Hubris before the fall!
(Of course there is nothing wrong, and everything right, with a whole bunch of real smart guys carefully poring over kernel code, automated or not, and shaking out as many bugs as possible. Should happen more often.)
Although reliability of missile guidance systems is important, I'd have thought that the military would happily trade a 1% chance of post-launch system failure for things like a bigger payload or a better ability to deal with enemy countermeasures. (And 1% is orders of magnitude worse than today's commercial operating systems would give them).
If this kernel really is a major step up in reliability, I would have thought the most likely applications would be civilian: things like self-driving cars, civil-aviation autopilots and fly-by-wire systems, or automated railway or air-traffic control.
Headline - "Computer scientists claim to have built a Linux kernel for embedded mission-critical systems that maths proves is free of many types of errors"
Um, L4 is _not_ Linux, guys ... although the OKL version of L4 can run virtualised Linux in a process...
"It's probably a good thing bumblebees put a lot more faith in empirical testing rather than mathematical theorems, and so do I - until it is thoroughly tested in a real environment, nothing is proven."
Nothing is ever proven. Because A has happened n times, you can never be 100% sure that A will happen on the (n+1)th occurrence. You just gain more confidence. Not that I think bumblebees are too worried about their tried and tested technique failing them tomorrow.
Daaaaaiiiiiissssyyyyy daaaaaiiiiisssyyyy, ggiiiiiiveeee meeeee youuurrrrr annsssswwweerrrrr trruuuuuueeeeee. From what I can tell, HAL was never built on the 3 laws of robotics. RoboCop's Prime Directives, maybe. Dick? You're fired!
I don't get it. Admittedly I've never heard of the bartender problem, but essentially you are saying
30 - 3 + 2 = 29, but you think it should equal 30?
Why am I always scared when mathematicians play at programming computers?
That tool of theirs would laugh in the face of your puny bar-man problem with its misdirection subroutine. You don't impress me (but fwiw it's a point well made).
For me, it's only secure if Paris says so.
Ok, time to stop laughing.
Ok, stop. There's a truism that applies in situations like this. If I, as a lay-person, can spot a problem with some expert's (or group of experts') work within 30 seconds of hearing about it, it's fairly likely that they will have thought of it too. This makes nay-saying on that basis pretty pointless. Apart from anything else, it's boring and makes you look like an idiot.
Everyone should really read http://ertos.nicta.com.au/research/l4.verified/proof.pml to see what the researchers are actually claiming, and see if you still think that a kernel which is, for instance, *completely immune* from buffer overflows, pointer errors and memory leaks (given the assumption constraints) is a waste of time.
For my money, *anything* that raises the security bar is a good thing.
@bartender problem / vincent himpe
You are confused.
You're making the mistake of putting numbers on the wrong side of the equation.
One beer is 10p (I wish!)
Bulk discount, three beers for 25p.
Three men each put in 9p, thus giving the bartender 27p for three beers - so they get 2p change.
Bartender gets to keep the 2p change (as a tip, presumably). Nothing has gone anywhere.
@ The Commenter formally known as Matt
Don't be confused, vincent is just an idiot who seems to think that a mathematical problem used to trick high-school kids is somehow relevant to formal mathematical proofs.
He even suggests C is fundamentally flawed, which is interesting seeing as like 99% of computing systems around today run on it.
C isn't fundamentally flawed; there are just bad programmers who use it incorrectly, allowing issues to arise because they simply didn't understand how to program properly.
Can you provide a reference for the aeronautical engineer who proved mathematically that bumble bees cannot fly? No, you can't - it's an urban myth.
I think the problem is really that mathematical proof is indeed proof ... within certain limits. The proof is the easy bit; it's the limits that are the bastard.
Not a Linux kernel
The summary on the front page says that this is a Linux kernel; it isn't (there's a clue in it being a microkernel). It can run a para-virtualised version of Linux and - in principle - other operating systems in this way. However, only the functionality provided by this secure kernel is "proven".