And people give Microsoft a hard time over dicking around with their own alternatives to web standards... C'mon Google, get a clue!
Google has officially launched Native Client – a means of securely running C and C++ code inside a browser – as part of a new stable version of its Chrome browser that activates this rather controversial sandboxing technology. Mountain View turned on Native Client, aka NaCl, in the Chrome beta last month, and on Friday, it …
SPDY, WebM, WebP, Dart, NaCl etc. etc. Google are shoving out these half-baked implementations-cum-specifications and doing more to fragment the web than Microsoft ever did.
They've just re-invented the browser plug-in. :-|
A plug-in runs with the same privileges as the browser and has the same ability to screw with the OS as the host does. In theory NaCl is sandboxed, so even though the code is native it can't do nasty things. Problem is, we've all heard that one before. The second issue is that in a world where architectures are becoming heterogeneous (ARM, x86, MIPS etc.), the last thing we need is something that is tied to one architecture. PNaCl would be a far more suitable technology.
Actually there is an established sandboxing tradition nowadays, thanks (!) to Adobe.
They could gather as browser makers (Apple, Mozilla, Opera, Google) and establish a way more secure, future-ready AND backwards-compatible plugin standard.
Of course, each (except Opera on this matter) has its own evil agenda, so it doesn't happen.
Worse than that
They've created another way for the web to become tied to a specific OS and browser.
You didn't even need to read the article (the sub head says it all) to know that this has nothing to do with the web.
It is about being able to offer native code browser based apps via an app store, primarily in ChromeOS.
Whether browser-based apps and app stores are a good idea is another question entirely.
Re:-inventing the plug-in
Er, no. They've just re-invented the kind of process separation seen in every operating system worthy of the name within my lifetime. Obviously their version requires a special compiler, but I believe there were mainframes (Burroughs?) that enforced OS security in software (restricting access to compilers and linkers) rather than hardware (the way everyone else does it).
In theory, now that they've felt compelled to re-invent "running apps locally", we should hear less shit from them about how "in the future" everything will be done in the cloud. In practice, I'm not holding my breath.
Interesting and Fun
The pool game is addictive, but makes my laptop get hot.
More stuff here:
The C and C++ languages are not safe. It's their binding to low-level operations that lets them be fine-tuned for performance. To suggest that a compiler can make C/C++ both safe and efficient is absurd. Lots of people would love to get their hands on such a piece of magic.
C/C++ can't be safe?
You assume C/C++ can't be safe because traditionally they're compiled to native machine instructions. But they don't have to be, and indeed compilers like llvm-gcc and clang can spit out LLVM bitcode instead.
This is a very good video if you can spare 40 min:
Basically it's about efficiency, which matters most when running on battery and in data centers.
PNaCl makes sense for portability.
As for safety, there's nothing stopping the programmer from writing crap in any language of choice. The Mars Polar Lander would've crashed even if its control unit ran Java. And by the way, all your car's ECUs, which you really wouldn't want to slip, run on C.
Personally, I'd rather see them split Web *applications* off into their own URL "protocol", and make plain old http mean "Not Turing Complete" - let there be a "watp://" (web application transport protocol) that means "Turing Complete - Be Careful!" Then I can go to http://theregister.co.uk secure in the knowledge that the damage is limited, and if it matters, follow a link to watp://theregister.co.uk and have my browser be able to then warn me "Here Be Dragons boss!"
In fact, you don't need to solve the halting problem to prove safety. Plugin stops responding? So what: just kill it and go on.
There's another show-stopper though: you cannot execute native code safely without some kind of VM, and there are still problems on braindead architectures like x86 (AFAIU secure virtualization is only possible on AMD-V / Intel VT-x processors, and not possible on Atom).
"Not Turing Complete", eh?
Considering that you don't want a while loop in your program, what are you gonna do while it runs? Pick your nose while listening to bleeping sounds?
Well, not sure why you are worried about that. Most of the apps on your PC have the same problem. None of them have been functionally proved in any way whatsoever.
Why single out net apps?
And that is why it will fail.
Fugly never stopped people from buying Windows or NASCAR shirts.
It's nice to know that it's fully sandboxed.
Completely isolated. Absolutely safe. Nothing can possibly go worng.
"PNaCl will translate native code into bitcode"
Err, sort of like Java then?
Well, no, Java uses bytecode, it's *completely* different! Buggrit!
So you mean Java is 8 times bigger?
If web apps are to succeed they need to be able to run faster and in offline situations. Bundling C and C++ capabilities into the browser would definitely improve offline usage of apps.
Segment registers - wow
So the stack, BSS and code will be disjoint now. What a concept! Mark BSS and Stack non-executable and the code non-writeable. What a concept! I wonder why M$ never figured it out, even though separate segments have been around since the 60s on various machines. I guess one must keep that legacy self-modifying code running.
Those segment registers aren't "rarely used"; OSes make heavy use of them. It's how programs get their own bit of space to play in. Back before the 386 you *had* to use them lots and lots, because 16 bits of address space limits you to 64k addresses. The protections aren't that great, though, because combinations like "execute but not read" have (or at least had) trouble working as advertised on x86. So I don't think we'll have to wait very long before the first exploits pop up.
This also means platform lock-in, something Intel must love, and it's strange for Google to shoot themselves in the foot with it, what with that netbook hardware thing they're kicking around. It's also a poor show because it's way past time we moved away from x86: it wasn't a particularly good idea back in the 70s and it's gone a bit stale in the meantime. But more so, leaving anything but x86, like ARM or MIPS or even SPARC, out in the cold is not a "don't be evil" thing to do, Google.
All in all, this native client is something to take with a grain of salt, then.
"Those segment registers aren't "rarely used"; OSes make heavy use of them."
Did I misunderstand that MMUs today do paging?
Yes: The MMU does use segmentation registers
The segment registers in 386 protected mode contain selectors that index descriptor tables for the MMU to use, rather than old-style "segment addresses". But they *are* used to indicate which table entries apply. In fact, the 386 got *more* segment registers added (FS and GS), along with segment-override instruction prefixes.
The original article glossed over the fact that 286 protected mode already allows the descriptor-table approach instead of linear addresses computed as segment×16 plus offset, but the 286 doesn't have a paging unit. Even so, 16-bit offset addresses (what you'd get on a pre-386 without juggling segment registers) aren't very convenient for compilers like gcc that assume a flat address space. Unless the entire program (or, with some tweaking, just all the program's data) fits into 64 KB, of course.
It's still doomed to fail because...
Native code could be used for games, but then again, all games want hundreds of frames per second anyway, and players will miss the half frame per second by which sandboxed native code is slower than native OS code. So they will download their games through Steam anyway.
Google needs to start looking up into the future rather than down at their keyboards as they code away. Too many Google projects have failed; this will be one of them.
You forget mobile space
Where although they are getting faster, the limits introduced by limited memory and CPU speed are very real indeed.
They may be right it's a bad idea, but the arguments they give are absurd idealist BS of no interest to real-world users.
C++ can be sandboxed and has been for years. Programming competitions do exactly this, for example Code Jam and TopCoder. Even at the OS level, you can't access arbitrary RAM using C++ on Windows like you could in the old days.
"always with the negative waves"
Seems like the IT world is so used to f*ck ups that many commenters have forgotten that:
difficult != impossible
Where is the spirit that cracked those U Boat codes?
If you want to crack U-Boat codes, you can hack something up on the quick that may work under certain very specific circumstances, as long as you have a manor full of geniuses, a large box of duct tape and a rosary for your daily Hail Marys.
If you want to push out some browser thingamabob to the consumerist unwashed masses, you'd better move the f*ck off the "I do difficult because I can" arms-akimbo posture, because down that path lie tears, botnets and class-action lawsuits.
Salt (NaCl) and pepper.
Ha. ha. ha. ha. ha.
variety is the spice of life... how ironic...
Segment registers were used in the 16-bit days to get around memory addressing limits, but for 32-bit applications they're basically all set to 0 to give a flat address space. Intel's implementation was always a bit crappy anyway since it only ever allowed you to specify the start of a segment and not the length.
Trying to use them to bodge a 32-bit protection system is not only a bit stupid in this day and age, but it causes programming to become more cumbersome and leaves you hopelessly constrained to the x86 architecture.
Google should abandon this monstrosity right away, apologise for accidentally allowing a 'research' project to get into the wild, and do something about improving the HTML5 implementation in Chrome instead. They're in danger of making themselves largely irrelevant if they start trying to turn Chrome into the new IE6.
Intel supports segment lengths
"Intel's implementation was always a bit crappy anyway since it only ever allowed you to specify the start of a segment and not the length."
Baloney. In 286 and 386 protected mode, you set both the start address and the length for each segment. The 32-bit protected mode in the latter allows segments of up to 4 GB.
You can indeed sandbox code very effectively with 386 (and later) protected mode if you don't mind some performance hit. For example, put each memory allocation (malloc) to its own precisely sized segment. Overrun an array, and you get an exception (unfortunately this works only up to about 4000 segments because of some limited field sizes, so in practice you have to pool multiple allocations in one segment and get less precise protection). You can also make it absolutely impossible for the sandboxed code to execute any data.
But I fully agree that tying browser plug-ins to x86 architecture is a very bad idea, even if it works. Something one would expect to come from Intel, not Google.
Re: Intel supports segment lengths
"...put each memory allocation (malloc) to its own precisely sized segment..."
That's more or less what the debug version of the memory manager I wrote for Age of Conan did. Every memory allocation was placed at the end of its own memory pages, with a guard page directly after. Memory that was 'freed' was merely uncommitted but never unmapped from the virtual address space, to also catch use-after-free writes.
It worked a treat too. But only for the 64-bit version of the client, and only for about 10 minutes or so until it ran out of virtual address space. Performance wasn't exactly top notch either, given the amount of paging Windows ended up doing while trying to keep up.
Re: Intel supports segment lengths
Your technique sounds like Electric Fence, a debugging malloc library by Bruce Perens that was popular on Unix and Linux (or at least was, until Valgrind came along; it blows away just about every other memory debugger with its bit-precise tracking of data status).
The technique of using segments is a bit different. It allows fencing each allocation precisely in both directions. But it requires a compiler that understands far pointers.
32 bit ?
Using the segment registers might work in 32-bit mode: a large-model program can access 64 terabytes of virtual memory even on a 386. In 64-bit mode, segment limits are not checked by the processor.
I wondered about that, too. On the face of it, Google's shiny new platform won't run on ARM and won't run on x64. But these are "niche" CPU architectures, right?
Segment registers? Really?
Is the article actually right, or is it based on a misunderstanding?
Is the segment register concept really really being resurrected to try to do something that, as already noted by others, correctly implemented memory management units and OSes have been doing for decades (but not under Windows/x86)?
This way lies madness, surely?
Chrome: The new IE
Becomes popular simply because it's fast.
Has rendering bugs.
Gains market share.
Gains proprietary extensions that only work in that one browser, fragmenting the web.
Still to come: the inevitable security issues of such an approach. I thought Google were trying to replace IE and its shitty brethren, but it turns out they just wanted to *control* IE...