But a write-once, run anywhere binary represents a worthwhile effort
Or a security nightmare.
Mozilla this week announced a project called WASI (WebAssembly System Interface) to standardize how WebAssembly code interacts with operating systems. If the project succeeds, it will do what Oracle's Java Virtual Machine does, but better and more broadly. WebAssembly, or WASM, is a binary format for a virtual machine that can …
"Heterogeneity can be your friend." Said by me the day before this piece. It raises the difficulty of approaching the attack surface. You really, really have to take a tough, considered approach to security when there is only one standard compiler. And you still have the problem that WebAssembly has to be implemented for each and every binary machine model, which is pretty much the same problem as having n code bases, just as with any abstract-language-to-machine implementation before it. Very much like a safer (I use that word advisedly) clang, methinks.
I do believe this can be a net positive, and as an example I cite Rust. Same group of people. I've looked at it and it is a decent systems programming language, worthy of being the successor to C as I use C.
"But a write-once, run anywhere binary represents a worthwhile effort
"Or a security nightmare".
Actually both. Really good security militates powerfully and successfully against every desirable aspect of computing. Speed, flexibility, ease of use, user friendliness, cost effectiveness... security mows them all down pitilessly.
That's why so few decision-makers really want good security, no matter how much they feel obliged to jabber about it.
Sorry, precisely the reverse and for a few reasons: (1) use of Bog-standard models, data structures and algorithms; (2) two-thirds or more of the engineering was spent in the design phase with thorough validation of the logics and maths (yes, there are more than one of each) used; (3) documented to death as per any engineering manual you'd ever run into and nothing was ever allowed to get out of sync; (4) whatever I came up with had to be supported by others who would come after me, or replace me in case I developed a severe case of being dead, this being the military/government service.
I was asked over the years to do these safety-critical designs as something aside from my regular job which was keeping safety-critical equipment in operation. The Navy spent an awful lot of money teaching me that. I was damned good at it with the evaluations to prove it. The side jobs were done because they were fun, not because I was paid for them or for any other reward. I take serious pride in building, it's everything about what I am.
Oh, the other third of the time, less about a twentieth, was spent observing and interviewing (in the anthropological sense, yes, really) the people who would be doing the work, and in acceptance testing and tweaks to suit the new work flow. Their work flow. Funny but true, often they'd end up operating faster once they got accustomed to the security. They had assurance that mistakes would be caught, not allowed to propagate.
> Or ANDF
Exactly: This sounds just like ANDF warmed over. I guess the major difference is in licensing. ANDF was specified in the days of the "Open Software Foundation", where "open" meant "anyone can license this for the same fee". Linux, and Free and Open Source software ate that particular lunch long ago.
A long, long time ago (1980s), IBM and Apple, IIRC, had a project for compile once, run on any Unix. It died due to willful non-cooperation. Probably too hard at the time, but a VM approach may allow it to work now. Any similarity in concept to the Rich Binary concept M$ is trying out? Somehow I feel history is repeating. Orackle, M$ and now Moz all have similar competing approaches; no-one wins. Unless one of them kills Java, then all win. Orakle excepted, but who would care?
I would be happy to see any non-Oracle alternative too.
Java in the browser died a long time ago.
Write once, run anywhere is no longer important to Java. Java on the server side (think Hadoop) does not need to be portable. I think Java will continue to exist for other reasons.
I'm not sure why Mozilla thinks a good cross-platform browser runtime would necessarily make waves server-side just because Java did.
One reason Java is big on the server side is that it never crashes due to developer bugs and NPEs. If something open, C-like, and typesafe came along with the same no-crash guarantee, I think it would stand a good chance against Java right now, even if it required compilation on each platform.
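On the "no-crash guarantee" point, a minimal sketch of what that means in practice: where a C program dereferencing a null pointer typically takes the whole process down with a segfault, the JVM turns the same developer bug into a catchable exception, so a long-running server can log it and keep serving.

```java
public class NoCrash {
    public static void main(String[] args) {
        String s = null;
        try {
            // In C, the equivalent null dereference would usually segfault the process.
            System.out.println(s.length());
        } catch (NullPointerException e) {
            // The JVM raises a catchable exception instead; a server would
            // log this and carry on handling other requests.
            System.out.println("caught NPE, process still alive");
        }
    }
}
```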
"Java in the browser died a long time ago."
"That was because of Quality of Implementation issues that led to it being a security nightmare -- that doesn't mean it was a bad idea."
In addition, this is one time when licensing is a really important point. I do not plan to have a massive JRE, and most likely a JDK as well to fix other people's mistakes, on every system I browse the web with, if I need to individually license them with Oracle. Nor am I willing to accept every application coming with its own JRE and JDK blobs taking hundreds of megabytes each, because they need to handle their own licenses and only work with a very specific version that was released 27 months ago but also includes API headers from releases from 28, 29, 30, 31, 33, 36, 41, and 46 months ago. On that topic, it would be fun to see sites specifying their functioning Java versions so you could go retrieve the JREs from the java.com and oracle.com mazes. If you browsed enough, eventually you'd get the full collection.
There are a couple newish languages that are both C like and much more type safe than C: Go and Rust. The main reason Java is still big is the enterprise code base and Android apps. Java has notoriously bloated, verbose code which offends many programmers' sense of elegance.
Also, many conflate the JVM and development environment with Java the language. The JVM will probably be around for some time, as many languages use it. Java the language may start to wane as developers get familiar with other JVM languages such as Kotlin.
"Java has notoriously bloated, verbose code which offends many programmers' sense of elegance."
Don't know what you are referring to. I programmed professionally in Java for 20 years and never felt offended. Java has the same control structures as all C-like current languages. When you say bloated, are you referring to the library load pulled in by programs of any significance? Then you should compare to the full set of shared objects required to run, e.g., C/C++ executables.
He's probably referring to the stupidly long names of Java library functions. E.g. in C the POSIX function to return the current directory is getcwd(); in Java it would be something far longer.
It's as bad as COBOL, frankly. Then there's verbosity like "public static void main(String[] args)". Why not just "void main()" and default the rest, given that it's almost always the same?
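To make the comparison concrete, a minimal sketch; the Java names below are real standard-library spellings, though which one counts as "the" equivalent of getcwd() is a matter of taste:

```java
import java.nio.file.Path;
import java.nio.file.Paths;

public class Cwd {
    public static void main(String[] args) {
        // C (POSIX): char buf[PATH_MAX]; getcwd(buf, sizeof buf);

        // Java, via a system property:
        String viaProperty = System.getProperty("user.dir");

        // Java, via NIO (resolves the empty relative path against user.dir):
        Path viaNio = Paths.get("").toAbsolutePath();

        System.out.println(viaProperty);
        System.out.println(viaNio);
    }
}
```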
Spectre likes this
and we are still reading about ideas for a "universal" code?
I have a feeling in 2049, we'll be reading about a new idea for "write once, run anywhere" code ...
Surely, given the power of modern electronics, it's possible to define a rigid von Neumann architecture that can be implemented as a single VM, and code to that?
In 2049 we'll probably be back to valve computers, because you can make thermionic valves, carbon resistors and paper capacitors in a small factory, which after the fall of civilisation is all we'll be able to manage.
(I won't be around to see it so this is purest speculation.)
An x86 VM running on anything with QEMU probably fits the bill and there's loads of existing software out there, even complete operating systems.
The *real* problem is that "run anywhere" simply isn't possible as long as "anywhere" is taken to include all possible hardware from phones to supercomputers, 4-inch screens to multi-monitor or headless setups, and available storage varying from MB to TB. And that's before we consider the presence or absence of all the third-party software services that might define your "stack".
So ... you narrow down your platform definition to something that exists on all phone-like devices or all desktop-like devices, and you find that there is nearly always something missing that stops you writing interesting apps, so the only apps that can use your new universal platform are toys.
No, I think the point is that this is targeting browsers. You can run C++ code in it, or Rust.
Python has a proof-of-concept port that implements Python’s VM in Rust (there are already C, Java and Python implementations of Python).
Now, they’ve turned around and put that RustPython on WASM and ... it works.
So a universally available portable VM, but the JS execution engines have barely grown in code size to accommodate WASM (<5% IIRC), so not that much new attack surface.
And... no more JS if you don’t like it. Static languages are, almost by definition, a better fit for WASM bytecode. Fast, too: this is a direct evolution of the asm.js tech that was running cross-compiled Doom C code in a browser 5-6 years ago.
This is hot stuff.
It'll be a seriously cut-down version of C/C++ without much in the way of useful system functionality. Also it'll be interesting to see how they deal with pointers, particularly function pointers and accessing the stack, the sort of low-level details other sandboxed languages don't have.
you use C for the typing, syntax and precision, and also for the speed that WASM, which needs a statically typed language, buys you. videogames, fast math...
Python on Rust on WASM isn’t mostly about getting Python. it’s that you can support Python’s complex VM on WASM, in a useful fashion. which kinda shows that pointers (to WASM space) and esoteric “metal language” features can happen. even if system stuff is sandboxed. but this is still only for the browser.
the Java comparison is misleading. WASM is not a language at all. it’s _fast_ bytecode for the existing JS engines like V8 or Spidermonkey. but you get that bytecode via writing C++ or Rust. and the browser’s multimedia/compute/graphics and networked data is your OS/platform. but without using JS.
could you write Doom in JS? no. but C -> WASM -> Doom running well, yes.
"buys you. videogames, fast math..."
Sure, you use C/C++ for speed - but you're going to lose that advantage if the code is simply compiled down to the same p-code as everything else that runs in that sandbox. And if you don't get the speed or the system-level access then C really isn't a good choice of language (C++ maybe), and I say this as a C++ dev.
"could you write Doom in JS? "
Actually you probably could these days, but I suspect the resulting code would be hideous.
blah blah blah
seriously, didn’t downvote you but you sound very certain of yourself.
now, neither of us need to take this as gospel, but this is early days, both in terms of the VM and in terms of the bytecodes the transpilers feed to it.
my knowledge of the subject matter is admittedly limited, but you seem to be operating at the “gosh, can’t happen” level.
point is only that it’s potentially much faster than JS-in-browser. not that it’s faster than LLVM/GCC on bare metal. too bad both you and CB are too thick to grasp that no one’s contesting that bit.
but it’s also way _safer_ than random from-internet C/Java/Haskell code executing outside of a browser sandbox. did you miss that too???
"point is only that it’s potentially much faster than JS-in-browser. not that it’s faster than LLVM/GCC on bare metal. too bad both you and CB are too thick to grasp that no one’s contesting that bit."
Says the muppet who clearly doesn't understand the concept of virtual machines and why C/C++ is usually faster than other languages (clue - it's because the binaries DO run on bare metal). I suggest you get a clue next time before you make a fool of yourself.
"but it’s also way _safer_ than random from-internet C/Java/Haskell code executing outside of a browser sandbox. did you miss that too???"
Bloody hell Sherlock, there's no fooling you is there. Please, share with us more of your profound insights!
"Also it'll be interesting to see how they deal with pointers, particularly function pointers and accessing the stack, the sort of low-level details other sandboxed languages don't have"
For a useful real-world example you can see the implementation of the EOS blockchain (https://github.com/EOSIO/eos). The blockchain has a WebAssembly virtual machine in which the smart contracts run. The smart contracts are written in standard C++17. The compiler is clang 7 targeting wasm (see the SDK: https://github.com/EOSIO/eosio.cdt).
that's because, as with all such projects, the original project gets about 60% feature-complete before veering off into tangential features that no one uses and only the devs ever care about. They then post proudly in what passes for documentation that implementing anything non-trivial is "left as an exercise to the reader".
Write once run anywhere is a failed idea for general purpose applications, and here's why:
1) You are always limited to the lowest common feature set for all supported platforms.
2) Some genius will still shoe horn stuff in forcing you to add a bunch of per platform exception testing at run-time.
3) Any change to any of the platforms can break the system, and potentially the WHOLE system.
4) Nobody tests all the supported platforms sufficiently, so what you wind up with is something that probably only runs on the Devs machine.
5) All the above notwithstanding, even if you manage to get it working, future requirements will probably drag you screaming into one of the above, or your customers will hound you asking for features or a user experience like your competitors' native apps.
Limiting its usefulness, for me, to simple and short-term projects. YMMV. If I gotta support it for a year or more, I'll spend the three extra days to break it into a native front end for the platforms I plan to test and support, and a back-end library in Rust.
The "Anywhere" part became a nightmare that destroyed many APIs. GUIs, audio, video, graphics bitmaps, and filesystems don't have any standardization between systems. Sun tried to fix this by abstracting the hell out of everything. This made APIs bloated, confusing, and incredibly slow. Just try rendering an affine-transformed photo using Java - you have your choice between looking awful, rendering at about 100 pixels per second, or needing 20 pages of complex code to bypass the bad APIs.
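For reference, a minimal sketch of the stock Java2D route (java.awt.geom / java.awt.image) the parent post is describing. The quality-versus-speed trade-off lives in the interpolation type; whether the output is acceptable is exactly the point in dispute:

```java
import java.awt.geom.AffineTransform;
import java.awt.image.AffineTransformOp;
import java.awt.image.BufferedImage;

public class Affine {
    public static void main(String[] args) {
        // Stand-in for a loaded photo (in real code, e.g. ImageIO.read).
        BufferedImage src = new BufferedImage(640, 480, BufferedImage.TYPE_INT_RGB);

        // Rotate 30 degrees about the image centre.
        AffineTransform tx = AffineTransform.getRotateInstance(
                Math.toRadians(30), src.getWidth() / 2.0, src.getHeight() / 2.0);

        // NEAREST_NEIGHBOR is fast but ugly; BICUBIC looks better but is slow.
        AffineTransformOp op = new AffineTransformOp(tx, AffineTransformOp.TYPE_BICUBIC);

        // Allocates a destination sized to the transformed bounds.
        BufferedImage dst = op.filter(src, null);

        System.out.println(dst.getWidth() + "x" + dst.getHeight());
    }
}
```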
"It is not worth while to try to keep history from repeating itself, for man's character will always make the preventing of the repetitions impossible".
- Mark Twain
In this specific instance, man's character tends to dictate that people hope to invent a Philosopher's Stone that turns lead into gold. They think mostly about the immediate problems of creating a system of universal portability, and hardly at all about the inherent problems - such as those of security - that will only become evident 10, 15 or 20 years later.
The point about a stone that turns lead into gold is that once you've got it, gold is worth no more than lead.
This is true of software, I think. Every improvement is met with new buggerations that reduce the utility to not much different from what it was before.
The only real magic is that the hardware gets better and cheaper.
If we're going to try to build such a stone for computing, I suggest we try to unify hardware so you can run more things across it. We already know how to compile and run things on the operating systems that they made it to run on. Instead of trying to run the same thing on another operating system without alteration, let's make it easier to run the tested system on our devices. Either through less restrictive boot managers or better virtualization, we've proven that it can be done. If we could just distract the WORA people for a bit, we could make some real progress on write once, run system on anything [edit: most things] [edit: many things] [edit: so we only got as far as "some things" after all because hardware manufacturers pelted us with SOCs containing locked boot code until we ran away, but it's still wasted less time than WORA].
> "WebAssembly has been designed to scale from tiny devices to large server farms or CDNs; is much more language-agnostic than Java; and has a much smaller implementation footprint."
Hmm, Java does scale: it is used on anything from mobiles to big iron. And WebAssembly will no longer have a smaller footprint after it has caught up with JVM features...
Users already have to "install" apps, for some definition of "install", and some installers already take a very long time for no obvious reason, so users are already used to running an installer and going to make a cup of tea. (Well, this one is, at any rate. YMMV.)
The Java sandbox was breached several times, but given the complexity it was relatively safe; all the equivalent technologies proved to be much weaker. Nonetheless a drumming campaign and strong vested interests managed to kill the applets in favour of HTML 5. The users ended up being plagued by tracking cookies, supercookies, cross-site scripting, Spectre plus other scripting attacks, code injection and so on.
Now even though they say they want to do something better (which I doubt, given the amount of resources available), they are admitting that that one was the best approach.
What could possibly go wrong? This is a security problem, especially if the libraries included with the package are out of date or, worse, hacked. This is a solution looking for a problem, and it creates a raft more. Even worse, it's another example of trying to add in yet another layer to kill performance.
Biting the hand that feeds IT © 1998–2019