"lack of openness" from Oracle?
Who would have thought.
Oracle has suffered an embarrassing setback in its plans for a modular architecture in Java 9. The database goliath has lost a Java Community public-review ballot by 13 to 10 that was to have approved its Java Platform Module System (JPMS) specification as a final draft. Executive Committee members ignored dire warnings from …
Yeah, Oracle think they are being nice by pretending to be somewhat open, but all they need to do is look at how, on the DB side for example, Postgres and Hadoop are progressing to realize what happens if they think they can go back to a world of everything closed and proprietary. Closed platforms, lock-in and constant large yearly price increases aren't a business model that will last forever. Snoracle JAVA already missed the boat with Android for these very reasons.
Yes it is...
13 people decided that it wasn't a good idea, 10 decided that it was. The 13 carried the day.
I note that Oracle thought "consensus was not necessary". A nice (or not so nice) way of saying "Bugger you, the management committee is wrong and we'll do it anyway". As per normal with Oracle, they only play by the rules when it suits them.
Whilst the technical details of all of this are way beyond my comprehension, the very fact that Oracle wanted it and somebody else opposed it makes me like the people who opposed it. It usually turns out that anything Oracle wants strongly enough tends to be bad for the rest of the IT world, so on that basis I applaud the 13 refuseniks.
Your point is again...
I am reminded of the story earlier this year where Oracle made an employee sign a binding arbitration clause and their arbitrator ruled against them in a pay dispute and then Oracle sued the employee. Their corporate motto appears to be "heads I win, tails you lose".
There are few companies more evil than Oracle. If they are for something, then you know right away that you will be screwed somehow, some way.
If Oracle had deprecated all the half-baked Sun kludges in Java several years ago, then removed them or enforced a hard-coded security policy denying access to non-JRE code, this could have been done with far less fuss now! Because that wasn't done, developers could keep using the kludges, so short-sighted management were able to put off upgrades, and some products are now a pain to upgrade...!
I think Oracle _finally_ accepted that JME was crippled junk (Google saw this for Android), so they needed some way to modularise and shrink the JRE for more restricted environments.
My rules of thumb are as follows. I expect many will disagree and (semi-ironically) downthumb me. Nevertheless, I continue to follow these rules of thumb because they work (for some values of work).
1) If Sun invented it, it is buggy and full of security holes. See NFS (all major versions, despite each major version being slightly less full of security holes than the one it replaced), Sun's variant of RPC, Java.
2) If Sun bought it, they broke it, at least a little. See MySQL, Star Office.
3) Oracle took whatever Sun did badly and made it far worse. See everything.
And yes, I know, the downthumbs will mainly come from Java lovers. Because Java is different (they think) from all the rest of Sun's children in that it isn't a buggy, bloated, slow pile of shit. And then they'll ooh and aah at how Java SE 10 is going to add all the things that will finally allow it to deliver all the programming advantages of OOP that the very first version was supposed to (but didn't) deliver. Oh yes, SE 10 is actually going to be useful and usable, just like the prophets foretold back in the dim history of time (back when they were talking about JDK 1.0).
Did Scott pinch your lunch from the fridge or something?
Like any company, Sun had its faults. However, their hardware (pre-Oracle) was mostly fine, and in the early days pretty good. I wasn't thrilled about Solaris after SunOS 4.x (what can I say, I prefer BSD to SysV), but to be fair they did a fairly decent job with it in the end and it turned out to be a usable and dependable OS. Backwards compatibility is also pretty solid, so an old binary for, say, Solaris 2.5 will probably run just fine on Solaris 10.
Yeah, you could easily beat its single-thread performance with cheap x86, but as a heavily loaded multi-user, heavy-I/O system it was a lot better than most x86-based alternatives.
I shall not comment on post-Oracle situation as it is no longer Sun.
I can't downvote you because I despise Java in all its incarnations. Not that I was going to anyway mind.
Ooops, meant to add the qualifying term "software" in my rant. As you said, the hardware was quite good.
And you're right about Solaris too - not too bad, although it was considered to be slow enough that it was often called Slowlaris. But, like all the proprietary *nixen, it suffered from "we're going to do this our way, to differentiate us from the competition," which actually meant "we're going to fragment the market so Microsoft can dominate it." Short-term gain, long-term loss for all those proprietary *nixen.
And you're right about Solaris too - not too bad, although it was considered to be slow enough that it was often called Slowlaris.
It depends on what aspect of it you think is slow. As far as I recall, the memory allocator in the C runtime on Solaris is of the old school - every malloc and free maps on to an OS call. Very BSD. But there's nothing preventing use of a GLIBC style memory allocator.
If one does that there's no particular reason why code running on Solaris would be slower than anything else on the same hardware.
If one considers the wider system aspects, I know that the Linux world has worked very hard on getting mutexes working faster, and Linus has always steered the philosophy of the scheduler towards throughput over everything else. That's pretty good. However, it's only comparatively recently that Linux got rid of the big kernel lock. There's also a growing acknowledgment that the Linux network stack is a bad idea speed-wise, but it's such a massive change to do anything about it that I can't see it happening. The BSDs of this world, which put the stack in user land, are the way to go. AFAIK Linux is the only OS to put a network stack in the kernel. Windows? No. Mac OS? No. *BSD? No. VxWorks, INTEGRITY? No. QNX? Dunno, probably not. See what I mean?
So if one's code is heavy on the mutexes, threads and IO one would see, or would have seen, a difference.
"every malloc and free maps on to an OS call" - not true. No version of unix has ever done that. It would be unusably slow.
Well, maybe not for a long time. A long time ago every new allocation needed a call to sbrk...
OpenBSD makes OS calls for anything over page size. Anything under a page size is drawn from a pool of already allocated/recycled pages. It uses mmap instead of sbrk. They get some very nice benefits from doing so, e.g. most allocations have unmapped pages adjacent - free buffer-overrun protection. And it allows ASLR to apply to data too, so the layout of data within one's program is randomised. Freed memory does not come back to haunt the program. And realloc stands a good chance of not requiring a copy to be performed.
Ok, so it's perhaps not as fast as jemalloc, but for some it has desirable properties.
What exactly do you mean by "network stack in the kernel" that means that FreeBSD doesn't do this?
Context: my company's main product uses FreeBSD as an OS. I have intimate knowledge of the flow of network packets through it, and in normal circumstances, packets do not leave kernelspace. The netinet components are, in fact, normally compiled into the main monolithic kernel file (more or less like Linux usually does).
The Windows network stack is exclusively in loadable modules(1), but still runs in kernelspace once loaded.
(1) There's almost nothing in the core Windows kernel, aside from enough disk modules and module management to load the rest of the kernel from what are merely specialised DLLs on disk.
Don't forget ZFS. Now that really is a tremendous piece of software, and Sun had the kindness to give it away. It's certainly one of the jewels in Sun's crown. You don't even need to run it to know it, a good indicator is the extent of the row in the Linux world as to whether it can be included in distros or not, or replicated.
I do have sympathy with the need to be able to override committee members that are merely representing their own narrow interests. I'm not qualified to comment on the merits of the arguments in this particular case, but from reading the article I suspect that Red Hat are trying to defend an investment in their own code that kinda achieves something similar. Have I got that right? It sounds like there's something about what they've got that is going to cause problems (i.e. it's a developmental blind alley, or is encumbered in some way, or is simply incompatible with what everyone else wants). If so then the Java world does need a way of putting Red Hat in their place.
Perhaps Oracle could have gone about things differently, but if there is a burning need to correct some deep structural problem then there's little sense in delaying matters. Even if that breaks a few things along the way. Fragmentation will do no one any favours, and the Java community really should strive to avoid that. Stagnation won't help either. Doing one's own thing outside of the prevailing consensus runs the risk of making it difficult for everyone.
Sounds like they need some face to face meetings.
The sun.* packages are used because there are no good alternatives. The alternatives need to be provided well in advance, and with back-portable .jars. Some things are just convenient, like Base64. Others, like sun.misc.Unsafe, are essential for some advanced usages.
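For what it's worth, the JDK did eventually ship a supported replacement for the old sun.misc Base64 classes: java.util.Base64, added in Java 8. A minimal sketch:

```java
import java.util.Base64;

public class Base64Demo {
    public static void main(String[] args) {
        // Encode bytes to a Base64 string with the standard JDK API
        String encoded = Base64.getEncoder()
                .encodeToString("hello".getBytes());

        // Decode it back to the original bytes
        byte[] decoded = Base64.getDecoder().decode(encoded);

        System.out.println(encoded);              // aGVsbG8=
        System.out.println(new String(decoded));  // hello
    }
}
```

Of course, that only helps if you can require Java 8+; code stuck on older JREs is exactly the back-portability problem described above.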
Java itself was a godsend. It made Lisp-style programming popular, introducing garbage collection to the unwashed masses. It spawned .net. If it did not exist we would still have to use archaic rubbish like C++ and PHP.
I'll bite. :)
"It spawned .net. If it did not exist we would still have to use archaic rubbish like C++ and PHP."
Sadly, despite all that prior archaic art to improve upon, Java still lacks first-class unsigned integer types, still doesn't play well with the OS leading to poor performance, unpredictable run-time behaviour and folks having to leverage third-party code to emulate/replace the functions that the host OS has already provided for decades.
Java isn't a "godsend", it's just a tool - and not a particularly elegant one at that. I say that having used Java on a regular basis since 1.1 (1.0 was missing too much to be a valuable alternative to anything else at the time, IMO).
I'll bite :)
"If it did not exist we would still have to use archaic rubbish like C++"
Oi, stop dissing C++!
Ok, so there's a decades-long history behind C++ that has not been removed, and some of the template and STL stuff is 'orrible. However, if one makes careful and disciplined use of shared pointers and runs it on top of a memory allocator like GLIBC's ptmalloc, it's pretty hard to beat.
I've written really quite large programmes in C++ that don't use the "delete" keyword anywhere at all and have zero memory leaks (according to valgrind), everything is done with shared pointers. And it's fast.
One of my favourites blends STL queues with ZeroMQ: it pushes shared pointers through the STL queue but uses ZMQ for its distribution patterns (PUSH/PULL in this case) to decide which thread is going to read the shared pointer off the queue. Hmmm, I think I can hear people curling up in horror...
Personally speaking though I think the days of C++ are passing. Rust in particular stands a very good chance indeed of replacing it. Being a completely new language means they can throw away all the decades of cruftiness that a language like C++ has to support, and build a nice language with some very high level ideas (things like automatic memory management) without the need for a bloaty horrible thing like a garbage collector thread. There's even rumblings of a project to re-write Linux in Rust!
Unsigned integers are used in a wide variety of protocols and file formats.
If you need to talk to anything else at all, you need both signed and unsigned integers, or to waste a lot of memory, human effort and CPU on much larger signed versions with manual range checks and fun bit-manipulation to turn the uint16 into an int32 so you can use the file format.
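To illustrate the point (a sketch, not tied to any particular file format): the usual Java workaround is to widen into the next larger signed type and mask, or to use the unsigned-helper methods added in Java 8.

```java
public class UnsignedDemo {
    public static void main(String[] args) {
        // A 16-bit unsigned field (e.g. 0xFFFF == 65535) lands in a
        // signed short, which Java interprets as -1.
        short raw = (short) 0xFFFF;

        // Widen to int and mask to recover the unsigned value.
        int asUnsigned = raw & 0xFFFF;
        System.out.println(asUnsigned);                  // 65535

        // Java 8 added helpers that do the same masking.
        System.out.println(Short.toUnsignedInt(raw));    // 65535
        System.out.println(Integer.toUnsignedLong(-1));  // 4294967295
    }
}
```

It works, but every unsigned field costs a wider type plus a mask, which is exactly the "waste memory and human effort" complaint above.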
"What are you using Java for where that one bit is crucial?"
Strictly speaking not crucial, but it's harder and uglier than it needs to be to do a basic thing such as parsing binary input. For some folks latency is a killer; pissing a few kilo-CPU-cycles up the wall to parse a few bytes of XML or JSON just doesn't cut it.
"Java ... still doesn't play well with the OS leading to poor performance, unpredictable run-time behaviour"
Your Java skills are antique and outdated. In theory, adaptive optimizing compilers are faster than static compilation. Say you have Java code that runs a certain operation on a large list of objects of one type: the JVM will optimize for that type. If in the next iteration the large list contains objects of another type, the JVM will adapt and optimize for that type instead. C++ cannot do that kind of optimization.
When you compile C++, you target a least common hardware denominator (no vector instructions, etc.), so you cannot use fancy hardware instructions. But the JVM will adapt from CPU to CPU, turning on vector instructions or whatnot. C++ can never do that kind of optimization. So in theory Java is faster than C++.
In practice, all the world's fastest stock exchanges, with sub-100-microsecond latency and huge throughput, are developed in Java or C++. If Java had latency problems with garbage collection, then stock exchanges such as NASDAQ on Wall Street would not use Java. Many ultra-low-latency high-frequency traders are using Java or C++. So you are wrong: Java is among the fastest platforms out there, rivalling C++.
The secret to getting Java low latency is to preallocate a lot of objects and keep reusing them. That way, garbage collection is never triggered. In effect, you turn off GC. This is used a lot in trading.
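A minimal sketch of that preallocate-and-reuse pattern (the class and field names here are purely illustrative, not from any real trading system):

```java
import java.util.ArrayDeque;

// Illustrative only: a trivial pool that hands out preallocated
// Order objects instead of allocating one per message.
public class OrderPool {
    static class Order {
        long price;
        long quantity;
        void reset() { price = 0; quantity = 0; }
    }

    private final ArrayDeque<Order> free = new ArrayDeque<>();

    public OrderPool(int size) {
        // Allocate everything up front, at startup, so the hot path
        // never allocates and therefore never provokes the GC.
        for (int i = 0; i < size; i++) free.push(new Order());
    }

    public Order acquire() {
        Order o = free.poll();
        return o != null ? o : new Order(); // fallback if exhausted
    }

    public void release(Order o) {
        o.reset();     // scrub state before reuse
        free.push(o);
    }

    public static void main(String[] args) {
        OrderPool pool = new OrderPool(1024);
        Order o = pool.acquire();
        o.price = 101;
        o.quantity = 50;
        // ... process the order on the hot path ...
        pool.release(o); // returned to the pool; no garbage created
        System.out.println(pool.acquire() == o); // same object comes back
    }
}
```

The trade-off is that the pool must be sized for the worst case, and a forgotten release() leaks objects from the pool rather than to the GC.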
""Java ... still doesn't play well with the OS leading to poor performance, unpredictable run-time behaviour"
Your Java skills are antique and outdated."
Quite likely, in fairness. I too have worked with Java in low-latency environments with fairly severe space constraints, albeit a while ago, and I have seen the JVMs improve. IMO the *best* argument for Java is the tooling around it, YMMV. There are also some interesting side-bits to Java, such as tools that take Java code and compile it down to an FPGA.
In practice I see the vast majority of Java code run on ancient JVMs which are significantly older than the Intel, GCC or LLVM alternatives on offer. So I'm not convinced by the argument that JVMs are intrinsically more up-to-date than anything else.
I am aware of "The secret to get Java low latency", but I have been doing the same thing at the same or lower cost with C/C++ for decades. I can do either, but I figure it's easier to use a hammer on a nail rather than a screwdriver. Still, if all you have is a screwdriver, fill yer boots with my blessing.
I think the downvotes are a bit harsh Plinker, but on re-reading your post the following caught my attention:
"In theory, adaptive optimizing compilers are faster than static compiling. Say you have Java code that runs a certain operation on a large list of a type of objects, the JVM will optimize for that type."
That appears to describe some form of "lazy vectorization", in the C/C++/FORTRAN world the penalty would be paid at compile time rather than runtime... Plus in the C/C++/FORTRAN world DLLs have also been leveraged (with varying degrees of success, natch) to accomplish the same goal at runtime.
C & C++ have virtual machine targets too - LLVM for example.
I am glad to have the choice to use all that good stuff plus we've got JVMs and anything else. It's 1's and 0's at the end of the day. :)
"we would still have to use archaic rubbish like C++ and PHP."
"archaic rubbish"? Seriously? You actually believe that?
Which brings up why _I_ am _VERY_ happy that 'modular' wasn't "just adopted": because the '.Not' that 'aberglas' apparently thinks was 'spawned' by Java, resulting in C-pound and a _LOT_ of pure ugliness, is *JUSTIFICATION* for *WHY* we must "put the brakes on" for "yet another new, shiny" being EXCRETED from the IMMATURE minds of INEXPERIENCED MILLENIAL CHILDREN.
Just because you CAN, does _NOT_ mean you SHOULD. In the past it was ".Not" and C-pound. In the present it is UWP, Win "Ape" and Win-10-nic. In the future it *WILL* *NO* *LONGER* *BE* Java "the Modular version".
(Thank whatever deities and demons were involved in making THAT *NOT* happen!)
and a BIG down-thumb for calling C++ and PHP "archaic rubbish"
I'm not a fan of Oracle or their business practices but...
I think they have been a good steward of Java. They actually managed to get 7.0 finished - which Sun couldn't. They introduced Streams in Java 8 (and a nice solid implementation too IMO) without causing backward compatibility problems.
Java is thriving as an enterprise-class language. I know that I can install it and my stuff will run without change, and in this respect they are like Microsoft (who also take a lot of flak). I have Java code from years ago that just works without change - except it runs a lot faster. I have old programs from 20 years ago that still run under Windows 7 and Windows 10. I can't say the same for Apple or Android.
In the case of modules, I don't think many developers actually care that much, it's a bit like Java EE in that respect. The problem they are trying to solve has gone away in many cases: machines have a lot more memory or we use containers or we build microservices or we use frameworks such as Spring Boot or Dropwizard.
OSGi has been around for years (as an example of one approach to modules) but had very little traction indeed, it's too fiddly and doesn't really help that much.
I'd rather that Oracle invest in a combined Java/OS Container to make it easy to deploy my applications than solve a problem we only cared about 12 years ago.
Biting the hand that feeds IT © 1998–2019