107 posts • joined Friday 7th August 2009 09:31 GMT
Re: They also provide installers without potentially unwanted extras...
... you mean, they really bother creating a no-op installer, something that just pretends to install Java but doesn't ?
(can imagine Oracle charging for that one ...)
Gift horses ...
Apples and oranges. Even if either came for free, only one can turn into orange juice.
It's not a bad thing at all that it isn't GPLv2 licensed; but that it's not licensed GPLv2-compatible (as BSD or LGPL would've been), and/or not dual-licensed, that limits applicability.
The result is that there's a great opensource filesystem with far far less traction than it deserves. If Sun wanted to make Linux developers envious by dangling all these nice technology carrots, they surely succeeded to a degree. But if you want to motivate contribution and/or use, inciting envy is more likely to result in the opposite.
Re: Switching from big iron to x86 virtualisation
I haven't called you any names, really - apologies if you misunderstood me there. Nor have I ever claimed Linux scales to "ludicrous" (in the Spaceballs sense) "numbers of CPUs" (measured by whichever gauge). 32 CPU cores on Linux are "pretty much ordinary" these days, and such systems perform "good enough".
I also agree both that there are workloads which exceed what you can do with "whitebox HW running Linux", as well as there being hardware/software vendors providing systems capable of running such workloads. Interesting where you need it - if you need it. Technically fascinating ? You bet !
I've merely made the observation that the "good enough" frontier has continued to steadily encroach into that terrain. The pond full of "stuff Linux / x86 hw doesn't do [ well ]" hasn't dried out completely yet, although neither have I personally seen signs of the water level there rising.
There's the first mass-market product with a 64-bit ARMv8 processor, yet no one considers it a breakthrough ... on the other hand, as with cars, having a sports car with a flat-12 engine doesn't mean acceleration / speed are any different from the competitor using a V6. To be proven ...
Must admit that I'm otherwise sort of underwhelmed as well; good design/looks and well-thought-out usability are selling points for sure. Apple made their brand on that for decades, and haven't failed on that front with the new iPhone either.
But visible, in-your-face-cool features which weren't available in either Apple's own or its competitor's older models, where are those ? Apple hasn't even played catch-up here, much less leap-frog. And there are even unique Apple facilities the new baby could've used, like, how about a thunderbolt interface to connect the phone to accessories, like, gasp, external storage ... How about a geeky change to the camera app to do manual focus / aperture, say, using the volume control buttons ? Or camera RAW, another first (?) in a smartphone, definitely if you did it for HD video using your own new codec ? Maybe a well-working OCR app that'd allow me to create my own ebooks from the old paper collection ?
It's an expensive development board to get your hands on ARMv8, if that's what you want ...
Re: Switching from big iron to x86 virtualisation
I apologize to the readers for not having given the proper icon first time round. After all, the orig comment I replied to was about 32 CPUs, not 32 CPU sockets ... in any case, I agree that "more" of some sort will give you bragging rights amongst a certain audience.
Highly profitable the high end may still be, but I'd stand by the assertion that the number of sharks in that pond hasn't changed much, while the pond is drying out. And those who dip their feet in there are either very brave, very foolish, or so desperate as to have no other choice.
Re: Switching from big iron to x86 virtualisation
Refresh your tech knowledge. There's quite a few options in the x86 space these days that offer 16 or 32 CPU cores. Even if you count chips / sockets, you have a range of choice in the 8-socket space (remind me there, how many CPU sockets did a T5-8 have, again ... ?).
Yes, there's not that many x86 servers out there that can have 32 CPU sockets. If that's what counts for you, go IBM / Fujitsu / Oracle.
No, there's not exactly a huge number of usecases for iron of that size. A bunch of two-socket boxes behind a load balancer does, these days, often beat the "big fat box" approach not just on latency/throughput but much more so on price, and/or price/performance.
Or, to put it differently, CPU-performance-wise, what a Sun E10k did in 1997, a Samsung Galaxy S4 smartphone chip does today. Probably more. You just no longer need 64 CPUs for this.
Enterprise workloads have grown nowhere near as much as have the abilities of "cheap" (x86) hardware.
That's the bane of the non-x86 vendors; same number of sharks in an ever-shrinking pond. Are those "big" boxes faster / can they process more data in a shorter time ? All yes, but the deciding question really is: "what box do I need for my workload ?".
new year's greetings from the ActiveX zombie ?
... and there was me thinking HTML5 had finally succeeded in killing it, once and for all.
Foiled again ! The spectre never dies ...
Time to buy Microsoft stock. They must've plastered that path with patents thrice over.
... let's see if a proper flamewar on units of measurement and/or the value of Welsh oppression measured in metric will prove once and for all that nerdy engineering people have just as strong feelings as everyone else !
Does the paper at the very least provide properly error-corrected measurements for emotional suppression in units of mmHg ?
Political correctness vs. honesty
Reading the comments, I can't help but wonder - what has happened to honest, outright, direct communication ? Can one no longer name a turd for what it is, and call out stuff that stinks for its smell ?
I don't get this sensitivity thing. Being bullied behind your back is far worse than being called names outright and face-to-face. Yes, conversations can turn into shoutfests, but with those, at least the release of anguish and emotion prevents the buildup of resentment and desire for revenge.
What usually happens, though, is that the project manager you p*ssed off will stop talking to you, but give devastating feedback to your boss' boss, and your boss, and nine months later you'll be told that for all your great work, you need to improve on your interpersonal/communication skills before you can "reach the next level" (read: no extra peanuts for you, monkey, and definitely no promotion to chimp). Of course, all feedback is confidential, so good chance you won't even know who the guy was that blackmailed you.
Sod it. It's Monday, and I'm thinking of beer. Or read this: "Go, Linus !".
You can't assign anything to Torvalds, so for correctness, this has to be:
BOFH = Torvalds;
remember, C requires statements to be terminated by semicolons.
Re: The answer's in there somewhere
Don't know your Perl operators ?
"." doesn't multiply but concatenate. If anything, "6 x 9" is 666666666.
(is that in the 0.5% usefulness or in the 99.5% "stuff" ?)
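For the record, a quick sketch of the operator semantics - Perl's "." and "x" behave like Python's string "+" and "*" (the Python mirror is my choice of illustration, not anything from the thread):

```python
# Perl's "." concatenates and "x" repeats a string; Python's "+" and "*"
# on strings behave analogously, so the joke can be checked here.
six = "6"

# Perl: "6" . "9" -> "69" (concatenation, not multiplication)
concatenated = six + "9"

# Perl: "6" x 9 -> "666666666" (string repetition, nine sixes)
repeated = six * 9

print(concatenated)  # 69
print(repeated)      # 666666666
```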
"textbook perfect code" ...
... is rarely found in textbooks, from my experience.
Why is that so ? Because textbook code is, by virtue of its objective - teaching coding - too simple and too small for the issues associated with paid-for / commercial software development to pop up. It avoids covering issues encountered in developing software not strictly associated with the technical act of coding itself.
Amongst those, in no particular order:
Teamwork - the absolute necessity to either work with other programmers or work with project managers, business requirements, your boss and her/his budget - is usually absent from textbooks. If no framework for this exists in the software project at all, it'll end up with interesting "warts" - think of massive code diffs in series of subsequent commits for no other reason than two developers having set their IDEs to "auto-indent" with different rules. Or imagine the odd little key component in an otherwise-pure-Java project pulling in Jython plus a bundle of 50 Python modules including a few binary components written in C++, because one developer got away with writing a 50-line piece in Python using those modules since that was so much simpler and faster than the 500 lines it'd have taken in Java. Think about the "write to spec" warts that occur because specs and tests are done first for "validate known-to-be-valid input only", the code is written that way, and half a year later someone using it is terribly surprised about the security holes blown into it by obviously-invalid input ... never spec'ed, never tested, never requested ... and no two-way communication anywhere between requirements mgmt, technical architects, project managers, testers and coders. And where in a textbook have you ever seen an example covered so big that you _couldn't_ implement it on your own in a month (Knuth's level 3.5+ exercises notwithstanding) ?
Complexity - textbooks often are extremely brief on this one, and C. A. R. Hoare's "The Emperor's Old Clothes" is quoted nowhere near as often as it should be. Abstraction, Encapsulation, Specialization, Generalization and other "layering" patterns are often covered extensively from the "how to do it" perspective, but warnings about the dangers of extensive layering, respectively guidelines on "good usage" (do you really need to subclass if all you want is an object instance that differs from the default-constructed one in a single property ?), or, to put it differently, "when not to use it", are often absent. The coverage of interfacing with pre-existing libraries is often scant. As a consequence, you get a bunch of new interface layers here, a new generalization there counteracted by an additional specialization elsewhere, and multiple sequences of conversions resulting in a Chinese-whispers chain of data transfers. Every layer on its own might be a great, well-written, perfectly documented and admired piece of code ... but the combination screams at you just as much as the result on the floor after a night drinking great wine on top of a Michelin-starred ten-course meal.
Libraries - face it, you're not going to write everything from scratch. But which Java programmer knows even a third of the Java 7 classes by name ? Which C++ programmer can use a third of Boost without the manual ? Which C/UNIX programmer knows how to use a third of Linux syscalls ? And that's only _standard_ libraries. Not to talk about the vast worlds of things like Perl or Python's component archives. Effective programming often means effective research into what existing library already-in-use-elsewhere in the project will provide the functionality that you've been considering reimplementing from scratch. Which book teaches this ?
Legacy - textbooks like to advertise the latest-and-greatest; of course it's a necessary part of work life to keep your skills up, and to follow changes and enhancements in the technologies touching your area of expertise, but even more important is to develop "code archaeology skills". That is, go backward in time and check out what was considered good practice in 1990, what was cutting edge in 1995, which code patterns and styles were considered best in 2000. Go to the library and read a book on J2EE from 2002 as well as the newest one on Java 7, then grab a 1992 and a 2012 Stroustrup. Read and compare the Documentation/ subdirectory of the Linux kernel sources in the 2.2 and 3.4 versions. Much of this reading will trigger "duh" moments of the same sort that you'll inevitably encounter when you start to look at existing code. Nonetheless, textbooks (or, even more so, lessons/lectures) that emphasize how mistakes are identified, rectified and avoided in future projects are barely existent, else F. Brooks' "The Mythical Man-Month" wouldn't still be so popular almost 40 years after it was written.
Evolution - coding, and I object to adherents of "software design" here, is never intelligently designed. In the beginning, it's created, and from that point on it evolves. It grows warts, protrusions, cancers that hinder its adaptability to the tasks it's set to, just as much as new nimble limbs and telepathic powers that help it. A wart for one usecase can be the essential sixth finger for another. Code Veterinary can be a dirty job, and the experience of humility involved with dragging the newborn out past the cr*p of the parent is not something you're prepared for by reading about it, because books don't smell. Often enough there'll be the revolting ethical challenge of becoming a Code Frankenstein as well - when you'll stitch a few dead protrusions someone dug out of a code graveyard onto the already-mistreated beast. It's like textbooks that tell you about how to have fun making babies but not about how to change their nappies or how to deal with teenagers having a fit.
All of these one can learn to live with; none of these are necessarily perpetuated in saecula saeculorum.
Thing is, "textbook perfect" isn't the same as "beautiful", and that isn't the same as "best for the usecase". The grimy bit about software development is that a really beautiful solution can be built from frankensteinish code - and no textbook I've seen will prepare you to develop the skills, or the thick skin, to be able to do this.
(I still like coding, but sometimes I think I should've become a vet)
Re: But why did it take until the 386
Educated guess here, but it might be that the 386 was the first one manufactured at structure widths on the order of magnitude of visible-light wavelengths (i.e. not significantly more than a micron). That'd give color effects because the structures then work like diffraction gratings, and the whole die looks like areas of color. Larger structure sizes don't cause this effect, at least not at close-to-perpendicular angles of incidence, so they look largely grey - apart from intrinsic coloring of the material used.
If you look closely enough, you'll notice some parts look reddish on the 4004/8008 (probably copper contacts), and the 286 one has that little red coil-like structure on the right edge. I'd contend all these pictures are color.
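The diffraction guess is easy to put in numbers - a minimal sketch assuming the standard grating equation sin(theta) = m * lambda / d (the pitch values are my illustrative round numbers):

```python
import math

# A repeating structure of pitch d diffracts light of wavelength lambda
# at angles given by sin(theta) = m * lambda / d, with m the order.
def first_order_angle_deg(wavelength_um, pitch_um):
    s = wavelength_um / pitch_um
    if s > 1.0:
        return None  # no propagating first order: pitch smaller than lambda
    return math.degrees(math.asin(s))

# ~1 micron features (386-era) vs. green light at 0.5 micron:
print(first_order_angle_deg(0.5, 1.0))   # ~30 degrees - visible rainbow
# ~10 micron features (earlier CPUs): the first order hugs the surface,
print(first_order_angle_deg(0.5, 10.0))  # ~2.9 degrees - die looks grey
```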
Re: Nein! Nein! Nein! Nein! Plan 9!
configure / autoconf doesn't make me wonder - it makes me curse, swear and use sewer language of the worst kind. Nuking it from orbit is too kind a death for it.
It's not a tool, it's a non-tool. Full agreement with the *BSD ranter there - no one bothers understanding autoconf input / setting it up properly; it gets copied-from-somewhere and hacked-to-compile; if "development toolkits" provide/create autoconf files for you, they're usually such that they check-and-test for the world and kitchen sink plus the ability to use the food waste shredder both ways.
The result is that most autoconf'ed sources these days achieve the opposite of the intent of autoconf. Instead of configuring / compiling on many UN*X systems, you're lucky today if the stuff still compiles when you try on a different Linux distro than the one the autocr*p setup was created on.
It had its reasons in 1992, but the UN*X wars are over; these days, if your Makefile is fine on Linux it's likely to be fine on Solaris or the *BSDs as well. Why bother with autoconf ? Usually one of: "because we've always done it that way, because we've never done it otherwise, and by the way, who are you to tell us !"
Amazing how much Symbian must be left after the torching ...
... given they sold 2.9M WinPhone-Lumias but 6.3M smartphones in total, does that really mean more than 50% of what Nokia sells as "smartphone" are still Symbian devices ?
Wow. I'm truly impressed. All the backstabbing, burying-(half-)alive, torching, butchering, burning, throwing-off-platforms etc. of Symbian, but the zombie just won't go away.
Somehow, other ex-Symbian licensees like Samsung, Sony, LG, ... didn't need two years to get their product lines over to Android. Wishing all (ex-)Nokianites good luck!
Re: From the article
That man begs to rant. Granted, there might be differences hidden in the sheer amount of drivel. I usually tire before I find them though. But then, Tomi Ahonen writes like a real Nokian (Nokianiac ? Nokiate ?)
Re: Take no bets
when Larry says "just double it", then consider that doubling the length of a yacht won't double its speed, but will more than double its cost.
It all depends on what exactly you double, and there are many ways of doubling "performance". As there are even more benchmarketeering ways of measuring doubled performance.
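To put numbers on the yacht remark - rule-of-thumb figures of my own, not from the thread: a displacement hull's top speed grows only with the square root of waterline length, while cost roughly tracks displacement, i.e. length cubed.

```python
import math

# Classic displacement-hull rule of thumb (assumed, not from the post):
# hull speed ~ 1.34 * sqrt(LWL in feet), in knots; cost tracks
# displacement, which grows roughly with length cubed.
def hull_speed_knots(lwl_feet):
    return 1.34 * math.sqrt(lwl_feet)

v1 = hull_speed_knots(100)   # 100 ft yacht
v2 = hull_speed_knots(200)   # "just double it"

print(v2 / v1)               # sqrt(2) ~ 1.41: nowhere near doubled speed
print((200 / 100) ** 3)      # ~8x displacement: far more than doubled cost
```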
Sun sailed that course for many years with this idea of "sum(more cores) > sum(fast cores)". It seems Larry got converted: if there's not much else to show, then at least show you can double "it" - just find a suitable "it".
I don't doubt there's some market for these, it's just shrinking, not growing; what you can do these days with a $20k x86-based server you couldn't do with a cluster of ten Sun E10k's fifteen years ago. "High-end" computing is becoming a commodity and that's not a trend which will reverse any time soon.
Solaris, on the other hand ... give me more. Site licenses for Solaris, for example, would be a great way of knocking crimson headwear peddlers out in places. Relay that to Larry if you would ? Something like, we'd be happy to more than double our use of Solaris - though, if and only if we can cap the license costs ...
Re: Take no bets
well, "you don't get that from either Intel or IBM" - not quite so. IBM maybe, but Intel's roadmaps as far as one of their tick-tocks ahead are usually very well published (by Intel, in fact) and much talked about. Who cares that you can't buy Haswell-based Xeons till mid-2014 ? Everyone knows they're coming, the instruction set enhancements as well as throughput/latency figures for the CPU as well as some chipset details are out there, not just in leaked NDA presentation slides but pretty much all over the tech press.
On the other hand, maybe you meant "Itanium" when you said "Intel" ?
... when my brother-in-law used to ask me "how much exactly again did you spend on that astroboffinry stuff ?" I only asked him back "how much exactly are you putting into your Harley-Davidson fund?".
I don't think for my astro-spending I could've gotten a Harley quite yet, nor an Audi TT instead of a VW Golf, or a luxury Mauritius holiday, or even, gasp, a pint a day for a decade. Still, had a lot of enjoyable nights out, of the other kind. Everyone to their own.
Re: Agile development works well ... if done properly
Why is it that whenever the topic of "Agile development" comes up, someone feels compelled to state that it needs to be done properly ?
The answer, I guess, must be that doing it properly is a) hard and b) anything but obvious ?
Or maybe it's that it's much easier to specify "Agile is not ..." than "Agile is ..." ?
Re: The only way to be sure...
whenever someone comes up with this "prodded with a finger [ from orbit ]" thing, the image of the opening scene of "The Last Remake of Beau Geste" is conjured up in my mind. Is it just me ?
(Icon for obvious reasons, at least once you've seen the scene ... not downvoting anything here)
If anything, they're "copying" (shall I say: "improve on", because the Samsung thingy does the photography stuff right, good lens, optical zoom) the form factor of the Nokia PureView / N808. I'd be quite interested in an image quality comparison between the two.
Re: Just two?
you mean a good dirty dozen like they used to have in their S60/Symbian days ? One "flagship" and some 25 satellites of odd colors, shapes and sizes ? And possibly fries with it ?
I'm not so sure that Nokia did itself a favor with that "variety" (rather, varieté).
Sometimes, less is more. Even if they're not going all the way to Fruitycorp-style product line clarity. The "dozens and dozens" Nokia still has in their S40 lines.
I've done both - applied to some jobs where I met all the criteria, and applied to some where I didn't. In cases where I got the job, I ended up utterly bored whenever I seemed "the perfect fit", but if there was an element of the unknown in it, I enjoyed it and stayed in the job for many years.
Can only speak for myself there, but I guess the boost from someone else saying "we believe you can do it" is better for my motivation and willingness-to-strive-for-it than the intrinsic "I believe I can do it" thing. Whether this externally/internally-induced motivation thing is gender-biased I don't know, though.
Re: Pulsed power...
Otto Hahn got his Nobel prize (for the discovery of uranium fission) in chemistry, not physics. "Nuclear chemistry", alchemy's finest hour. Well, until these "elementary" particle physicists came along and got all the spotlight ;-)
Re: Only hope
You mean like use the toilet paper to mop up a little mess, the wellies to remain clean if the mess is a bit deeper, and the gear to climb up the phone mast once the mess levels rise significantly ?
By those standards, right now, Nokia is probably three quarters up the phone mast. Toilet paper and wellies surely have outlasted their usefulness for Nokia. Seems a bit like Nokia's leadership have dreamt about building Icarus wings for too long ... and while the mess levels are still rising, the parching Sun is melting those winglets Nokia hoped to take off with.
Re: Matt B...
As an ex-Sun(shiner) I take offense at the statement that the Solaris/Itanium port _failed_.
It ran just fine in the lab. No one wanted to have it; not a single then-Sun or then-prospective-Sun customer asked for it (very different from the "canning" of Solaris/x86 a few years later).
Sun simply decided that there'd be nothing to sell, so it never was price-listed / made available.
Re: So let me get this right ...
... are you saying they've recreated a shell version of sysadmin doom ?
Re: I'm actually long on Nokia stock
So are you or aren't you ?
You start your post with "I'm actually long on Nokia stock", but finish it with "Too early to jump in but I'm watching closely". Sounds like either a very quick reaction to a turn in the markets, like someone having gotten cold feet over the writing of a forum post, or maybe just an enormous amount of confusion about Nokia taking its toll.
Anyway, good luck, you'll need it.
Re: @fch (was: What do they expect?)
@jake: Full ack on what you say, particularly on the two sentences "go with the cheep this quarter" (nice wordplay, cheep - sheep & cheap ...) and "youth & glitter" bits. That's what I've been trying to point out - it _does_ work that way ... as you say. As said, in full agreement that this isn't good, but also in agreement that it's happening, ok ?
Re: What do they expect?
Hint: It does work that way - and did work quite exactly that way in the SF Bay area as recently as ~13 years ago.
If you were around in the .com boom phase, you'd have seen people hired on the basis that they pretended to know the difference between a keyboard, computer and screen. The hiring manager wouldn't quite admit it in those exact words, but neither would (s)he admit to their superiors that no one could be found for the job. You'll learn ... and if/while things are booming, employers don't mind the time that takes.
Re: An example
Table lookups on GPUs ... are supposedly highly effective by design - after all, texture mapping is nothing else but exactly that. The above may well still be fastest on a GPU (though there, it probably makes little difference whether you do integer or FP adds; it depends on whether interpolation is useful/desired or not).
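A sketch of the texture-unit analogy - a 1-D table lookup with optional linear interpolation, the way texture sampling blends neighbouring texels (table and names are mine, purely illustrative):

```python
# Hypothetical lookup table sampled at integer indices 0..3.
table = [0.0, 10.0, 40.0, 90.0]

def lookup_nearest(x):
    # Integer lookup: what "point sampling" a texture does.
    return table[round(x)]

def lookup_linear(x):
    # Linear interpolation between neighbours: what a GPU texture unit
    # does "for free" when bilinear filtering is enabled. Just FP
    # adds/multiplies, hence the integer-vs-FP indifference.
    lo = int(x)
    hi = min(lo + 1, len(table) - 1)
    frac = x - lo
    return table[lo] * (1.0 - frac) + table[hi] * frac

print(lookup_nearest(1.4))  # 10.0
print(lookup_linear(1.5))   # 25.0 - halfway between 10 and 40
```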
Re: Damn Larry, so close...
He probably said:
Heuristic, Adaptive, In-Memory - Cash!
For him, anyway. Logical.
Re: I don't really understand this ploy
The "rebootless kernel updates" aren't a new technology - Sun shipped something akin under the name "Dynamic kernel updates" with Solaris 8, and later withdrew it because it was too cumbersome - both for customers to use as well as for Sun developers to create such a "dynamic patch". That was, like, a decade ago ? Someone must've thought it a good idea to spawn a few more patents in the area ...
The problem is complex not because code modification at runtime is hard (SystemTap, DTrace and various other frequently-used tracing/instrumentation/monitoring utilities and - gasp - all Hypervisors - do that all the time), but because different kernel versions combined with different module/driver revisions and possibly (a series of) compounded "hot" updates makes determining all necessary patch points / updates a very difficult exercise to get guaranteed-right.
Snapshot boot environments, even as simple as "patch cold side of mirror, reboot into that, if ok re-sync, else reboot again into old config", have a far more predictable behaviour. Hypervisor snapshots / system+app live migration allow you to live-split patched/unpatched envs if you really wanted to. Whatever I'd build my reliability proposal around these days, ksplice it ain't. I agree that hardware is cheap enough these days that reliability-by-redundancy ("the cloud") makes much more sense.
That might be different for "nine-nines" environments, which are still rumoured to exist, the use cases where a server is installed, configured, powered on and never rebooted till decommissioning five years later. Never worked with this, would love to hear more about it.
"light from the beginning of the universe"
... isn't visible at any wavelength of the electromagnetic spectrum. The primordial fireball, as bright and impressive as the term may sound, actually was 100% opaque: the universe started in thermal equilibrium, and whatever photons there were constantly got absorbed and reemitted, up to the time when radiation and matter decoupled. Only after neutral hydrogen atoms finally formed and were no longer reionized - the temperature / average background radiation energy having dropped sufficiently - did said radiation become free/visible: the cosmic background. It's considerably _younger_ (by a few hundred thousand years) than the universe itself.
The only way to look "through" that "wall of fire" are indirect methods (density waves imprinting themselves on the radiation) or, if it ever becomes possible to detect them at those low energies, the cosmic neutrino background (which decoupled a few minutes after the big bang).
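For scale, the decoupling argument can be sanity-checked with standard textbook numbers (my addition, not in the post): the background radiation was released when the universe had cooled to roughly 3000 K, and we see it today at 2.725 K; temperature scales as (1 + z).

```python
# Standard reference values (approximate):
T_recombination = 3000.0   # K, when neutral hydrogen could survive
T_today = 2.725            # K, measured CMB temperature

# Radiation temperature scales as (1 + z), giving the redshift of the
# "wall of fire" (the surface of last scattering).
z = T_recombination / T_today - 1
print(round(z))            # ~1100
```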
"true, working solution" ...
... tends to be what the customer has already.
They're just short of storage. Hence trying to get more storage for less money ...
I'm in full agreement with you that there's shortsightedness in this, and that an a-priori approach to application / workload design which structures data and avoids "copy&paste-referencing/subclassing" can easily bring down storage / bandwidth needs by orders of magnitude.
Unfortunately, many software stacks are "working" but are old and rigid; retrofitting a profound architectural change such as this into existing software is, not always but very often, either so daunting or so expensive as to be prohibitive.
Structured data, in that sense, doesn't necessarily use less storage / doesn't necessarily dedupe better. XML is a curse, really; copy & paste an XML file into another, shifting it around by a few bytes in the process, and the dedup potential is gone. The usually-identical console logs from a server bootup are preceded by unique timestamps/hostnames and again, the dedup potential evaporates. Just as examples.
These problems notwithstanding, storage that compresses and/or deduplicates (if only the twenty copies of the renamed CEO powerpoint memo which got stored into the DMS by twenty different departments) provides savings, and therefore has its place.
These savings are not as great as the ones realizable from a "context switch", but very tangible and achievable at significantly less risk. Like, treat a cold with lots of camomile tea instead of a $1000/dose not-yet-FDA-certified breakthrough antiviral medication with as-yet-unknown side effects. Treat symptoms, not cause. One of those cases of "good enough" ?
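The shifted-copy problem is easy to demonstrate with fixed-block dedup - a toy sketch (data and block size made up by me):

```python
import hashlib

# Blocks are hashed on fixed boundaries, so inserting a single byte
# changes every block after the insertion point.
BLOCK = 64

def block_hashes(data):
    """Hash each fixed-size block; dedup stores each unique hash once."""
    return {hashlib.sha256(data[i:i + BLOCK]).hexdigest()
            for i in range(0, len(data), BLOCK)}

original = bytes(range(256)) * 8   # 2 KiB of structured, repeating data
shifted = b"\x00" + original       # "copy & paste", shifted by one byte

print(len(block_hashes(original)))                          # 4: repeats dedupe nicely
print(len(block_hashes(original) & block_hashes(shifted)))  # 0: the shift kills it
```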
... open, as in ?
Yes, the architecture specifications (and even some of the chip designs) are public and downloadable without having to pay a fee.
The use of the SPARC name/trademark, though, is licensed and requires a SPARC International membership as well as passing your implementation through the "compliance test", again as administered by SPARC International. See the "how much does it cost" paragraph on http://www.sparc.org/aboutFAQ.html
So it's kinda open-as-in-OpenJDK.
When you create and talk about the homunculus, you probably need to say that while it quacks like a duck, walks like a duck, looks like a duck and mates with ducks, it's pure chicken DNA.
Now which pocket did I stuff that testing kit into ...
Since you ask to be corrected ...
... just use the omniscient garbage pile that is Google to find out about what's called the "solar constant". No, I'm not going to gtfy.
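A back-of-envelope sketch of what the search will turn up - standard reference values for solar luminosity and the astronomical unit (my numbers, do verify them):

```python
import math

# The "solar constant": total solar output spread over a sphere of
# one-astronomical-unit radius.
L_sun = 3.828e26   # W, solar luminosity (standard reference value)
au = 1.496e11      # m, mean Earth-Sun distance

solar_constant = L_sun / (4 * math.pi * au ** 2)
print(round(solar_constant))   # ~1361 W/m^2 at the top of the atmosphere
```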
Not the visigoths ...
.. but rather the Vandals were the temporary occupiers of what at the time of their invasion was the Roman province of Africa. They in turn were thrown out by the Byzantines (Greeks / eastern Romans) a hundred years before the Arabs appeared.
And all that time, there were those people who lived "inland". With various degrees of ruling / meddling by whoever ruled the mediterranean coast, and with little interest in identifying themselves with a "state". Apologies for using the term "Berber" as the origin of the name is not entirely clear but often seen in the Latin "barbarian".
Anyway, it must be intriguing to historians / archaeologists that much more is known about the history of the "veneer" (the mediterranean coast) than about that of the significantly larger area inland.
Soviet union technology ...
... might not be sexy nor flashy and more often than not lacks the patent "protection" seen in "the west" if for no other reason than its age.
Yet, much of what got developed there has been developed with the intent to last - reliable, rock-solid, hard-wearing, widely applicable, ... - things that one would wish to be more prevalent in a forward-looking (sounds less cerealy-greenish than "sustainable") society.
There's a lot the Soviet Union(s) of this world are to answer for, and no I don't think it's a good idea (though probably patentable) to bring it back. But with respect to the soviet approach to technology (maybe crude to use, but well-working, possible to build from limited-availability resources, lasting for longer than a communist party chairman, produced-with-pride), I can't help but see some value in it.
Both Power and SPARC servers have Forth built-in
... and they don't even need an operating system for it. They both use OpenFirmware, and the "ok" prompt there will give you a full-featured Forth interpreter right at your fingertips.
auctioning off HP ?
... she's probably qualified. HP's parts by now must be worth more than HP as a whole, and anyway, given the size of the leviathan, selling it whole is impossible.
Who's better at the helm of HP than someone familiar with auctioning off things ?
SPARC features ...
... were reasonably well thought-out when the architecture was designed; but since you're comparing it with ARM (not a good comparison - what do electric bikes and Humvees have in common ?), I'd like to comment a bit on that.
ARM have continuously evolved / improved their core instruction set (ARMv5 -> v6 -> v7, all adding quite generically-useful things), even though few people would call ARM's initial instruction set design anything but "great". ARM and its licensees also take hardware advances (caches, built-in RAM, close coupling between CPU and devices) and have continuously incorporated them into the implementations. Anyone can _see_ how ARM leads when it comes to CPU instruction set design / improvement - or, to phrase it differently, achieving the "speedy potential".
SPARC is the monolith of instruction sets, though - set in dark, menacing, impressive stone since SPARCv9 was concocted (1994 ?). Some of the things SPARC does (branch delay slots, the /dev/zero register, ASIs, the instruction set extensibility) are indeed useful. Others - fixed-size register windows, or even the windowing mechanism as a whole - have proven to be more of a burden; they make SPARC programs use larger stacks and therefore require comparatively larger caches to achieve full potential. Then there's Sun's insistence over decades not to consider out-of-order implementations, even at a time when Fujitsu's SPARC64 had already proven their usefulness (fortunately, the T4 finally addresses that). Also, again in comparison with ARM (or x86), SPARC machine code isn't very dense - larger code footprint, which again needs bigger caches / cache bandwidth. There would be ways to address that (like x86's micro-op caches), so I agree with your assessment of "speedy potential", yet a lot of that remains unexploited.
Also, very unlike ARM where there are a variety of widely-used instruction set extensions that ARM keeps on improving regularly, SPARC has not had any updates on that front for a while either. Yes, the T-series have the crypto accelerator as closely-coupled device, but that's as far as it went. Not every design considered "great" will keep that label over time; at least from my point of view ARM has done better there than SPARC.
SPARC is great because it's reliable, got very predictable behaviour, runs all your old stuff.
But SPARC has had great potential in the 80's, had great potential in the 90's, has been having great potential in the naughties and still has great potential ... and it will always have great potential.
Fingers crossed for the T4; some achievement instead of potential would be wonderful.
are there descriptions of how the dodo call sounded ?
... given it got exterminated way before audio recording devices were available, what actually is known about the dodo's song ? Any references it sounded somewhat like Nokia's ringtone ?
spec change != price hike
Let me see ... where exactly in the article was a spec change mentioned ?
Well, I guess if you're considering Oracle's pricing a feature, you may have a point. Go buy more Oracle stock.