20 posts • joined Wednesday 19th December 2007 00:53 GMT
Now Brother can start telling me a month in advance that I need ink right at the edge of my eyeball. Just what I always wanted.
Yeah, SOP. Also...
I guess this is worth an article because it eventually will affect actual people. But one giant messing with another - that's just SOP.
More important, for me, is the title's use of CAPS. REALLY? Seems to be a trend, and that's bad. Your RSS feed is starting to look like Fox News's.
Your last example - using quantum computers to do quantum calculations - turned me off completely. If the most important thing we can do with these is use them to understand themselves, the whole exercise seems pretty much like climbing down a rabbit hole.
Little Boxes, on the pages...
...little boxes, full of ticky-tacky, like "click this box to indicate you agree with the 50-page 3-point Ts&Cs you just scrolled by and then you can start using our stuff that you paid for"
Now we know why those are there: an "affirmative action" taken to show we agreed to the Ts&Cs.
Huevos Rancheros... Wha??
I don't know where you got that grossly complex version of huevos rancheros. Something extremely strange must have happened crossing the pond. Here in Colorado (and also in Texas, New Mexico, Arizona, the whole western US, and the places I used to eat it in Mexico) huevos rancheros is universally made like this:
- Cook up one or two eggs - your choice of style: sunny side up, over easy, scrambled, or whatever.
- Smother the cooked eggs in green chile.
- Serve with (Mexican-style reddish) rice, refried beans, and your choice of corn or flour tortillas.
That's it. Fast. Easy. Tastes great, assuming you have good green chile.
Now, making the green chile is a whole issue in itself. That can take upwards of a day, starting with browning a roux...
What you're describing sounds like some kind of mutant quesadilla. Also good, and yours very possibly tastes good, but it ain't huevos rancheros. What rancher would have time to do all that? (The green chile gets made in batches; it improves in cold storage.)
Eliminated? Or made obese?
Eliminating the start button, he sez, let them use a "whole screen" for what it used to do.
Whatever else they were, Marx and Engels were Victorians. It is unsurprising that they reflected that era's general prejudices. While I personally do *not* agree with their politico-economic views, I feel it's inappropriate to mix the general attitudes of their era with their leftist principles. Further reading: http://goo.gl/jiWkc
"After all, the mere fact that Wikipedia pages took a bit of thought to construct and edit was a barrier to the thoughtless."
Or was it a barrier to non-geeks? Experts in, say, Hittite sculptures probably don't have the time or inclination to deal with non-WYSIWYG editors.
I look at this as a way to encourage the real experts to take a hand. The existing barrier certainly wasn't much of a barrier to people with axes to grind.
Texas Allows this too. And...
The TX concealed carry law originally banned weapons in churches. It was modified at the request of a pastor, who said he didn't feel safe.
He (it was a "he") must have a heck of a congregation.
At the risk of being considered a self-promoter...
I've been blogging about this for quite a while, preparatory to writing a successor to "In Search of Clusters" about the issue. See http://perilsofparallel.blogspot.com/ .
Servers are no problem. They'll just get smaller and more efficient. They use huge numbers of cores already, just in separate machines. Virtualization rules.
Clients are the problem, and they're a big one, because they have the combination of volume and high price that funds a good part of the industry. Most microprocessors are $5 units, like the one that runs your dishwasher. Intel (and AMD) get hundreds of dollars, sometimes thousands, per chip for first-run parts.
And programming... see my posting about 101 parallel languages, all current, absolutely none of them in use.
Oh, and by the way...
John Cocke was the person who got the ACM Turing Award for inventing RISC architecture. See http://awards.acm.org/citation.cfm?id=2083115&srt=alpha&alpha=&aw=140&ao=AMTURING
Dave Patterson is a great guy, a really smart guy, an acquaintance of mine, and a great namer; he came up with the term RISC. But he's not the original daddy. That's John Cocke.
This misunderstanding brought to you by (*humph*) zealous IBM security, fostered by people in IBM Research who thought keeping it secret made it seem more important. (The mainframe guys weren't buying that, but that's another tale.) The ACM, at least, got it right.
This is a black hole for the industry
There are applications other than the ones listed in the article that are fine. As a broad brush, almost everything run on servers in IT shops can work just fine, either because it was converted to multiprocessor operation decades ago (database systems) or via the VM technique in the article. (MS Azure doesn't even believe in multicore except via virtualization, for example.)
It's clients that are the problem. And it's clients that provide the high-price volumes fueling a lot of this industry. Handheld is cool, and high-volume, but the price of the chips is too low to make up the difference.
Went through all this kind of thing in my blog. E.g., see:
According to the spec sheet for the Unisys ES7600R, a cell is the basic rack-mountable unit you buy -- each has up to 16 processors (presumably with 4-core chips), its own memory, and IO. Bigger systems are made by gluing several of these together. Not small. Each is 7" high, rack-wide and deep, weighs 95 lbs., and consumes 0.77 kVA.
Apparently, by the way, to answer my own question above, the glue connecting them is memory access spanning cells: A processor in one cell can access memory in other cells. And do IO to devices attached to other cells. So it's NUMA, not a cluster. But with that optical interconnect, there must be a big latency hit going to other cells.
NUMA Cluster? Say what?
A cluster is a bunch of separate computers that communicate via IO and messages. A NUMA system's big thing is maintaining cache coherence. Which is it? Sequent's is definitely NUMA, not a cluster. But optical links? I associate those with latencies way bigger than you want for memory accesses, which need to be orders of magnitude lower than network latencies.
Vista vs. Crysis
All those posts about Vista. *sigh*
Am I the only one who found the Crysis comment hilarious?
Even though he spelled it wrong.
(Paris, because she's the one with the question mark.)
Maybe not just "allow," "promote"?
I'm thinking this may explain why the Quicktime updater, which keeps trying to foist iTunes on me, started last week to also try to shove Safari down my throat.
The system being propositioned is of course not Apple, unless Apple just started running Vista.
Nothing to be alarmed about
Go home. Have a cup of warm milk. Calm down. This is just a manifestation of how the US Patent system really works, as opposed to what most folks think.
The usual assumption, and quite possibly the way it is supposed to work or used to work, is that by the time a patent is granted, some real vetting has verified that the submission is new and works. Nope.
What happens: After submission you receive from the patent office a short list of potential prior art that is obviously the result of a pure keyword search, using keywords chosen by someone with no real understanding of what you submitted. So you spend about a half hour writing up why nobody in his or her right mind would think the examples cited have anything to do with what you sent in, feeling like a complete lunatic and trying hard to remain polite while writing things like this: “It is true that, like my invention, case 28703910776 is also eaten. However, it is an orange. My submission is a fish. This case has orange-colored, pebbly skin, and grows as fruit on a plant on dry land. What I submitted is an animal, not part of a plant, has skin that is slippery and scaly, and lives in water, not on dry land.”
You send that back. You get a patent. Simple.
Oh, occasionally there’s a true hit that requires some thought to rebut, or even genuine prior art invalidating your submission. But that happens so seldom it’s clearly a random occurrence.
The only real test of patent validity is whether it stands up in court after someone sues for license fees, and the target of the suit decides putting up a fight is worth it. That a patent has merely been granted, or even produced some license fees, is not a terribly meaningful event.
I'm told this has changed, or may soon change, due to the US Patent Office posting at least some submissions on the web so others can look for prior art. That could be a good way to get the wisdom of mobs applied to the problem, and may work well. I hope it works for a while, anyway, before getting trashed by spammers and by shills posting bogosity to protect their employers' interests.
Why should this fix things now? $20M is peanuts. The issue is apps.
The HPC crowd has been trying to make parallelism simpler for about 40 years. Nobody can argue those guys are dumb. It's also obvious that the total amount already invested in this quest completely dwarfs the $20M Intel & Microsoft are putting up now. There have already been multi-year projects at UIUC, MIT, Berkeley, Stanford, and almost every other big research CS department anywhere, plus numerous national labs.
The result? Fortran and C/C++, augmented with MPI and OpenMP, are what get used. No breakthrough in usability there.
My conclusion from the historical failure of all those efforts is that if the algorithms used for an application are parallel, you can do parallelism really well -- witness responding to a jillion user transactions (the canonical big-iron SMP app), or serving a jillion web requests (server farms are parallel systems), or doing image manipulation (for some manipulations, not all, lots of bit-level parallelism as in SSEx and CUDA). For such cases, parallelism is there, and gets used, in huge quantities. That's not to say that it's easy even then; it's not, although the complexity mostly gets buried in subsystem code (like OLTP monitors, Apache, Java beans). But at least it is there.
It's just not there for the total breadth of applications on commodity systems.
So I agree with Linus, with one caveat: Is there a new parallel killer app out there somewhere? Intel thinks virtual worlds are it, by the way, and I'm not sure they are wrong.
But it won't be heard
If it's not publishable as a scary sound bite, the media will never pick it up -- because it won't attract eyeballs.
This story itself is no exception.
Explain something to me, please.
What's the difference between Cloud Computing and the Grid?