So who did they design for previously? Orangutans?
team on crack?
exactly how many iPads may they claim on expenses?
If only it were a one in a million chance, we would see them every time
Mine is the one with "The Truth" in the pocket
I can just hear a couple of surviving dinosaurs having a conversation
"When I was a lad, we proper fleas, not these miniature little things that cannot bit through a piece of paper if they wanted to!"
"Right you are! I remember fleas that could bite right through a Triceratops's scales, he could"
"That's nothing, I saw some fleas that could drill straight through an Ankylosaur's club, no less"
"Rubbish, we had fleas which could drill for oil, they could, bite so strong it would go a mile through solid rock, it could!"
"And the problem with kids these days is that when you tell them they don't believe a word you say!"
Came with better screens and nVidia graphics. Light machines which run CUDA stuff. Not cheap though. My even older Vaio SZ is still working, and even it can run more (older) CUDA stuff. No Ultrabook ticks that box.
Fortunately, there has been a spate of very decent 13" and 14" laptops with nVidia 520 and 540 graphics on board. Cheaper than the Ultrabooks too. So, guess what I will get to replace my crumbling SZ.
The stars spell
So long, and thanks for all the fish
Ah, the well-known cloud magnet effect. My scope is 15-16 years old, but every time I buy a new eyepiece clouds rush in.
What Wirth means is that for a given task, the current software needs vastly more resources (CPU and memory alike) than similar software years ago.
Why is this worrying? Because it suggests that we could get by on much leaner compute capacity for many mundane tasks. It means machines that still work fine have to be replaced when the software is updated, and the minimum specs are upped again. This is ultimately wasteful. It also means that bigger server parks are needed for a given workload. If you can make code more efficient, less hardware is needed, and less energy is wasted. Mobile computing (like embedded) can give an impetus to leaner programs, simply because cutting clock cycles can cut battery usage.
As Niklaus Wirth says: Software is getting slower faster than hardware is getting faster.
Word 2.0 was a very capable word-processor, and ran happily on my 25 MHz 80386 machine with 4 MB of RAM (I really splashed out on that machine :-) ). Word 2010 would require rather more. More in fact than the processing power and memory of the good old Cray Y-MP. That is worrying.
GUIs of course do need more resources, but the above example suggests you can run a full-blown GUI-based word processor in 1,000 times less memory than we seem to need today. If you look at the memory footprint of something like FLTK, which is so small you can afford to link it statically for easier distribution, and compare that to some other tools, you cannot help but question the efficiency of some code.
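To put a rough number on "small": the stock FLTK demo below is a complete GUI program in about a dozen lines, and a statically linked build typically comes in at around a megabyte or less (exact size depends on platform and compiler flags). A sketch for illustration, not a benchmark.

    #include <FL/Fl.H>
    #include <FL/Fl_Window.H>
    #include <FL/Fl_Box.H>

    // Minimal FLTK program: one window, one label. Essentially the
    // standard FLTK "hello world"; sizes and strings are arbitrary.
    int main(int argc, char **argv) {
        Fl_Window *window = new Fl_Window(340, 180, "hello");
        Fl_Box *box = new Fl_Box(20, 40, 300, 100, "Hello, World!");
        box->labelsize(36);
        window->end();
        window->show(argc, argv);
        return Fl::run();   // hand control to the FLTK event loop
    }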
Much of the best coding is going on in the embedded-systems world. You really have to conserve resources in that arena.
I must say I was a bit miffed at not being able to take part. I have several ideas for what to do with a petaflop machine, but as I am not a US or Canadian resident (excluding Quebec (mais pourquoi?)) I cannot send them in. AMD had similar rules for their "what would you do with 48 cores?" competition. There may be some law in the US requiring this, so nVidia may simply be obliged to include the rule. Pity.
"FFS - get yourself a high end graphics card! 256 cores + 1TB memory (or more) + proper parallel coding using CUDA. £300 will get you the dogs-danglies in the commercial world. If you want mil-aero specification - GE-IP have them for reasonable prices.
Image processing - visible, infra-red, microwave, radar, or all combined is exactly what CUDA on these high end graphics cards was designed for."
I assume you mean 1GB, not 1TB, of memory. If you can supply me a 1TB-memory graphics card for £300, I am happy to give you a £300 tip :-). My images start at about 1GB and end at 1.5 TB (for now), so they will not fit in my graphics cards (along with the additional data needed during processing).
Regarding image processing and CUDA: for quite a few image-processing problems you are right; in this case you are wrong. The connected filters that we use have a strictly data-driven processing order, which does not map well onto CUDA. Indeed, because the outcome for every pixel potentially depends on every other pixel, parallelization itself is hard (see the first method, which can be found here (warning, pdf)). On 64 cores I am now getting about a 30x speed-up. I am trying to adapt this to CUDA (or OpenCL) in collaboration with the CSIRO in Sydney, Australia, but we have not got it running yet.
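(Back-of-the-envelope, assuming Amdahl's law is the right model here: a 30x speed-up on 64 cores means 1/((1-p) + p/64) ≈ 30, which gives a parallel fraction p of roughly 0.98, i.e. about 2% of the work is still effectively serial.)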
I am just testing a new compute server for processing remote sensing images (64 cores, 512 GB of RAM). My first runs already use up to 480 GB of that, and chug through a detailed analysis of a 3.5-gigapixel image in just over two minutes (it used to take nearly an hour). The new images will be 2.2x larger for the same area covered.
I WANT A TERABYTE of RAM!!!
that was my optional title which was somehow removed (did I offend? ;-) )
These are not often mentioned, but they had quite a following. It was a neat machine, with a Z80A and 128 kB of memory expandable to 4 MB, and two ASICs controlling memory, sound and graphics. The memory worked at twice the clock frequency of the CPU, so the controller and the CPU did not interfere (one had the even clock ticks, the other the odd). Nice machine to play around with: decent BASIC and a word processor on board, and very expandable. I linked it up to my Dad's daisy-wheel typewriter (what a racket that was).
in which case you do not need to introduce a virus to have data and files going AWOL
Let the government handle it. That seems the most sure-fire way of messing things up.
Bespoke zombie processes?
Sorry couldn't resist
It suggests bucks can be passed arbitrarily.
We just did our first experiments with a 64-core machine with 0.5 TB of RAM. Great days back then, but boy, are we spoiled now.
A scan will be needed
Might set these papers as study materials for our students.
"Yes, but thats like saying "I've had a really clever idea - the sun is a large energy emitting source".
Its completely damned obvious.
The only people "making money" are those who actually aren't counting costs in the first place - ie someone writing something in their evenings or whilst sat at their desk being paid to do something else ;-)"
Agreed, it is obvious, but politicians need people to point out obvious things to them. Frequently.
And of course the pointing out the bleeding obvious must be done by qualified people like professors and engineers, or else the politicians might look silly.
Was it not exactly the point of the professors and engineers that you have to sell a lot of apps just to break even?
Furthermore, what they appear to be saying is that you need a lot of investment, and that the idea of getting rich quickly from an app slapped together in a few weeks/days of coding is a pipe dream. That seems to make sense.
I do not think this will make a good Mother's Day gift
I have liked the format from the start. I will give this a serious look indeed.
Do they just google for them?
Sorry, couldn't resist.
Mine is the one with "Turn Left at Orion" in the pocket
Conceding you are stupid (or that you do not know something) is a clear sign of intelligence.
Politicians never make such concessions
why do you think we have fast caches for chips? Imagine the entire memory working at the speed of the CPU. That would be awesome.
At the moment, I have to think hard about cache-friendly processing orders. Getting it wrong can easily incur a 10-fold speed penalty. If you have a set of for-loops to traverse an image, having the x-coordinate loop outside the y-coordinate loop is ten times slower than the reverse, because of the way images are stored (row by row). A step in x moves to the next element in memory (= cache hit with standard read-ahead), whereas a step in y jumps a whole row of data further, yielding a cache miss.
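A minimal sketch of the two orderings, assuming a plain row-major image buffer (names and types made up for illustration):

    #include <cstddef>
    #include <vector>

    // Image stored row by row (row-major): pixel (x, y) lives at img[y * width + x].

    // y outside, x inside: consecutive memory accesses, the prefetcher is happy.
    double sum_row_order(const std::vector<float>& img, std::size_t width, std::size_t height) {
        double sum = 0.0;
        for (std::size_t y = 0; y < height; ++y)
            for (std::size_t x = 0; x < width; ++x)
                sum += img[y * width + x];
        return sum;
    }

    // x outside, y inside: every access jumps a whole row further on, so for
    // wide images nearly every read is a cache miss.
    double sum_column_order(const std::vector<float>& img, std::size_t width, std::size_t height) {
        double sum = 0.0;
        for (std::size_t x = 0; x < width; ++x)
            for (std::size_t y = 0; y < height; ++y)
                sum += img[y * width + x];
        return sum;
    }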
Such simple cases are easily sorted out, but some image processing has data-driven processing orders, very frequently requiring odd memory jumps. In those cases getting rid of latency is a godsend. Also, think of multi-core: ensuring cache coherence is a pain. Older Cray machines had no cache, and the memory worked at the speed of the CPU. That is much simpler and yields much better parallelization.
Clearly none of you have read the opening page of "Good Omens" by Pratchett and Gaiman properly!
Nobody welcomes our XNA-based overlords?
After all, we all know how this ends in SF films
knowing you aren't getting out?
Good point. I sometimes wonder if even for the likes of Khadafi or Saddam Hussein, being put in prison for the long haul and being treated as if you were ordinary would not be the worst possible punishment for those types.
I must have an unerring instinct for avoiding these bad movies. I have not seen a single one of them (so I will have to take other people's word for the lack of quality), except a few dislocated fragments of Highlander II, caught whilst flicking around channels because it happened to be on one of them; I couldn't be arsed to stay on that channel, for obvious reasons. Even in my state of boredom at the time, that film did nothing to make me want to see it.
Not that I have not seen some cringeworthy films in my time. I saw "Once Upon a Time in the West" at a showing at my student union donkey's years back. It had some good bits, but so many silent, LOOOONG drawn-out scenes, and pointless close-ups of people looking constipated as they wait (endlessly) for the other to draw first. I felt like shouting "Come on, shoot the guy, get on with it!" but I kept my peace for the sake of the other people watching. Afterwards, it turned out about 80% of the audience had felt the same way.
You can rely on him for comedy at least, and politicians you can rely on, on any topic at all, are rare enough.
We are getting a compact little monster for gigapixel and, later, terapixel image processing this coming Monday: 64 cores, 512 GB RAM, six 1 TB disks in RAID, and a 320 GB Fusion-IO card. I do not think I will let my students loose on this API just yet. First let them code in a transparent, portable way, and only then (maybe, just maybe) check whether this API has any benefit (which I doubt).
I prefer the Politicion. It comes in three quantum "flavours": Labour, Conservative, and Libdem. When observed before an election, the three flavours are distinctly observable. After an election Conservative and Labour turn out to be identical in terms of greed, incompetence, and all other observable quantum properties, and Libdem vanishes.
Yes, but creatively. We might of course find that politics is nothing but spin, and taking spin away leaves nothing but emptiness (or a residue of hot air at most).
Well, are they?
Cue planetary racism: we do not like your type here, you do not belong here really!
But maybe planets of a certain age have gained wisdom, rather than becoming stupid with more authority (as Lu-Tze said in "Thief of Time")
In the Zillertal to be precise. At least they are honest, one might say.
Au contraire! Some criminals do regularly get their nose bridges realigned. Not necessarily voluntarily, I'll grant. As I recall a usual start of such a procedure is "What yer lookin' at?!!"
In true style, you should wear a Guy Fawkes mask, shouldn't you?
Conveniently, a criminal is defined as anyone recognized by the system as being a criminal. Hey, look! Our system makes no mistakes!
What? Do you say our system makes mistakes? That is criticism of the Party! You must be a criminal!
Unfurl the solar sails and pick up a tail wind!
If only it could ;-)
A toast to Elon Musk and his team! They will be first on Mars, I bet.
and those that haven't grown up
Unfortunately, politicians only know how to take financial incentives away by taking the finances away. This does change the scientific outcome, in that there is less science. Please do not give the powers that be any ideas
Unfair! The Predator's brains would implode after watching just one episode of the Teletubbies