The boffins who run two big supercomputers on behalf of the UK government and academic research institutions - as well as one smaller machine aimed at industrial users - have converted those machines into an HPC utility called Accelerator. And they want you to buy core-hours on their machines instead of wasting your money …
I'm from 1970....can I have my paradigm back?
Ah - you beat me to it :-).
It is funny to have seen things revert back to purchasing time on a University mainframe again in one lifetime. Pretty fast circle going on there....
I don't know what moon you people are from, but buying processing power on educational/governmental supers (excluding those from the military-surveillance complex, of course) has always been the norm. It's not like you are going to run your fluid dynamics on Amazon EC2. Well, you can, but there are some drawbacks.
Cloud computing has seen tremendous growth, particularly for commercial web applications. The on-demand, pay-as-you-go model creates a flexible and cost-effective means to access compute resources. For these reasons, the scientific computing community has shown increasing interest in exploring cloud computing. However, the underlying implementation and performance of clouds are very different from those at traditional supercomputing centers. It is therefore critical to evaluate the performance of HPC applications in today's cloud environments to understand the tradeoffs inherent in migrating to the cloud. This work represents the most comprehensive evaluation to date comparing conventional HPC platforms to Amazon EC2, using real applications representative of the workload at a typical supercomputing center. Overall results indicate that EC2 is six times slower than a typical mid-range Linux cluster, and twenty times slower than a modern HPC system. The interconnect on the EC2 cloud platform severely limits performance and causes significant variability.
"but buying processing power on educational/governmental supers"
Agree entirely. We had (I'm now retired) several in-house Linux clusters of 1024 and 2048 nodes for computationally intensive jobs, but would also buy time on more powerful systems. It's the norm in many areas of science.
Ok, fess up, who in here is just in the larval stage or is one of those 12-year old "haxxors" one hears about?
Didn't SUN try this?
Sun tried something like this about eight years ago, with a system called Sun Grid.
Well good luck with that.. hmm.. cloud, grid, cluster, piece of fail.. umm sorry I mean Accelerator :D
Re: Didn't SUN try this?
Unless I'm mistaken, the Edinburgh University "Eddie" cluster uses Sun Grid Engine (or something similar) to schedule jobs for those of us who can run isolated parallel work rather than heavily communicating work, so the Sun Grid effort wasn't useless.
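For isolated parallel work of the kind mentioned above, Grid Engine's array jobs are the usual mechanism: one script is submitted once, and the scheduler runs many independent copies, each identified by `SGE_TASK_ID`. A minimal sketch (the job name, runtime limit, and echoed message are illustrative, not taken from the Eddie cluster's actual configuration):

```shell
#!/bin/sh
# Hypothetical Grid Engine array-job script; submit as: qsub -t 1-100 run_task.sh
#$ -N eddie_array          # job name (illustrative)
#$ -cwd                    # run each task from the submission directory
#$ -l h_rt=01:00:00        # one-hour wall-clock limit per task (illustrative)

# Grid Engine sets SGE_TASK_ID per task; default to 1 so the script
# also runs standalone. Tasks are independent, so no inter-task
# communication is needed -- exactly the "isolated parallel" case.
TASK_ID="${SGE_TASK_ID:-1}"
echo "processing input chunk ${TASK_ID}"
```

The `#$` lines are ordinary shell comments that Grid Engine reads as scheduler directives, so the same file works both under `qsub` and as a plain script.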
It's easy to see how big centres located in places with cheap/green energy could take the place of everyone having their own big iron (as long as the whole process is secure).
It's a fine Utopian dream. But it's not realistic, at least until everybody has gigabit Internet, availability/security is guaranteed, and data handling legislation is standardized across all the countries involved.
If they fail can I have it as a Minecraft server?