14 posts • joined Monday 4th October 2010 17:26 GMT
on a serious point...
Both these projects go to show there is real desire amongst the world's citizens for pushing the boundaries of technology, to boldly go where no one has before. There you go politicians: if you want to engage a new group of voters and stoke the western economies out of the doldrums, have some imagination. Build to the limits of our understanding, not what's available today; a certain project to put man on the moon did just that 50 years ago!
Take it from experience with both these technologies
Having worked with both EC2 for large-scale storage and Gluster, this is a recipe for storage that runs like a dog (not to mention it will probably cost well over the odds very quickly). Low latency, large network backplanes and direct block storage access are the recipe for success with file systems such as Gluster
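To put a rough number on why latency matters so much here, a back-of-envelope sketch: for a serial (queue depth 1) workload, round-trip latency alone caps the achievable IOPS, regardless of bandwidth. The latency figures below are illustrative assumptions, not measurements of EC2 or Gluster.

```python
# Back-of-envelope: effective IOPS of a remote block store is capped by
# round-trip latency for serial (queue depth 1) workloads.
# The latency figures are illustrative assumptions, not measurements.

def max_serial_iops(rtt_seconds: float) -> float:
    """Upper bound on IOPS when each request must wait for the previous one."""
    return 1.0 / rtt_seconds

direct_attached_rtt = 100e-6   # ~100 us: local block device (assumed)
cloud_network_rtt = 1e-3       # ~1 ms: storage traffic crossing a cloud network (assumed)

print(f"direct-attached: ~{max_serial_iops(direct_attached_rtt):,.0f} IOPS")
print(f"over the network: ~{max_serial_iops(cloud_network_rtt):,.0f} IOPS")
```

An order of magnitude disappears before any filesystem overhead is counted, which is why the low-latency backplane matters more than raw capacity.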
Beating whitespace patents
Is it just me or does this have all the hallmarks of someone trying to bamboozle the patent authorities into granting a patent on an existing innovation, namely the kind of white-space frequency registration tech currently being pioneered?
Again, what's the point
Great box, but how many use cases are there in the real world where it doesn't matter that the main memory is unreliable? The best I can do is five, all around media delivery, but that's a pretty narrow market, especially coupled with the virtualisation limitations
Correction to point
A) They are most probably using a Xeon chipset (the 3450 being a likely candidate) which implements ECC on the QPI channels
Couple of design issues to note
A couple of things strike me about this board
A) Core i7 doesn't have an ECC controller in the CPU package, so presumably they are using a custom chipset which adds ECC support, but no doubt reduces memory throughput by sitting outside the die
B) Why haven't they staggered the boards to minimise heat shadowing? Probably says something about the intended form factor
All in all it has the hallmarks of a one-off run, with Cray wishing to get some PR out of the extreme engineering
It's all about the latency
Dealing with a lot of HPC workloads, it's all about the latency for our apps. The business case for 10Gb was easy, as it offered a massive step forward in latency reduction thanks to the increased clock rate. All the implementations of 40 & 100Gb I've seen to date are nothing more than multiple 10Gb channels bonded together, much like DWDM, therefore not achieving any lower latency. While I'm sure there will be latency gains, mainly in the protocol signalling, it certainly won't be the orders-of-magnitude gains we saw from 1Gb to 10Gb, so any business case will have to revolve around high-bandwidth areas such as uplinks and storage, not the server level
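The bonding point above can be sketched with serialisation delay: the time to clock one frame onto the wire scales with the line rate of a single lane, so four bonded 10Gb/s lanes add bandwidth without cutting per-frame latency. The frame size is the standard Ethernet MTU; the "bonded 40Gb" entry is an assumption about that style of implementation, not a statement about any particular product.

```python
# Serialisation delay: time to clock one frame onto the wire.
# Bonding four 10Gb/s lanes gives 40Gb/s of aggregate bandwidth, but each
# frame still travels down a single 10Gb/s lane, so its latency is unchanged.

FRAME_BITS = 1500 * 8  # standard Ethernet MTU, in bits

def serialisation_delay_us(line_rate_bps: float) -> float:
    """Microseconds to serialise one MTU-sized frame at a given line rate."""
    return FRAME_BITS / line_rate_bps * 1e6

for label, rate in [("1GbE", 1e9),
                    ("10GbE", 10e9),
                    ("4x10Gb bonded '40Gb' (assumed)", 10e9)]:
    print(f"{label:>32}: {serialisation_delay_us(rate):.2f} us per frame")
```

The 1Gb to 10Gb jump cuts the per-frame figure tenfold; the bonded case leaves it where it was, which is the crux of the comment.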
Fragility of block storage & hypervisors for clouds
Is it just me or does the idea of distributed block-level storage sound like a rather poor concept? The issue with distributing the blocks is that, being the lowest common denominator, it is also the most sensitive to latency.
Fundamentally, the reason for AWS's requirement for this approach is the use of a hypervisor providing hardware emulation, which requires direct block access for the VM images. One has to ask if this is really such a good approach long term for clouds, given its considerable performance overhead as well as fragility... PaaS anyone?
What a terrible idea: DIMM sockets in servers are precious enough at the best of times. Even if they do use the memory bus for transport, they will be hideously slow in terms of both absolute throughput and latency compared with DRAM. FusionIO and OCZ et al are on the right track, allowing flash to be used as swap/extended memory pages
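The gap being complained about here is easy to illustrate with order-of-magnitude numbers. The access times below are rough, widely quoted ballpark figures (assumptions, not vendor specs), but the ratio is what matters:

```python
# Rough latency comparison between DRAM and NAND flash hung off the memory
# bus. Figures are order-of-magnitude assumptions, not measured values.

DRAM_ACCESS_NS = 100       # typical DRAM random access (assumed)
NAND_READ_NS = 50_000      # typical NAND page read, ~50 us (assumed)

slowdown = NAND_READ_NS / DRAM_ACCESS_NS
print(f"NAND on the memory bus: ~{slowdown:.0f}x slower than DRAM per access")
```

A socket that normally serves ~100ns accesses spending tens of microseconds per read is why treating flash as swap or extended pages, behind the memory hierarchy rather than inside it, looks like the saner design.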
Take the next logical step
A PUE of < 1 purely means reusing at least some of the energy exhausted as heat to power the servers a second time
While I admit this has yet to be demonstrated, the fundamental technology underpinnings have advanced rapidly in recent years to enable it to take place
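A quick worked example of the arithmetic behind a sub-1 figure. PUE is total facility energy over IT equipment energy, so by definition it cannot drop below 1.0 unless recovered waste heat is credited back against the total (The Green Grid's reuse-adjusted metric is ERE). All the energy figures below are made-up illustrative numbers:

```python
# PUE = total facility energy / IT equipment energy, so it cannot fall
# below 1.0 unless energy recovered from waste heat is credited back.
# All figures below are illustrative assumptions.

it_energy = 1000.0   # kWh consumed by the servers (assumed)
overhead = 200.0     # kWh for cooling, power distribution, etc. (assumed)
reused = 350.0       # kWh of waste heat recovered and reused (assumed)

pue = (it_energy + overhead) / it_energy
reuse_adjusted = (it_energy + overhead - reused) / it_energy

print(f"PUE: {pue:.2f}")
print(f"reuse-adjusted figure: {reuse_adjusted:.2f}")
```

Only the reuse-adjusted figure dips below 1; the raw PUE never can, which is exactly the "powering the servers a second time" point.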
Pneumatic or Flywheel storage anyone?
Is it just me or does this typical DARPA request sound like a job for pneumatic, hydroelectric or flywheel storage? All have a growing pedigree in national and renewable energy storage.
This all said, it's most certainly the case that as a human race we are not good at storing energy, and it's certainly a barrier to the next generation of energy-intensive technologies being practically employed
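For the flywheel option, the standard physics gives a feel for the scale: stored energy is E = ½Iω², with I = ½mr² for a solid cylinder. The dimensions below are illustrative assumptions, not any real unit's spec:

```python
import math

# Energy stored in a flywheel: E = 1/2 * I * w^2, with I = 1/2 * m * r^2
# for a solid cylinder. Dimensions are illustrative assumptions only.

mass_kg = 1000.0     # 1-tonne rotor (assumed)
radius_m = 0.5       # 1 m diameter (assumed)
rpm = 20_000         # spin rate (assumed)

inertia = 0.5 * mass_kg * radius_m ** 2     # kg.m^2
omega = rpm * 2 * math.pi / 60              # rad/s
energy_j = 0.5 * inertia * omega ** 2

print(f"stored energy: {energy_j / 3.6e6:.1f} kWh")
```

Tens of kWh per tonne of spinning steel: useful for ride-through and grid smoothing, but it shows why bulk storage at national scale leans on pumped hydro instead.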
Am I the only reader of Redmond related articles left wondering why they do not port their stack to Linux/BSD?
This could be achieved in relatively short order (through buying up Wine and other crossover expertise), which would put them on a level playing field with the likes of Google, but with the added advantage of their existing vast ecosystem.
I'm sure this very notion might horrify people with feet in either camp, but with MS's poor record on core OS development, and likewise Unix's begrudging relationship with the concept of GUIs, could this be a match made in heaven, with each party playing to its strengths? Both Ubuntu & Apple's OS X are surely the case studies in point.
There you go Steve, an idiot's guide to getting that bonus reinstated :-)
- iSPY: Apple Stores switch on iBeacon phone sniff spy system
- It's true, the START MENU is coming BACK to Windows 8, hiss sources
- Chinese gamer plays on while BMW burns to the ground
- Pic NASA Mars tank Curiosity rolls on old WET PATCH, sighs, sniffs for life signs
- How UK air traffic control system was caught asleep on the job