Hyperscale computing, or simply hyperscaling, is a relatively recent coinage. Let's kick off with Webopedia's definition: "Hyperscale computing refers to the infrastructure and provisioning needed in distributed computing environments for effectively scaling from several servers to thousands of …"
Latency is key.
In general, latency kills performance and drives up costs. Managing that latency dynamically, and ensuring your price/performance doesn't fall through the floor, is the key to making this hyperscaling malarkey work properly. I'm not convinced the tools to do that exist yet; in my mind you need something that lets developers and support folks monitor, rebalance and reconfigure a graph of processes on the hoof. If you provide the right APIs, the process can be automated to some degree (or perhaps completely, if you are very lucky/clever), and eventually libraries of these automation routines will take over the bespoke work...
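To make that idea concrete, here is a minimal sketch of what such an API might look like: a graph of named processes with per-link latency measurements, and a crude automated rebalancing step that shifts work away from the worst hot spot. All class and method names here (`ProcessGraph`, `rebalance`, and so on) are illustrative inventions, not any existing tool.

```python
# Hypothetical sketch of a process-graph monitoring/rebalancing API.
# Names and the rebalancing policy are illustrative assumptions only.

from dataclasses import dataclass, field


@dataclass
class ProcessNode:
    name: str
    host: str
    load: float = 0.0  # arbitrary work units


@dataclass
class ProcessGraph:
    nodes: dict = field(default_factory=dict)
    edges: dict = field(default_factory=dict)  # (src, dst) -> latency in ms

    def add_node(self, node):
        self.nodes[node.name] = node

    def link(self, src, dst, latency_ms):
        # Record a measured latency between two processes.
        self.edges[(src, dst)] = latency_ms

    def worst_edge(self):
        # The link currently hurting us most.
        return max(self.edges, key=self.edges.get)

    def rebalance(self):
        # Crude policy: halve the load on the node feeding the
        # highest-latency link and hand it to the least-loaded node.
        # A real routine would weigh price/performance, not just load.
        src, _ = self.worst_edge()
        donor = self.nodes[src]
        target = min(self.nodes.values(), key=lambda n: n.load)
        moved = donor.load / 2
        donor.load -= moved
        target.load += moved
        return donor.name, target.name, moved


g = ProcessGraph()
g.add_node(ProcessNode("ingest", "host-a", load=8.0))
g.add_node(ProcessNode("transform", "host-b", load=2.0))
g.add_node(ProcessNode("store", "host-c", load=1.0))
g.link("ingest", "transform", latency_ms=40.0)
g.link("transform", "store", latency_ms=5.0)

donor, target, moved = g.rebalance()
```

The point of the sketch is the shape, not the policy: once the graph, the measurements and the rebalance hook are exposed as APIs, the manual "on the hoof" operation can be scripted, and scripts accumulate into libraries.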
Time will tell, but I am optimistic, because I know it's doable right now on a bog-standard UNIX/POSIX box (you don't even need all the VM gimcracks), and I think people are all too familiar with the pain of developing, deploying and managing distributed apps and are looking for a better way to do things. So I see the shape of a solution, and I see demand for it, so I think it'll happen soon. While SDNs might be a useful tool for this effort, they aren't strictly necessary IMO. Interesting times.