Hyperscale computing, or simply hyperscaling, is a concept that has entered the conversation only relatively recently. Let's kick off with Webopedia's definition: Hyperscale computing refers to the infrastructure and provisioning needed in distributed computing environments for effectively scaling from several servers to thousands of …
Latency is key.
In general, latency kills performance and drives up costs. Managing that latency dynamically, and ensuring that your price/performance doesn't fall through the floor, is the key to making this hyperscaling malarkey work properly. I am not convinced that the tools to do that exist yet; in my mind you need something that lets developers and support folks monitor, rebalance and reconfigure a graph of processes on the hoof. If you provide the right APIs, the process can be automated to some degree (or perhaps completely, if you are very lucky/clever), and eventually libraries of these automation routines will take over the bespoke work...
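To make the monitor/rebalance/reconfigure idea concrete, here is a minimal sketch of what such a loop might look like. Everything in it — the `ProcessNode` graph structure, the latency metric, the `rebalance` policy — is a hypothetical illustration of the shape of the thing, not the API of any existing tool:

```python
# Hypothetical sketch: a graph of processes, each reporting an observed
# latency, and a naive rebalancing policy that flags the hottest node.
# All names (ProcessNode, rebalance, threshold_ms) are illustrative.

from dataclasses import dataclass, field

@dataclass
class ProcessNode:
    name: str
    latency_ms: float                                # observed latency
    downstream: list = field(default_factory=list)   # edges in the graph

def worst_node(graph):
    """Return the node with the highest observed latency."""
    return max(graph, key=lambda n: n.latency_ms)

def rebalance(graph, threshold_ms=50.0):
    """Naive policy: if any node blows the latency budget, name it as a
    candidate for migration/reconfiguration. A real system would act on
    this (move the process, resize it, reroute the graph) rather than
    just report it."""
    hot = worst_node(graph)
    if hot.latency_ms > threshold_ms:
        return hot.name
    return None

graph = [
    ProcessNode("ingest", 12.0),
    ProcessNode("transform", 85.0),
    ProcessNode("store", 30.0),
]
print(rebalance(graph))  # "transform" exceeds the 50 ms budget
```

The point of the sketch is the separation of concerns: the monitoring side only has to expose the graph and a metric, and the policy side can then start as a human eyeballing the output and gradually be replaced by library code.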
Time will tell, but I am optimistic because I know it's doable right now using a bog-standard UNIX/POSIX box (you don't even need all the VM gimcracks). People are all too familiar with the pain of developing, deploying and managing distributed apps, and they are looking for a better way to do things. So I see the shape of a solution and I see demand for it, which makes me think it'll happen soon. While SDNs might be a useful tool for this effort, they aren't strictly necessary IMO. Interesting times.