Hyperscale computing, or simply hyperscaling, is a relatively recent coinage. Let's kick off with Webopedia's definition: Hyperscale computing refers to the infrastructure and provisioning needed in distributed computing environments for effectively scaling from several servers to thousands of …
Latency is key.
In general, latency kills performance and drives up costs. Managing that latency dynamically and ensuring that your price/perf doesn't fall through the floor is the key to making this Hyperscaling malarkey work properly. I am not convinced that there are tools available to do that yet; in my mind you need something that lets developers/support folks monitor, rebalance and reconfigure a graph of processes on the hoof. If you provide the right APIs then the process can be automated to some degree (or perhaps completely if you are very lucky/clever), and eventually libraries of these automation routines/processes will take over the bespoke work...
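To make the "monitor/rebalance a graph of processes" idea concrete, here's a minimal sketch in Python. Everything here is hypothetical (the `ProcessGraph` class, the hosts, the latency budget); it only illustrates the shape of such an API: watch edge latencies, and when a link blows its budget, move the downstream process somewhere else.

```python
# Hypothetical sketch of a latency-driven rebalancer over a process graph.
# Not a real tool or library; just the kind of API the comment above imagines.

from dataclasses import dataclass, field

@dataclass
class Link:
    src: str          # upstream process name
    dst: str          # downstream process name
    latency_ms: float # measured latency on this edge

@dataclass
class ProcessGraph:
    placement: dict = field(default_factory=dict)  # process -> host
    links: list = field(default_factory=list)

    def rebalance(self, budget_ms, spare_hosts):
        """Move the destination process of any over-budget link onto a
        spare host; return the list of (process, old_host, new_host) moves."""
        moves = []
        for link in self.links:
            if link.latency_ms > budget_ms and spare_hosts:
                new_host = spare_hosts.pop(0)
                moves.append((link.dst, self.placement[link.dst], new_host))
                self.placement[link.dst] = new_host
        return moves

# Toy example: one healthy link, one over-budget link.
g = ProcessGraph(
    placement={"ingest": "hostA", "transform": "hostB", "store": "hostC"},
    links=[Link("ingest", "transform", 3.0), Link("transform", "store", 42.0)],
)
print(g.rebalance(budget_ms=10.0, spare_hosts=["hostD"]))
# → [('store', 'hostC', 'hostD')]
```

The point is that `rebalance` is just a function over ordinary state; once it exists behind an API, a human can drive it by hand or a library of automation routines can call it in a loop, which is exactly the progression described above.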
Time will tell, but I am optimistic because I know it's doable right now using a bog-standard UNIX/POSIX box (you don't even need all the VM gimcracks), and I think people are all too familiar with the pain of developing, deploying and managing distributed apps and are looking for a better way to do things. So I see the shape of a solution and I see demand for it, so I think it'll happen soon. While SDNs might be a useful tool for this effort, they aren't strictly necessary IMO. Interesting times.