Hyperscale computing, or simply hyperscaling, is a concept that has only recently entered the conversation. Let's kick off with Webopedia's definition: "Hyperscale computing refers to the infrastructure and provisioning needed in distributed computing environments for effectively scaling from several servers to thousands of …"
Latency is key.
In general, latency kills performance and drives up costs. Managing that latency dynamically, and making sure your price/performance doesn't fall through the floor, is the key to making this hyperscaling malarkey work properly. I'm not convinced the tools to do that exist yet: in my mind you need something that lets developers and support folks monitor, rebalance and reconfigure a graph of processes on the hoof. Provide the right APIs and the process can be automated to some degree (or perhaps completely, if you're very lucky or clever), and eventually libraries of these automation routines will take over the bespoke work...
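To make that idea concrete, here's a minimal sketch of the kind of API I'm imagining: a graph of processes with observed per-edge latencies, and a rebalance step that co-locates the endpoints of the worst edge. Everything here (the `ProcGraph` class, the names, the co-location policy) is hypothetical illustration, not any real tool's interface.

```python
from dataclasses import dataclass

@dataclass
class Proc:
    name: str
    host: str

class ProcGraph:
    """Hypothetical process-graph monitor: track latencies, rebalance on demand."""

    def __init__(self):
        self.procs = {}        # name -> Proc
        self.latency_ms = {}   # (src, dst) -> last observed latency in ms

    def add(self, name, host):
        self.procs[name] = Proc(name, host)

    def observe(self, src, dst, ms):
        # In a real system this would be fed by instrumentation, not by hand.
        self.latency_ms[(src, dst)] = ms

    def worst_edge(self):
        return max(self.latency_ms, key=self.latency_ms.get)

    def rebalance(self):
        # Simplest possible "move work closer" policy: migrate the
        # destination of the highest-latency edge onto the source's host.
        src, dst = self.worst_edge()
        self.procs[dst].host = self.procs[src].host
        self.latency_ms[(src, dst)] = 0.1  # assumed same-host latency

g = ProcGraph()
g.add("frontend", "host-a")
g.add("cache", "host-b")
g.observe("frontend", "cache", 12.0)
g.rebalance()
print(g.procs["cache"].host)  # the cache has been moved to host-a
```

A loop calling `observe` and `rebalance` is the "automated to some degree" case; swapping in smarter placement policies is where those libraries of automation routines would come in.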
Time will tell, but I'm optimistic, because I know it's doable right now on a bog-standard UNIX/POSIX box (you don't even need all the VM gimcracks). People are all too familiar with the pain of developing, deploying and managing distributed apps, and they're looking for a better way to do things. I can see the shape of a solution and I can see demand for it, so I think it'll happen soon. While SDNs might be a useful tool for this effort, they aren't strictly necessary IMO. Interesting times.