Hyperscale computing, or simply hyperscaling, is a concept that has only recently entered the conversation. Let's kick off with Webopedia's definition: Hyperscale computing refers to the infrastructure and provisioning needed in distributed computing environments for effectively scaling from several servers to thousands of …
Latency is key.
In general, latency kills performance and drives up costs. Managing that latency dynamically, and ensuring that your price/performance doesn't fall through the floor, is the key to making this hyperscaling malarkey work properly. I am not convinced that the tools to do that exist yet; in my mind you need something that lets developers and support folks monitor, rebalance and reconfigure a graph of processes on the hoof. If you provide the right APIs then the process can be automated to some degree (or perhaps completely, if you are very lucky or clever), and eventually libraries of these automation routines will take over the bespoke work...
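To make the idea concrete, here is a minimal sketch of what "monitor and rebalance a graph of processes" might look like as an API. Everything here is hypothetical: the node names, the latency budget, and the naive co-location policy are illustrative assumptions, not any existing tool's interface.

```python
# Hypothetical sketch: latency-driven rebalancing over a process graph.
# The budget, graph shape and policy below are assumptions for illustration.

LATENCY_BUDGET_MS = 5.0  # per-edge latency budget (an assumed figure)

def find_hot_edges(graph, latencies):
    """Return the edges whose measured latency exceeds the budget.

    graph: dict mapping node -> list of downstream nodes
    latencies: dict mapping (src, dst) -> measured latency in ms
    """
    hot = []
    for src, dsts in graph.items():
        for dst in dsts:
            if latencies.get((src, dst), 0.0) > LATENCY_BUDGET_MS:
                hot.append((src, dst))
    return hot

def rebalance(placement, hot_edges):
    """Naive policy: co-locate the endpoints of each hot edge on one host."""
    new_placement = dict(placement)
    for src, dst in hot_edges:
        new_placement[dst] = new_placement[src]  # move dst next to src
    return new_placement

# Example: three processes, where the B -> C edge is over budget.
graph = {"A": ["B"], "B": ["C"], "C": []}
latencies = {("A", "B"): 1.2, ("B", "C"): 9.8}
placement = {"A": "host1", "B": "host1", "C": "host2"}

hot = find_hot_edges(graph, latencies)      # [("B", "C")]
placement = rebalance(placement, hot)       # C is moved onto host1
```

A real implementation would obviously need continuous measurement, hysteresis to avoid flapping, and awareness of host capacity, but the shape of it is just this loop: measure, find the hot edges, shuffle the placement, repeat.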
Time will tell, but I am optimistic, because I know it's doable right now on a bog-standard UNIX/POSIX box (you don't even need all the VM gimcracks), and because people are all too familiar with the pain of developing, deploying and managing distributed apps and are looking for a better way to do things. So I can see the shape of a solution and I can see demand for it, which makes me think it'll happen soon. While SDNs might be a useful tool for this effort, they aren't strictly necessary IMO. Interesting times.