It's so near we can almost smell it: the Holy Grail storage array combining server data location, solid state hardware speed, memory access speed virtualisation, and the capacity, shareability and protection capabilities of networked arrays. It's NAND, DAS, SAN and NAS combined; the God storage box – conceivable but not yet built …
This game is called 'pass the I/O bottleneck token': from server to network to data array and back again.
However did we manage back in the 1980s with all those truly horrible latencies?
Let's face it: you will never get to zero latency. For many applications a 1 millisecond latency is not a killer. As you head towards the 'God' solution of zero latency you will reach a point where an app will say 'that's fine for me'. Reduce it further and more apps will drop off the list.
Eventually you will get into areas where the law of diminishing returns takes over. IMHO, this is where we are now. Huge sums of $$$$ are being spent to get very little. Sure, there is some filtering down into more GP kit, but you really do have to ask yourself: 'Is this really needed? Can I justify the spend to get just that little bit more speed?'
Much like the Space Race, in the end, the answer has to be a simple 'Nope we can't'. When that happens a good few of the companies involved in this area will crash and burn.
Yes, of course! (hand slapped firmly on face)
More apps will drop off the list? An application (or OS) will say: "No, no, this is too fast for me! I am just an old operating system/database, incapable of receiving this data almost instantly!"
Much as absolute zero is a holy grail of science, with huge discoveries made along the way there, new ways of creating storage solutions are developed and imagined while figuring out how to lower latency.
And the space race was the most important event in human history, enabling humankind to reach for the stars. And it is not 'No we can't', it is 'Yes, we did!'
Usenet is the place to look for this architecture.
Seriously. We've been doing it for decades.
All of these attempts at reducing latency, and still no mention that, in an FC SAN, Virtual Instruments is the only solution on the market that can definitively show the true, protocol-based latency and IO in real time.
Micron and Virtensys -- what about Xsigo?
Maybe Micron's CEO flaming out distracted from their purchase of Virtensys. What is Micron going to do with sharable PCIe enclosures? Where is Xsigo in the God-box equation?
is latency that big a deal?
I can see cases where it matters (basically anything that requires interactivity, such as database back-ends, game servers or things that need to be as real-time as possible), but how many of these are going to be held up by client-side latencies (application loading) or network latencies (things accessed from cloud or web interfaces)?

I can also think of many more applications (indexing, compiling, transcoding, rendering, etc.) that are more compute-bound, or that would much prefer bigger throughput/bandwidth to lower latencies.

Besides, in a lot of applications the latencies associated with data transfer can be hidden by doing double- or triple-buffering of work packets, or by precaching data. This can usually reduce latencies to effectively nil, provided there's a discernible access pattern and it's not just purely random access (which the tiered, "temperature"-based storage caches won't handle well anyway). So does latency really matter so much?
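The double-buffering idea mentioned above can be sketched in a few lines: a background thread fetches the next block while the consumer processes the current one, so fetch latency overlaps with work instead of adding to it. This is a minimal illustration, not any particular product's implementation; the `read_block` function here is a stand-in for a real (slow) storage read.

```python
import threading
import queue

def prefetching_reader(read_block, num_blocks, depth=2):
    """Yield blocks while a background thread fetches up to `depth`
    blocks ahead (double-buffering when depth == 2), hiding read
    latency behind the caller's processing time."""
    buf = queue.Queue(maxsize=depth)

    def producer():
        for i in range(num_blocks):
            buf.put(read_block(i))   # blocks when the buffer is full
        buf.put(None)                # sentinel: no more blocks

    threading.Thread(target=producer, daemon=True).start()
    while (block := buf.get()) is not None:
        yield block

# Stand-in for a storage read with non-trivial latency.
def read_block(i):
    return bytes([i]) * 4

blocks = list(prefetching_reader(read_block, 5))
```

As the comment notes, this only helps when the access pattern is predictable enough to know which block to fetch next; purely random access defeats the prefetcher.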
...a product for a storage admin who is unable to think about what the application actually needs. We're never going to see this, for one big reason: a thing with everything will always cost more than a thing with just what you need.
I'd love to try doing virtualisation in a server-located thing though, with decentralised replication... err, QoS definitely required.