Cirtas is cutting back its cloud storage activities as it finds businesses are not taking to storing primary data in the public cloud as ducks might take to water. Think about it. I'm a business and I'm being besieged by suppliers telling me I need to cut the latency applications endure when reading or writing primary data, the …
Get 'em while they're hot!
Coffin nails! Coffin nails for sale here. Get 'em while they're hot and before the rush.
Cloud storage - the mystery!
As with many new technologies, the cloud is a mixture of hype and confusion, with some nuggets of pure genius. The greatest confusion is between storage "in the cloud" and storage "for the cloud". Moving primary storage into the cloud is not yet an option for most players, but here are some exceptions:
Multi-site data access efforts, such as genome projects, require widely shared storage
Surveillance (in a very broad sense) requires both multi-site access and duplicated data
Mission-critical systems (banking, military, air traffic control etc) need second-site redundancy for current data.
Storage for cloud services (here's where the confusion occurs) needs the redundancy of multi-site storage perhaps more than any of these, as Amazon has just demonstrated. To me that doesn't mean that the storage is some nebulous pool spread over all the sites. It means that the primary copy is localized to the servers using it, but is protected by near-real-time duplicates on at least one, and preferably more, distant sites.
This is not the Cirtas model, but it isn't SANs or an Atmos object store either. The crucial issue is providing the near-real-time factor, and the other major issue involves the bandwidth used to make replicas. Clearly, having a smart redundancy scheme that reduces the size of replicas and the bandwidth needed to transmit them is necessary for a plausible story. There are several companies exploring this space, and if they take full advantage of dedupe and compression, efficient striping and caching etc., I believe they will deliver a usable product for quite a broad spectrum of cloud-based server applications.
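The "smart redundancy" idea above can be sketched in a few lines: before replicating to a distant site, split the data into chunks, hash each chunk, and transmit (compressed) only the chunks the remote site doesn't already hold. This is a minimal illustration, not any vendor's actual protocol; the fixed-size chunking and the function names are my own assumptions (real systems typically use content-defined chunking).

```python
import hashlib
import zlib

def chunk(data: bytes, size: int = 4096):
    """Split data into fixed-size chunks (a real system would use
    content-defined chunking to survive insertions)."""
    return [data[i:i + size] for i in range(0, len(data), size)]

def replicate(data: bytes, remote_hashes: set):
    """Send only compressed chunks the remote site doesn't already hold.
    Returns (manifest of chunk hashes, payload actually transmitted)."""
    manifest, payload = [], []
    for c in chunk(data):
        h = hashlib.sha256(c).hexdigest()
        manifest.append(h)
        if h not in remote_hashes:
            payload.append((h, zlib.compress(c)))
            remote_hashes.add(h)
    return manifest, payload

# A mostly-unchanged block of data: only the modified chunk travels.
remote = set()
v1 = b"A" * 4096 + b"B" * 4096 + b"C" * 4096
_, sent1 = replicate(v1, remote)   # first sync: all 3 chunks go over the wire
v2 = b"A" * 4096 + b"X" * 4096 + b"C" * 4096
_, sent2 = replicate(v2, remote)   # second sync: only the 1 changed chunk
print(len(sent1), len(sent2))      # 3 1
```

The manifest lets the remote site reassemble the full copy from chunks it already stores plus the new ones, which is where the bandwidth saving comes from.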
However, our access methods to stored data have become outmoded. We need to accept that much of the stored data is best deduplicated, read occasionally and object-managed, while there is some data with frequent updates that requires a high level of snapshotting. Block and file access do not serve these paradigms well. We need an initiative, perhaps through SNIA, to extend access methods and make storage both locally and in the replicas efficient and reliable.
The Other Side of Coffin Nails ..... XPEditionary Forces are as Anonymous Legion
"If the primary data is in the cloud then you lose control over its placement and your ability to tune that."
Suitable leading primary data stored transparently for cloud hosting and web networking allows international secrets to be shared and strengthened, honed and polished. And New Intelligence results in New Information and Fabulous Advanced Narratives.
A SMART Cloud Delivers Data ZerodDaily Almost Always. And SMART Thinking Clouds Provide Digital Vehicles and Virtualised Stores/the Hardware and Software of AI Turing Machines/Cogent MetaDataPhysical Robots.
Certainly something simple to remember is the fastest clouds carry no stupid secrets and thus are invited everywhere for the information that they can donate.
Data stored is not productive. Information is Active and Intelligence Viral easily Digitally Mastered/Configured/Programmed.
Which makes the future an interesting experience for pioneering travellers and crack code hackers.
Reason for latency
The reason for latency is, in the end, the size of the individual file. Retrieving a 20 MB file takes longer than a 3 MB one: a 3 MB file from the cloud or a data centre 500 miles away can arrive faster than the same file at 20 MB in size from your local servers.
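The commenter's claim is easy to check with back-of-envelope arithmetic: transfer time is roughly one round trip plus payload size over link throughput, so at equal throughput the smaller file wins even across 500 miles. The 100 Mbit/s links and the 1 ms vs 20 ms round-trip times below are illustrative assumptions, not measured figures.

```python
def transfer_time(size_mb: float, mbit_per_s: float, rtt_ms: float) -> float:
    """Rough transfer time: one round trip plus payload over the link.
    (Ignores TCP slow start, which would further penalise long round trips.)"""
    return rtt_ms / 1000 + (size_mb * 8) / mbit_per_s

local = transfer_time(20, 100, 1)     # 20 MB from a local server, 1 ms RTT
remote = transfer_time(3, 100, 20)    # 3 MB from 500 miles away, 20 ms RTT
print(f"local: {local:.2f}s, remote: {remote:.2f}s")  # local: 1.60s, remote: 0.26s
```

On these assumed numbers the distant 3 MB copy arrives roughly six times sooner, which is why shrinking the file matters more than shortening the wire.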
Here, Native Format Optimization from providers like balesio comes into play: shrinking individual file sizes by natively optimising their contents provides a huge advantage in terms of latency.