9 posts • joined 24 Apr 2013
Nirvanix was doomed from the start...
Well, Nirvanix failed for three plausible reasons, according to Simon Robinson of 451 Research. To paraphrase his analysis: 1) The Nirvanix business model was too capital-intensive for what the company was charging, and the company eventually burned through its cash. 2) The Nirvanix "Cloud File System" software did not scale out as well as expected. This became problematic because Nirvanix needed to scale out to keep growing in order to run with the big dogs. 3) Nirvanix never evolved beyond its initial storage service offering, which limited the extent to which customers could grow their use of cloud infrastructure with Nirvanix.
The cautionary tale is that customers need to develop an "exit strategy" or have a contingency plan ready for when a Nirvanix-like implosion happens. Cloud service providers like Nirvanix are not without blame when they "work without a safety net" in their business. Cloud service providers should carry a capital reserve fund or insurance sufficient to wind down the business in an orderly way if it must be shuttered. The "customer-be-damned" attitude doesn't work in the cloud, and it will invite government regulation if this type of behavior is repeated. I applaud the efforts being made by Aorta Cloud/Capital to keep Nirvanix operating so customers have the opportunity to decide what to do with their data stored at Nirvanix.
EVault is not the only one...
Well, Glacier-compatible cold storage is also being developed by SageCloud in Boston, MA. SageCloud's founders, Jeff Flowers and David Friend, come from Carbonite. They are basing their work on Facebook's Open Compute Project. SageCloud completed a $10M funding round this summer, bringing its total funding to $13M. The company recently signed an agreement with Avnet's Rorke Global Solutions to assemble the SageCloud hardware, and will make first customer shipments in January 2014. I received an explanation of their cold storage technology under NDA. I think what they are doing for the cold storage/archival market will be well received in terms of cost, energy efficiency and performance.
Re: Same 'ol, same 'ol...
What actually constituted a "data center" in the 1970s? Most of what passed for "data centers" back then were IBM mainframe (System/360) time-share services. Interactive computing was being offered by DEC, but you generally bought or leased DEC minicomputers and kept them on your own premises. Everything having to do with the actual computation and storage of data was generally installed in "glass rooms" that were temperature- and humidity-controlled, but I don't think they fit the modern definition of a data center. Data storage on rotating magnetic disks or removable "disk packs" was only available for small amounts of data because it was limited in capacity and very expensive. Lots of data was stored on magnetic tapes, which were mounted and read/written when the data was needed. Human beings had to mount and dismount tapes from the tape drives. It all seems quaint by today's standards, and it was probably glitchy and unreliable at times too.
Intel could be the winner in the Amplidata sweepstakes
Well, I agree that Intel is a much more likely suitor for Amplidata than Quantum. I also agree that Amplidata's cash burn rate could be driving an acquisition strategy by Amplidata's management. As for Cleversafe suing Amplidata, I think there have been suits and countersuits between the two over the past few years. Amplidata has worked with Intel and Quanta using its AmpliStor software as part of "Intel's Cloud Builders Guide to Cloud Design and Deployment on Intel Platforms". It would be interesting to see whether Intel open-sourced AmpliStor as part of an acquisition of Amplidata, as it would provide another open-source object storage option in addition to Ceph and Riak CS.
Re: Erasure codes not a good match for Massive Media data
Editorial comment to my previous post...CAStor makes it possible to avoid erasure codes for objects below a certain size...
Erasure codes not a good match for Massive Media data
It appears Amplidata has now "tweaked" their software to make it perform better in a predominantly small-object storage environment, where erasure codes are not a good "fit" for small objects. The ingest and retrieval of millions of small objects using erasure codes didn't work well enough. Caringo's CAStor makes it possible to avoid erasure codes for objects below a certain size and use replication instead. It is the combination of replication and erasure codes that makes CAStor a better "fit" for Massive Media.
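To make the trade-off concrete, here is a minimal sketch of a size-based protection policy of the kind described above: replicate small objects, erasure-code large ones. The threshold and the scheme parameters (3 replicas, 10+4 erasure coding) are hypothetical illustrations, not CAStor's or AmpliStor's actual defaults.

```python
# Hypothetical size-threshold policy: replication for small objects,
# erasure coding for large ones. All parameter values are illustrative.

SMALL_OBJECT_THRESHOLD = 1 * 1024 * 1024  # 1 MiB (assumed cut-off)

def protection_policy(size_bytes, replicas=3, data_shards=10, parity_shards=4):
    """Pick a protection scheme based on object size."""
    if size_bytes < SMALL_OBJECT_THRESHOLD:
        # Whole-object copies: one read/write per copy, fast for small
        # objects, but 3x storage overhead.
        return {"scheme": "replication", "copies": replicas,
                "overhead": float(replicas)}
    # k-of-(k+m) erasure coding: only 1.4x storage overhead here, but
    # every operation touches many shards, which hurts small-object I/O.
    overhead = (data_shards + parity_shards) / data_shards
    return {"scheme": "erasure", "k": data_shards, "m": parity_shards,
            "overhead": overhead}
```

The point of the sketch is the overhead asymmetry: replication costs 3x the raw capacity but keeps per-object I/O cheap, while 10+4 erasure coding costs only 1.4x but amortizes poorly over millions of tiny objects.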
All governments hate the Internet
A worthy post, but all governments hate the Internet for the simple reason that information can flow freely over it. In states without a tradition of individual liberty, it is easier to control aspects of Internet use. The "Great Firewall" in China prevents obtaining results on searches that contain certain words and phrases. In Cuba there is practically no Internet availability for the average citizen, although recent developments indicate the Cuban government may be relenting a bit. In the countries of the "Arab Spring" uprisings, governments under siege by their citizens were able to cut off Internet and cell phone service for a time.
In the United States, Internet service is widely available and commerce is heavily dependent on it, so it is not acceptable to use such crude methods as blocking searches or interrupting and suspending Internet service. The NSA, which is part of the U.S. military establishment, has instead resorted to widespread data collection (Big Data) and analysis of both foreign and domestic Internet traffic. The data collection is indiscriminate and universal. The NSA has a one-million-sq-ft facility built in a mountain in Utah just for the purpose of storing and analyzing data scooped up off the Internet or out of the air.
The mere fact that such huge volumes of data are being collected and permanently stored is sufficient evidence that the foundations of a modern police state are being established. It remains to be seen whether Americans will be able to push back against the government-backed military establishment.
Object storage...not so problematic
I suspect the article was written to generate more heat than light on the subject, but storage vendors do have an industry association in SNIA, and when it comes to standards, SNIA is driving the CDMI standard into the storage market at a pretty good pace. That said, AWS S3 is the de facto standard API for object storage, and CDMI will be able to work in conjunction with it.

I spent the better part of last year evaluating a handful of vendors who provide object storage software to enterprise customers and partners to build private and public storage clusters. Each of the vendors is venture-backed, and their founders all have significant history dealing with data storage requirements that were not solvable with the traditional file and block storage technology we've had for decades. The incumbent storage vendors have taken out their checkbooks and bought the storage technology they think will allow them to participate at scale in the market. Whether they can be price-competitive remains to be seen, given their history of bundling their storage software with their proprietary hardware.

Because the object storage market is relatively new, you can expect to see some participants get acquired and some incumbents change course. The recent Dell announcement about ending its OEM deal with Caringo has more to do with the internal players at Dell than with the quality of CAStor. All of this is par for the course, and it may be quite a few more years before the object storage market and players coalesce. Remember that there were once over 200 vendors engaged in the manufacture of hard disk drives; after 30+ years we have four of them left. The same thing is going to happen in the SSD market too. There is a market for object storage, and it is being met by offerings from relatively new companies as well as the incumbents. It is not a matter of a technology in search of a problem.
Dell...still dazed and confused
The Dell partnership with Caringo was a good move back in the day. The problem with the DX storage server line was that you could not put enough disk drives in them if you were actually building out an object storage cluster. Right now you can put 72 3TB 3.5-inch SATA disk drives in a 4U SGI MIS. That's 216TB per server and roughly 2PB per 40U cabinet, with room for a top-of-rack 10GbE switch. SGI currently has a partnership with Scality, which is probably somewhat similar to what Caringo had with Dell. Backblaze has opened up its design for a 4U storage server that holds 45 4TB 3.5-inch disk drives, and Supermicro has a 4U storage server that will hold 36 3TB 3.5-inch disk drives. So you can take your pick of 4U storage servers that will hold 216TB, 180TB or 108TB each. If Dell wants to be a player in this market offering "industry standard x86 hardware", then this is the ballpark for object storage servers today. When HAMR disk drives arrive in another year or so, it will be a whole new ballgame, as capacities will start at 6TB per 3.5-inch drive and probably peak out at 20TB to 30TB per drive over the next 10 years.
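The capacity arithmetic above can be sketched as a quick back-of-the-envelope calculation. The drive counts and per-drive capacities are the ones quoted above; the rack layout (40U cabinet, 4U servers, 1U reserved for the switch) is my own simplifying assumption.

```python
# Back-of-the-envelope density math for the 4U storage servers above.
# Drive counts and capacities are as quoted; rack layout is assumed.
servers = {
    "SGI MIS":    {"drives": 72, "tb_per_drive": 3},
    "Backblaze":  {"drives": 45, "tb_per_drive": 4},
    "Supermicro": {"drives": 36, "tb_per_drive": 3},
}

def server_tb(spec):
    """Raw capacity of one server in TB."""
    return spec["drives"] * spec["tb_per_drive"]

def rack_tb(spec, rack_u=40, server_u=4, switch_u=1):
    """Raw capacity of one cabinet: fill remaining U with servers
    after reserving space for a top-of-rack 10GbE switch."""
    servers_per_rack = (rack_u - switch_u) // server_u
    return servers_per_rack * server_tb(spec)
```

With these assumptions, nine SGI MIS servers fit under the switch in a 40U cabinet, which works out to 1,944TB, close to the "roughly 2PB per cabinet" figure above.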