* Posts by cloudguy

64 posts • joined 24 Apr 2013


Keeping up with the kollect-kash-ians: Data manager Komprise more than doubles funding


Komprise makes steady progress

Back in the 90s, Hierarchical Storage Management (HSM) was supposed to move your data around for better management. You don't hear much about HSM now. Komprise has helped change the name of the game from HSM to Enterprise Data Management by checking your Windows and Linux shares for candidate files to be moved to an on-premises object store or a public cloud object store. The Komprise control plane resides on AWS, but it can also run on-premises. Komprise leaves symbolic links behind when it moves files to deep and cheap storage, so users and their applications can continue to access their data without being retrained to know where to look for it. I beta tested Komprise before it went GA a couple of years ago. In my demo/test lab, I was able to move over a million files from Windows server shares to a Cloudian object store. Glad to see Komprise receive another round of funding to continue product development and expand marketing programs.
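The symbolic-link trick is easy to sketch. The snippet below is a minimal illustration of the idea, not Komprise's actual implementation; the file and directory names are invented for the demo.

```python
import os
import shutil
import tempfile

def tier_file(path: str, archive_dir: str) -> str:
    """Move a cold file to cheap storage and leave a symbolic link
    behind, so users and applications can keep using the old path."""
    os.makedirs(archive_dir, exist_ok=True)
    target = os.path.join(archive_dir, os.path.basename(path))
    shutil.move(path, target)          # relocate the data
    os.symlink(target, path)           # transparent stand-in at the old path
    return target

# Demo: a file on a "share" gets tiered, yet the original path still works.
share = tempfile.mkdtemp(prefix="share_")
archive = tempfile.mkdtemp(prefix="archive_")
doc = os.path.join(share, "report.txt")
with open(doc, "w") as f:
    f.write("quarterly numbers")

tier_file(doc, archive)
print(os.path.islink(doc))      # True
print(open(doc).read())         # quarterly numbers
```

Reads through the old path simply follow the link, which is why no retraining is needed.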

Big Cable tells US government: Now's not the time to talk about internet speeds – just give us the money


FCC captured by cable and telco oligopoly

Well, instead of regulating the cable and telco oligopoly, the Republican-controlled FCC and its Chairman have been captured by it. Now the FCC wants to hand over billions in funding to that same oligopoly to improve broadband service in underserved rural areas. The FCC has bad data on rural broadband speed and availability because it relies on the same cable and telco oligopoly to report it. Local governments in cities and towns are better informed about who the underserved are and where they live. So rather than fund cities and towns to plan for their broadband future and build publicly owned broadband networks, the FCC wants to shovel Federal funds into the coffers of the cable and telco oligopoly so they can increase the value of their assets for their shareholders at taxpayer expense. This is how the USA gets the worst Internet service at the highest prices in the world.

Seagate passes gassy 14TB whopper: He He He, one for each of you


Large HDDs are not meant to be backed up

Well, 8TB, 10TB, 12TB, and now 14TB HDDs are not meant to be backed up, and they are not appropriate for traditional RAID schemes due to incredibly long rebuild times, although some smarter RAID controllers have learned a few tricks about rebuilding very large HDDs. These HDDs are destined for scale-out object storage clusters, which protect data objects using replication (making multiple copies of data objects) or erasure coding (sharding data objects and calculating parity fragments). In object storage clusters, data protection focuses on the number of cluster nodes, not the HDDs in them; replication and erasure coding are spread over cluster nodes. No one cares if HDDs here and there fail, which they inevitably will if you have enough of them. Immediately replacing failed HDDs is not a priority. Replicated and erasure-coded data objects on failed HDDs will have their "missing" replicas and shards/fragments re-created on other HDDs on other cluster nodes by the object storage software running on each node.
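The node-level placement idea can be sketched with rendezvous (highest-random-weight) hashing, one common way, though not necessarily what any particular vendor uses, to pin an object's replicas to distinct nodes. The cluster names and object key below are made up.

```python
import hashlib

def replica_nodes(object_key: str, nodes: list, copies: int = 3) -> list:
    """Pick `copies` distinct cluster nodes for an object's replicas.
    Protection is defined over nodes, not individual HDDs: losing a
    drive (or a whole node) still leaves copies elsewhere."""
    ranked = sorted(
        nodes,
        key=lambda n: hashlib.sha256(f"{object_key}:{n}".encode()).hexdigest(),
    )
    return ranked[:copies]

cluster = ["node-a", "node-b", "node-c", "node-d", "node-e", "node-f"]
placement = replica_nodes("photos/cat.jpg", cluster)
print(placement)    # three distinct nodes, deterministic for this key
```

Because the ranking depends only on the key and node names, every node in the cluster can compute the same placement independently.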

Object storage sweetheart Cloudian bags another $94m funding in E round


Re: Cloudian on way to IPO?

Well, here is a correction and a couple of additional details to my post. Digital Capital should be Digital Alpha. Cloudian received $25M in equity funding and $100M in debt financing from Digital Alpha in March 2018. Cisco is one of the limited partners in Digital Alpha, which explains why Cloudian will offer Cisco storage servers as part of its previously announced pay-for-what-you-use on-premises storage consumption model. Lenovo is a Cloudian investor, so it could have an interest in acquiring Cloudian because it lacks an object storage solution to call its own.


Cloudian on way to IPO?

Well, Mike Tso, CEO of Cloudian, hinted that it could happen in 24 months. Jerome Lecat, CEO at Scality, has made similar comments in the past about Scality going public. The obstacle to their IPOs would be annual revenue. There is the notion that $100M to $200M in annual revenue is where you want to be when you consider going for an IPO. Neither company is close to $100M in annual revenue. Scality has approx. $30M in annual revenue compared to Cloudian's approx. $25M. So, the question is: can either of them get to $100M in annual revenue in 24 months? If the investors are "long-term greedy," they might wait to IPO. If not, then one or both of them will be sold, but who would buy them? HPE has some familiarity with Scality but hasn't shown any interest yet. Digital Capital partners have ties to Cisco, so that's a possibility for Cloudian. Lenovo could buy Cloudian if it wants an enterprise solution for capacity data storage. The number of potential buyers is not that large. IBM, Western Digital, and Red Hat have already made their acquisitions in this market.

Tintri rescued by DDN just hours after filing for Chapter 11


The flash storage gold rush is over...

Well, this is reminiscent of the 1980s, when everyone was starting a disk drive manufacturing business. There were too many entrants to survive, and today only three remain (Seagate, Toshiba, and Western Digital) after having acquired many of their competitors over the years. The flash array market is in a similar situation, with too many manufacturers and too little to distinguish them from their competition. Some have gone public by taking on loads of debt to grab market share. Some have been acquired because it was the only way the VCs would see a payday, and some have failed. DDN is likely looking at Tintri as a relatively cheap way to get access to its flash technology and apply it to DDN's own high-capacity storage lineup. For a while they will have to provide support for existing Tintri customers, but the plan would be to move them to DDN-branded storage products over time. The VCs may have made some bad bets on Tintri, but that's the way it goes.

WOS going on? DDN ejected from IDC object storage marketscape


What's wrong with this graphic?

Well, if there is a $20M annual revenue bar to get over, then Caringo, Ceph (Red Hat & SUSE), and SwiftStack could all be sneaking under it. What seems likely, given the changes in the "rules" by IDC, is DDN doesn't have enough touch points in the object-storage market since its major strength is storage for high-performance computing.

Another quarter, another record-breaking Tesla loss: Let's take a question from YouTube, eh, Mr Musk?


Is Tesla the new DeLorean Motor Company?

Well, back in 1975, automotive executive and engineer John Z. DeLorean attempted to build and sell an original new car (the DMC-12) and take on the rest of the automotive industry. The company failed in 1982 when it went into receivership and bankruptcy. Making and selling cars is a tough business to start from scratch, even if you have an army of nerds working on the project. There is an overcapacity of car making worldwide. Tesla occupies a tiny segment of a soon-to-be-large EV market. The fundamental problem is that Tesla cannot possibly scale quickly enough to be successful with all of the excess car-making capacity available to manufacture EVs. Mr. Musk would be smart to stick to making batteries and rocket boosters.

Scality swallows $60m to tame the multi-cloud data management beast


Scality bags $60M for Zenko?

Well, Scality needs more funding. I don't think the $60M will be for Zenko. Mr. Lecat enumerated who the company is competitively engaged with but conveniently omitted Scality's win rate in those engagements. It seems meaningless to mention them at all if you are not going to use them to brag about Scality. Mr. Lecat removed the COO and CMO at Scality this year, claiming those C-suite positions were no longer necessary. Then we learned about HPE cutting a side deal with Cloudian in EMEA, which is right on Scality's home turf. HPE did that after it dropped $10M on Scality a couple of years ago to find out if the company was worth buying. Along with the $60M in new funding, Mr. Lecat is now hoping for an IPO in 2023, when he had previously been looking to do that IPO right about now or in 2019. The truth is Scality's revenue is not what it should be to do an IPO. So, the most reasonable use for the $60M is to build a worldwide sales and marketing organization to get more market share, close sales (win rate again), and boost revenue to get to an IPO so the investors can have their payday.

By comparison, Cleversafe had received approx. $128M in funding when IBM wrote a check for $1.3B to buy the company in November 2015. With $150M in funding, Scality could be bought for $1.5B if there were a buyer, and it won't be HPE. That leaves Cisco and Lenovo in the enterprise-class storage market without object-based storage software to call their own. So far, none of the established object-based storage software companies has been able to do an IPO. There are only the ones that are still non-public and the ones that got bought by public companies. $60M for Zenko doesn't sound logical even if "multi-cloud" really is the next "big thing" in data storage management.

Seagate's HAMR to drop in 2020: Multi-actuator disk drives on the way


HAMR, MAMR and mechanical HDD trickery

Well, you can't blame the HDD manufacturers for trying to stay in the game. HAMR HDDs have been in development for more than a decade, and you still can't buy one. MAMR HDDs are exotic, and you won't see them for sale for a long time either. HDDs with dual actuators have been tried before and abandoned. Maybe it can be made to work, but adding electro-mechanical complexity to HDDs can only be a source of new HDD failures. In the meantime, caging electrons in NAND flash is a quicker way to increase capacity dramatically. Niche markets for HDDs will hang on for maybe another ten years, but when those use cases can be challenged by NAND flash storage, it will be the end for HDDs. The storage imperative in the 21st century is maximum capacity with minimum power requirements. Anything that does not do that will not be around.

He He He: Seagate's gasbag Exos spinner surges up to 14TB


Re: Limits?

Well, last year IBM scientists demonstrated the ability to store one bit of data on a single atom and read it back under laboratory conditions. Currently, it takes about 100K atoms to store one bit of data.
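As back-of-envelope arithmetic on that 100K-atoms-per-bit figure, here is roughly how many atoms one 14TB drive's worth of data occupies today:

```python
# Rough arithmetic: at ~100,000 atoms per bit, how many atoms does
# 14TB (the drive capacity discussed above) of data occupy?
ATOMS_PER_BIT_TODAY = 100_000
bits_in_14tb = 14e12 * 8                      # 14 TB expressed in bits
atoms_today = bits_in_14tb * ATOMS_PER_BIT_TODAY
print(f"{atoms_today:.2e} atoms")             # 1.12e+19 atoms
```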

IBM thinks Notes and Domino can rise again


Notes was worth the trouble back in the day.

Well, Notes was the crowning achievement in software development for Ray Ozzie and his crew of developers at Iris Associates. Ray had worked at Lotus, where he also created Symphony. At one point, Lotus could not keep track of all the licenses it sold for Notes and cut a deal with Ray Ozzie for approx. $186M in royalties. Later on, IBM paid approx. $3.5B, about one-third of its cash on hand, to buy Lotus. By this time, Lotus had acquired Iris Associates.

It was the pre-internet "groupware" era, and the only serious contenders were Novell with GroupWise, which is alive and well at Micro Focus with its current GroupWise 18 release, and Microsoft Exchange with its Outlook client, which is still widely deployed and sits alongside Microsoft's cloud offerings.

One of the issues with Notes was that it did not have all the functionality needed for a development platform. There were lots of third parties with Notes apps, but Notes as a development environment only gradually came into being, and by then the Internet was destined to become the new platform for groupware applications like Gmail and Google Apps. As for a Notes revival, I would not bet on it happening. Too much time has passed between the heyday of Notes and today.

Tonight on IPO, Bought or Binned: Cloudian and Scality collide as object storage endgame nears


Cloudian pulls ahead...

Well, the analysis was interesting and similar to some of the comments I've recently made here on what's up with Scality. Both companies are private, so metrics and data are hard to get. Scality does have a higher headcount than Cloudian. Revenue is about the same, but Cloudian's smaller headcount gives it a better revenue-per-employee ratio than Scality. On that basis, Cloudian appears more efficient with its funding. Scality recently eliminated its COO and CMO positions, which seems odd for a healthy company. It probably means revenue is not where it should be and the runway is getting shorter. Scality probably needs more revenue, and it looks like it is under pressure to do something. Neither company looks like an IPO is in its future, as their annual revenues are still well below $100M. HPE got a good look at Scality for its $10M two years ago and has not pushed the buy button. HPE's side deal with Cloudian in EMEA is probably a sign that some of HPE's customers don't want Scality and HPE doesn't want to lose the business. Scality has worked more closely with HPE and Cisco than Cloudian has, but Cloudian may be moving to get closer to Cisco with the $100M line of credit it just got from Digital Alpha Capital. Cloudian stated this line of credit will be used to set up customers with either Cloudian-branded QCT storage servers or Cisco UCS storage servers. That leaves me wondering why Cloudian did not leverage its existing partner relationship with Lenovo, which sells a storage server with Cloudian's software pre-installed. Neither Cisco nor Lenovo has object storage software to call its own. If Lenovo has ambitions in the enterprise data storage market, and I think it does, then it would be smart to acquire Cloudian. Cisco has bought dozens of companies since its founding in 1984 and did a terrible job with some of them. Remember Whiptail? That $415M mess was shoved in a hole in 2015, just two years after Cisco bought it.
In the end, money talks, and if Cloudian can get the multiple it wants from Lenovo or Cisco, a deal will get done and the funders will have a payday. Cleversafe minted quite a few millionaires when it was acquired by IBM for $1.3B in November 2015.

Oh, Bucket! AWS in S3 status-checking tool free-for-all


Haven't we seen this computing security conundrum before?

Well, there has always been a conflict between ease of use and security when it comes to computing. Cloud computing providers sold their model as a way to escape dealing with those nasty on-premises computing environments. And when you combine cloud computing with self-service access to anyone with a credit card, what could go wrong? Consumers of cloud computing don't pay attention to securing their information assets in the cloud. And when they don't pay attention to security, everyone on the public internet winds up with access to their data sitting in an improperly configured S3 bucket. Surprise, surprise! At least AWS is trying to make it clear you are doing it wrong! Better late than never.
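A rough sketch of the kind of check such tooling performs: the function below flags an ACL that grants access to S3's global AllUsers group. The grant dictionaries mirror the shape boto3's get_bucket_acl() returns, but the sample ACLs here are invented for illustration.

```python
# The AllUsers group URI is how S3 ACLs express "everyone on the internet".
ALL_USERS = "http://acs.amazonaws.com/groups/global/AllUsers"

def publicly_readable(grants: list) -> bool:
    """Flag an ACL grant list that exposes a bucket to everyone."""
    risky = {"READ", "WRITE", "FULL_CONTROL"}
    for g in grants:
        grantee = g.get("Grantee", {})
        if grantee.get("Type") == "Group" and grantee.get("URI") == ALL_USERS:
            if g.get("Permission") in risky:
                return True
    return False

# Two hypothetical ACLs: one misconfigured, one restricted to its owner.
open_acl = [{"Grantee": {"Type": "Group", "URI": ALL_USERS},
             "Permission": "READ"}]
safe_acl = [{"Grantee": {"Type": "CanonicalUser", "ID": "owner-id"},
             "Permission": "FULL_CONTROL"}]
print(publicly_readable(open_acl))   # True
print(publicly_readable(safe_acl))   # False
```

In practice the grants would come from a live `get_bucket_acl` call; the point is that the misconfiguration is mechanically detectable.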

We all hate Word docs and PDFs, but have they ever led you to being hit with 32 indictments?


Information in a criminal investigation is asymmetric

Well, Special Counsel Mueller knows more than the people who receive subpoenas. Persons of interest lawyer up before being questioned. Lawyers tell their clients not to lie to the FBI and not to lie before a Grand Jury, because lying is a slam dunk for prosecutors and you will go to jail. People facing indictment who think they have long lives ahead of them don't want to spend the next 30 years behind bars. When charged, the number of counts is usually high enough to convince them to plead guilty to lesser charges in exchange for their complete and truthful testimony. With every "plea deal," Special Counsel Mueller increases his overall information, putting the remaining persons of interest at a greater disadvantage because they don't know what he knows now that he didn't know before.

Scality CEO: About that C-suite throttling...


Mr. Lecat says we don't need a stinking C-suite at Scality

Well, I know Paul Turner, who until recently was the CMO at Scality. Prior to that, Paul was CMO at Cloudian. You will not find a more hands-on CMO than Paul. That said, when his predecessor only lasted four months at Scality, it should have been a warning sign about the company's prospects.

It wasn't too long ago that Mr. Lecat was talking about a Scality IPO and then HPE invested $10M in Scality. Well, the IPO never happened probably because Scality has not broken through the $100M level in annual revenue. HPE got a good look at Scality for their $10M and apparently decided not to buy the company even though HPE has no object-based storage product to call its own. More recently, HPE cut a deal with Cloudian to sell Cloudian HyperStore in EMEA. Why would HPE do this after putting $10M into Scality? The easy answer is HPE was not closing deals with Scality in EMEA.

The C-suite headcount at Scality is likely being reduced because Scality is not closing the business deals it needs to support the number of employees on its payroll. Funding provides a runway for business development, but cash "burn" can shorten the runway if it is too high compared to the revenue being booked. Looks more like Scality has too low a revenue per employee number to continue in business without cutting headcount.

Deputy lord of the Scality RING parts ways with object storage firm


Big Payday or IPO for Scality?

Well, the interesting piece of the story is who provided Scality with the recent $35M in additional funding. With approx. 219 employees and approx. $31M in annual revenue, Scality's revenue per employee is low at approx. $142K. Scality's revenue may be less than what it needs to operate, hence the unattributed $35M in additional funding. Scality's competitors, IBM Cloud Object Storage (Cleversafe) and Cloudian, have approx. $35M and $24M respectively in annual revenue, with 214 and 123 employees respectively. That gives IBM Cloud Object Storage and Cloudian revenue per employee of approx. $164K and $195K respectively. These numbers are better than Scality's, with Cloudian's being much better. On the surface, this would seem to indicate that Scality is not closing enough sales relative to the number of employees on the payroll. Back in the day when Novell was riding high, CEO Ray Noorda used a revenue-per-employee metric of $250K to determine whether the company had too many employees. So the question is: how much longer can Scality continue to increase employee headcount without a commensurate increase in revenue before it burns through its funding? Scality could be acquired for some multiple of its funding, or it could go public, which CEO Lecat has mentioned in the past. The safer bet might be an acquisition by HPE, since it already has $10M invested in Scality and hasn't bought an object-based storage software vendor yet.
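The revenue-per-employee comparisons above reduce to one division; here they are worked through with the approximate figures quoted in the post:

```python
def revenue_per_employee(annual_revenue: float, headcount: int) -> float:
    """The metric Ray Noorda reportedly used, with a $250K benchmark."""
    return annual_revenue / headcount

# Approximate figures from the post itself.
companies = {
    "Scality":  (31e6, 219),
    "IBM COS":  (35e6, 214),
    "Cloudian": (24e6, 123),
}
for name, (rev, staff) in companies.items():
    print(f"{name}: ~${revenue_per_employee(rev, staff) / 1e3:.0f}K per employee")
# Scality: ~$142K; IBM COS: ~$164K; Cloudian: ~$195K
```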

Western Dig's MAMR is so phat, it'll store 100TB on a hard drive by 2032


Re: Space and energy will make flash the winner...

Well, probably when the new NAND flash fabs scheduled for completion come online in the next several years. Larger-capacity new HDDs are not coming to market at a lower per-GB price than smaller-capacity HDDs already in the market. The price per GB for HDDs has fallen very little over the past 3 or 4 years, while the price of SSDs has fallen from $0.60 per GB to $0.25 per GB. When the price per GB of SSDs comes within $0.10 of the price per GB of HDDs, the market will flip in favor of SSDs because of the additional savings in floor space and energy costs. So, I'd say a price around $0.15 per GB for SSDs will be the point in time when HDDs start losing significant market share for capacity storage.
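The flip-point argument is simple arithmetic. The sketch below assumes a capacity-HDD price of $0.05 per GB (my assumption, not a figure from the article) and checks when the SSD/HDD gap closes to $0.10 per GB:

```python
def delta_per_gb(hdd_price: float, ssd_price: float) -> float:
    """Price gap that, per the argument above, decides the flip point."""
    return ssd_price - hdd_price

HDD = 0.05          # assumed capacity-HDD price per GB (hypothetical)
FLIP_GAP = 0.10     # the post's claimed flip threshold

# The post's SSD trajectory: $0.60 -> $0.25, with $0.15 as the flip point.
for ssd in (0.60, 0.25, 0.15):
    gap = delta_per_gb(HDD, ssd)
    print(f"SSD ${ssd:.2f}/GB -> gap ${gap:.2f}/GB, flip: {gap <= FLIP_GAP}")
```

With those assumptions, only the $0.15 SSD price closes the gap to $0.10, matching the post's conclusion.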


Space and energy will make flash the winner...

Well, HDDs cost less than SSDs and will survive for the next 5-7 years in scale-out storage clusters running object-based storage software. But if any of the unstructured data growth projections come reasonably close to reality then HDDs will just require too much energy and take too much space to store all of the unstructured data being ingested. SSDs have already won the capacity competition. All that remains is to make more of them and drop the price low enough to push HDDs out of the market. I think when the price difference between the two reaches $0.10 per GB, it will be the death knell for HDDs.

Whoosh, there it is: Toshiba bods say 14TB helium-filled disk is coming soon


Re: No RAID rebuilds on large HDDs

Well, not rebuild in the RAID controller sense. In an object storage cluster, data is protected using replication (copies of data objects) or erasure coding (data fragments + parity fragments) to achieve the desired level of data durability. In most object storage clusters, replication and erasure coding policies can be specified at the "bucket" level. Replication typically defaults to three replicas, with one replica stored on each of three different nodes in the cluster. Erasure coding schemes can vary considerably in their combination of data fragments and parity fragments, but the fragments themselves are dispersed across enough nodes in the cluster that no node holds more than a single fragment (data or parity). HDD failure in a given node means the replicas and fragments stored on the failed HDD will be re-created by the object-based storage software on other HDDs in the cluster. At no time will the replicated or erasure-coded data become inaccessible while this happens.
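Production systems use Reed-Solomon-style codes, but a toy single-parity (3+1) scheme is enough to show the principle of re-creating a lost fragment from the survivors:

```python
def xor_parity(fragments):
    """Compute one parity fragment over equal-size data fragments."""
    parity = bytearray(len(fragments[0]))
    for frag in fragments:
        for i, b in enumerate(frag):
            parity[i] ^= b
    return bytes(parity)

def rebuild(surviving, parity):
    """Recreate the single missing data fragment: XOR of survivors + parity."""
    return xor_parity(surviving + [parity])

# A toy 3+1 layout: three data fragments on three nodes, parity on a fourth.
data = [b"obj-part-1", b"obj-part-2", b"obj-part-3"]
parity = xor_parity(data)

lost = data[1]                                  # node holding fragment 2 dies
restored = rebuild([data[0], data[2]], parity)  # other nodes rebuild it
print(restored == lost)                         # True
```

Real erasure codes tolerate multiple simultaneous losses, but the repair story is the same: surviving fragments on other nodes are enough to regenerate what a failed HDD held.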


No RAID rebuilds on large HDDs

The first cry you hear with the announcement of an even larger-capacity HDD is that it will be impossible to use in a RAID array due to the almost infinite amount of time needed for a RAID rebuild. Get a clue. These helium-filled HDDs are destined to be deployed in object-based storage clusters, where single or multiple drive failures have no effect on the operational status of the cluster. Failed or disabled HDDs in object-based storage clusters are just pulled and replaced, hopefully under warranty.
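For context on why RAID rebuilds of big drives hurt, here is the best-case arithmetic: a full sequential rewrite of one drive at an assumed sustained rate, ignoring the concurrent user I/O that makes real rebuilds take far longer.

```python
def rebuild_hours(capacity_tb: float, rate_mb_s: float) -> float:
    """Best-case sequential rebuild time for one drive, in hours."""
    return capacity_tb * 1e12 / (rate_mb_s * 1e6) / 3600

# Assuming an optimistic 200 MB/s sustained write rate (my assumption):
print(f"{rebuild_hours(14, 200):.1f} hours")   # 19.4 hours
```

Nearly a day at full streaming speed, before any contention; that is the floor, not the typical case.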

StorONE 're-invents' storage stack to set hardware free


Who are those guys?

Well, in a crowded software-defined storage world, being mysterious about how you actually do what you claim to be doing will generate some curiosity and buzz. That said, 35 smart people being paid $150K per person per year could burn through $30M in less than six years. The 50 patents awarded or pending could be researched for information about what StorONE is actually doing. Patents represent a reduction to practice and not just an idea. One patent is typically not enough and is usually accompanied by related and/or nuisance patents to protect the valuable IP. By comparison, Cleversafe, an object-based storage software and hardware startup acquired by IBM for $1.3B in 2015 had amassed 300 patents.

The suggestion that StorONE is looking for additional funding or a potential acquisition is plausible, but it only has beta customers at this point. By comparison, Cleversafe had over 100 production customers, including a three-letter U.S. government intelligence agency, when it was acquired by IBM. At some point, StorONE will have to come clean about what it is doing and how it does it. A few years ago, the OpenStack startup Nebula burned through $25M in funding in a couple of years but was unable to attract additional funding despite having investors who were A-list players. Oracle picked up the company for a bargain price, basically to hire the people working at Nebula.

An object failure: All in all, it's just another... file system component?


Old story, but OBS vendors are doing more to accommodate legacy file protocols

Well, the potential use cases of OBS include working as backend storage for NAS heads (Cloudian HyperFile) providing SMB and NFS file access. They might not be as fast as NAS filers, but they are fast enough for most NAS use cases. Virtually every OBS vendor supports SMB and NFS file access methods today, and some have supported them for years. Various NFS and SMB gateways and caching appliances that can handle file locking and manage a global namespace (Panzura, for example) have also been around for years. OBS vendors like Caringo, Cloudian, Scality, and SwiftStack are improving their support for legacy file access methods, which is to be expected because not every application has been re-written to use a RESTful API to access an object store. That said, every OBS vendor supports the AWS S3 API, which is the most popular RESTful API used to access cloud storage. OBS is not about object storage per se; it is about supporting the hundreds of data access, data storage, data analytics, and data management solutions that can use OBS.
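At its simplest, a file gateway just maps file paths to object keys and back. This is a bare sketch of that mapping, with invented share and path names, leaving out the hard parts (locking, caching, a global namespace):

```python
import posixpath

def path_to_key(share: str, file_path: str) -> str:
    """Map an NFS/SMB-style path under a share to a flat object key,
    the way a file gateway in front of an object store might."""
    return posixpath.relpath(file_path, share)

def key_to_path(share: str, key: str) -> str:
    """Inverse mapping: object key back to a path under the share."""
    return posixpath.join(share, key)

key = path_to_key("/export/projects", "/export/projects/2018/q2/report.docx")
print(key)                                  # 2018/q2/report.docx
print(key_to_path("/export/projects", key)) # /export/projects/2018/q2/report.docx
```

The object store sees only keys like `2018/q2/report.docx`; everything file-protocol-specific lives in the gateway.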

Teen Pennsylvania HPC storage pusher Panasas: Small files, fat nodes, sharp blades


Data storage has been a conservative business

Well, Panasas has been around for a long time. Garth A. Gibson, one of the founders of Panasas, was the Ph.D. computer science graduate student at UC Berkeley whose research with professors David Patterson and Randy Katz was instrumental in the development of RAID back in 1987. Panasas has had its niche in data storage and was not competing with emergent object-based storage vendors like Caringo. Now there is competition, with companies like Qumulo and Weka.io getting some traction, so Panasas is upping its game. That said, data storage has been a conservative technology business built on proprietary hardware and proprietary firmware. Errors in storage hardware and firmware can lead to the loss or corruption of the data being stored, so moving fast and breaking stuff was never an acceptable approach to doing business.

Now we are in the era of software-defined storage based on COTS (commodity off-the-shelf) hardware. Storage software vendors can now move faster because they are not dependent on building and testing proprietary hardware and firmware in their storage systems. Panasas is making this transition and will have to do it successfully to remain a viable player. Curiously, companies like Pure Storage have at least partially abandoned the use of software-defined storage in favor of using proprietary hardware. This would seem to run counter to the storage industry trend of relying on COTS hardware and doing everything of value in software.

HPE inks object storage reseller deal in EMEA – with Cloudian


Reading the fine print on one side and no comment from the other raises questions

Well, Paul Turner from Scality was quick to point out that the HPE "deal" with Cloudian was not a general resale agreement. His statement is probably accurate, and it looks like a narrowly focused deal that only applies to HPE's professional services organization in EMEA. It does raise the question of why HPE was not able to accomplish the same objective with Scality. The deal looks like a situation where HPE was determined not to lose the business opportunity and brought in Cloudian. Cloudian does have a strong presence in EMEA and appears to be closing more business than Scality. Cloudian's no-comment was probably part of the arrangement with HPE to bring Cloudian into this limited-purpose deal. If Cloudian were permitted to tout this as a win against its competitor, it would tarnish HPE's existing resale agreement with Scality. There could be more to it, but it is not apparent right now.


So much for the old HP "Invent" moniker...

Well, HPE did invest $10M in Scality in January 2016, yet the company did not use the investment to come to any decision about acquiring Scality. This announcement is more likely a sign that Cloudian is closing more business in EMEA than Scality. Perhaps HPE decided it needed to partner with Cloudian in this market rather than lose the business. It could also be a tactic to maintain customer account control by supplying customers with what they want rather than trying to convince them to use Scality. The Register has pointed out that HPE is also partnering with Qumulo and now Weka.IO on HPC customer projects. All of this reinforces the notion that HPE is partnering with third parties who can help them get the business. Nothing wrong with that except at some point companies like Qumulo, Weka.IO, Cloudian, and Scality could all be acquired by others. It all looks like HPE is executing a tactical game plan when they should be acting more strategically.

HPE and WekaIO sitting in a tree, k-i-s-s-i-n-g


Follow the money too...

Well, HPE not only runs Scality's OBS software on HPE hardware, it has invested $10M in Scality. Some pundits considered this investment a prelude to buying Scality, but the investment took place almost two years ago. Seeing how HPE needs to develop a portfolio of HPC and OBS solutions, why has it not made a move to acquire Scality outright? Two years ago this month, IBM pulled out its checkbook and paid $1.3B to acquire Scality competitor Cleversafe. HPE needs to play in the OBS and HPC markets with solutions it owns. Partnering with Scality, WekaIO, and Qumulo is different than buying them outright like it did with SimpliVity and Nimble. Ms. Whitman's hesitation will be HPE's loss.

Tailored SwiftStack update should help get your GDPRse in gear


SwiftStack is making progress...

But so is everyone else in the OBS software business. SwiftStack has had a reputation for being hard to configure and requiring lots of "tuning" based on your performance requirements. That said, the vendor universe for OBS software has been stable, with just a few acquisitions over the past three years: HGST acquired Amplidata, Red Hat acquired Inktank (Ceph), and IBM acquired Cleversafe. Caringo, Cloudian, and Scality have achieved traction in the OBS market. The enterprise vendors Dell-EMC, Hitachi, and IBM are playing a long game. And several smaller vendors like OpenIO and Minio have received additional funding rounds. SwiftStack falls in with the pure-play OBS software vendors like Caringo, Cloudian, and Scality, but it is also the major code contributor to the OpenStack Swift project. Its marketing efforts seem weak compared to the others in this group, and it has not received any additional funding for several years. GDPR is mostly an EU consideration, but all of these vendors sell in the global market, so they do need to be able to offer their customers GDPR-compliant OBS software. Data storage, just like networking, has its jargon. If you work in it, you learn it.

Has Nexenta's growth stalled?


Is there really such a thing as a G round?

Well, I never see Nexenta show up in any list of object-based storage software vendors even though the company launched NexentaEdge back in 2014 to catch up with that emerging storage market. Today, you hardly hear anything about Nexenta worth noting. They had a reputation for working well as a storage solution for VMware back in the day, but what have they done lately? They do have a reasonably sized paying customer base so they are generating revenue, but apparently not enough to stem the need for additional cash. Since their IPO plans never materialized and a co-founder has left Nexenta, the next logical step would be an acquisition. With $120M invested over multiple funding rounds, there have to be some anxious investors still waiting for a payday. By comparison, Cleversafe had about $127M in funding before IBM paid $1.3B for it in November 2015. Somehow I don't think Nexenta will command nearly that much in an acquisition. If the coming announcement doesn't lead to an acquisition, why would anyone invest millions more in Nexenta?

In 2012 China vowed 'OpenStack will smash the monopoly of western cloud providers!'


OpenStack will not "smash" western cloud providers

Well, it was probably a hope sometime early on in OpenStack's development that it would emerge to challenge public cloud services from AWS, Google, and Microsoft. The efforts of Cisco, HPE, and Rackspace to use OpenStack to compete with the oligopoly of "western" public cloud computing providers appear to have failed. In the public cloud computing market, there is little chance that anyone will be able to harness OpenStack to compete with AWS, Google, and Microsoft at scale. OpenStack may have a future as a private managed cloud service from providers like ZeroStack and Platform9, or from one-off builders of private clouds like Red Hat or SUSE. The lingering question is whether OpenStack will be able to keep pace with the service capabilities of AWS, Google, and Microsoft.

Enterprise IT storage – where being fat and very dense is, um, a good thing. Right, Cloudian?


Ultra-density storage server revisited

Well, Cloudian released a multi-node storage server (Supermicro) with JBOD chassis (QCT) less than two years ago called the FL3000. All traces of it seem to have been removed from the Cloudian website. The HyperStore 4000 is a combined two-node storage server with 35 HDDs per node in a 4U chassis from QCT (QuantaPlex T21P-4U). The HyperStore FL3000, while expandable and highly modular, didn't offer a lot more than stacking 1U "pizza box" servers like the current HyperStore 1500 storage server. If you need PB-plus storage from the get-go, then the ultra-dense HyperStore 4000 looks useful. If you are starting with a sub-PB storage cluster, the HyperStore 1500 will be more useful, because smaller clusters benefit from having more storage server nodes when it comes to using replication and/or erasure coding to protect data.
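The trade-off between replication and erasure coding can be sketched with some generic arithmetic. This is an illustrative comparison only, not Cloudian-specific math; the shard counts are example values:

```python
# Illustrative storage-overhead comparison: 3x replication versus a
# 4+2 erasure coding scheme. Numbers are generic examples, not any
# vendor's actual defaults.

def replication_overhead(copies):
    """Raw-to-usable capacity ratio for n-way replication."""
    return copies  # 3 copies -> 3x raw storage per usable TB

def erasure_overhead(data_shards, parity_shards):
    """Raw-to-usable capacity ratio for k+m erasure coding."""
    return (data_shards + parity_shards) / data_shards

rep = replication_overhead(3)   # 3.0x raw storage per usable TB
ec = erasure_overhead(4, 2)     # 1.5x raw storage per usable TB

# A k+m scheme needs at least k+m nodes to put each shard on a
# separate server, which is why small clusters often start with
# replication and adopt erasure coding as nodes are added.
min_nodes_ec = 4 + 2
```

The 2x capacity saving of 4+2 erasure coding over 3x replication only kicks in once you have at least six nodes, which is the point the comment above is making about small clusters.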

Small but perfectly formed: Dailymotion's object storage odyssey


OpenIO uses its own hardware add-on. Is this progress?

Well, if OpenIO adds a hardware attachment (ARM+Ethernet add-on board) to HDDs and plugs them into a custom chassis, is this software-defined storage using COTS hardware? Scality, AFAIK, makes no use of proprietary hardware with its RING OBS software. In any event, it will be interesting to see how this hardware add-on approach from OpenIO actually performs at scale. We already know that Scality RING can perform at scale.

Did somebody say object storage? 9 ways to tell if there's a point


Re: Metadata is where its at.

Well, Caringo has already implemented Elasticsearch in Swarm and Cloudian promises to have Elasticsearch implemented in its next release of HyperStore along with Kibana, which is a data visualization plugin for Elasticsearch. So it is apparent to these OBS software vendors that being able to search for objects using metadata and display that data visually is an important aspect of running an OBS cluster. They have undoubtedly heard about the need for this from their customers.
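To make the metadata-search point concrete, here is the kind of query body such an Elasticsearch integration enables. The index schema and field names (`bucket`, `metadata.*`, `size`) are hypothetical placeholders, not the actual schema any of these vendors uses:

```python
# A hypothetical Elasticsearch query body for finding stored objects
# by user-defined metadata. Field names are illustrative; each
# vendor's actual index mapping will differ.

def metadata_query(bucket, meta_key, meta_value, min_size_bytes):
    """Build a bool-filter query matching bucket, one metadata
    key/value pair, and a minimum object size."""
    return {
        "query": {
            "bool": {
                "filter": [
                    {"term": {"bucket": bucket}},
                    {"term": {f"metadata.{meta_key}": meta_value}},
                    {"range": {"size": {"gte": min_size_bytes}}},
                ]
            }
        }
    }

# E.g. find radiology scans over 10 MiB:
q = metadata_query("scans", "department", "radiology", 10 * 1024**2)
# This body would be handed to the Elasticsearch search endpoint,
# with Kibana then visualizing the results.
```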

NooBaa wraps AWS S3 wool around Microsoft Azure Blob storage


Sounds more like a Rube Goldberg Virtual S3 Machine

Well, with just a couple of million in funding, NooBaa isn't going to upset the object-based storage market anytime soon. Storage takes a lot of time to get right and to get traction because storage is a foundational computing technology that tends to be very conservative. Apps can and do crash all the time, but your storage better not. Some of the current group of object-based storage vendors like Amplidata (HGST), Caringo, Cleversafe (IBM), Cloudian, Scality, and SwiftStack have been working at it for more than a couple of years and with multiples of the funding NooBaa has received.

The idea of scavenging around for underutilized storage on desktop computers and servers and presenting this as a secure, stable and reliable way to do object storage takes a leap of faith. Symform, which was founded by several ex-Microsoft employees six years ago, tried something like this as a way to do backup using underutilized storage on computers anywhere in the world, with a control plane on AWS for managing it. Symform was bought by Quantum a couple of years ago, and Quantum recently shuttered the Symform business unit. So much for the "innovation" of coming up with ways to leverage underutilized storage on desktop computers and servers.

Scality developing way to stream objects to tape and the cloud


Tiering to AWS S3 or Glacier or Google Coldline or tape...

Well, I think Paul Turner did a fine job at Cloudian as their CMO for two years before he moved to Scality. Cloudian currently tiers data to AWS S3 or Glacier, other S3-compliant object storage clusters and now Google Coldline. Apparently, Scality is just getting around to doing it now that Mr. Turner is there. Coincidence or not?

Spectra Logic can tier data to tape libraries using their Black Pearl DS3 appliance. Spectra Logic added a couple of extensions to the S3 command set to deal with tape drives. You can use their SDK to write clients that will work with the Black Pearl DS3 to move data objects to LTFS formatted tapes in a library. So how is this "news" that Scality is planning to stream data objects to tape?

HPE has a $10M equity investment in Scality, so I'm sure whatever Scality is doing to potentially broaden the market for using Scality RING will be welcomed by HPE, which may eventually wind up buying the company.

HyperStore gets Coldline for tired old objects


Why not support all three...S3, Coldline and Azure Blob?

Well, Cloudian invested a great deal in being able to track the AWS S3 API very closely. This allowed Cloudian to tier data to S3 and to Glacier, although Glacier has a separate API. It makes sense for Cloudian to support tiering from Cloudian clusters to Google Coldline because it has some advantages over AWS Glacier in terms of data availability. So, if Cloudian is now able to tier data to AWS S3/Glacier and Google Coldline, why not Azure Blob? This would basically give Cloudian customers a choice advantage when it comes to tiering data.

OpenIO wants to turn your spinning rust into object storage nodes


Kinetic will always have a brilliant future...

Well, aside from OpenIO and its current testing of Seagate Kinetic HDDs, just who has written and deployed any production applications using Seagate Kinetic HDDs?

Three years ago, Mr. James Hughes from Seagate gave the first public presentation and "demo" of a Seagate Kinetic HDD at Basho's technical conference in San Francisco. Mr. Hughes was on a mission with Kinetic to rid the storage world of the evils of POSIX and of storage servers with their disk controllers.

After the Kinetic announcement, there was the usual rush of supportive quotes from object-based storage vendors and storage hardware OEMs. SwiftStack, Scality, and Cleversafe said they were interested in Kinetic. Caringo indicated in a private message that they saw no advantage in Kinetic over their current technology. Cloudian said in a private conversation that Kinetic would require a "split brain" software development effort and the Seagate Kinetic code was not up to production quality. So what has happened after the initial enthusiasm and some cautious comments regarding Seagate Kinetic? The answer is not much. Seagate Kinetic remains a storage technology in search of applications using it in production level deployments. Seagate Kinetic will always have a brilliant future.

Behold this golden era of storage startups. It's coming to an end


Re: Next major advance...

Well, I agree and we have seen this "too cheap to manage" argument in other areas. Remember when the proponents of nuclear power for generating electricity said it would be too cheap to meter? How many people actually got free electricity generated by a nuclear reactor? Today, people think cloud storage will essentially be free and you will have as much as you can ever possibly want. Somehow I just don't think it will work out that way. There is always a cost involved in storing and preserving data. The question that needs to be answered is how much of the existing stored data will have any social, economic, scientific or cultural value in 10-20 years? The answer is probably only a small fraction of it. The mountains of data being generated as a result of people tapping their fingers on their smartphones will have a pretty short half-life before it is not worth the storage it is occupying in some cloud bit barn. Data storage is not an infinite resource. People will eventually need to determine what data is worth keeping and for how long.

Haters gonna hate, hate, hate: Cisco to tailor SwiftStack for UCS object storage cramming


The product is going to fly...

Will it fly the way Whiptail did when it blew a $400M hole in the ground after Cisco acquired it?

Your next storage will be invisible (for a while)


DIY ad hoc storage is not for production use...period

Well, it is certainly fun to rummage through the server dust bin and cobble together a storage cluster based on open source software, but who would seriously consider using it as a production storage tier in their organization?

You can get a very low TCA if you build your own white box storage servers or turn to ODMs like QCT and Supermicro. Google started out building all of its servers as cheaply as possible by doing it themselves. There is really no reason to start with junk servers unless you need to prove the concept before you get the funding you need to do it right.

DIY open source storage software like Ceph is not a walk in the park if you don't have a computer science department nearby. Ceph has a complex, non-P2P architecture with no built-in capability to do charge-back, QoS or reporting...stuff that people are interested in having. Upgrading Ceph has also been difficult. Maybe Red Hat has made some progress along these lines with the release of Red Hat Ceph Storage 2 this past June.

I do agree that determining the TCA for a capacity storage project is relatively easy and that coming up with a fully burdened TCO is more difficult, but not impossible. Personally, I think a single storage administrator should be able to manage 10PB of object-based storage, assuming the cluster is not made of junk servers needing constant attention.

OpenIO pulls up ARM controller SOCs: Kinetic's Marvellous... can anybody do it?


Kinetic HDDs...not much to show so far

Well, it was exciting to see the video of Mr. James Hughes from Seagate present and demo a Seagate Kinetic HDD at the RICON West Conference (Basho Riak) in San Francisco in October, 2013. Every OBS software vendor had a comment about Kinetic...a few were interested and willing to investigate it, while others were not convinced that Kinetic worked any better than what they were already doing with their OBS clusters.

So, here we are almost three years later and no production quality deployments of Kinetic at scale. Maybe OpenIO is on to something, but it looks more like an engineering project right now. Maybe it will be "insanely great" and maybe it won't.

In the meantime, every OBS software vendor wants to deliver their OBS software to solve current customer storage and data management problems at scale. No customer is going to wait another couple of years to see if Kinetic works as originally conceived by Mr. Hughes at Seagate. In fact, most customers using OBS software don't really care much about the hardware that does the storage. What they care about more is the cost of the storage and the ecosystem of solutions they can choose from to solve their data storage and management problems. From this perspective it is all about S3 and using OBS software on commodity hardware, and deploying it with the fewest headaches possible. A tricked-out Kinetic HDD without the OBS application software is useless. It is all about the OBS software.

Scality waggles finger, shows off sixth RING


Scality S3 API support...native or not

Well, here is the rub from the article. "This S3 Connector uses the S3 Server, a Scality-originated open source S3-compatible API server, available on Github." First, Scality does not use an S3-compliant API as its native RESTful API in RING. It uses an S3 Connector. Second, how many of the 51 AWS S3 API operations does Scality actually support in its S3 Connector? In other words, if you pick a number of S3 apps at random from the hundreds of AWS S3 solutions available, can you point them at a Scality RING cluster and have them all work without modification? AWS S3 is the de facto standard for object storage, but S3 compliance comes in degrees. AFAIK, only Cloudian has a native S3-compliant API that comes the closest to being fully compatible with the AWS S3 API.
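One rough way to think about "degrees" of compliance is coverage of the AWS operation set. The operation names below are a small illustrative subset and the "missing" set is hypothetical, not a claim about any actual vendor's connector:

```python
# Sketch of measuring S3 compliance as operation coverage. The
# reference set here is a small illustrative sample of AWS S3
# operations, not the full list of 51.

AWS_S3_OPS = {
    "GetObject", "PutObject", "DeleteObject", "CopyObject",
    "ListObjects", "CreateBucket", "DeleteBucket", "HeadObject",
    "GetBucketAcl", "PutBucketAcl", "InitiateMultipartUpload",
    "UploadPart", "CompleteMultipartUpload", "AbortMultipartUpload",
}

def compliance(supported, reference=AWS_S3_OPS):
    """Return coverage ratio and the set of unsupported operations."""
    missing = reference - supported
    return len(reference & supported) / len(reference), missing

# A hypothetical connector lacking multipart upload support:
ratio, missing = compliance(AWS_S3_OPS - {
    "InitiateMultipartUpload", "UploadPart",
    "CompleteMultipartUpload", "AbortMultipartUpload",
})
```

A connector like the hypothetical one above would pass simple put/get tests but break any app that uploads large files via multipart upload, which is exactly why "S3 compatible" apps can fail against a partially compliant endpoint.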

Spinning rust fans reckon we'll have 18TB disk drives in two years


HDDs always at the edge...

Well, every "trick" is being deployed to increase HDD capacity...helium, SMR and soon HAMR, and everything is designed to run right at the edge of failure in order to keep the price as low as possible. The largest capacity HDDs will likely find their best application in object storage environments where their failures can be better managed, but not in desktop or traditional server RAID storage environments where HDD failures at this size would likely be more catastrophic and/or time-consuming to correct. And then there is the rapid increase in SSD capacity. With 15TB SSDs currently becoming available, SSDs have won the capacity race. HDDs still have a cost advantage, but it won't last for much longer. HDD manufacturers will likely stop building HDDs sometime between 2020 and 2025. So spin them while you can.

Seagate's Kinetic drives: They're moving... but in what direction?


Kinetic may have a future...

Well, when Mr. James Hughes from Seagate publicly demonstrated a Kinetic drive at Basho's conference in San Francisco in October 2013, everyone was suitably impressed with what Mr. Hughes and his team had achieved. Kinetic eliminated the server with its disk controller and POSIX layers. It relied on the Kinetic API and its libraries to use the Kinetic drive as a key/value object store combined with the on-board drive firmware, and dual 1Gb Ethernet interfaces that used the standard SATA/SAS connectors. FYI, the typical HDD has between 1M and 2M lines of code on it.
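The key/value interface described above can be sketched as a toy in-memory model. This is only an illustration of the semantics; the real Kinetic API adds versioning, integrity checks and the Ethernet transport, and the class and method names here are my own:

```python
# A toy in-memory model of a Kinetic-style key/value drive
# interface: put/get plus ordered key-range scans in place of a
# POSIX file system. Simplified for illustration only.

class KineticDriveModel:
    def __init__(self):
        self._store = {}

    def put(self, key: bytes, value: bytes):
        """Store a value under a binary key, as the drive firmware would."""
        self._store[key] = value

    def get(self, key: bytes):
        """Return the stored value, or None if the key is absent."""
        return self._store.get(key)

    def get_key_range(self, start: bytes, end: bytes):
        """Keys are ordered, so range scans are a first-class operation."""
        return sorted(k for k in self._store if start <= k <= end)

drive = KineticDriveModel()
drive.put(b"obj/0001", b"chunk-a")
drive.put(b"obj/0002", b"chunk-b")
keys = drive.get_key_range(b"obj/", b"obj/\xff")
```

The point of the design is that an OBS stack can talk this key/value protocol to the drive directly over Ethernet, with no server, disk controller or file system in between.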

Mr. Hughes was on a personal mission to get rid of POSIX and all of the other "busy work" disk drives get involved with when storing data. Kinetic sounded like a breakthrough in data storage and Seagate was making available small-scale test/dev hardware kits so the app/dev crowd could get started. It turned out that the software quality coming from Seagate was not yet production ready.

Most every object-based storage software vendor made an initial evaluation of Kinetic. Some liked it and planned to develop for it, and some didn't think it offered a significant advantage over what they were already doing. Scality and SwiftStack were on board with Kinetic. Caringo and Cloudian were not. In 2015 the Linux Foundation started its Kinetic Open Storage Platform with Cleversafe, Scality, Red Hat and NetApp numbered among its members. But as Mr. Evans noted, there doesn't appear to be much momentum behind the project almost three years later. The jury is still out on Kinetic, but its prospects may be diminishing as the years go by.

Google's call for cloudier, taller disks is a tall order says analyst


Google HDD...

Well, if it hadn't been for the success of SSDs, Google's interest in producing a different kind of HDD might have some merit. HDD capacity has gone from 1TB to 10TB in nine years. The price for a 1TB HDD has fallen from $0.32/GB to about $0.05/GB. SSD capacity has gone from 1TB to 15TB in three years. The price for a 1TB SSD has fallen from $0.60/GB to $0.30/GB. HDDs still have a price per GB advantage, but SSDs have won when it comes to capacity. As production of HDDs declines, the price per GB will not fall much lower. As production of SSDs increases, the price per GB will continue to decline. Seems very likely that by 2025 HDD production will end. SSDs or their successor will have taken over in both capacity and price per GB. HDDs will fight on with SMR for specialized storage and HAMR, if it ever proves commercially workable, but it is a losing battle for HDDs. Not bad though when you consider that the first rotating magnetic disk storage device was commercially sold by IBM in 1956. Sixty years was a good run.
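For what it's worth, the price figures above imply the two curves are declining at a broadly similar annual rate, which is what makes the crossover argument plausible. A quick back-of-envelope check, using only the numbers quoted in the comment:

```python
# Compound annual price-decline rates implied by the figures above:
# HDDs $0.32/GB -> $0.05/GB over nine years, and SSDs $0.60/GB ->
# $0.30/GB over three years.

def annual_decline(start_price, end_price, years):
    """Compound annual rate of price decline (0.19 means 19%/year)."""
    return 1 - (end_price / start_price) ** (1 / years)

hdd = annual_decline(0.32, 0.05, 9)   # roughly 19% per year
ssd = annual_decline(0.60, 0.30, 3)   # roughly 21% per year
```

With SSD prices falling slightly faster from a base only ~6x higher, the $/GB gap narrows every year, though extrapolating either curve a decade out is obviously speculative.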

Secondary storage, the missed opportunity for object storage


Object storage vendors have always needed features and an ecosystem to prosper...

Well, the author is correct that Caringo, Ceph and Ctera were not started yesterday. Caringo's founders developed what became EMC Centera before creating their own storage software in 2005 called CAStor, now re-invented as Swarm. With 10+ years in business and probably more customers than any other object storage software vendor, Caringo has survived without being bought and is looking to tune-up its approach to the object storage market with improved usability, native search and better file level integration with Microsoft Windows Servers and NetApp filers. All good and necessary things to do.

Ceph was part of Dr. Sage Weil's PhD research and dissertation back in 2007. Ceph is open source software, but its commercial sponsor, InkTank, was purchased two years ago by Red Hat for $175M. It combines file, block and object storage, which sets it apart from a pure object storage environment. Its commercial future now resides with Red Hat, which also purchased Gluster (GlusterFS), which is a clustered file system. Ceph is also popular among OpenStack enthusiasts.

Ctera was founded in 2009 and offers on-premises appliances for backup and file sync-and-share through a backend Ctera portal connection to an object store or other types of storage. Ctera has been commercially successful both in the US and Europe in the SMB and enterprise market as well as the service provider space.

The author doesn't break a lot of new ground here. The market for PB+ scale object storage customers is about 20K worldwide according to Scality's Mr. Jerome Lecat. Scality recently received a $10M investment from HP, which could be seen as a prelude to its acquisition by HP in order to counter IBM's purchase of Cleversafe for $1.3B last year.

The company that Mr. Signoretti did not mention, although he is familiar with it, is Cloudian, which does address the "scale down" as well as the "scale out" aspect of the object storage market with their HyperStore software and appliances. Cloudian's crown jewel is its full compliance with the AWS S3 API, which means that any third party solution or appliance that works with S3 will work with Cloudian. Cloudian can also tier data to AWS S3, Glacier or another S3-compliant object store. Cloudian plans to release a "Panzura-like" global file management virtual appliance later this month that is integrated with HyperStore.

Object storage vendors are aware that being "cheap and deep" is not a formula for commercial success. Customers need solutions that run the gamut from legacy file access methods, to RESTful API access, which generally means S3 compatibility, to tighter integration with big data analytics and search. There are also differences among the object storage software vendors in terms of their architecture, management and deployment. Customers should be willing to undertake a POC and perform due diligence before selecting a vendor best suited to meeting their requirements.

Ker-ching! IBM paid 10 times Cleversafe’s funding for the startup


IBM needed an OBS platform...

Well, prior to acquiring Cleversafe, IBM did not have an OBS story to tell much less sell to customers. GPFS is not an object store, and IBM's messing around with Swift was not getting them anywhere. So IBM pulled out its checkbook and wrote a big one for Cleversafe. 10x funding is not an unreasonable payday for the people and investors at Cleversafe. You may recall that back in the day IBM paid $3B in cash for Lotus, which was one-third of their available cash. Another thing to consider about the Cleversafe acquisition is the 200+ patents obtained by Cleversafe, and the fact that the CIA is one of Cleversafe's customers and investors (through a CIA associated company). IBM has a long history as a trusted IT provider to federal agencies, so I'm sure the CIA is happy to see that IBM has their back when it comes to OBS.

Cloudian's better at Amazon S3 than anyone else, apparently


Local cloud storage is the future...

Well, Mr. Ash from Cloudian is not bragging, he is just explaining the features already available in Cloudian's HyperStore software-defined storage. Every object-based storage software vendor supports at least the basic AWS S3 functions, but S3 is not their native API.

Cloudian made three "bets" with their HyperStore object-based storage software architecture. 1) Cassandra, which Cloudian extends for use as its metadata storage service, has numerous real-world use cases and is well tested and supported. Apple is reported to have 70K Cassandra servers. 2) Native support for AWS S3. AWS has the largest "ecosystem" of applications and solutions written to use S3. Cloudian's compliance with all of the S3 API functions means any S3 application or solution will work with Cloudian HyperStore. 3) Hybrid cloud storage would become a requirement for enterprise customers creating their own local storage clouds. Cloudian can tier data from HyperStore clusters to AWS S3 or Glacier, which, by the way, actually have different sets of API functions.

Cloudian has been approached by other object-based storage software vendors interested in licensing Cloudian's native AWS S3 compliant service. Cloudian has chosen not to do it, because it is a key to their success in the capacity storage market.

In terms of actual customers...Cleversafe has about 150, Scality has between 50 and 100 and SwiftStack has just over 50. So for Cloudian to "bust a move" and achieve as many new customers as all three of these combined, it will need to on-board about 250 new customers. This is a significant challenge and it will be interesting to see if Cloudian can do it in conjunction with their reseller partner channel.

SwiftStack CPO: 'If you take a filesystem and bolt on an object API'... it's upside down


Kinetic represents change...

Well, if you listen to Mr. James Hughes, whose team developed Kinetic, you will see that it eliminates storage servers and the need for a POSIX-compatible file system. The Kinetic key/value API plus Ethernet on the Kinetic HDDs is the innovation. The response from the object-based storage software vendors has been mixed. Mr. Joe Arnold from SwiftStack sees great promise in the Kinetic framework. Caringo has looked at it and said...meh, SWARM is better. Cloudian was of the opinion that it would need to implement a "split-brain" design in their software stack to use it. Scality was interested in seeing how it might be made to work with RING. Cleversafe (IBM) joined the Linux Foundation's Kinetic Open Storage Project, but it is not clear what they are actually doing with it. Not sure what Amplidata (WD) thought about it, but WD also belongs to the Kinetic Open Storage Project. HGST (WD) was also experimenting with their own Ethernet HDD that ran Debian, but this is not what Seagate is doing with Kinetic. Given that there have barely been two years of third-party development effort related to Kinetic, it seems premature to declare Kinetic a success or failure as an object-based storage architecture. Time will tell, even though Mr. Arnold is bullish on Kinetic.

EMC mess sends New Zealand University TITSUP for two days


Wait for the post mortem...

Well, the lack of facts at hand makes it anyone's guess as to exactly what happened. That said, storage networks have a lot of moving parts and a failure in the networking part could easily disable access to the storage part. If the outage was due to a planned upgrade or maintenance, then there should have been a roll-back procedure in order to recover. While you cannot rule out human error, you expect that the people involved in operating and managing the storage network are adequately trained and experienced. The vendors involved along with the university will likely issue a "post mortem" when the facts surrounding the outage are understood. Then the guilty can be charged.


Biting the hand that feeds IT © 1998–2019