* Posts by Storage Guy

6 posts • joined 11 Feb 2011

Pimp my racks: Scale-out filer startup Qumulo bangs up its boxen, er, '4U'

Storage Guy

Taking the Reg Bait

Disclosure: I work for Dell EMC

@Chris, what’s up with the infographic?

Just so I understand . . . You spoke to the Qumulo Dir. of Product Management and came to the conclusion that, with the inclusion of 10TB drives, Qumulo’s QSFS file system is now magically equal in scalability to the Ceph, GPFS (Spectrum Scale) and Gluster file systems, and offers far greater scale than NetApp and Isilon? (My assumption is you meant NetApp ONTAP and DDN with GPFS.)

Currently, NetApp’s ONTAP can scale to 88PB (or 20PB with FlexGroup). Isilon OneFS is just shy of 100PB. And Qumulo QSFS . . . Qumulo doesn’t publicly state its FS scalability. Nope, the graph can’t be depicting file system scalability.

Maybe you meant scale in the sense of 4U enclosure density. That metric doesn’t fit either, as Isilon supports 60-drive enclosures and DDN offers an 84-drive enclosure. Qumulo’s QC360 is a 40-drive enclosure (4 SSD, 36 HDD).

Ceph and Gluster are software-only, so it can’t be the number of nodes you’re graphing.

I give up on trying to make sense of the scalability axis.

Let’s shift our gaze to the relative positioning of Qumulo in Enterprise use cases – right there with NetApp and Isilon. The general assumption is that the Enterprise requires features like replication, DR/HA, multi-tenancy, audit, WORM compliance, encryption, mirroring, quotas, snapshots, etc. Apparently, your idea of the features required by the Enterprise is limited to snapshots, as that’s the only one of these QSFS offers.

Maybe it’s an info-free graphic?


Tiery-eyed NetApp previews on-prem storage and cloud tie-up

Storage Guy

Re: Is this CloudPools

First, the obligatory disclosure - I'm an employee of Dell EMC.

@AC: IMO, not like Isilon CloudPools. CloudPools uses multiple policies - file type, last touch, age, etc. - to tier to the cloud. Based on currently available public info, FabricPool appears to me to be more like repurposed Flash Pool code, which I'd categorize as more of a caching algorithm with policies on functional cache properties. That's substantially different from CloudPools, where policies are applied directly to the data itself.


Dell looking at higher debt mountain to buy EMC

Storage Guy

6-year-old with calc fail

Disclosure: EMC Emerging Tech employee

Reviewing just the storage products, your 6-year-old is failing to look at this from a market perspective. If you break this down into entry, mid-range and high-end markets, there is far less overlap. Where Dell has products and strong share in the entry market, EMC does not. The mid-range segment introduces a bit of overlapping product, with the predominant share being EMC's. At the high end of the market, EMC brings the tech and the share where Dell is weakly positioned today.

It actually makes for a pretty compelling portfolio offering that does not leave many gaps in the available markets.


Benchmark bods reckon NetApp storage has the edge over Isilon

Storage Guy

Re: Practically worthless

@Nate, Eric from EMC Isilon here. I do agree with your points on this. We don't suggest or even support VMs running directly on the storage nodes.


It's a CLUSTERPLUCK: Isilon array gobbles 4TB drives

Storage Guy

Here's why: Isilon doesn't have the rebuild limitations associated with 4TB HDDs and RAID

Usual full disclosure - I work for the Isilon Storage Division of EMC. The Reg didn't explicitly cover this, but here is why 4TB drives on Isilon are very newsworthy.

The way Isilon's OneFS file system distributes chunks of files - first across all nodes, then deep across the drives in a cluster - means that on a drive failure, a complete drive rebuild is not required. Only the bits of files that were on the failed device are restored. As an example, assuming the cluster's drives are 70% full, only 70% of the failed drive's content is reconstructed. And the process is accomplished with a huge amount of parallelism: all the nodes participate in reconstructing files to free space distributed throughout the cluster. The more nodes in the cluster, the faster the rebuild. Elegantly scalable data protection.

The Isilon architecture doesn't have the drive rebuild bottlenecks of many RAID-based systems, which must reconstruct the complete 4TB drive by reading from the small number of surviving drives in the RAID set and writing the data to a single hot spare. The hot spare becomes the rebuild bottleneck at ~140MB/sec, and in a common 8+2 RAID-6 group a rebuild requires reading 8 x 4TB from the surviving drives - 32TB read to rebuild 4TB.

Isilon can adopt big fat 4TB spinners, and with its parallel file rebuilds and elegantly scalable data protection it greatly reduces the risk of data loss from multiple drive failures and/or non-recoverable bit errors - something most RAID-based architectures can't do as drive sizes grow.
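The rebuild arithmetic above is easy to sketch as a back-of-envelope calculation. This is a minimal sketch, not vendor-measured data: the 10-node cluster size and the assumption that each node contributes the same ~140MB/sec write rate as a single hot spare are illustrative figures of my own, not Isilon specs.

```python
# Back-of-envelope comparison of the two rebuild models described above.
# All parameters are illustrative assumptions, not vendor-measured numbers.

TB = 10**12  # terabyte in bytes (decimal, as drive vendors count)

def raid_rebuild_hours(drive_tb=4, spare_write_mb_s=140):
    """Classic RAID rebuild: the entire failed drive is reconstructed
    onto a single hot spare, so the spare's write rate is the floor."""
    seconds = (drive_tb * TB) / (spare_write_mb_s * 10**6)
    return seconds / 3600

def distributed_rebuild_hours(drive_tb=4, fill_fraction=0.70,
                              nodes=10, per_node_mb_s=140):
    """Distributed model: only the used fraction of the failed drive
    is re-protected, and every node writes a share in parallel."""
    data_bytes = drive_tb * fill_fraction * TB
    seconds = data_bytes / (nodes * per_node_mb_s * 10**6)
    return seconds / 3600

print(f"RAID hot-spare rebuild of a 4TB drive: ~{raid_rebuild_hours():.1f} h")
print(f"Distributed rebuild (10 nodes, 70% full): ~{distributed_rebuild_hours():.1f} h")
```

Under these assumptions the single hot spare pins the RAID rebuild at roughly 8 hours regardless of how full the drive was, while the distributed model finishes in well under an hour and gets faster as nodes are added.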


DEC founder Ken Olsen is dead

Storage Guy

Was it Videotext?

I seem to recall being able to bring up all kinds of documentation - manuals, policies, docs of any kind really - across DEC's internal network. Much like we use the web today.



Biting the hand that feeds IT © 1998–2017