* Posts by Isilon_Nick

4 posts • joined 11 Feb 2010

NetApp punches Isilon right in the scaled-out clusters

Thumb Down

But NOT Scale-Out! ;)

It's important to realize this is 96 RAID groups, 24 file systems and 24 separate controllers, which means you can't actually do real cluster-wide quotas, replication, or snapshots. Ironically, it's not at all enterprise and is much more appropriate for HPC.

It's also worth pointing out that you can't scale a single workload without painstakingly aligning data across the various controllers.

A solid SPEC submission, no doubt - but not a scale-out result.



Sony Pictures virtualises filers


Avere not fronting Isilon


Glad to hear Avere is making some real progress with their scale-out caching appliance. We at Isilon love the approach!

To set the record straight, Sony is not fronting Isilon gear with Avere's appliances. I spoke with Sony, and they are only fronting NetApp filers across the WAN.

Just wanted to clear that up. ;)



Isilon flashes up NAS clusters


Isilon meta-data LIVES on SSD (not cached)

@Max 6,

I think you're missing the problem we're solving. Isilon already has a fully coherent, expandable, global cache that works across our nodes. That uses DRAM for caching both data and meta-data and is far faster with lower latency than flash. The beautiful thing is that every additional storage node adds to the global cache, which is shared.

The problem we are solving is UNCACHED meta-data (and file) access. The meta-data for OneFS isn't cached on the SSDs; it resides permanently on the SSDs. This allows for lightning-fast namespace traversal and speeds up many other file-system operations.
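The two tiers described above can be sketched in a few lines: DRAM serves as a coherent read cache, and the SSD-resident store is the authoritative home of the metadata, so even a cache miss never falls back to spinning disk. This is a hypothetical illustration with made-up class and method names, not the OneFS implementation.

```python
class SsdMetadataStore:
    """Authoritative store: metadata blocks live here permanently (illustrative)."""
    def __init__(self):
        self.blocks = {}  # stand-in for on-SSD metadata blocks

    def write(self, key, value):
        self.blocks[key] = value

    def read(self, key):
        return self.blocks[key]


class DramCache:
    """DRAM read cache in front of the SSD store (illustrative)."""
    def __init__(self, store):
        self.store = store
        self.cache = {}
        self.hits = 0
        self.misses = 0

    def lookup(self, key):
        if key in self.cache:            # cached: fastest path, lowest latency
            self.hits += 1
            return self.cache[key]
        self.misses += 1                 # uncached: still fast, served from SSD
        value = self.store.read(key)     # never a fallback to spinning disk
        self.cache[key] = value
        return value


store = SsdMetadataStore()
store.write("/ifs/home/alice", {"inode": 42})

cache = DramCache(store)
cache.lookup("/ifs/home/alice")   # miss: read from SSD, populate DRAM
cache.lookup("/ifs/home/alice")   # hit: served from DRAM
```

The point the sketch makes is that the worst case (a cache miss) is bounded by SSD latency rather than disk-seek latency, which is what "uncached meta-data access" performance is about.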

If you'd like to read more, you can check out my blog on the subject:




Majority of Isilon 1200+ customers have over 10 nodes...

@Mr Atoz,

This is clearly FUD. Isilon has over 1,200 customers today, and a solid majority of them have more than 10 nodes in a single cluster. We have customers with multiple PBs in a single cluster/single file system that scaled transparently from TBs, and customers with over 100 nodes in a single cluster/single file system.

There is no need for a consistency check in OneFS. It is a journaled file system, hardware-assisted by battery-backed NVRAM in each node, so it is, by design, always consistent. We have supported multiple drive densities and multiple node generations for over 5 years.
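The "always consistent, no fsck needed" argument above rests on write-ahead journaling: an update is recorded in the (battery-backed) journal before it touches the main structures, so after a crash you simply replay whatever intents remain. A minimal sketch, with invented names and no resemblance to the actual OneFS code:

```python
class JournaledFS:
    """Toy write-ahead journal: consistency by replay, never by a full check."""
    def __init__(self):
        self.nvram_journal = []   # stand-in for battery-backed NVRAM
        self.data = {}            # stand-in for the main on-disk structures

    def write(self, key, value):
        self.nvram_journal.append((key, value))  # 1. journal the intent first
        self.data[key] = value                   # 2. apply to main structures
        self.nvram_journal.remove((key, value))  # 3. retire the journal entry

    def recover(self):
        # After a crash, replay any intents still in the journal.
        # Either an update was fully applied, or its intent survives in
        # NVRAM and is replayed -- there is no in-between state to scan for.
        for key, value in self.nvram_journal:
            self.data[key] = value
        self.nvram_journal.clear()


fs = JournaledFS()
fs.write("/ifs/data/a", "v1")

# Simulate a crash mid-write: the intent was journaled but never applied.
fs.nvram_journal.append(("/ifs/data/b", "v2"))
fs.recover()
```

Recovery time is proportional to the (small) journal, not to the size of the file system, which is why a multi-PB cluster never needs an hours-long consistency check.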

I would love to fill you in on more details about our architecture, but what you've been sold is some FUD.





Biting the hand that feeds IT © 1998–2017