Isilon, which makes scale-out NAS clusters up to 10PB in size, is turbo-charging them by adding NAND flash to speed metadata operations. The company's S-Series and X-Series product lines will have STEC flash drives added to hold file metadata, with user data files stored on hard disk drives as before. This will enable Isilon to …
So, can the SSD nodes be mixed with non-SSD nodes in a cluster? I'm guessing not, since they use software RAID, and software RAID doesn't do well when mixing drive types.
BTW, clusters up to 10PB in size? Please... word has it they have severe issues going above 10 nodes in a cluster. They make such a big deal out of having one file system, but why would you want one FS that's multiple PB in size? With any kind of corruption, guess what? You lose it all, and then you have to rebuild or restore. I wonder how long a consistency check would take to run on 10PB?
The majority of Isilon's 1200+ customers have over 10 nodes...
This is clearly FUD. Isilon has over 1200 customers today, with a solid majority of them having more than 10 nodes in a single cluster. We have customers that have multiple PBs in a single cluster/single file system, that have scaled transparently from TBs. We have customers with over 100 nodes in a single cluster/single file system.
There is no need for a consistency check in OneFS. It is a journaled file system, hardware-assisted by battery-backed NVRAM in each node, so it is, by design, always consistent. We have supported multiple drive densities within nodes, and multiple generations of nodes, for over 5 years.
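To illustrate the general idea behind "journaled, therefore always consistent" (this is a generic write-ahead-journal sketch, not OneFS's actual implementation or on-disk format):

```python
# Toy write-ahead journal: intent is recorded before the main store is
# touched, so after a crash, recovery just replays committed entries
# and discards uncommitted ones -- no full consistency check needed.
# Generic illustration only, not Isilon's design.

class JournaledStore:
    def __init__(self):
        self.data = {}       # the "on-disk" state
        self.journal = []    # NVRAM-like journal of pending updates

    def write(self, updates, crash_before_commit=False):
        # 1. Record intent in the journal first.
        entry = {"committed": False, "updates": dict(updates)}
        self.journal.append(entry)
        if crash_before_commit:
            return  # simulated crash: entry never committed
        # 2. Mark committed, then apply to the main store.
        entry["committed"] = True
        self.data.update(updates)

    def recover(self):
        # Replay committed entries; drop uncommitted ones.
        for entry in self.journal:
            if entry["committed"]:
                self.data.update(entry["updates"])
        self.journal.clear()


store = JournaledStore()
store.write({"/a": 1})
store.write({"/b": 2}, crash_before_commit=True)  # crash mid-write
store.recover()
print(store.data)  # only the committed write survives: {'/a': 1}
```

The key property is that the main store is never left half-updated: either the journal entry committed (and replay finishes the update) or it didn't (and the update is discarded wholesale).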
I would love to fill you in on more details about our architecture, but what you've been sold is some FUD.
There is a big difference between supporting multiple drive types and using them effectively. How does your software RAID stripe across disparate drive types? How do you handle drive upgrades? That must be a slow, painful process.
Of course your file system is designed to be consistent; most file systems are. But the point is that the larger you scale, the more likely it becomes that some sort of trauma will occur that affects the entire file system. I much prefer the approach of multiple file systems bound into a common namespace. That way, if bad things happen, they are compartmentalized, and recovery is much easier and faster.
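The "multiple file systems under one namespace" approach the commenter prefers can be sketched as a simple prefix router (the names and structure here are purely illustrative, not any vendor's product):

```python
# Minimal sketch of namespace aggregation: several independent file
# systems (plain dicts here) mounted under one logical tree. If one
# backing FS is lost, only paths routed to it are affected; the rest
# of the namespace stays available.

class Namespace:
    def __init__(self):
        self.mounts = {}  # mount prefix -> backing file system

    def mount(self, prefix, fs):
        self.mounts[prefix] = fs

    def resolve(self, path):
        # Longest-prefix match picks the backing file system.
        best = max((p for p in self.mounts if path.startswith(p)),
                   key=len, default=None)
        if best is None:
            raise FileNotFoundError(path)
        return self.mounts[best], path[len(best):]

    def read(self, path):
        fs, rel = self.resolve(path)
        return fs[rel]


ns = Namespace()
ns.mount("/projects/", {"a.txt": "alpha"})
ns.mount("/archive/", {"old.txt": "omega"})
print(ns.read("/projects/a.txt"))  # alpha
```

The compartmentalization argument falls out of the routing: corruption in the `/archive/` backing store leaves every `/projects/` path readable, and only the damaged file system needs to be checked or restored.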
Paris, because it looks better than it really is.
PAM is configurable.
NetApp's PAM can be configured to cache metadata only, normal data blocks, or low-priority blocks. Sounds like it's a bit more versatile than the Isilon version...
Isilon meta-data LIVES on SSD (not cached)
I think you're missing the problem we're solving. Isilon already has a fully coherent, expandable, global cache that spans our nodes. It uses DRAM to cache both data and metadata, and it is far faster, with lower latency, than flash. The beautiful thing is that every additional storage node adds to the shared global cache.
The problem we are solving is UNCACHED metadata (and file) access. OneFS metadata isn't cached on the SSDs; it resides there permanently. This allows for lightning-fast namespace operations, as well as speeding up many other file-system operations.
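The distinction being drawn here, permanently placing metadata on flash versus merely caching it there, can be sketched as a simple tiering policy (a generic illustration, not OneFS internals):

```python
# Generic sketch of tiered placement: metadata blocks always live on
# the SSD tier, file data lives on HDD, and DRAM sits in front of
# both as a cache. Illustrative only -- not Isilon's implementation.

class TieredStore:
    def __init__(self):
        self.ssd = {}    # permanent home for metadata
        self.hdd = {}    # permanent home for file data
        self.dram = {}   # volatile cache in front of both tiers

    def put(self, key, value, is_metadata):
        # Placement, not caching: the value LIVES on its tier.
        tier = self.ssd if is_metadata else self.hdd
        tier[key] = value

    def get(self, key):
        if key in self.dram:          # cache hit
            return self.dram[key]
        value = self.ssd.get(key) or self.hdd.get(key)
        self.dram[key] = value        # populate cache on miss
        return value

    def drop_cache(self):
        # Even with a cold cache, metadata is still served from SSD,
        # so an "uncached" lookup never touches a spinning disk.
        self.dram.clear()


store = TieredStore()
store.put("inode:42", {"size": 1024}, is_metadata=True)
store.put("block:42.0", b"file contents", is_metadata=False)
store.drop_cache()
print(store.get("inode:42"))  # served from SSD even when uncached
```

A cache-based design (like PAM, as described above) would instead keep the permanent copy on HDD and only hold a temporary copy in flash; the difference shows up exactly on the cold, uncached path this comment is about.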
If you'd like to read more, you can check out my blog on the subject: