But NOT Scale-Out! ;)
It's important to realize this is 96 RAID groups, 24 file systems and 24 separate controllers, which means you can't actually do real quotas, replication, or snapshots. Ironically, not at all enterprise and much more appropriate for HPC.
It's also worth pointing out that you can't scale a single workload without painstaking alignment of data across the various controllers.
A solid SPEC submission, no doubt - but not a scale-out result.
Avere does not front Isilon
Glad to hear Avere is making some real progress with their scale-out caching appliance. We at Isilon love the approach!
To be clear, Sony is not fronting Isilon with their equipment. I spoke with Sony, and they are only fronting NetApp filers across the WAN.
Just want to set the record straight. ;)
Isilon meta-data LIVES on SSD (not cached)
I think you're missing the problem we're solving. Isilon already has a fully coherent, expandable, global cache that works across our nodes. It uses DRAM to cache both data and meta-data, and it is far faster, with lower latency, than flash. The beautiful thing is that every additional storage node adds to the global cache, which is shared.
The problem we are solving is UNCACHED meta-data (and file) access. OneFS meta-data isn't cached on the SSDs; it resides there permanently. This makes namespace operations lightning fast and speeds up many other file-system operations as well.
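To illustrate the distinction I'm drawing, here's a hypothetical sketch (not OneFS code, just toy classes I made up): metadata *cached* on flash can be evicted, so a lookup may miss and fall through to spinning disk, while metadata *resident* on flash is always a flash-speed read.

```python
class FlashCache:
    """Meta-data *cached* on flash: entries can be evicted, so a lookup
    may miss and fall back to a slow spinning-disk read."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = {}

    def lookup(self, key, disk):
        if key in self.entries:
            return self.entries[key]            # fast path: flash hit
        value = disk[key]                       # slow path: disk seek
        if len(self.entries) >= self.capacity:
            self.entries.pop(next(iter(self.entries)))  # evict something
        self.entries[key] = value
        return value


class FlashTier:
    """Meta-data *resident* on flash: it lives there permanently, so
    every lookup is flash-speed -- no misses, no warm-up after reboot."""
    def __init__(self, entries):
        self.entries = entries

    def lookup(self, key):
        return self.entries[key]                # always flash-speed
```

The point of the toy: a cache's worst case is the backing store's latency, while a dedicated tier's worst case is the tier itself.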
If you'd like to read more, you can check out my blog on the subject:
Majority of Isilon's 1200+ customers have over 10 nodes...
This is clearly FUD. Isilon has over 1200 customers today, with a solid majority of them having more than 10 nodes in a single cluster. We have customers that have multiple PBs in a single cluster/single file system, that have scaled transparently from TBs. We have customers with over 100 nodes in a single cluster/single file system.
There is no need for a consistency check in OneFS. It is a journaled file system, hardware-assisted by battery-backed NVRAM in each node, so it is always consistent by design. We have supported multiple drive densities and multiple generations of nodes in the same cluster for over 5 years.
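For anyone unfamiliar with why journaling removes the need for an fsck-style check, here's a hypothetical sketch (not OneFS internals, just the general write-ahead idea): every update is logged to stable journal storage, standing in for battery-backed NVRAM, before it touches on-disk structures, so recovery after a crash is just replaying complete journal entries rather than scanning the whole disk.

```python
class JournaledFS:
    """Toy write-ahead journaling: log intent first, apply second."""
    def __init__(self):
        self.journal = []   # stands in for battery-backed NVRAM
        self.disk = {}      # stands in for on-disk metadata structures

    def write(self, key, value, crash_before_apply=False):
        self.journal.append((key, value))   # 1. log intent to the journal
        if crash_before_apply:
            return                          # simulated power loss: still safe
        self.disk[key] = value              # 2. apply the update to disk
        self.journal.pop()                  # 3. retire the journal entry

    def recover(self):
        """On boot, replay the journal instead of checking the disk."""
        while self.journal:
            key, value = self.journal.pop(0)
            self.disk[key] = value
```

A crash between step 1 and step 2 loses nothing: replay finishes the logged update, and the on-disk state is consistent again without any full-disk consistency check.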
I would love to fill you in on more details about our architecture, but what you've been sold is some FUD.