Sun debuts FlashFire, calls record books

Sun has announced its solid state FlashFire products - an apparently record-breaking 2TB array and a 96GB card. The F5100 array, some details of which were revealed in September, has up to 1.92TB of single-level cell NAND flash, originally thought to be supplied by STEC but now from Marvell, and comes as a 1U rackmount shelf. …

COMMENTS

This topic is closed for new posts.

Don't forget to double the price

As you'll need at least two: one for the data and the other for its mirror. Oh yes, then a chunk of the I/O performance goes towards keeping the two copies in sync, so you'd better halve that figure, too.


@Pete 2

Only if you want to mirror the whole unit. You can partition the same unit into up to 16 domains, and one domain can mirror another.

Anonymous Coward

MTBF ?

I'm a little rusty on MTBFs, but if each module is rated at 2,000,000 hours MTBF, do 80 of them have 1/80th that MTBF, which works out to around two and a half years?

Not bad, but does that mean you need RAID or a checksum then?
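That division can be sanity-checked in a few lines of Python. This is a sketch only: the 80-module count and the 2,000,000-hour per-module rating are taken from the comment, and the series-MTBF rule (array MTBF = module MTBF / N for N identical, independent modules) assumes a constant failure rate.

```python
# Back-of-the-envelope check of the arithmetic above.
# Assumption: 80 identical, independent modules, each rated at
# 2,000,000 hours MTBF; the array's time to first failure is then MTBF/80.
MODULE_MTBF_H = 2_000_000
MODULES = 80
HOURS_PER_YEAR = 24 * 365  # 8,760

array_mtbf_h = MODULE_MTBF_H / MODULES        # 25,000 hours
array_mtbf_years = array_mtbf_h / HOURS_PER_YEAR

print(array_mtbf_h, round(array_mtbf_years, 2))  # 25000.0 2.85
```

So the exact figure is about 25,000 hours, a bit under three years.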


@Victor 2

> one domain can mirror another.

You must work for Microsoft/Danger.

Single point of failure?


re: Don't forget to double the price

A few comments on this simplistic statement:

1. How is this different from other solutions?

2. You can mirror between HBAs, allowing full I/O use on writes.

3. Reads, which are the majority of I/O in most configurations, will not carry the mirror penalty you've claimed.

4. Not everyone needs to mirror their data. Often fast-access storage does not have to be reliable; it's not as if you'll put your entire database on one of these.

A beer, just because I like beer... mmmm..... beer.


RAID protection

I would expect enterprises to run these with RAID protection, but it doesn't need to be RAID-1 (mirroring). A RAID-5 setup would work fine, and I/O latency is so low that the RAID-5 read-modify-write overhead (one front-end write can result in four back-end I/Os) is not going to be an issue; in any case, arrays with NV caches hide much of this anyway. Even running in degraded mode after losing one drive from the RAID-5 set, the performance and latency of these devices is such that I doubt you'd even notice. Also, unlike hard-drive-based RAID-5 sets, rebuilds should be much faster and far less intrusive.

Something like a 7+1 RAID-5 set would reduce the redundancy overhead and still smash the hell out of any hard-disk-based setup in terms of IOPS and latency.
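As a rough sketch of that read-modify-write cost: each small front-end write costs four back-end I/Os (read old data, read old parity, write new data, write new parity), while a read costs one. The back-end IOPS figure and read/write mix below are illustrative assumptions, not Sun's specs, and caching is ignored.

```python
# Rough model of the RAID-5 small-write penalty described above.
# Assumption: each small front-end write costs 4 back-end I/Os,
# each read costs 1; NV-cache effects are ignored.
def raid5_effective_iops(backend_iops, read_fraction):
    write_fraction = 1.0 - read_fraction
    cost_per_frontend_io = read_fraction * 1 + write_fraction * 4
    return backend_iops / cost_per_frontend_io

# Illustrative: 100,000 back-end IOPS at a 70/30 read/write mix.
print(round(raid5_effective_iops(100_000, 0.7)))  # about 52,600 front-end IOPS
```

A read-heavy mix keeps most of the raw IOPS, which is why the penalty matters far less on flash this fast than on spinning disk.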


@MTBF

> around 2 1/2 years

Roughly, yes: it works out at about 25,000 hours. Assuming a constant failure rate, well over half the units would see a failure within that time, and roughly a quarter within the first ten months. To run one of these in a production environment without a fallback that doesn't rely on any of this array's components (and why else would you pay such a large amount of cash for this quantity of storage if not for a front-line production system?) would be madness and/or negligence.


Re: @MTBF (if you're running single instance)

Everything has a mean time before failure, and not many people would deploy without some form of redundancy, but there are many ways to skin a cat.

If you did run with a single FlashFire (perhaps RAID-5'd or mirrored within the same array) but then replicated the content elsewhere, who cares? We have many databases that would massively benefit from something this fast, with millions of records going in and an even worse load when reporting or nightly index rebuilds kick in. Replication at the database level (Data Guard) to a standby, or a dual-fed configuration with a second system taking and processing the same feeds on a different site, would remove the risk you're worried about. I'd still want tape in there somewhere, though.

You're right, though: you'd be mad to run a single instance, but if your internal SLAs allow it you'd have the fastest 4TB restore I've ever seen!

You'd hope for two at a time, carved up between multiple locally attached servers and acting as large chunks of L2ARC or metadata cache for ZFS in the latest Solaris 10 release. Chuck in some back-end SAS-attached JBODs for spindle density (cheaper IOPS than from disk arrays) and you'd get hybrids with large caches, cheap per-terabyte costs and bucketloads of performance. Hitting these purely for DB indexes, or for holding metadata in VxFS MVS filesystems, would be a cracker too.

ZFS heaven!

Anonymous Coward

@Steven Jones - Don't do it... RAID on flash is a no-go

Steven,

The update-in-place nature of RAID-5/6 striping is about the worst kind of poisonous workload for NAND flash.

The RAID write penalty is unacceptable on SSD because it turns every write operation into N+1 write operations, meaning (a) you'll wear out your flash N+1 times faster, and (b) flash write performance is awful when you turn off the DRAM write buffer, which MUST be done for any kind of parity-based RAID.

If you lose a cached write in the RAID-5 or RAID-6 scenario, you've corrupted your data, and you'll probably never know it until you try to rebuild.

Furthermore, flash writes are so slow compared to reads that the on-disk write buffers fill up fast, and array performance goes down the tubes. FYI, this is why IBM/STEC used mirroring in their recent SPC-1 benchmark result.

If you want to test this yourself, build a 4+1 RAID set on Intel X25-E SLC flash and run it for a while against IOmeter with a 50/50 read/write workload. Leave the write cache enabled.

After a couple of minutes the SSDs are performing like cheap SATA disks.

Then write a big zip file out to the array and pull the plug just after the copy completes. Pull a disk, power the array up again and test the zip file... you will find garbage.


Sun has good tech

Everyone agrees on that, even the competitors.

If you use SSDs as a cache in front of a ZFS disk solution, ZFS will correct errors in the SSDs. The SSDs also need to "warm up", meaning the cache has to be populated with data first. If there is a problem in the SSD, ZFS will automatically get the correct data from disk instead and simply overwrite the SSD data.

The SSDs pose no risk to the drives; they will never write wrong data to the discs. I believe you can just pull the plug on the SSDs without any problems; you would just lose a huge performance boost. The SSDs are simply an add-on that boosts performance.


Statistics fail

MTBF means Mean Time Between Failures


that's right, no need to RAID the SSD

That's right: SSDs are not the "Inexpensive Disks" that belong in a "Redundant Array" (which is what RAID stands for).

SSDs from Sun Storage working with ZFS are for data caching, not persistent data storage.

This is a different kind of SSD from what you use in Apple laptops.

The caching tier does ECC checksumming, not RAID parity checksumming.

SSDs eliminate the need for short-stroking and wide-striping HDDs for higher performance, but they do not replace HDDs for data storage.

Dunno why folks are obsessed with the MTBF of SSDs. Nothing lasts forever, and so what if a caching device fails? You still have your data on HDDs, and the broken SSDs are covered by the warranty. (Yeah, don't buy SSDs on the street.)
