Adaptec adds DRAM cache to entry-level RAID

DRAM caching boosts entry-level Adaptec RAID controller performance past software RAID and cache-less host bus adapters. Adaptec, the adapter company that vanished down the Steel Partners plughole, exists as a RAID controller operation and brand inside PMC-Sierra. And now it has announced the "Adaptec by PMC family of Series …

COMMENTS

This topic is closed for new posts.
  1. Anonymous Coward
    Thumb Down

    I call BS

    Sorry, there is no way a simple DRAM cache on a controller will outperform software RAID any more. Software RAID nowadays in decent OSes is deeply integrated with buffer management and I/O operation (re)ordering. As a result, throwing the same amount of RAM at the controller level vs the OS level is guaranteed to result in lower performance.

    Only if the cache is specifically used to fake synchronous write ops, and there is battery backup (with sync to flash for long-term power failures), can you probably get better performance. Even in that case, I would rather let the OS manage it, as leaving it to the controller cache means playing with data integrity at a level that would make most people uncomfortable.

    Overall - it's Adaptec all right.

    1. Nigel 11
      Thumb Up

      Seconded

      Hardware RAID. Just say no.

      1. Paul Crawford Silver badge

        Hardware RAID - yes

        Ever tried to dual-boot Windows & Linux on a software RAID?

        It is not always about speed, or basic hardware cost; sometimes it is just the convenience of a card that makes your disks look like one SCSI volume for easy support. The Areca cards I have used preserve disk assignment if you swap cables, and will rebuild after a hot swap without any care from the OS.

        For software RAID I would want ZFS, as much for snapshots and file checksums as anything. That is a real boon to data integrity.

        Oh, and the last word to everyone out there is a reminder: "RAID is not backup".

  2. Jad
    FAIL

    Adaptec Technology ...

    I wonder if they have improved any ....

    http://marc.info/?l=openbsd-misc&m=125783114503531&w=2

    http://marc.info/?l=openbsd-misc&m=126775051500581&w=2

    1. Paul Crawford Silver badge
      Unhappy

      @Adaptec Technology

      Thanks for the warning!

      Must say I used an old IDE Adaptec card several years ago, and it had separate cables per disk, so one HDD fault should not have been able to cause a system failure.

      Guess what? I had a dead disk and the card locked up when booting, so I had to find and remove the faulty HDD before I could boot the PC. Not exactly "high availability".

      You really have to wonder if these companies actually test things, you know, with special HDDs that simulate various faults (I/O time-outs, bad sectors, etc.)?

  3. Nigel 11
    WTF?

    What's that blue thing?

    I have to ask, what is that blue thing around the heatsink in the middle? Does it serve any purpose, or is it just eye-candy?

  4. Joerg

    Whoever tells that software RAID solutions are better clearly knows nothing

    Whoever tells that software RAID solutions are better clearly knows nothing, really.

    Either you two above are just two little kids with no knowledge of how hardware and software work and what RAID is, or maybe you are competitors' plants posting that nonsense of yours.

    Hardware RAID will always offer a way better performance than software solutions for too many reasons other than the obvious ones.

    Hardware RAID controllers are for proper, reliable and fast RAID storage solutions; software-based implementations are a cheap marketing scam, good only for those who don't want to spend a few bucks where needed for a proper RAID configuration.

    1. Flocke Kroes Silver badge

      What obvious reasons?

      For that matter, what are the far too many cryptic reasons? Perhaps you could make your point more clearly, without any facts but with a few more ad hominem attacks.

    2. Nigel 11
      Thumb Down

      OK, I'll qualify that

      I'll accept that enterprise-grade hardware RAID may be OK, if you are in that market. This article clearly isn't referring to that market. I wasn't either.

      On a modern CPU and motherboard, the overhead of XORing two buffers is negligible. The SATA ports are independent and move data to RAM by DMA. The memory bandwidth is adequate for RAID-5 operation, even during a rebuild. I've benchmarked it: the hardware RAID was slower than the same controller in JBOD mode with software RAID. I get pretty much the same performance from software RAID-5 as I get from a single disk, on flat-out all-write activity (the worst case). I got *better* performance from a 3Ware controller in JBOD mode with software RAID than from hardware RAID-5 on the same controller. (A rough way to repeat that sort of test is sketched below.)
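
      A crude sequential-throughput check, assuming the array is mounted at /mnt/array (the path is only a placeholder); the direct flags keep the page cache out of the numbers:

        dd if=/dev/zero of=/mnt/array/bench.tmp bs=1M count=4096 oflag=direct
        dd if=/mnt/array/bench.tmp of=/dev/null bs=1M iflag=direct
        rm /mnt/array/bench.tmp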

      But even if there were a major efficiency penalty, which there isn't, I'd still take the software RAID. Ask yourself: in five years' time, when your RAID controller croaks, are you certain you'll be able to get a replacement controller with complete on-disk-format compatibility? Are you absolutely sure that if someone accidentally swaps two disk data cables, the controller won't trash your data? Are you absolutely certain that after the controller has been swapped, the next rebuild operation will work the way it should? Are you sure you'll be OK even if the company that made your RAID controller has gone bust, or has been taken over by a venture capitalist who has sold it on to the highest bidder? And so on. Any answers you get will be supplied by salesmen. Of course it'll be OK. (What was the question? Something technical I don't understand.)

      At the very best you are locked in to one controller vendor, with the only alternative being many hours downtime while you copy several terabytes from one array on an old controller to a new array on a new controller, quite probably across a network because you can't plug both old and new array into the one system.

      I know that with Linux RAID I can shuffle the disks and it won't matter. I know I can take the disks out of one system, plug them into a completely different system, and have the same array up again minutes later. I know I can replace 250GB disks with 1TB disks one at a time, and then resize the array to four times the size. I know I can reshape a 3-disk array into a 5-disk array (roughly as sketched below). I've done all these things. And I know it'll carry on working effectively forever.
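
      For anyone who hasn't done it, the grow/reshape steps with mdadm are roughly as follows (device names are purely illustrative, and you want a backup first):

        # swap each small member for a bigger one, let it resync, repeat, then:
        mdadm /dev/md0 --fail /dev/sdb1 --remove /dev/sdb1
        mdadm /dev/md0 --add /dev/sdd1
        mdadm --grow /dev/md0 --size=max
        # reshape a 3-disk RAID-5 into a 5-disk one
        # (older mdadm versions may also want a --backup-file):
        mdadm /dev/md0 --add /dev/sde1 /dev/sdf1
        mdadm --grow /dev/md0 --raid-devices=5
        # then grow the filesystem on top, e.g. resize2fs /dev/md0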

      There is one critically important thing: make sure your RAID system is connected to a UPS, and that the UPS is correctly configured so that it *will* perform a clean system shutdown when its batteries get low. Battery-backed RAID controller, I hear? Well, let me tell you about such a system, where the motherboard failed, and two disks got swapped when the thing was re-assembled in another box, working against the clock of a discharging battery. What my source thinks happened is that it flushed its RAM cache onto the disks first, only then noticed that the disks were swapped, and then quietly reconfigured its array so the filesystem corruption had plenty of time to spread. But of course it's all secret-source firmware running on secret hardware, so the only certainty is that it barfed and the data got scrambled.

      1. A Non e-mouse Silver badge

        Yes you can..

        Have I replaced a RAID controller with a different model and not worried? Yes.

        Have I moved RAID arrays from one RAID controller to another and lost nothing? Yes.

        Have I taken the disks (or discs) out of a RAID array, shuffled them, and put them back in and got my RAID array back? Yes.

        How? HP (née Compaq).

        The Compaq/HP RAID controllers have been able to do these feats for many years.

      2. Frank Rysanek
        Thumb Up

        agreed, for the most part

        The one advantage of a hardware RAID (if integrated with a proper/compatible chassis) is failure LEDs. This is somewhat difficult to get from a software RAID on a plain HBA.

        At the low end, you need a disk enclosure/backplane with either SGPIO (combine with Adaptec, Areca or just about anything recent with SFF-8087) or with discrete "failure" signals, one per drive (combine with any Areca controller).

        As for firmware continuity, Adaptec AACRAID used to be my favourite for several years, but in recent years Areca has taken their crown. Replace a controller with something you find in your dusty stock of spare parts - well, that's where the fun begins :-)

        Swapping cables around has never been a problem on any HW RAID. One recent experience, with a SAS-based Areca: I built an array in a 24-bay external box (attached to an Areca RAID). Then I powered down the box, added another external JBOD, and scattered the drives between the two enclosures. Powered up, and voila, no problem - the Areca combined all the drives correctly from the two enclosures. Or another example: build a RAID in one 24-bay enclosure, and then plug in another external SAS enclosure at runtime. No problem - enclosure detected, drives enumerated, ready to configure another RAID volume or whatever...

        A quiz question: suppose you buy a new server with two drives in an Adaptec (AACRAID) mirror. Before installing your production OS, you try some recovery exercises, to see how the firmware works. You set up a mirror in the Adaptec firmware, install an OS maybe, then remove a drive from the mirror and insert another one, to see what it takes to rebuild the mirror. The rebuild goes just fine. You go ahead with the OS install and turn that into a production machine. You remember to "erase" the drive that you initially pulled when testing the hot-swap: to be precise, you plug it alone into the Adaptec RAID controller once again, and remove the degraded array stump. Then you plug back your two production drives (the mirror), and put that "cleared" drive aside. After two years, one of the production drives fails - so you fumble in your drawer, produce the spare drive, plug it in, maybe a power cycle... and voila: *the production mirror array is gone*! Explanation: the most recent configuration change, logged on the drives, happened to be the array removal on your "spare" drive...

        Otherwise I agree that for Linux it doesn't make much sense to buy a HW RAID just to mirror two drives to boot from. If you know your way through an install onto a mirror, and maybe how to install grub manually from a live CD (roughly as below), and especially if you don't plan to spend money on a proper hot-swap enclosure (so that failure LEDs are not an issue either), the Linux native SW RAID will prove just as useful as any HW RAID firmware. For Windows users willing to spend some money on hot-swap comfort, I tend to suggest the dual-port ARC-1200 with one of the SAS-series enclosures by Chieftec, i.e. the ones that come with workable failure LEDs (the ARC-1200 is SATA-only).
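
        The "grub from a live CD onto a mirror" bit goes roughly like this (metadata 1.0 keeps the MD superblock at the end of the partition, which bootloaders tend to cope with better; device names are only an example):

          mdadm --create /dev/md0 --level=1 --raid-devices=2 --metadata=1.0 /dev/sda1 /dev/sdb1
          # from the live CD, chroot into the installed system, then put grub
          # on both disks so the box still boots when either one dies:
          grub-install /dev/sda
          grub-install /dev/sdb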

        As for parity-based SW RAID on Linux: if you look in dmesg, the MD RAID module prints a simple benchmark of several alternative parity calculation back-ends (plain CPU ALU, MMX, SSE etc.) and picks the speediest one. And the reported MB/s figure has been well into the GB/s area for ages (since the PIII times). 3 GB/s on a single core is not a problem - corresponding to 100% CPU utilisation for that core. (A quick way to check is below.)
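
        To see it on your own box (assuming the boot messages are still in the kernel ring buffer):

          dmesg | grep -iE 'raid6|xor'
          # look for lines along the lines of "raid6: using algorithm ..." and
          # "xor: using function: ..." with the measured MB/s next to them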

        For most practical purposes, though, you'll be limited by your spindles' (drives') random seek capability. This is about 75 IOPS for basic desktop SATA drives, and is typically the bottleneck with FS-oriented operations. In such a scenario, you won't get anywhere near a HW RAID's CPU throughput limit. And yes, OS-based buffers / disk cache can sometimes help there - provided that you can configure the kernel's VM+iosched to make use of all the RAM (speaking of Linux, that is).

        1. Nigel 11

          Failure LED alternative

          Make sure that the drive's serial number is written on a sticky label on the outside of its enclosure. If you don't have hot-swap enclosures, it's also a good idea to write the serial number on a label towards the back of the drive, where it can be seen after it is installed.

          Then, when the system says something is wrong with (say) /dev/sdd, use "smartctl -i -d ata /dev/sdd" to find its serial number. In case it has bricked itself so badly that smartctl can't get the serial number, get the serial numbers of the /dev/sdX devices that are still working, and proceed by elimination. And don't forget to re-label the enclosure or new drive! (A quick loop for the elimination step is below.)
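
          Something like this prints the serial of every drive the kernel can still talk to (assumes smartmontools is installed; adjust the /dev/sd? glob and any -d option to suit your controller):

            for d in /dev/sd?; do
                echo -n "$d: "
                smartctl -i "$d" | grep -i 'serial number'
            done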

    3. Ammaross Danan
      FAIL

      Compute resources

      Quick thought for you:

      I'd rather have a few per cent of CPU taken from an i5 (or X4, depending on your faith) than offload to what is likely a 400MHz chip (if it's lucky). Most hardware RAID cards bottleneck themselves by not being able to compute parity fast enough and not being able to handle the volume of data transfers. Even high-end RAID cards can't handle the throughput that software RAID can. If you don't believe me, go get two high-end RAID cards, stick 8 SSDs in RAID-0 on them, then software-RAID-0 the two RAID volumes together (as below). It scales quite linearly, which you wouldn't see if the software RAID were the bottleneck.
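
      The striping step is a one-liner with Linux md (the two hardware RAID volumes appear as ordinary block devices; the names here are just an example):

        mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sda /dev/sdb
        # then benchmark /dev/md0 against either card's own RAID-0 on its own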

    4. gherone

      Hardware RAID controllers are (mostly) obsolete.....

      With today's CPUs and prices, software RAID runs circles around hardware RAID controllers. Not to mention that you can have a couple of GB of RAM dedicated to caching in a software RAID, which you cannot get at a reasonable cost in a hardware controller. All the benchmarks I have seen, on the same hardware, show that software RAID is faster than hardware RAID on controllers below $1K.

      If we start talking about data center stuff, the equation may be different - but for small, cheap servers (the market addressed by these cards) hardware RAID 5 does not have a business or engineering case.

      There may be a business case for hardware RAID, for certain applications where software RAID support is not present, but not for a file server using a current OS with good software RAID support.

    5. Davidoff
      FAIL

      Whoever believes that Hardware RAID is always faster than Software RAID knows jack shit

      "Hardware RAID will always offer a way better performance than software solutions for too many reasons other than the obvious ones."

      That's utter BS. More often than not, hardware RAID performs much worse than software-based RAID solutions, especially with entry-level controllers like the ones in this article, which have a simple XOR engine and no I/O processor. Hardware RAID comes from an era where your average CPU would quickly become overwhelmed by the amount of XOR calculation required for the better RAID levels (5, 6). Today, in the days of superfast multicore processors, it's a different story. In most environments the necessary XOR calculations cause negligible load on the main processors for small to medium arrays. However, the reason hardware RAID controllers still exist today is that for bigger arrays the CPU load increases, taking CPU performance away from the main tasks. So in scenarios where demanding applications are used, it makes sense to use a good(!) hardware RAID controller to keep the CPU free from doing RAID calculations. It doesn't mean the hardware RAID is faster, though. It just means the CPU can spend more time on its main tasks.

      BTW: many storage arrays of today are just that - a huge software RAID, running on a PC as the controller. That's "proper reliable and fast RAID storage solutions" for you.

      So next time you should get a clue before labeling others as "little kids with no knowledge of how hardware and software work and what RAID is", or you will again look as stupid as you do now.

  5. jonathan rowe
    Thumb Down

    oooooooo!! 128MB of cache

    Well that's really going to take the heat off a 256GB SSD, isn't it??

  6. Anonymous Coward
    Thumb Down

    How much RAM?

    If it were a GB of battery-backed cache it would have my vote - but unfortunately this seems not to be the case.

  7. Anonymous Coward
    Anonymous Coward

    Way back when.

    My first caching controller card had 4MB of RAM at a speed of 10MB per second. 128MB of RAM would have made a big difference back then.

  8. Luke McCarthy
    Coat

    I could do with a wee dram myself

    Mine's the one with the whisky flask in the pocket.

  9. Anonymous Coward
    WTF?

    cache size?

    What is 128MB of cache nowadays? If a hardware RAID card is almost the same size as a video card, which holds up to 2-4GB, how hard can it be to put 512MB or 1GB on an entry-level hardware RAID controller? I won't buy a hardware RAID card with 128MB of cache for $200; I'd rather invest that money in two disks and make a software RAID on the motherboard. Thanks.
