Speed freak: Kingston HyperX Predator 480GB PCIe SSD

If evidence from late last year's and early this year's big technology shows is anything to go by, then 2015 might just be the year the consumer PCIe SSD market begins stirring – something that's long overdue. Kingston's HyperX Predator 480GB HHHL PCIe SSD is the company's fastest SSD to date. PCIe-based drives, …

  1. Anonymous Coward
    Thumb Down

    Outrageous pricing, no thanks

    I can buy a 1TB SATA SSD (albeit slower) for considerably less.

    1. Cameron Colley

      I can buy a 3TB SATA HDD (albeit slower) for considerably less.

      Depends what I want it for though, doesn't it?

      1. Anonymous Coward
        Anonymous Coward

        I suspect the costs and BOM are not hugely different from a SATA SSD; you're being price gouged and charged a novelty premium.

    2. Captain Scarlet Silver badge
      Happy

      It's still an extra maker, though, to start driving the prices down.

  2. Nigel 11

    240GB version price

    It wasn't in the review? Anyway, I Googled it: around £180, for anyone wondering.

    Yes, you can buy a SATA SSD for less than half the price. But if you need the speed of a PCIe drive, SATA is useless however much cheaper it is! Hopefully competition will rapidly drive PCIe drive prices down to SATA SSD levels.

    I want one, but don't need one!

    1. Gordon 10

      Re: 240GB version price

      Discounting the 'want, don't need' buyers, what consumer-level purchasers actually *need* the speed gain?

      Isn't the market pretty tiny?

    2. Rabbit80

      Re: 240GB version price

      I get similar speeds from my SSD RAID setup.

    3. Anonymous Coward
      Anonymous Coward

      Re: 240GB version price

      It really, really depends on your setup. I have a 240GB PCIe x4 drive and, for comparison, an IBM ServeRAID x8 tied to up to eight SSDs. Considering that I've got two dozen SSDs in sets, fast is normal. I love the board for my normal lab config; it's when it's play time that things get (fun) weird. I absolutely loathe load time as I experiment. BTW, even when I use just three drives, the RAID wins by a nose.

  3. Gotno iShit Wantno iShit

    Can PCIe SSDs be run in RAID 1? I don't trust anything.

    1. dogged

      RAID isn't backup.

      1. Pascal Monett Silver badge

        He didn't say it was.

        1. Nigel 11

          No reason I can think of why you can't run two of them in a Linux software RAID-1 set.
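
          Something like the following ought to do it (a minimal sketch, assuming the two cards show up as ordinary block devices; the /dev/nvme* names below are purely illustrative):

            # Minimal sketch: two PCIe SSDs in a Linux software RAID-1 set with mdadm.
            # Device names are illustrative; use whatever the kernel actually assigns.
            mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/nvme0n1 /dev/nvme1n1
            mkfs.ext4 /dev/md0            # any filesystem will do
            mount /dev/md0 /mnt/fast
            cat /proc/mdstat              # confirm the mirror is active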

          What I wonder about is whether SSDs of any sort can be trusted to report data that's gone bad in the same way that hard drives can. Checksums and ECC codes are utterly fundamental to spinning rust drives, but even with these it's possible for a controller failure to allow undetected data corruption. And RAID-1 normally assumes that if a write succeeded without error, a read may be satisfied with data off either disk in the RAID array without checking that it's (still) the same on the other device.

          For the paranoid, these devices might be fast enough that a new RAID class could be defined and used without crippling loss of performance. Minimum three mirrored members. On read, get the data from all three members and check for equality. If one differs from the other two, it can be assumed to be the bad one. Two members wouldn't let you know which was the good one (assuming that the bad drive wasn't detecting its own failure state).

          Like navigating with one chronometer or three chronometers, but never two.

          1. Ian Michael Gumby
            Boffin

            @Nigel

            I am looking at this as part of a linux box build.

            I'd put the OS and swap on two SATA SSDs, and then use these PCIe drives in either RAID 1 (mirroring), if I have two slots, or RAID 10 to get even better performance.
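
            The RAID 10 variant would just be the four-device flavour of the same mdadm approach sketched above (a sketch only; device names are placeholders):

              # Sketch: RAID 10 (a stripe over mirrors) across four drives; names are placeholders.
              mdadm --create /dev/md1 --level=10 --raid-devices=4 /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1 /dev/nvme3n1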

            It's a test bed of sorts; however, based on price/performance and density... I may opt for Intel's bigger yet more expensive kit.

            Money is less of an issue because it's work kit. (Business expense ;-)

            It's a test platform for working on Spark/Hadoop/etc... and I need a box that is quiet, because I sit next to it. Trust me, back in the '90s I had a rack sitting next to me... not fun.

          2. Anonymous Coward
            Anonymous Coward

            If the SSD doesn't checksum the data (and I don't know if it does), then ZFS would seem to be a good option. I have not heard of SSDs failing by returning bad data; I *have* seen lots of them fail completely, though.

            SSDs do seem to return the normal set of SMART stats, including reallocated sector count, so one presumes they can detect when a sector is bad.
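
            For what it's worth, a checksummed mirror in ZFS is only a couple of commands (a sketch; the pool name and device paths are made up):

              # Sketch: two-way ZFS mirror. Every block is checksummed, and a scrub
              # re-reads the lot and repairs any block that fails its checksum.
              zpool create fastpool mirror /dev/nvme0n1 /dev/nvme1n1
              zpool scrub fastpool
              zpool status fastpool    # shows per-device checksum error counts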

            1. Anonymous Coward
              Anonymous Coward

              On the Windows Server front you can set it up with Storage Spaces and ReFS to mirror at least two copies. Toss in a few spinners and it'll also do tiering, and with at least one spinner you can force at least one copy onto the disks as well as the PCIe. With ReFS checksumming all of that to prevent bit rot.

              For Linux, there's ZFS, at least until Linux provides an equivalent or better.

          3. Suricou Raven

            You can provide reliability at a higher level. The btrfs filesystem does almost exactly what you describe: everything it stores, it stores with a checksum. If data is corrupted the checksum will not match and the error will be detected. If you've set it to provide redundancy too, it'll have another copy it can use for recovery.
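
            A sketch of what that looks like in practice (device names are illustrative):

              # Sketch: btrfs with data and metadata mirrored across two devices;
              # every block carries a checksum, and a scrub repairs from the good copy.
              mkfs.btrfs -d raid1 -m raid1 /dev/nvme0n1 /dev/nvme1n1
              mount /dev/nvme0n1 /mnt/data
              btrfs scrub start /mnt/data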

  4. arnieL

    Give me a PCIe card

    With 4 slots.

    1. Anonymous Coward
      Anonymous Coward

      Re: Give me a PCIe card

      It's "only" SATA so you'll be limited to 600MB/s per port—and given it's £27.58, probably rather less than 2.4GB/s across all ports—but here you go, a low-profile PCIe card that takes four mSATA sticks:

      http://www.scan.co.uk/products/lycom-pe-125-ahci-6gbps-raid-4x-msata-low-profile-pcie-20-host-adapter

      1. arnieL

        Re: Give me a PCIe card

        Thanks, Mr Coward. I've seen such devices before; as you say, only SATA, so not ideal. I'll have to be content with dreams of an Intel 750 for the moment.

        1. Suricou Raven

          Re: Give me a PCIe card

          Try looking at the Samsung SM951. It's not quite up to the same performance as the Intel 750, as it's an AHCI device (with promises of an NVMe version in the pipeline), but it's still bloody fast. 512GB capacity, and a whole lot cheaper than the HyperX or the 750.

          If you wait a few months you might be able to get the promised NVMe version of the SM951.

  5. Boothy

    Half the price per GB and I'll probably get one.

    Still a bit pricey for use at home, at least for me (unless you're a moneybags, of course).

    The 480GB version works out at around 79p per GB.

    Same-size SATA SSDs are in the ~30p per GB range (although of course limited in speed to around 550MB/s).

    So currently, these are 2-3 times the price per GB, to give you 2-3 times the speed of a SATA SSD.

    As a comparison, SATA SSDs were at around the same 79p per GB in late 2011/early 2012. So hopefully these will drop to 25p per GB (or less) by 2018.

    Get the price down to around 55p per GB (so mid to late 2016?), and I might be tempted.

  6. Richard Lloyd

    Shifting the drive mix...

    Still a bit too expensive, but it might start replacing the SATA 3 SSD boot drive + large HDD for media combo that's currently the sweet spot. Sadly, we're still years away from dropping HDDs from that combo - once 1TB+ SSDs drop their price enough, they'll become the new "media drives" with something like this PCIe SSD for booting/apps.

  7. The little voice inside my head

    Booting from PCIe, quality

    Cool tech. Storage speed was a huge bottleneck; I remember the days I had to wait close to a minute and a half for XP to boot, and now it takes like 10 seconds... from SSD. Now, what bugs me is the way we started accepting a certain degree of imperfection and failure... HDDs used to have longer warranties, and I think they were better built in the past; now they are like disposable cameras. Hopefully these devices are manufactured to a higher quality standard.

  8. A. H. O. Thabeth
    Holmes

    Is this the same Kingston that ships...

    ...one device to reviewers and a cheaper, slower device to its customers?

  9. Martin Maisey

    Using with FreeNAS / MicroServer G8

    I got the 240GB version of this recently to put in a new HP MicroServer G1610T / FreeNAS build. I use it for L2ARC read cache, for which it is absolutely outrageously fast for both large sequential read and small random IOPS type workloads. It's paired with a single Crucial 80GB SSD (supercapacitor-backed) for ZIL write log, connected to the optical SATA port - SATA-II, but not much bandwidth is required for ZIL logging as by definition the writes are small. Volume storage is 4x2TB WD RED SATA. 16GB of RAM completes the picture, of which 8GB is used for running FreeNAS and for ARC read cache. FreeNAS boots off MicroSD.
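
    For anyone wanting to do the same by hand, the underlying ZFS commands amount to something like this (a sketch only; FreeNAS normally does it through the GUI, and the pool name and device paths here are placeholders):

      # Sketch: add the PCIe SSD as L2ARC read cache and the supercap SSD as the SLOG/ZIL.
      # 'tank' and the /dev/ada* paths stand in for the real pool and devices.
      zpool add tank cache /dev/ada4
      zpool add tank log /dev/ada5
      zpool status tank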

    This provides a bit of a beast of a SAN for my network (client access via CIFS/Plex, Proxmox VM/container storage over NFS/iSCSI - all via 2 aggregated 1GbE NICs) in a *very* small physical package.

    The expansion potential of the MicroServer is pretty limited (4 x SATA 3 via SAS HBA, 1 x SATA 2, 1 x PCIe) and this feels like a reasonably balanced configuration for converged end-user storage and VM NAS pool use cases. This pretty much describes my home network - file, media and print serving have to just work otherwise my wife gets upset, but I also want to be able to play with stuff ;-)

    As an added bonus, I can run a few VMs directly in the remaining 8GB on the box using the FreeNAS VirtualBox plugin. These VMs then get very fast access to storage over the local PCI and SATA buses, rather than having to access it over the network. There are a few temporary 'lab' style workloads (for things like Cassandra etc.) that might benefit from that, though they couldn't require much CPU given the relatively weedy Celeron processor provided as standard with the MicroServer. There are options for upgrading the CPU - e.g. to the Xeon E3-1265Lv2 4 core / 8 thread Ivy Bridge - but I'm not sure I can really be bothered with that...

    The only limitation I've found so far is that there's no real option to put 10GbE NICs in to see how fast the NAS is capable of servicing large block reads from the HyperX-cached working set - I strongly suspect the bottleneck here will be the 1GbE NICs built into the MicroServer.

  10. Manolo
    Happy

    Smaller and cheaper

    I'd like to see smaller and cheaper drives like that. I use my SSD just for my root partition, with /home on spinning rust. A 32 GB SSD would be ample, yet when I built my PC a few years ago 64 GB was already the smallest I could get. Why would I need a superfast 480GB SSD if all it does is hold documents and media? I wish I could buy a 32 GB SSD with those speeds for around €40.

    1. arnieL

      Re: Smaller and cheaper

      64GB for £30 good enough?

      http://www.scan.co.uk/products/64gb-sandisk-pulse-25-ssd-sata-iii-6gb-s-mlc-flash-read-490mb-s-write-240mb-s-7200-iops-1800-iops-ma

      1. Manolo

        Re: Smaller and cheaper

        "64Gb for £30 good enough?"

        That's a good price, but it's like the one I already have: it's SATA, not PCI-E. I would like a faster PCI-E card, but not in the currently available sizes, because they are just too expensive.

    2. Boothy

      Re: Smaller and cheaper

      €40 is around £28, and the starting price for SSDs is around £25 for 30GB, so that's within your budget. Although I suspect a lot of these are probably old stock.

      Although if you want better value for money (i.e. £ per GB), £50 will get you a faster 120GB drive.

      1. Manolo

        Re: Smaller and cheaper

        "£50 will get you a faster 120GB drive.'"

        But no PCI-E, and that was my point.
