Oracle muscles way into seat atop the benchmark with hefty ZFS filer

Oracle has announced new cache-heavy ZFS appliances and has seized the SPC-2 benchmark top spot with one of them. Announced in early September, the third-generation ZFS appliance has most I/O served from DRAM cache, up to 2TB of it, and that makes these boxes stream data like a dragster sucking nitrous oxide. As before the …

COMMENTS

This topic is closed for new posts.
  1. Anonymous Coward
    Anonymous Coward

    Is it me ....

    With 2TB of DRAM available, it seems like ZFS is becoming more of an "in-memory" file system and not so much about the back-end storage.

    Or did Jim Fowler score an exceptionally good deal on DRAM bits and decide to load everything up with DRAM?

  2. billse10

    "Also the street price/performance numbers will be lower than list price/performance, depending on how good you are at getting a discount from your supplier"

    when was the last time anyone paid Oracle list price for anything ?

  3. Anonymous Coward
    Anonymous Coward

    Their older stuff was the same way. Ours "only" have a TB of DRAM, 2TB of flash cache and 160 disks, and they scream. There is nothing in the new design I can see that shows how these are so much better than the 7420, so I am thinking a lot of it must be software based, or maybe it's because they moved to faster 12Gb SAS cards or something. I see they did add 16Gb HBAs while ours are 8Gb. Processors look the same though.

    Depending on your data set though, 2TB of RAM isn't enough to run everything there. Ours is over 12TB.

  4. Anonymous Coward
    Anonymous Coward

    Headlines are but part of the story

    It's all well and good having massive I/O based on caching, etc. The real issues are: Will it be stable when administered? Will Oracle actually provide bug fixes for it?

    Sadly, our experience with the previous generation of Oracle storage, which would fail if you so much as looked at changing any settings, and their utter ineptitude at fixing software bugs that were reported months or even years ago, means we will NEVER consider them as a potential supplier again!

    1. StorageCamel

      Re: Headlines are but part of the story

      Oracle uses more than 200 PB of the ZFS kit (now ZS3) in its own data centers, so they're putting their money where their mouth is.

      These arrays are used to provide for:

      41,650 concurrent application users

      15.4 million database transactions per hour

      8 disparate enterprise applications

      4 storage protocols

      and last but not least, 3 weekly full backups to StorageTek tape

      Zero downtime environment

      It proves to be very stable for many.

      (Author of this post appears to work for Oracle - Mod.)

      1. Diskcrash

        Re: Headlines are but part of the story

        Not to be too sarcastic here, but HP uses HP, NetApp uses NetApp, IBM uses IBM, EMC uses EMC, Dell uses Dell, etc. The only story would be if they didn't use their own kit. Such a claim still pretty much ignores the economics and operational factors that a customer with a choice of what to use would want to consider.

    2. Dejijones

      Re: Headlines are but part of the story

      I have been using the previous-generation 7420 for almost 3 years now and it has been very reliable: no problem with stability at all, and patch updates have been regular as well. I don't always agree with Oracle's way of doing things, pricing for example, but the ZFS SA is definitely a good product.

      1. Anonymous Coward
        Anonymous Coward

        Re: @Dejijones

        Have you used the dual-head failover option? We do, and it sucks donkey balls. In practically *every* failure case we have had, which is normally the 'akd' management process and averages around one per month, the damn thing fails to perform a failover.

        Have you ever attempted to use the option to save and restore the machine's settings? We did, and we had to have it factory reset to get the thing working again.

        Do you use SMTP to monitor it? We do, and when there is a fault the akd process blocks (it appears to be single threaded), so you get no information about the fault. Both the command line and web interfaces also hang for minutes at a time.

        The list goes on, and on... :(

