Oracle and HP's database machine predicated on Voltaire

"Common sense is not so common" is a quotation attributed to Voltaire, the 18th century French philosopher, essayist, writer and wit. So, how sensible were HP and Oracle in basing their database machine around an IO fabric that has struggled to find favour beyond the HPC market? The fabric in question is InfiniBand, and the …

COMMENTS

This topic is closed for new posts.
  1. Dazed and Confused

    Latency not bandwidth

    Isn't the primary reason for selecting InfiniBand its low latency rather than its high bandwidth? A clustered database is highly sensitive to latency for both distributed lock management and caching.
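
    A rough back-of-envelope sketch of that point, with purely illustrative latency and bandwidth figures (assumptions, not measurements or vendor specs): for the small transfers that lock management and block shipping generate, the fixed per-message latency dominates the wire time, so fabric latency matters far more than headline bandwidth.

        # Back-of-envelope only: all figures below are assumed, illustrative values.
        def transfer_time_us(payload_bytes, latency_us, bandwidth_gbps):
            """One-way time for a single message: fixed latency plus wire time."""
            wire_time_us = payload_bytes * 8 / (bandwidth_gbps * 1000)  # Gbps -> bits per microsecond
            return latency_us + wire_time_us

        BLOCK = 8 * 1024  # a typical 8 KB database block shipped between nodes

        for name, lat_us, bw_gbps in [("GbE (UDP)", 60.0, 1.0),
                                      ("10GbE", 40.0, 10.0),
                                      ("InfiniBand DDR", 5.0, 16.0)]:
            t = transfer_time_us(BLOCK, lat_us, bw_gbps)
            print(f"{name:16s} ~{t:6.1f} us per 8 KB block")

    With numbers in that ballpark, doubling the bandwidth changes little, while cutting the per-message latency by an order of magnitude changes the picture entirely.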

  2. Anonymous Coward
    Boffin

    Wonderful infomercial of an article, full of FUD throughout

    1. DCOE is not essential for datacenter convergence. Datacenters can take a leaf out of the ATCA book, where FC and Ethernet are more or less interchangeable across the fabric.

    2. It is all very well to quote per-port switch prices, but they do not include adapter costs. For 1G, and on high-end servers 10G, the adapter cost is zero because it is included on the motherboard. An InfiniBand HCA is not, and it costs nearly double the price of the switch port (see the rough sums after this list).

    3. The reality is that enterprises still do not really grok IP, so Ethernet remains their technology of choice for long-range connectivity. Ethernet has always lagged behind InfiniBand in performance, but that reach is what lets it win on market share: with Ethernet it does not matter whether you are connecting a server to another in the same rack or to one on the other side of London.
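
    To make point 2 concrete, here is a minimal sketch of the per-connection arithmetic; every price below is an assumed placeholder rather than a quote, and the point is simply that the adapter has to be added to the switch port.

        # All prices are assumed placeholders; the shape of the sum is the point.
        def cost_per_connection(switch_port, adapter):
            return switch_port + adapter

        fabrics = {
            # fabric: (switch port cost, adapter cost) -- illustrative only
            "1GbE (NIC on motherboard)":     (100,   0),
            "10GbE (NIC on motherboard)":    (500,   0),
            "InfiniBand DDR (separate HCA)": (300, 600),  # HCA roughly doubles the outlay
        }

        for name, (port, adapter) in fabrics.items():
            print(f"{name:30s} ${cost_per_connection(port, adapter)} per connection")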

  3. Anonymous Coward
    Anonymous Coward

    Latency is the biggest benefit

    It will be interesting to see whether anything real comes out of this.

    As for the benefits of InfiniBand with clustered Oracle, as I see it the main differentiator is not bandwidth but latency. It is rare to hear of even a 1Gbps cluster interconnect being the bottleneck on capacity grounds. The lower latency of InfiniBand, compared with UDP over Ethernet, is however a significant factor in the lower bound on clustered Oracle event latency.
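
    A minimal sketch of how one might put a number on that interconnect latency; the peer address and port are placeholders, and it assumes a UDP echo responder is running on the other node.

        # Minimal UDP round-trip latency probe; address/port are placeholders
        # and a UDP echo service is assumed on the peer node.
        import socket, statistics, time

        PEER = ("192.168.10.2", 9999)   # assumed cluster-interconnect address
        N = 1000
        payload = b"x" * 256            # small message, roughly a lock request

        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.settimeout(1.0)

        samples = []
        for _ in range(N):
            t0 = time.perf_counter()
            sock.sendto(payload, PEER)
            sock.recvfrom(4096)
            samples.append((time.perf_counter() - t0) * 1e6)  # microseconds

        samples.sort()
        print(f"median RTT: {statistics.median(samples):.1f} us, "
              f"p99: {samples[int(N * 0.99)]:.1f} us")

    Running the same probe over the Ethernet interface and over the InfiniBand (e.g. IPoIB) interface gives a like-for-like view of the latency gap being discussed here.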

  4. mvrx
    Thumb Down

    10GbE won't be more expensive for long.

    Although delayed by about a year, I think later generations of the X58 chipset were planned to include 10GbE. InfiniBand keeps losing because it's not mass market. It's only a matter of time before we are blessed with 10GbE on our motherboards.

    I also have hopes that, with IBM's advances in cheap optical technology, we'll eventually see an optical PCIe interconnect at 40Gb or 100Gb that branches out into home networks. I really want to be able to buy a high-end PC and natively interconnect it with a BIOS-level hypervisor, creating a single resource pool. This would also affect GPU arrays.

  5. Anonymous Coward
    Anonymous Coward

    @Adaptor Costs

    Quite a lot of motherboards have InfiniBand 4x DDR on them now as well - I know, I've been looking at them for the past few weeks while selecting hardware for our new application platform. The key advantage of IB is drastically reduced latency compared with Ethernet, by a factor of 10-100 in some cases.

