Oracle touts Sparc SuperCluster prowess

Oracle has said that sales of its Sparc T series of servers are growing in the "double digits" in its most recent quarter, and that it expects this to continue through the remainder of its fiscal year. And while Oracle has not been precise about what is selling and what isn't selling, what is clear is that Oracle wants to peddle …

COMMENTS



        1. Kebabbert

          Re: It's hilarious

          @Jesper Frimann,

          While I respect your knowledge, I do not really appreciate your very strong bias. You complain about something, and in the next post you do exactly the same thing. How nice is that? You answered "Phil 4" like this:

          "...So you now refute to name calling.. That is nice and mature, but kind of shows your true colours..."

          Let me ask you, how many times have you done the same to me? Some would call it hypocrisy when you complain about "Phil 4" calling you "Jester", while when you call me "kebab brain" or any of the other insults you have spewed at me, everything is fair and square, right?

          I remember a discussion we had. I showed a benchmark where the SPARC Niagara T2 had lower latency than POWER6, and you said something like "I reject your benchmark because POWER6 had better throughput". Later I showed a benchmark where the SPARC Niagara T2 had better throughput, and guess what? You said something like "I reject your benchmark because POWER6 had lower latency"! No matter what benchmark I show, I can never get you to accept it. Damned if you do, damned if you don't.

          Again and again you take the liberty of complaining loudly when others do something, but when you do exactly the same thing, everything is fine. How mature, right? This is a side of you I don't like.

          I can show you a benchmark, and you reject it because of A. Then I show another benchmark, and you reject it because of B. It will continue for eternity: C, D, etc. Wriggling and wriggling. Worse than a lawyer.

          What's worse, one time you said something like "but Kebabbert, what has happened? You show me a benchmark! You are backing up your claims with a benchmark???". What ugly behaviour. People have complained about me posting too many links to benchmarks, white papers and research papers. Just look above here: I posted several benchmarks. Knowing that full well, you pretend to be surprised: "why, are you posting a benchmark??? I have never seen you do that!!!". Really, really ugly.

          I, on the other hand, often accept your benchmarks, and had no problem saying that POWER7 was the fastest CPU a while back. Because it was true; just look at the benchmarks! But getting you to say something similar about SPARC would never work, no matter which benchmark I showed you. "No, that benchmark had blue ink, it must be black ink".

          1. Jesper Frimann
            Devil

            Re: It's hilarious

            @Kebabbert.

            Now there is a huge difference between using a nickname and using a real name. If I had called myself PizzaErnie, things would be a bit different. And if you had used your real name, let's say it was Stefan Ängström, people probably wouldn't have made word plays on Kebabbert.

            As for being biased: to fanatics, the somewhat neutral seem biased.

            As for benchmarks: there are benchmark numbers, and then there are "Earth Shattering World Records" where it turns out there has only ever been one submission.

            Again, I have had no problem calling the T4 what it is: a good, solid product. I've said it several times, but that doesn't mean I have to love the T1, T2 and T3. I have no problem pointing out the RAS features of the M9000, but that doesn't mean I'll participate in the 'dance around the golden bull' that people seem to insist on doing when it comes to the SuperCluster.

            I call 'em as I see them.. and I am sometimes wrong.. which I admit when I am.

            // Jesper

            1. Jesper Frimann
              WTF?

              Re: It's hilarious

              And just an update on the benchmark numbers: if you remember the SPECjEnterprise2010 benchmark, IBM has actually released a new 16-core submission. Funnily enough, as I predicted, POWER7+ is all of a sudden 2.2 times faster than the T4 on a per-core basis. The T4 submission is still faster overall, as it uses 128 cores versus 16 POWER7+ cores.

              Again as I predicted.

              // Jesper

  1. Phil 4
    Thumb Up

    The secret sauce is Oracle's storage to compute Infiniband I/O

    What's interesting in most of these articles questioning Oracle's Engineered Systems performance, like here on the SPARC SuperCluster, is that they never mention the one technology that separates Oracle from every other vendor. While all of Oracle's competitors continue to fumble with (potentially very fast) storage attached to compute via Fibre Channel, limiting the performance potential of the storage, Oracle fully leverages InfiniBand-attached storage. InfiniBand interconnects move 16x more data than GigE and 5x more than 4Gb Fibre Channel. So Oracle is in a unique situation, having developed InfiniBand-attached storage connecting a multi-tier storage architecture. This is where much of the performance difference comes from.
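    For what it's worth, the quoted 16x/5x figures check out as back-of-envelope arithmetic if you assume 4x DDR InfiniBand and account for 8b/10b line-encoding overhead (the IB generation is my assumption; the post doesn't say which one it means):

    ```python
    # Back-of-envelope check of the quoted bandwidth ratios, assuming
    # 4x DDR InfiniBand: 20 Gb/s signalling, 8b/10b encoding -> 16 Gb/s of data.
    def data_rate(signal_gbps, encoding_efficiency=8 / 10):
        """Usable data rate after line-encoding overhead."""
        return signal_gbps * encoding_efficiency

    ib_ddr_4x = data_rate(20.0)   # 16.0 Gb/s usable
    gige = 1.0                    # GigE data rate, Gb/s
    fc_4g = data_rate(4.25)       # 4Gb FC: 4.25 Gb/s signalling -> 3.4 Gb/s

    print(ib_ddr_4x / gige)   # 16.0 -> the "16x more than GigE" figure
    print(ib_ddr_4x / fc_4g)  # ~4.7 -> roughly the "5x more than 4Gb FC" figure
    ```

    Under other assumptions (e.g. QDR InfiniBand at 40 Gb/s signalling) the ratios come out higher, so treat the numbers as generation-dependent.
    
    
    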

    1. Anonymous Coward
      Anonymous Coward

      Re: The secret sauce is Oracle's storage to compute Infiniband I/O

      "Infiniband interconnects move 16x more data than GigE and 5x more than 4Gb Fibrechannel"

      Didn't you just an hour ago try to refute the claim that Oracle compared new with old? What irony.

      1. Phil 4
        Thumb Up

        Re: The secret sauce is Oracle's storage to compute Infiniband I/O

        HUH? That's the whole point! Regardless of whether you compare Oracle's Engineered Systems to the current latest competition from IBM or HP, or with older systems for consolidation, it's clear that Oracle's Engineered Systems, including the SPARC SuperCluster, have a significant competitive advantage in performance, as touted by this and many other reports.

        1. Anonymous Coward
          Anonymous Coward

          Re: The secret sauce is Oracle's storage to compute Infiniband I/O

          "it's clear that Oracle's Engineered Systems, including the SPARC SuperCluster, have a significant competitive advantage in performance, as touted by this and many other reports."

          No, what using Infiniband would make clear is that Oracle wants to make their SuperCluster systems about anything other than Sparc. As mentioned, Infiniband is used by many other vendors. Oracle didn't invent scale-out clustering with Infiniband three years ago.

    2. Jesper Frimann
      Trollface

      Re: The secret sauce is Oracle's storage to compute Infiniband I/O

      Eh?

      You do know that a lot of vendors have been doing Infiniband connected storage and networks for years.

      Way too many of you sunshiners are kind of like the Mac fanatics, typically not people who work with computers (I like Macs for what they are, just to set that straight), who insist that everything was invented by Apple.

      // Jesper

    3. Anonymous Coward
      Anonymous Coward

      Re: The secret sauce is Oracle's storage to compute Infiniband I/O

      Yes, that Oracle Infiniband is so much faster than the identical Infiniband used in XIV and IBM's systems. Honestly, only Oracle would use an open standard as a competitive advantage. Oracle also uses these super secret CPUs called Intel Xeon, pronounced z-on, in their Exa systems... keep it on the down low though.

  2. Phil 4
    Thumb Up

    @ Jesper Re: The secret sauce is Oracle's storage to compute Infiniband I/O

    Yeah, many vendors may have been supporting Infiniband-connected storage for years, but no one besides Oracle has produced an integrated/engineered solution using Infiniband to connect storage to compute nodes (host connectivity) that's even leveraged by the database, i.e. flash cache. Even IBM's latest PureSystems don't support Infiniband-connected storage to compute, and neither does HP's Cloud Matrix or Vblock.

    And FYI, IBM's XIV only supported Infiniband with its Gen 3 models, which were announced last year. Not years ago. And have you checked what a base IBM XIV with just 72TB raw price lists for? Almost $500K! And for compute? How much for a POWER7 system with the same number of cores as a half-rack SPARC SuperCluster (64 cores)? 2 x Power 750s (32 cores each) list at ~$400K. That's close to $1M combined. To put things into perspective, a complete half-rack SPARC SuperCluster that includes 2 x SPARC T4-4s plus 108TB of raw data disk capacity plus over 1TB of smart flash cache lists for $685K, fully integrated.

    1. Matt Bryant Silver badge
      Facepalm

      Re: @ Jesper The secret sauce is Oracle's storage to compute Infiniband I/O

      ".....but no one besides Oracle has produced an integrated/engineered solution using Infiniband to connect storage to compute nodes...." Besides the fact that many customers have had IBM, hp and even Dell build them such stacks as consulting exercises before Exadata was built, I think you'll find Oracle were so short of expertise they had to go get hp to design, integrate and build the first generation of Exadata. It wasn't until after the Sunset that they switched to Sun hardware, by simply swapping out hp components for Sun ones. And the Infiniband switches and adapters used are badged items, not Sun's nor Oracle's own work. But don't worry, I'm sure they put a lot of original thought and development work into the Oracle badges they stuck on the front.

    2. Jesper Frimann
      Thumb Down

      Re: @ Jesper The secret sauce is Oracle's storage to compute Infiniband I/O

      @Phil 4

      Your lack of basic knowledge about non-Oracle products is... well... alarming. Your problem is that you don't rise above the Oracle-selected solution/method to take a top-down look. You don't transcend beyond a single-vendor solution paradigm.

      You define Infiniband-connected storage nodes as being a superior solution for supplying storage to compute nodes.

      Again, the actual storage nodes are 'just' x86 servers with RAM, flash and SAN adapters that connect to disk drawers with SAS-connected drives; the only real difference between this and hundreds of other solutions like it is that it uses Infiniband to connect the storage nodes to the compute nodes. It's actually very traditional and boring by today's standards.

      Now, one of the forefathers of Infiniband was the IBM SP switch, which was used in the IBM Scalable POWERparallel (SP) HPC systems, which were also quite popular in the commercial market.

      If you look at how high-performance, large storage configurations were done on larger SP systems, they had compute nodes connected to storage nodes via the SP switch (which in its late versions was actually more or less Infiniband). Try looking at this paper from 1999: http://www.almaden.ibm.com/cs/spsort.pdf

      So your unique "integrated/engineered solution" is nothing new. Other vendors have been there, done that, have a thousand T-shirts, and have moved on.

      So your whole "Oracle made it first" is crap. Sorry for the wording, but...

      And why in the heavens are you pulling out an XIV to compare to a SuperCluster?

      The whole reason for having a disk system like an XIV or an EVA or a DMX or.... is to consolidate storage from multiple physical hosts.

      Again, if I were to build a solution with the same characteristics and limitations as a SuperCluster, I'd use internal DASD on a POWER server. Why the F word would I use an external disk box?

      The whole idea of having an external disk system like an XIV (which I hate, btw), DMX, EVA or DS is to have the disk external... and have many hosts connecting to the same disk system.

      Again this is marketing bull, trying to take a weakness of a system and making it into a strength.

      If I want to serve fast storage in an integrated/engineered system to a virtual server, why not simply plug SSDs into a PCIe card and then use the hypervisor to share them out? Simpler, faster, more reliable, and much, much cheaper.

      Again, you are showing up with a knife to a gunfight, insisting that everybody else not use their AK47s, M16s or Glocks because all you have is a knife.

      // Jesper

      1. Matt Bryant Silver badge
        Happy

        Re: Re: @ Jesper The secret sauce is Oracle's storage to compute Infiniband I/O

        From the Wikipedia entry on Infiniband:

        ".....InfiniBand originated from the 1999 merger of two competing designs: Future I/O, developed by Compaq, IBM, and Hewlett-Packard; Next Generation I/O (ngio), developed by Intel, Microsoft, and Sun. From the Compaq side, the roots of the technology derived from Tandem's ServerNet....."

        Nope, can't see Oracle anywhere on that list! IBM and hp had a fun habit of coming up with competing and proprietary high-speed interconnects before they decided to play nice on Infiniband, FC and GbE. I still remember the "joys" of RACs built with hp's Hyperfabric, and even hp's 100VG LAN "standard" before that (cue fade into sepia)....

        1. Anonymous Coward
          Anonymous Coward

          Re: @ Jesper The secret sauce is Oracle's storage to compute Infiniband I/O

          HP coming up with technology? How many decades ago? Too bad Fiorina and Hurd killed real R&D from HP. HP used to be an innovative and interesting company. They haven't been a true R&D house for years. Even if these other companies started investing in Infiniband, many of them have abandoned or reduced their investment in the technology mostly because they didn't see it as a WAN technology.

          Oracle is now using it as an interconnect fabric, which makes tons of sense. Not only that, they've bought a 10% stake in Mellanox, the company driving Infiniband chip and hardware technology, and have also modified their software to really take advantage of the interconnect technology. No matter what you say, no one else is doing that at this scale or having this order of impact on performance across their portfolio stack (not just at a given tier of the IT stack, e.g. servers or storage).

          The point that you're missing is that Oracle is taking standards-based technology, improving upon it and adding value. And you can call them proprietary, but they really do try to adhere to standards. They're not just a volume product sales company like HP, or a services-led sales organization like IBM who would cobble things together on your site. For Oracle, it really is about R&D that will result in aggregate benefits for their customer base and justify their prices. Just look at their latest private and public cloud announcement.

          1. Matt Bryant Silver badge
            FAIL

            Re: Re: @ Jesper The secret sauce is Oracle's storage to compute Infiniband I/O

            "HP coming up with technology?...." Well, apart from all that printer stuff that keeps hp as the number one ink vendor, and earns more profits than the whole Oracle server division, you might want to Yahoogle for memristor.

            ".....How many decades ago?...." The memristor stuff is now. I guess you just don't keep up with IT stuff.

            "....Oracle .... bought a 10% stake in Mellanox, the company driving Infiniband chip and hardware technology...." Intel, the other Infiniband switch maker, might disagree. Oh, and Intel own 100% of their stake, so they are actually influencing and developing, not just watching.

            "..... and also modified their software to really take advantage of the interconnect technology. No matter what you say, no one else is doing that at this scale...." As has been pointed out, Oracle weren't even in the standards party. All the development, including drivers, was done by other parties and inherited by Oracle. The only unique thing about the Oracle idea is the attempt to lock customers into a walled garden of no choice and no open integration. By producing solutions that will work with other vendors' kit, hp, IBM and even Dell are putting in more real Infiniband work.

            "....The point that you're missing is that Oracle is taking standards based technology and improving upon it and adding value...." Nope, all they are doing is adding walls and removing choice to protect their DB revenue stream.

            ".....who would cobble things on your site...." There is a big difference between producing a tailored solution and "cobbling", and those tailor-made solutions will usually perform much better and more efficiently than a one-size-fits-all appliance.

      2. Anonymous Coward
        Anonymous Coward

        Re: @ Jesper The secret sauce is Oracle's storage to compute Infiniband I/O

        You state: "Again the actual storage nodes are 'just' x86 serves with RAM, flash, SAN adapters that connect to disk drawers with SAS connected drives, the only real difference between this and hundreds of other solutions like it is that it uses infiniband to connect the Storage nodes to the It's actually very traditional and boring with todays standards."

        It's quite clever, actually. These "just x86 servers" in aggregate can provide more performance and functionality for an Oracle database workload than traditional FC storage arrays. The real difference is not in the Infiniband, but in the software: the software that runs on it is database-aware. That's the whole point, and you're missing it. It's not about the hardware or the individual tiers; it's about the aggregate providing more value and reducing costs. The fact that others have cobbled together similar integrated solutions through consulting services is the whole point of Exadata and SPARC SuperCluster: why create a one-off, or maybe a handful of customized integrated solutions that vary by customer, when you can create one set of standard configurations that are engineered together from the start? Do you realize how much risk and how many change-management issues you reduce that way? Do you realize how much you can do at the management tier to view things in aggregate? It's about the sum of the whole, not just the individual parts, which incidentally are quite strong enough to hold their own against competitors' components.

        Being able to offload storage-centric queries to accelerate performance while providing redundancy from a software perspective is interesting. The redundancy is quite interesting as it is software-based: the data is mirrored or, alternatively, triple-mirrored. Data is spread across all the storage cells for performance and redundancy, so that even if you lose an entire storage cell, there's no issue. To protect against the failure of an entire Exadata cell, ASM failure groups are defined. Failure groups ensure that mirrored ASM extents are placed on different Exadata cells. You can also hot-swap the drives. ASM automatically stripes the database data across Exadata disks and cells to ensure a balanced I/O load and optimum performance.
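        To make the failure-group idea concrete, here is a toy placement sketch (entirely hypothetical logic, not Oracle's actual ASM code): mirrored copies of each extent are simply forced onto different storage cells, so losing a whole cell can cost at most one copy of any extent.

        ```python
        # Toy sketch of ASM-style failure groups (illustrative only, not Oracle code):
        # each extent is mirrored, and the copies must land on different storage
        # cells so that losing an entire cell loses at most one copy per extent.
        from itertools import cycle

        def place_extents(num_extents, cells, redundancy=2):
            """Round-robin extents across cells, mirrors on distinct cells."""
            placement = {}
            ring = cycle(range(len(cells)))
            for ext in range(num_extents):
                start = next(ring)
                # Take `redundancy` consecutive cells, wrapping around the ring.
                placement[ext] = [cells[(start + k) % len(cells)]
                                  for k in range(redundancy)]
            return placement

        cells = ["cell1", "cell2", "cell3"]
        layout = place_extents(6, cells)
        # Every extent's mirror copies sit on different cells:
        assert all(len(set(copies)) == len(copies) for copies in layout.values())
        ```

        With this placement, killing any single cell still leaves at least one surviving copy of every extent, which is the property the failure-group mechanism described above is there to guarantee.
        
        
        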

        I agree with you that an XIV to SuperCluster comparison is not necessarily apples to apples. But SuperCluster is not a server. It's an engineered system that encompasses servers, storage (Exadata storage cells, 7320 NAS, and optionally external FC), networking (IB and 10 GbE), and management software. So if you were to compare the XIV, you could compare it to the Exadata storage cells and / or the 7320 NAS.

        You stated: "The whole reason for having a disk system like an XIV or an EVA or a DMX or.... is to consolidate storage from multiple physical hosts." That's the whole point of a SPARC SuperCluster. You can consolidate Oracle databases on a database LDom. You can use an ASM grid to consolidate the database storage tiers from many legacy systems. You can consolidate Sybase ASE databases onto another guest LDom using external FC storage, since ASE prefers block devices, and get the benefits you outline. You can consolidate the WebLogic middleware tier onto an Exalogic LDom and get the performance enhancements of Exalogic. If you want third-party middleware or applications like WebSphere or SAP, you can put those on a general-purpose LDom. You can also consolidate further using Solaris Zones. In other words, the architecture of the storage tier does not limit storage or compute consolidation.

        And because this is optimized for the Oracle database, which has about 50% of the database market share, and because databases usually drive a sizable portion of the storage marketplace, you really get a huge benefit.

        BTW, your argument with respect to "serving fast storage in an integrated/engineered system to a virtual server, why not simply plug SDD in a PCI-e card and then use the hypervisor to share it out. Simpler, faster, more reliable, and much much cheaper" is flawed. If I want fast I/O, I'm not going to stick VMware between my compute and my storage unless I have a liking for bottlenecks and unpredictable database I/O performance. And for that matter, if Oracle wanted to do what you're stating, Oracle could have used their F40 PCIe card or their F5100 flash array in their T4 or X3 product lines, their OVM Server hypervisors, and their Oracle Linux and Solaris OSs to do what you're claiming. They probably already did that, tested it out and found issues, which is why they came up with the Exadata and SuperCluster architectures. There are reasons why Oracle chose not to go the way of the HP+Violin solution, or what IBM and HP have been doing until recently with Fusion-io and TMS RamSan, despite the fact that Oracle had their own comparable solutions, e.g. providing performance without compromising on HA.

        There are reasons why a large team of software and hardware engineers who really know their products and their industry really, really well and really, really, deeply chose not to go that route.

        You might raise arguments about unstructured data. In that case, you could use Oracle's Big Data Appliance to perform your map-reduce functions at that tier, and use Oracle's connectors (http://www.oracle.com/technetwork/bdc/big-data-connectors/overview/index.html) to structure and load the data into a SPARC SuperCluster or an Exadata. Then you can use an Exalytics BI engineered system to analyze or slice the data as you please. There is a reason why Oracle bought Endeca (http://www.oracle.com/us/corporate/acquisitions/endeca/general-presentation-517133.pdf).

        Want to use a statistical package like R? Well you can use R directly on that newly loaded data directly on the Oracle database running on an Exadata or SuperCluster.

        In other words, the comparisons that have been made so far are purely hardware centric. You're missing the point. This is about the software and hardware integrations to produce something more valuable and more than just performance, especially when you consider the ecosystem of solutions.

        1. Jesper Frimann
          Devil

          Re: @ Jesper The secret sauce is Oracle's storage to compute Infiniband I/O

          @AC.

          I am not saying that the SuperCluster isn't clever. It is a clever solution, built from the building blocks Oracle had at hand. The key words being 'at hand'.

          And I take your:

          "In other words, the comparisons that have been made so far are purely hardware centric. You're missing the point. This is about the software and hardware integrations to produce something more valuable and more than just performance, especially when you consider the ecosystem of solutions."

          I am again quite aware of what Oracle is doing, and I am quite aware of the value of a proprietary fully integrated solution stack. And you do have a point. But it's not been the focus of the discussion up until now.

          So I see your little shift of focus here as a clever attempt to take the focus away from the fact that Phil 4's arguments, which were all hardware- and cost-centric, aren't right.

          The facts are: the platform isn't cheap in TCO terms. The components inside the SuperCluster aren't of a quality that makes it comparable to an M9000, SD2, high-end POWER or even a mainframe. They're commodity server components.

          I am quite aware that the x86 storage servers are able to do some optimization directed at being a storage server for an Oracle database server. Most of the things that Oracle lists are just... well, storage server functionality. But Smart Scan isn't, and it is clever to have indexes on the storage system that limit the amount of data you return from it. It is clever.

          And again you list a lot of stuff about how you can consolidate things inside the machine, and, and, and. Again, there is nothing there that couldn't have been done better, MUCH better, on an SD2 or a largish POWER box (or a next-generation M9000), with much better utilization, serviceability, reliability and performance.

          Now, with regard to your rant about VMware: who the F word has been talking about VMware? I don't care about VMware. I was talking about POWER servers; you do know that those run PowerVM, right?

          Again, look at the storage benchmark where an old 595 trashes pretty much everything else, including the storage servers used inside a SuperCluster.

          And a solution like that is so.. much simpler and easier to maintain.

          And sorry I was almost laughing when you wrote "Do you realize how much risk and change management issues you reduce that way?"

          This is one of the things I do each day in my job. My job as Technical Design Authority is to drive down risk, price and complexity in an org that spans 10+ countries and has tens of thousands of servers of all different brands and architectures.

          And believe me, the SuperCluster isn't the answer to my problems, even though I get daily mails from Oracle saying that it is, not to mention calls from hard-pressed sales people who will metaphorically get their throats cut if they don't sell an Exadata or SuperCluster. I don't envy those guys.

          Yes, compared to a complex multi-vendor scale-out clustered solution with external storage that runs a pure or almost pure Oracle solution stack, and, and, and... then it's simpler. But that is not what people like me and others here have been arguing. Compared to a big virtualized quality UNIX box, it loses out. Sorry.

          A server where something as simple as changing an adapter becomes a node-down event, and where the main selling point is that it's a single-vendor vertically integrated solution where you can also do other stuff... kind of... but then you have to... and, and... Read what you wrote yourself:

          "You can consolidate Sybase ASE databases on to another guest LDom using external FC storage since ASE prefers block devices and get the benefits you outline. "

          Again that defies the whole point of the SuperCluster solution....

          And then there is the whole integration with existing backup solutions, management software and and and.... oooohh....

          // Jesper

        2. Anonymous Coward
          Anonymous Coward

          Re: @ Jesper The secret sauce is Oracle's storage to compute Infiniband I/O

          I see that the Oracle guys here only focus on hardware cost, but as everyone knows, licensing cost far exceeds hardware cost for these solutions. For every socket you pay for in hardware, you pay 30 to 50 times that amount in software.

          So you have to make the most out of every core, and the best way to do that is to run a Type 1 hypervisor and run your Oracle instances on a shared processor pool to drive up core utilization.
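          To see why that utilization point dominates, here is the arithmetic with entirely hypothetical prices (the "30 to 50 times" ratio is from the post above; per-core licensing is my assumption):

          ```python
          # Rough illustration of the licensing-vs-hardware claim above,
          # using hypothetical prices and assuming per-core licensing.
          hw_cost_per_socket = 10_000   # hypothetical hardware cost per socket
          sw_multiplier = 40            # "30 to 50 times" -> a middle value
          sw_cost_per_socket = hw_cost_per_socket * sw_multiplier

          # Doubling core utilization (say 30% -> 60%) via a shared processor
          # pool roughly halves the licensed cores needed for the same work:
          cores_at_30pct = 32
          cores_at_60pct = cores_at_30pct * 30 // 60

          print(sw_cost_per_socket)  # 400000: software dwarfs the hardware
          print(cores_at_60pct)      # 16: half the licensed cores
          ```

          In other words, with software at tens of times the hardware price, every core of utilization you win back through the hypervisor is worth far more than any saving on the tin itself.
          
          
          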

          So what are the Oracle offerings here? Yup, Oracle VM (Xen) for their x86 boxes and LDoms for SPARC. Exadata looks fine on that PowerPoint slide, but if the quality of Exadata/SuperCluster is on par with Oracle's virtualization technologies, then I just say: "NOT WITH A TEN FOOT POLE".

          1. Jesper Frimann
            Thumb Up

            Re: @ Jesper The secret sauce is Oracle's storage to compute Infiniband I/O

            Jup, I agree.

            I actually helped out on a project last year for a large client which cut their Oracle license usage by 60%. The savings, over the lifetime of the new solution, paid for the whole project, including new hardware, man-time, etc.

            Seeee.... that was clever.

            // Jesper

        3. Matt Bryant Silver badge
          Facepalm

          Re: Re: @ Jesper The secret sauce is Oracle's storage to compute Infiniband I/O

          ".....There are reasons why a large team of software and hardware engineers who really know their products and their industry really, really well and really, really, deeply chose not to go that route....." Yes. The reason was that Larry had thrown a tantrum with hp over his tennis buddy Hurd and didn't have access to hp's engineering resources to build him a next generation of Exadata, especially as they fired the few real engineers Sun had. So all Oracle can do now is tweak it and hope no-one realises it's just a walled garden on an appliance and nothing special at all.

  3. Anonymous Coward
    Anonymous Coward

    Re: The secret sauce is Oracle's storage to compute Infiniband I/O

    This is not accurate. Infiniband is only one part of the secret sauce. It's actually the SPARC SuperCluster's and Exadata's Storage Cell, which is Infiniband enabled, that is the secret sauce. That's not just from using Infiniband from a hardware perspective, but also from a software integration perspective. These Oracle systems use iDB, which is built on the industry standard Reliable Datagram Sockets (RDSv3) protocol and runs over InfiniBand. ZDP (Zero-loss Zero-copy Datagram Protocol), a zero-copy implementation of RDS, is used to eliminate unnecessary copying of blocks. This is an extremely fast low-latency protocol that minimizes the number of data copies required to service I/O operations.

    The key differentiation between using Exadata and using some combination of distributed servers (x86/POWER/Itanium) attached to some storage, i.e. NAS via dNFS or FC via ASM, is that the Exadata/SuperCluster storage layer is actually aware of the database data. You don't just have the storage layer sending all these blocks back without understanding whether the underlying data is relevant to a given query; the storage tier sends back less data, but data that is actually relevant to the query, to the database compute tier.

    To understand this better, you have to look at the capabilities:

    1. Exadata Smart Scan: Row filtering, column filtering and some join processing (among other functions) are performed within the Exadata storage cells. This also includes Encrypted Tablespaces (TSE) and Encrypted Columns (TDE).

    2. Storage Indexing: Helps avoid unnecessary I/O. This feature checks the min and max values of DB columns stored on a given cell and determines whether it even needs to look up the relevant data in those columns.

    There are some other capabilities as well that you can go and look up for yourself.
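    The storage index in point 2 can be sketched as a min/max "zone map" (a deliberate simplification of my own, not Oracle's implementation): the cell keeps only the min and max per storage region, and skips any region whose range cannot contain the predicate value.

    ```python
    # Hypothetical min/max "zone map" sketch, in the spirit of Storage
    # Indexing (point 2 above) -- not Oracle's actual implementation.
    def build_index(regions):
        """regions: list of lists of column values stored on a cell."""
        return [(min(r), max(r)) for r in regions]

    def regions_to_scan(index, predicate_value):
        """Return ids of regions whose [min, max] range could contain the
        value; every other region is skipped without doing any I/O."""
        return [i for i, (lo, hi) in enumerate(index)
                if lo <= predicate_value <= hi]

    regions = [[1, 5, 9], [20, 25, 30], [8, 12, 15]]
    idx = build_index(regions)
    print(regions_to_scan(idx, 12))  # -> [2]: two of three regions skipped
    ```

    The point of the sketch is only that a query touching a narrow value range can avoid reading most of the data on the cell entirely, which is where the I/O saving comes from.
    
    
    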

    Also, the comparisons that Larry was making in his keynote (http://www.youtube.com/watch?v=m4BPuQ0Da6k&feature=plcp) were against EMC VMAX with SSD disk, from a bandwidth perspective, and pSeries + IBM DS storage with SSDs. The key point is that the flash in the SuperCluster and the Exadata is actually an extension of the memory hierarchy, not fast disk like SSDs are. The difference with the latest Storage Cell software is that the flash is not only a read cache; you can also use it as a write cache. So you get a whole lot faster write performance.

    Then you can compress the data. OLTP data, RMAN backups, Data Guard replication traffic, unstructured data (via SecureFiles compression and deduplication) and Data Pump exports can all be compressed using Advanced Compression, a general database feature that typically gets anywhere from 2-4x compression. If you use HCC (Hybrid Columnar Compression) on data warehouse data, you can get 6x-10x compression; on archive data, up to 15x. HCC is exclusive to the Exadata, SPARC SuperCluster, ZFS Storage Appliance and Axiom product lines. These ratios are far better than what you can get with traditional storage arrays from Oracle's storage competitors. Beyond the storage efficiency, what this really means is performance acceleration, because you can fit all this data into RAM and flash cache; and the data gets placed in RAM and flash cache automatically based on usage. DBAs and storage admins don't need to monitor usage patterns and manually shuffle data back and forth as they would with traditional arrays. That's a huge saving in management overhead.
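    As rough arithmetic (using the ratios quoted above, which are vendor claims rather than measurements; the `effective_size` helper is my own illustration), here is what those ratios mean for 100 TB of raw data:

```python
# Back-of-the-envelope arithmetic for the quoted compression ratios.
# The ratios are the figures claimed in the comment, not benchmarks.

def effective_size(raw_tb, ratio):
    """Storage actually consumed after compression at the given ratio."""
    return raw_tb / ratio

# 100 TB of raw data under each quoted scheme:
oltp = effective_size(100, 3)      # Advanced Compression, ~2-4x
dwh = effective_size(100, 10)      # HCC warehouse data, 6x-10x
archive = effective_size(100, 15)  # HCC archive data, ~15x
```

    So 100 TB of warehouse data fits in about 10 TB of physical storage, which is why the claim is that hot data can live in RAM and flash rather than on spinning disk.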

    You really can't do any of the above with traditional distributed servers and storage architectures. I don't know enough about IBM's new PureSystems to say whether they have added these kinds of enhancements to DB2 and the storage subsystem to deliver this level of performance gain. There's definitely nothing in the MySQL, Sybase ASE/IQ or MS SQL Server space that can do this.

    If you really want to understand these things better, take a look at:

    1. http://www.oracle.com/technetwork/database/exadata/exadata-technical-whitepaper-134575.pdf

    2. http://www.oracle.com/technetwork/database/storage/advanced-compression-whitepaper-130502.pdf

    3. http://www.oracle.com/technetwork/middleware/bi-foundation/ehcc-twp-131254.pdf

  4. Jesper Frimann
    Holmes

    Re: The secret sauce is Oracle's storage to compute Infiniband I/O

    Again I am not saying that the storage solution isn't clever.

    I'm not saying that what Oracle have done at the software stack level isn't clever, either.

    BUT.... it's Oracle DB storage 'stuff'. And to shock you a little bit: people do run other things than Oracle products, people do have centralized backup systems, people do have..... Windows, Linux, HP-UX, Solaris, AIX, z/OS........ so all the clever stuff here more or less only applies to a pure Oracle solution stack.

    And whereas Oracle salespeople might have luck convincing some drunk CIO, and the most diehard Oracle fanatics, that this solution is going to save their IT departments...

    ...the rest of us in the real world know better.

    If I think ZFS appliance storage is great, I don't need to buy a SuperCluster; I can just buy a ZFS appliance and hook my Dell Windows server up to it. No problem.

    So the SuperCluster is always going to be a niche system that only "solves" a small part of the IT problems companies face today.

    // Jesper

  5. Anonymous Coward
    Anonymous Coward

    Re: The secret sauce is Oracle's storage to compute Infiniband I/O

    What makes you think these Oracle engineered systems don't integrate with centralized backup systems like TSM, NetBackup, etc. for backup, cataloging, etc?

    And as far as centralized management frameworks like HP OpenView Operations Manager or BMC Remedy trouble-ticketing solutions go, those are already in place and there are ways of integrating with them.

    As far as your choice of operating systems goes, you've just mixed Windows with Linux/UNIX with mainframe. If you want to run mainframe applications, you either run on the mainframe (as proprietary as you can get), you deprecate the application and re-write the functionality on a distributed platform, or you re-host onto something like an Oracle Tuxedo or a Clerity (now Dell) UniKix solution.

    As far as Windows, some Linux and Solaris x86 go: you can cobble together something like, oh I don't know, Oracle's X3-2 servers running OVM x86, Oracle's 72-port 10GbE switch and Oracle's ZFS Storage Appliance. Oh, you want to run VMware? Well, Oracle does certify VMware on its x86 kit. You want a reference architecture for how to do this? I guess you could use http://www.oracle.com/ocom/groups/public/@otn/documents/webcontent/1508069.pdf. But clearly you're not going to get as much value in terms of reducing storage cost by as much as 10x, gaining 10x performance, and having so much performance that you actually need less software licensing.

    Oh, you want to do the same thing on SPARC, given that your estate of existing servers running Solaris 8, 9, 10 and possibly 11 is huge, but you aren't ready to go to an engineered system? Perhaps this would work for you: http://www.oracle.com/technetwork/server-storage/hardware-solutions/o12-043-cloud-sparc-1659149.pdf. But again, you're not going to get the same orders-of-magnitude benefits as with an engineered system.

    If you want to do things on Linux for application/middleware/DB/BI/unstructured data, you have Exalogic/Exadata/Exalytics/Big Data Appliance+Connectors. If you want to modernize your SPARC estate and get orders-of-magnitude performance gains plus storage and licensing savings, SPARC SuperCluster is there. If you aren't ready for engineered systems, Oracle has other solutions for you. It's all about solving problems appropriately. If you want to keep calling out objections and aren't interested, that's fine; you can work with your preferred vendor, but I don't think you will be able to deploy as quickly, or address multi-vendor and performance issues as easily.

    POWER, Itanium and SPARC are clearly direct competitors, so you would choose one over the other. If you chose to go with POWER or Itanium, that's your choice. However, Solaris has clearly had better ISV support over the years, and it's very likely, though there may be some exceptions, that any ISV product you have on POWER you can also get on Solaris or Linux. So you either port or migrate. And given that POWER and SPARC are both big-endian UNIXes, it probably makes sense to go to a SuperCluster.

    I've heard of PowerVM. It's a good solution, though I have heard of some stability issues over the last couple of years with the P795 and its PowerVM, because it stores the hypervisor in memory and DIMM failures can cause a platform-wide outage. IBM is working on it and may already have resolved the issue. However, Solaris Zones and LDoms are strong (and often complementary) virtualization options from Oracle, and you get management software in OEM Ops Center that can manage them end to end and integrate up into centralized management frameworks. (BTW, backup for those environments was a solved problem even back in the Sun days.) Both Zones and LDoms are hard licensing boundaries for Oracle software (not to mention that the SPARC and Intel chips used in Oracle's engineered systems have lower per-core licensing factors, and the same or lower PVU factor for IBM software), as well as for many other ISVs' licensing schemes.

    Oracle has sold a ton of these engineered systems. If you think that many CIOs were just drunk when every single one of them made their decision, without vetting it through their technology, finance, operations, legal, risk and compliance organizations, and without seeing a positive and significant financial, technical and operational return before approving such expenditures... well.....

    But if you want to solve a problem and do so in a cost-effective manner, there are options worth considering from Oracle, especially in this day and age of standardization, simplification and cost reduction. At the end of it all, if you wanted to object to anything, I suppose you could, because nothing is perfect.

    1. Matt Bryant Silver badge
      Pirate

      Re: Re: The secret sauce is Oracle's storage to compute Infiniband I/O

      "....POWER, Itanium and SPARC are clearly direct competitors. So you would choose one over the other. If you chose to go with POWER or Itanium, that's your choice. However, Solaris has clearly had better ISV support over the years...." ROFLMAO! You do realise that since that court case Larry gave himself and then lost, hp's Itanium platforms are the ONLY ones on the market that are guaranteed the full gamut of Oracle software and support for as long as they are available? You think I'm kidding? Try asking your Oracle rep for an open-ended guarantee that Oracle will support all the same products even on their own x64 servers with Oracle's RHEL clone, or with Slowaris on SPARC! They won't, because they can't. So, in terms of Oracle support, hp actually has an industry-unique guarantee, all due to Larry and his big, greedy mouth. LOL!

      1. Anonymous Coward
        Anonymous Coward

        Re: The secret sauce is Oracle's storage to compute Infiniband I/O

        Regardless of any contractual commitments, the net effect of the lawsuit is that Itanium sales have dropped and effectively dead-ended the platform. You should see the number of HP BCS reps that are running out the door, or are just plain unhappy with Itanium sales and pushing HP DL980s instead. I think I saw something in August stating sales were at a five-year low. Regardless of what you say about Solaris and SPARC, Itanium has an even poorer impression in the marketplace and a worse future. It was part of the reason why HP was looking to buy Sun for its hardware anyway. They knew their challenges as far back as 2010. It's only gotten worse.

        1. Matt Bryant Silver badge
          Happy

          Re: Re: The secret sauce is Oracle's storage to compute Infiniband I/O

          ROFLMAO! The Sunshine is strong with this one!

          ".... the net effect of the lawsuit is that Itanium sales have dropped and effectively dead ended the platform...." But you Sunshiners assured us NO-ONE would be buying Itanium, and even with the hardest FUDding possible the sales were still continuing BEFORE Oracle lost the trial. And some of the downturn was no doubt due to the imminent arrival of next-gen Itanium systems using Poulson. Customers now looking at hp Poulson system purchases have the ONLY cast-iron guarantee of Oracle application availability and support available for any server on the market.

          ".....You should see the number of HP BCS reps that are running out the door...." And where are you seeing this from? I doubt it is anywhere near hp, so I'm guessing it was during some dream or fantasy? Strange that the hp BCS salesgrunts would be exiting a company that still has the strongest x64 line sales and very strong storage sales, but at least if some are leaving it's by choice, rather than the bloodbath that happened at Sun.

          "......I think I saw something in August stating it was at a 5 year low....." UNIX has been trending down for years, just go ask those ex-Sun server salesgrunts now working for Dell, IBM, and - yes - hp!

          ".....Itanium has an even poorer impression in the market place...." I doubt if you can even see a fraction of the marketplace round those Sunshine blinkers.

          "....It was part of the reason why HP was looking to buy Sun for its hardware...." WHAT!?!?!? Are you smoking something? Oracle wanted hp to buy the Sun hardware biz back in 2009 and hp declined. Hp was also not interested in buying the whole Sun carcass. That is a matter of public record, as shown in that trial where Larry got his a$$ kicked (http://www.businessinsider.com/oracle-hp-sun-microsystems-hardware-split-2012-6). And IBM wasn't interested in buying Sun even when it meant they could get control of Java. Only a truly delusional and fanatical Sunshiner would want to pretend otherwise.

          /SP&L

    2. Jesper Frimann

      Re: The secret sauce is Oracle's storage to compute Infiniband I/O

      Amazing... we've got us an actual Oracle sales guy here, judging by the speed with which those Oracle links pop up.

      Sure, the platforms integrate with other infrastructure solutions, but that kind of defeats the whole purpose of having a 'datacenter in a can', which is how the SuperCluster and its siblings are sold. The whole pitch, or at least the one we have seen from Oracle, is 'you only need this box'.

      As for LDoms and Zones versus POWERVM.. you've gotta be kidding.

      LDoms is basically partitioning when it comes to memory and processor resources, as people have done with vPars and LPARs on other platforms for years. Something as simple as dynamic reconfiguration came, what, last year with 2.0?

      Comparing it to VMware, Xen, PowerVM or HPVM capability in this area... well, not really appropriate.

      Sure, it can do some I/O virtualization, but that is hardly the same.

      Then sure, there is Zones, but that's operating-system-level virtualization, not really something you want to use when you have virtual machines in different firewall zones, multiple customers on the same machine, or production mixed with test/development. And please spare me the "it's no problem" line; it's people like me and the company I work for that get their b*lls caught in the machine if it doesn't work, not the Oracle sales guy.

      And the stability hint about PowerVM... oh, that elegant little hint must be from page one of the Oracle systems sales manual.

      And as for the low cost of ISV software licenses on SPARC... there is a natural reason for that, and it is the actual throughput of the platform.

      And then you'll use plenty of licenses anyway. Again, if you have a T4-4 and you want to run 20 LDoms (test/production/education, whatever), each using 10 threads and each having an average utilisation of 20%, you'll end up (if they run Oracle software) having to use 13 licenses.

      If, on the other hand, you had been able to run, let's say, PowerVM on the T4-4, you would have been able to do shared-pool overcommitment; let's just say a factor of 3, raising the average utilisation to 60% and hence cutting down your required licenses to... 5. And if you had been using a POWER server, the number of licenses would be even lower.
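      To spell out that arithmetic (assuming 8 threads per core on the T4 and the commonly quoted 0.5 SPARC T4 core licensing factor; this is my own back-of-the-envelope helper, not an official Oracle calculator):

```python
import math

# Back-of-the-envelope Oracle licensing arithmetic under hard partitioning.
# Assumptions (from the comment, not official pricing): a T4 core exposes
# 8 hardware threads, the SPARC T4 core licensing factor is 0.5, and
# licenses round up to whole units.
THREADS_PER_CORE = 8
CORE_FACTOR = 0.5

def licenses_for_threads(threads, overcommit=1.0):
    """Licenses needed for a given number of allocated hardware threads,
    optionally shrunk by a shared-pool overcommitment factor."""
    cores = (threads / overcommit) / THREADS_PER_CORE
    return math.ceil(cores * CORE_FACTOR)

# 20 LDoms x 10 threads each, no overcommitment (hard partitioning):
print(licenses_for_threads(200))                # -> 13
# The same load with factor-3 overcommitment (20% -> 60% utilisation):
print(licenses_for_threads(200, overcommit=3))  # -> 5
```

      The gap comes entirely from the overcommitment: with hard partitioning you license every allocated thread whether it is busy or not.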

      So it's really a no-brainer why Larry bought Sun: it's protection of his install base. And it also explains why he charges you for the full server when/if you use VMware.

      As for drunk CIOs... I've seen plenty, and cleaned up after them. And the worst are the ones that aren't drunk but just incompetent. Or are more concerned about... well... let's stop here.

      And just saying that some CIOs have bought these systems, so they must be good... ...

      OK. Where is the increase in UNIX market share then? If these systems are that good and so successful, where are the market numbers?

      The real deal is that IBM is slowly turning the UNIX market into a one-horse race. It is not something that all us old UNIX warhorses like... but that is what is happening. Oracle SPARC servers are getting killed out there in the real-world market.

      And no matter how much you dress up the SuperCluster and its siblings and put lipstick on them, they're still low-end to midrange servers with a somewhat clever software stack on top. And you'll never get them to run at SD2, mainframe or POWER server levels of utilization, reliability and serviceability.

      Now, if they actually got their fingers out and started making systems like in the old days, or rather bought companies that could, then, well, it might work, cause there still is a large and loyal customer base out there.

      1. Matt Bryant Silver badge
        Happy

        Re: Re: The secret sauce is Oracle's storage to compute Infiniband I/O

        "....bought companies that could, then well.. it might work, cause there still is a large and loyal customer base out there." Apart from the fact I'm not sure who they could buy. Fujitsu is the only candidate I can think of that can make a proper SPARC server, and they are beyond Larry's pockets even if the Japanese government OK'd the deal. The Solaris base is already shrinking away, mostly to Linux on x64 but NOT Larry's clone of RHEL. And Larry doesn't just need to buy a new hardware biz, he needs a whole new salesforce. Larry hasn't done nearly enough to keep the Solaris base loyal, can't ship the hardware when they do make a sale, and is destroying the salesforce he needs to sell the servers in the first place (http://www.businessinsider.com/oracle-former-employees-explain-why-it-cant-sell-hardware-2012-4#ixzz1xW57H0zB)! I expect Larry's exit from the hardware business is not too distant in the future.

  6. Anonymous Coward
    Anonymous Coward

    Matt - See slides 8 and 9 - http://www.oracle.com/us/corporate/features/6-1623009.pdf

    1. Matt Bryant Silver badge
      FAIL

      <Yawn>

      That was a POSSIBLE option, not the ONLY plan. I'm sure there were other plans looked at where hp merged with Oracle, or bought SAP, etc, etc. The REALITY is hp took one look under the Sun hood and backed off. Same goes for IBM. And even then your own slides show hp only ever expressed an interest in Solaris, the software, and had no interest in the Sun hardware. Indeed, hp then decided RHEL and hp-ux were more important to them in the enterprise and that Solaris wasn't such a good idea for switches, and went and bought 3Com instead, cementing their position as Cisco's only real competitor in enterprise networking. Try again, little troll!

      /SP&L
