As something of an engagement present for server maker Sun Microsystems, Oracle, the company's looming $7.4bn suitor, this week cut the prices it charges for key database software on the Sun Fire rack and blade servers using the company's "Victoria Falls" Sparc T2+ processors. Under Oracle's software pricing regime for its …
The Core Scaling Factor...
Oracle's core scaling factor has long favored IBM, even with the boost to the core factor for some of IBM's processors.
It still favors IBM, even after moving the Sparc factor down to 0.50, as the article above suggests.
It seems that Oracle is finally going to have some skin in the hardware game and is looking to start competing.
Interesting price war..
I guess IBM will now have to cut their prices for DB2 and IDS on AIX boxes to keep up with Oracle.
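Oracle's core-factor arithmetic is simple enough to sketch. A minimal example below; the factors and core counts are illustrative assumptions, not quotes from Oracle's price list:

```python
import math

def oracle_processor_licenses(sockets, cores_per_socket, core_factor):
    """Oracle Processor metric: total cores x core factor, rounded up."""
    return math.ceil(sockets * cores_per_socket * core_factor)

# Illustrative factors only -- the real numbers live in Oracle's
# published core factor table and change over time.
print(oracle_processor_licenses(2, 8, 0.50))  # two-socket T2+ box at 0.50: 8 licenses
print(oracle_processor_licenses(2, 2, 0.75))  # two-socket dual-core POWER at 0.75: 3 licenses
```

The factor, not the socket count, is what moves the license bill, which is why a cut for Sparc hardware matters.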
It's important to note the following:
"The Sparc T2 chip has eight cores, with eight threads per core, and includes integrated 10 Gigabit Ethernet links on the chip; it is available for single-socket machines."
Just because a chip has 8 threads per core, it doesn't mean that the performance of Oracle on that chip will increase significantly or that it can be tuned to take advantage of the extra threads.
The current round of database designs is not parallel enough to take advantage of these extra threads. While Sun wants to say that a core with 8 threads is really like 8 virtual cores or 4 virtual cores, that doesn't translate to 8 times or 4 times the performance boost over a core with a single or double thread.
There would have to be a major overhaul of Oracle to really scale and take advantage of these cpu advantages. Until more of the major chip vendors move to a similar architecture, there is little incentive for a major RDBMS house to make the effort to change the infrastructure to take advantage of these chip advances.
In short, you may be better off purchasing a cheaper CPU and bringing down the cost per transaction than spending $$$ for additional horsepower you can't use.
Maybe this is why they're cutting their prices?
They are taking competition quite seriously
> Itanium chips were originally at a 0.75 scaling factor, by the way,
> but were reduced at some point,
Well, they were at 0.75, and they were reduced, too, because everybody in the business believed Itanium was going to be the next big thing rather than the Itanic.
Unfortunately, while Itanium is a nice all-round CPU, it isn't really good for database work, unless the database is a rather small, rarely accessed dataset (in which case it simply sucks as much as any other CPU).
> and despite the large number of cores in modern x64 chips from
> Intel and AMD (four or six), Oracle has not been tempted to raise
> the scaling factor here. It will be interesting to see what Oracle does
> when AMD crams 12 cores in a socket and Intel starts cramming in
> eight cores.
Nothing will happen. I know AMD is going to make the Magny-Cours a multi-chip module (MCM); is that also true of the 8-core Nehalem? I have read many conflicting reports on that.
Note that Oracle still has an MCM clause regarding IBM Power CPUs, where they are licensed at 2x the cost per socket (they are treated as the two CPUs they actually are rather than as one package). I would expect Oracle to use that clause against AMD and Intel for their upcoming chips.
This might make T2+-based machines really nice Oracle boxes, given that they are already well suited to that kind of workload.
There were some interesting comments in the last round of SPARC-bashing in the linked article. I would just like to correct some statements Matt made in that discussion:
1. Memory bandwidth does not make up for memory latency -- idle cycles are lost regardless of whether memory serves gigabytes or terabytes per second. Database queries are rarely larger than a few kilobytes, but latency prevents that data from reaching the CPU quickly. If you have a few cores and all have to wait on a random query, they will stall. A Niagara will stall too, but instead of 8 or 16 threads stalling, you have 64 threads, so others can keep running. A small cache has nothing to do with it, because at the speed of a single thread (assuming all threads stall and are switched), the memory latency can be treated as one cycle.
Oh, and the cache of the T2 was enlarged compared to the T1 only because you need to retain more data for more threads. That's quite elementary. If Matt's argument for more cache held any water, Sun's microarchitects would have had to increase the cache more than two times, in keeping with the 2x increase in the number of handled threads, yet they increased it by a measly 33%, from 3 to 4 MB.
By the way, as for the bandwidth, a T2/T2+ chip has four DDR2 controllers on-die. That gives more bandwidth than two or three DDR2 controllers and only 33% less than three DDR3 controllers on-die, so the Niagara chips are definitely not starved for memory bandwidth.
2. DDR3 memory might not be faster than DDR2 memory in some workloads. DDR3 memory might have a CAS latency (CL) of 7 or 9 cycles, whereas typical DDR2 memory has a CL of 4 or 5. DDR2-800-CL4 is always faster for small random queries than DDR3-1600-CL9, even though it has far less bandwidth.
3. If your thread stalls, it doesn't matter whether you have a 64 MB cache or a 64 KB cache. The CPU does not work on large sets anyway -- two or three 64-bit operands at most per cycle, which, with 64-bit instructions, adds up to 256 bits or 32 bytes. Some SIMD commands will take more data, and some data may be larger and working on it may be spread across multiple cycles, but a small cache is never a hindrance if the CPU is waiting on memory. If a random database access comes, the CPU will not have the data cached (by definition of random data). If the CPU waits, say, 50 nanoseconds for the data, it can either idle (as most CPUs do) or switch to a different thread (as Niagara, Nehalem and some NetBurst chips do). Nehalem and NetBurst cannot switch more than once, but Niagara can switch 14 more times, and when the data arrives, it can switch back to the requesting thread at once or cache the result and wait for the thread. After that random data is processed, it doesn't need to be kept in cache anyway.
4. As for Rock: while it's sad that Sun will not be releasing that CPU, they did not revise their roadmap as much as has been suggested. Rock was to stay on the market for only two or three years (which is ludicrously short for an enterprise CPU), and all improvements introduced by Rock were to be incorporated into the new VT core of all future Sun CPUs rather than keeping Rock as a separate family.
To the best of my understanding, Sun has agreed with Fujitsu to not duplicate effort, leaving the general-purpose Sparcs to Fujitsu as their SPARC64 line.
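The DDR2-vs-DDR3 latency claim in point 2 above is easy to check in nanoseconds. A minimal sketch, assuming standard module timings:

```python
def first_word_latency_ns(transfer_rate_mts, cas_latency):
    """DDR I/O clock = transfer rate / 2; first data arrives after CL clocks."""
    clock_mhz = transfer_rate_mts / 2
    return cas_latency * (1000.0 / clock_mhz)

ddr2 = first_word_latency_ns(800, 4)    # DDR2-800 CL4
ddr3 = first_word_latency_ns(1600, 9)   # DDR3-1600 CL9
print(ddr2, ddr3)   # 10.0 ns vs 11.25 ns
```

Despite double the raw bandwidth, the DDR3 part delivers the first word of a random access slightly later, which is what matters for small random queries.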
Oracle pricing changes for Itanium coming
Oracle will increase Itanium to .75 for Tukwila
Itanium was never .75. When it went dual core it mistakenly fell under the "Intel or AMD" .5 factor.
Oracle has made sure this will not happen again by clearly stating that only
"Intel Itanium Series 91XX or earlier Multicore chips" qualify for .5
Too bad Tukwila will be twice the performance per chip (twice the number of cores) of Montecito but three times more expensive for Oracle.
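Assuming the factors stated above (0.5 for Montecito-era chips, 0.75 for Tukwila), the "three times more expensive" figure falls straight out of per-socket arithmetic:

```python
# Per-socket licenses = cores x core factor (factors as stated above).
montecito = 2 * 0.50   # dual-core Itanium at the old 0.5 factor -> 1.0 license/socket
tukwila   = 4 * 0.75   # quad-core Tukwila at the 0.75 factor    -> 3.0 licenses/socket
print(tukwila / montecito)   # 3.0: three times the per-socket license cost
```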
Cheers from the UK
Multi-Chip Modules: AMD, Intel, POWER, SPARC, etc.
toughluck Posted Thursday 1st October 2009 20:29 GMT - "Note that Oracle still has an MCM clause regarding IBM Power CPUs, where they are licensed at 2x the cost per socket (they are treated as two CPUs that they actually are rather than one package). I would expect Oracle to use that clause against AMD and Intel in their upcoming chips."
Multi-chip modules did affect Intel-based chips in the past!
Remember when Intel glued together two single-core CPUs to make a dual-core?
Remember when Intel glued together two dual-core CPUs to make a quad-core?
Remember when Intel glued together three dual-core CPUs to make a hex-core?
These short-cuts brought CPUs to market faster than competitors (who were creating single-chip modules, like Sun and AMD), and the licensing schemes penalized the vendors who wanted to get that jump while the competition was investing in real, organic CPU development.
Many of these older CPUs (e.g. Intel's triple-chip hex-core module) which took the short-cuts are now slower than the real thing (e.g. Intel's single-chip quad-core). If AMD is going to take a short-cut to combine cores, one would suspect they will be subject to the same restrictions as the other vendors.
All hardware and software vendors make improvements to their designs... the larger silicon houses have a greater number of resources at their disposal to take short-cuts and, as far as some software houses are concerned, "cheat" them out of rightful revenue.
Regardless of whether this is true or not, it is one of the reasons for these various licensing schemes (e.g. counting a single-socket UltraSPARC T1 processor as a dual-socket single-core, counting a single-socket T2 as a quad-socket single-core, etc., even though each was only a single-chip CPU).
@Ian Michael Gumby
> Just because a chip has 8 threads per core, it doesn't mean that
> the performance of Oracle on that chip will increase significantly or
> that it can be tuned to take advantage of the extra threads.
> The current round of database designs is not parallel enough to
> take advantage of these extra threads. While Sun wants to say that
> a core with 8 threads is really like 8 virtual cores or 4 virtual cores, that
> doesn't translate to 8 times or 4 times the performance boost over
> a core with a single or double thread.
Ummm, actually it does. Databases are one of the few types of applications that scale almost linearly with the number of threads. Each query can be (and usually is) set up as a different independent thread.
Databases are also memory- and storage-dependent. As database queries are (usually) random, there is no way to avoid heavy memory use, and efficient use of the available memory bandwidth is that much more important.
Sparcs really shine there.
> There would have to be a major overhaul of Oracle to really scale
> and take advantage of these cpu advantages. Until more of the major
> chip vendors move to a similar architecture, there is little incentive for
> a major RDBMS house to make the effort to change the infrastructure
> to take advantage of these chip advances.
Maybe, no and no.
Maybe an overhaul of Oracle is required.
No, chip vendors will not pick up Sparc, as they would need to divert resources from their other designs, nor is it actually necessary.
And no, Oracle will likely own Sun soon and this gives them incentive to provide any and all necessary improvements or enhancements.
> In short, you may be better off purchasing a cheaper CPU and
> bringing down the cost per transaction than spending $$$ for
> additional horsepower you can't use.
Maybe, but only for small, maybe medium, databases. Sun T iron is not too expensive compared to the competition -- for what they are worth, benchmark results show it's vastly cheaper than POWER iron and more or less on par with x64 (comparing bang per buck), and its running costs -- especially power and cooling -- are much lower than systems at comparable prices. Now you also have lower licensing costs. This all translates to a much lower TCO for Sparcs, and Oracle will not really lose anything on that.
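A toy sketch of that TCO argument -- every number below is hypothetical, purely to show how license fees and power can outweigh a difference in purchase price:

```python
def tco(hardware_usd, licenses, usd_per_license_yr, power_kw,
        years=3, usd_per_kwh=0.10):
    """Toy TCO: purchase price + yearly license fees + electricity."""
    energy_usd = power_kw * 24 * 365 * years * usd_per_kwh
    return hardware_usd + licenses * usd_per_license_yr * years + energy_usd

# Hypothetical boxes: the pricier machine with fewer licenses and
# lower power draw wins over a three-year horizon.
print(tco(hardware_usd=40_000, licenses=8,  usd_per_license_yr=10_000, power_kw=1.2))
print(tco(hardware_usd=30_000, licenses=12, usd_per_license_yr=10_000, power_kw=2.0))
```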
> Maybe this is why they're cutting their prices?
They are cutting the prices to be more competitive. Using Sparcs for databases was, and still is, overlooked by most datacenter owners, even though the pace has slowly been picking up since the T2+ was introduced.
"Unfortunately, while Itanium is a nice all-round CPU, it isn't really good for database work, unless the database is a rather small, rarely accessed dataset (in which case it simply sucks as much as any other CPU)."
Bull. That is an unqualified comment. The most common and most expensive operation for an RDBMS is I/O, the speed of which depends on the I/O subsystem, not the CPU -- and on most larger databases, the storage exists as a separate hardware platform accessed via a switch.
As for "database work" -- what is that? Is it somehow different from the CPU instructions executed in a Java/.Net application? A lot of the CPU work done in the database is exactly the same as that of a "traditional client", as the business logic, rules and data validation typically done in these clients are now often found as stored procedures in the database.
Now if your argument is about Spec-int vs Spec-fp then let's hear it.
> Ummm, actually it does. Databases are one of the few types of applications that scale almost
> linearly with the number of threads. Each query can be (and usually is) set up as a different
> independent thread.
Actually it does NOT. The number of threads per core just increases utilization of the pipeline. They still run on the same core, and the performance never scales linearly.
And Oracle does NOT scale linearly either -- that's what the documentation says.
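For what it's worth, the non-linear-scaling point is just Amdahl's law; a minimal sketch:

```python
def amdahl_speedup(parallel_fraction, threads):
    """Amdahl's law: the serial fraction caps speedup regardless of thread count."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / threads)

# Even a 95%-parallel workload gets nowhere near 64x on 64 threads.
print(amdahl_speedup(0.95, 64))   # ~15.4
```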
Databases and IO
Yes, databases are IO-intensive and that's where Sparcs shine. I know I simplified (maybe oversimplified) the issue, but it boils down to the same thing. Database queries are easily threadable. Sparcs can switch out of a stalled thread (regardless of what the thread is waiting for), and when they switch out, other threads can be executed.
The thread does not have to wait for the I/O and stall in the traditional sense (which would cause the CPU to idle); this mechanism allows other threads to move forward, and the core then switches back to the stalled thread once its data is available.
What I meant by a rarely accessed dataset, is that there won't be threads that can move forward while other threads are stalling, so every CPU is going to depend completely on the IO.
Now, I won't go into specint or specfp; I don't even know the numbers for pretty much any of the CPUs on the market, so I've got no idea what I could prove with them, or what you could.
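The latency-hiding argument above can be modelled with a crude round-robin sketch; the compute and stall times are made-up numbers:

```python
def core_utilization(threads, compute_ns, stall_ns):
    """Idealized round-robin core: each thread computes for compute_ns,
    then stalls for stall_ns; the core runs whichever thread is ready."""
    return min(1.0, threads * compute_ns / (compute_ns + stall_ns))

# Toy numbers: 5 ns of work per memory access, 50 ns memory stall.
for n in (1, 2, 8):
    print(n, round(core_utilization(n, 5, 50), 2))
```

With a single thread the core idles more than 90% of the time on this toy workload; each extra hardware thread fills more of those idle slots until the pipeline saturates.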
"Actually it does NOT. The number of threads per core just increases utilization of the pipeline. They still run on the same core, and the performance never scales linearly."
Threads in what format? Oracle for example does not use a threading model on Unix/Linux systems.
"And Oracle does NOT scale linearly either -- that's what the documentation says."
Please provide a URL to said documentation. And by the way, the Exadata database machine has been shown to scale linearly... so it depends on WHAT you want to scale and HOW you measure it. I/O? CPU? Transactions per second? SQLs per second? What?
All I see here is a lot of FUD with vague and broad statements being made on how well Oracle runs/does not run on a specific CPU design.
I run several Oracle clusters.. if there is an issue with a specific CPU design or technology, I am interested... in hard technical facts. Not hearsay and rumour.
I work at a large bank and I have always loved our IBM Power servers, but we recently benched three of them against one Sun T5440 and I am sad to say the P570 servers lost big time. So now we are migrating away from IBM to Sun. Too bad, I really love those P570 servers, but we cannot justify them anymore.