Exit Exadata, Fusion-io and Violin Memory - so to speak: the Oracle database random IO speed record has been smashed by an 80-core NEC server fitted with eight Virident flash drives. A single Xeon-based 80-core NEC Express 5800/A1080a GX server, fitted with eight 1.4TB Virident FlashMax solid-state drives (11.2TB of flash …
Oracle licensing is done on a one-core = one-CPU basis, AFAIK. How much would the cost be for an 80-core server then!
Oracle licensing states that the Xeon E7-88xx series has a Core Processor Licensing Factor of 0.5, so the Oracle licence would be for 40 processors.
Mind you, that probably only shifts the licence costs from totally, utterly insane to merely totally insane.
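The licence arithmetic above is simple enough to sketch. A quick back-of-envelope, using the 0.5 core factor quoted in this thread (the rounding rule - round up to the next whole licence - is standard Oracle practice; any prices you plug in are your own assumption):

```python
import math

def oracle_processor_licences(cores: int, core_factor: float) -> int:
    """Processor licences required = physical cores x core factor, rounded up."""
    return math.ceil(cores * core_factor)

# 80 Xeon E7 cores at the 0.5 core factor from Oracle's core factor table
print(oracle_processor_licences(80, 0.5))  # 40
```

Forty processor licences of Enterprise Edition, before any options - insane either way.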
Oracle IOPS workload choices matter
Would be interesting to see what they can get with SLOB - The Silly Little Oracle Benchmark:
ESG Paper did not use a database. Synthetic workload.
Actually, that ESG paper being referred to specifically states their testing was performed with the FIO tool, not an actual Oracle database. That matters. I reiterate: SLOB. http://kevinclosson.wordpress.com/2012/02/06/introducing-slob-the-silly-little-oracle-benchmark/
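For reference, a raw FIO run of that kind looks roughly like this job file (a sketch only - the device node and exact job parameters are my guesses, not taken from the ESG paper):

```ini
[global]
ioengine=libaio     ; async I/O straight at the block device
direct=1            ; bypass the page cache
rw=randread         ; random reads, the headline IOPS case
bs=8k               ; Oracle-block-sized I/O
iodepth=32
runtime=60
time_based

[flashmax]
; hypothetical device node for one Virident FlashMAX card
filename=/dev/vgca0
numjobs=8
```

No SGA, no buffer cache, no redo, no latching - which is exactly why SLOB, which drives real Oracle foreground processes, is the meaningful comparison.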
"For Exadata products, the 1 million IOPS is typically achieved using multiple 'storage server' units. Our performance was achieved using single server with multiple FlashMAX cards."
So they ran a single Oracle instance on an 80-core server? And that is then compared to an Exadata Database Machine? How?
The path between CPUs and local I/O bus is a lot shorter than Exadata's path from CPU over Infiniband to a storage server and via a s/w layer through PCI flash to spinning rust. Yet, Exadata is faster!
80 cores is a lot more than the typical Exadata Database Machine. A full X2-2 rack comes with eight 2-socket database servers. Significantly less cores.
This is a traditional SMP versus MPP comparison - one that is almost always skewed, as the architectures are drastically different. Of course, that does not stop sales from their marketing speak and bullshit.
The bottom line is that a single 80-core NEC server provides LESS I/O scalability and performance than a much smaller (core-wise) Exadata DB Machine. WTF are they going to do if the Oracle database processing scales across all 80 cores and does some serious I/O? Which is WHY you want distributed storage servers: to provide scalability at the I/O fabric layer.
NEC is seriously missing the point on how to use Oracle and how to make a database scale. SMP is not it. Been there. Done that. MPP is. Been there and still am using it.
"80 cores is a lot more than the typical Exadata Database Machine. A full X2-2 rack comes with eight 2-socket database servers. Significantly less cores."
...a full-rack X2-2 model has a total of 96 Xeon 5600 cores for the RAC grid (and a lot more in the storage grid). The only Exadata model that comes with E7 CPUs (to stay with the NEC comparison) is the X2-8, which comes with 160 cores. So this quoted statement is wrong.
See www.oracle.com/us/products/database/exadata/database-machine-x2-2/overview/index.html for the statement quoted.
But the actual core-number comparison aside: how do you scale I/O on an 80-core SMP server, with processes on each of the 80 cores hitting the I/O subsystem?
The I/O layer needs to scale with the number of cores and the increased I/O demand. And that is what Exadata does, by distributing the I/O across multiple storage servers and using QDR Infiniband as the fabric layer.
This is not about Exadata specifically. This is about scalability. SMP does not scale as well as MPP in cases like this. With an MPP architecture, the I/O fabric layer can be scaled from QDR to higher speeds. The storage servers' PCI flash caches can be increased. More storage servers can be added for striping data. Etc.
With a single SMP h/w box, how do you scale? How do you, for example, add additional PCI busses or increase PCI bus speed? It becomes a large and expensive doorstop that costs even more to replace...
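To make the scaling argument concrete, a back-of-envelope sketch: in the MPP layout each storage cell serves I/O independently, so aggregate IOPS grows roughly linearly with cell count (the per-cell figure below is purely illustrative, not a measured number):

```python
def aggregate_iops(cells: int, iops_per_cell: int) -> int:
    """Each storage cell has its own CPUs, flash and HCA, so the fabric
    scales out: total IOPS ~ cells x per-cell IOPS (ignoring overheads)."""
    return cells * iops_per_cell

# Illustrative: ~75k flash-read IOPS per cell puts a 14-cell full rack
# in the region of the 1M IOPS figure quoted for Exadata.
print(aggregate_iops(14, 75_000))  # 1050000
```

The SMP box has no equivalent knob: its PCI lanes and memory channels are fixed at purchase time.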
There are two Exadata products -
X2-2 (1/4, 1/2 and full), made of 2, 4 or 8 X4170 dual-socket servers.
As delivered, a full X2-2 has 96 database cores and 768GB of RAM (12 cores and 96GB of RAM per server).
Max bandwidth of Gen2 PCIe Slot is ?
How many QDR HCAs are in each server?
X2-8 (full only): 160 cores and 4TB of RAM (80 cores and 2TB of RAM per X4800).
Now the Storage -
Each storage cell has ...
12 cores and 24GB of RAM, 12 disk drives (SAS or SATA) and four 96GB Sun Flash Accelerator F20 PCIe cards.
A full Rack then has .....
168 additional cores and 336GB of RAM.
A full X2-2: 264 Intel cores (at what, 35 watts per socket?) and ~1.1TB of DRAM.
A full X2-8: 328 Intel cores and ~4.3TB of DRAM.
Each with 5.3TB of NAND flash, albeit from Sun.
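For what it's worth, those full-rack totals can be sanity-checked from the per-unit figures given in this thread (assuming 14 storage cells per full rack, which is what 168 cores / 12 cores per cell implies):

```python
# Per-unit figures as stated above
DB_SERVERS, CORES_PER_DB, RAM_PER_DB_GB = 8, 12, 96      # full X2-2 RAC grid
CELLS, CORES_PER_CELL, RAM_PER_CELL_GB = 14, 12, 24      # storage grid
F20_PER_CELL, F20_GB = 4, 96                             # flash cards per cell

total_cores = DB_SERVERS * CORES_PER_DB + CELLS * CORES_PER_CELL
total_ram_gb = DB_SERVERS * RAM_PER_DB_GB + CELLS * RAM_PER_CELL_GB
flash_gb = CELLS * F20_PER_CELL * F20_GB

print(total_cores)    # 264 cores in a full X2-2
print(total_ram_gb)   # 1104 GB (768 + 336)
print(flash_gb)       # 5376 GB, i.e. the ~5.3TB quoted
```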
Check out the latest top-ten TPC-C results (yes, it is a generic benchmark, so forego the obvious caveats): a two-socket server is driving 1 million tpmC (a lot of 8K I/O), all in what, 8-12 rack units, drawing less power and cooling and performing better than the NEC with 10 Virident cards, or Exadata.
When it comes to scale, keep in mind that Oracle Exadata is three clusters (how much complexity do you need?):
RAC for the inter-node database cluster; Oracle Grid Infrastructure (ASM is not required for RAC - there have always been other options); and the Exadata storage grid.
Look at all the specialized hardware above, all to make disk drives go faster or to scale, as you stated.
What if you just started with a better design -