Re: Re: Re: Erm...
The 70x join improvement and the millions of transactions per second are separate improvements. The 70x figure comes from executing joins in parallel, closer to the data, minimising data transfer; it was measured by running the same join queries on the same hardware and software with the new 'AQL' (Adaptive Query Localization) functionality switched off, then on. The '1 billion qpm' headline transaction throughput comes from increased multithreading within the system processes. You are correct that the previous results were run on different hardware, so it is hard to say how much of the gain is due to the software changes alone.
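To illustrate why pushing a join down to the data nodes helps, here is a back-of-envelope sketch. The row counts, match ratio, and joined-row width are invented illustration values, not benchmark figures; only the ~100-byte row size echoes the post:

```python
# Hypothetical model: network bytes moved for a distributed join,
# executed at the coordinator vs pushed down to the data nodes.
# All quantities below are made-up illustration values.

ROW_BYTES = 100          # approximate row size (as in the benchmark)
OUTER_ROWS = 1_000_000   # candidate rows scanned on the outer table
MATCH_RATIO = 0.01       # assumed fraction of rows surviving the join

# Without pushdown: the MySQL server pulls every candidate row across
# the network from the data nodes, then joins locally.
bytes_pulled = OUTER_ROWS * ROW_BYTES

# With pushdown (AQL-style): the join executes on the data nodes and
# only the matching joined rows travel back (assume ~2x row width).
bytes_pushed = int(OUTER_ROWS * MATCH_RATIO) * 2 * ROW_BYTES

print(f"coordinator join: {bytes_pulled:,} bytes over the wire")
print(f"pushed-down join: {bytes_pushed:,} bytes over the wire")
print(f"reduction: {bytes_pulled / bytes_pushed:.0f}x")
```

With these toy numbers the transfer drops 50x; the real speedup depends entirely on selectivity and row widths, which is why the observed 70x applies to the specific queries measured.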
The '1 billion qpm' benchmark was executed against in-memory tables, so disk IO was not a factor in reaching the throughput. The transactions are primary-key reads, each retrieving a row with 25 integer columns, i.e. roughly 100 bytes of actual data. No joins are involved. The queries are similar, but the results are not cached, so effectively this is 1 billion random 100-byte reads per minute, or about 17.6 million per second. The data is distributed across 8 different machines. The flexAsynch benchmark used is described in more detail here: http://dev.mysql.com/downloads/benchmarks.html and here: http://mikaelronstrom.blogspot.com/2012/02/105bn-qpm-using-mysql-cluster-72.html
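The arithmetic behind those figures is easy to check. This sketch uses the round 1-billion-qpm number and the ~100-byte row size from the post; the even per-node split is my assumption:

```python
# Back-of-envelope conversion of the headline benchmark numbers.
QPM = 1_000_000_000      # ~1 billion primary-key reads per minute
ROW_BYTES = 100          # ~25 integer columns, roughly 100 bytes/row
DATA_NODES = 8           # machines the data is spread across

qps = QPM / 60
print(f"{qps:,.0f} reads/sec total")            # ~16.7 million/sec
print(f"{qps / DATA_NODES:,.0f} reads/sec per node (assuming an even split)")
print(f"{qps * ROW_BYTES / 1e6:,.1f} MB/sec of row data returned")
```

The round 1-billion figure works out to about 16.7 million reads/sec; the 17.6 million quoted above corresponds to the slightly higher exact result reported in the linked writeup.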
You are correct to be suspicious of the real-world applicability of any vendor benchmark. The best that can be said is that if each vendor gets the maximum out of their system, then the results might be comparable.