"The accuser, after, ahem, discussing his concerns with Intel"
I see what you did there.
Intel on Wednesday disputed a news report that Chipzilla had intentionally published misleading benchmarks in a comparison between the Intel Xeon Platinum 9282 and the AMD second-generation Epyc 7742 processors. The accuser, after, ahem, discussing his concerns with Intel, moderated his criticism somewhat. On Tuesday, Intel …
Asked whether he accepts Intel's benchmarks as fair, Kennedy told The Register in an email that he's willing to consider it a "reasonable marketing effort" once the company updates its post to reflect concerns he raised. "Not what an independent agency would setup for a fair test," he said, "but probably OK for marketing [after Intel clarifies everything]."
Not unintentionally rigged then .... so Intel are really worried about the competitive alternative opposition/opposing alternative competition.
That's a fully functioning free capital market economy just doing its rotten crazy progress thing ..... and a slippery slope of a greasy pole to behold and own/pwn.
In 2005, Intel was accused of engineering its compiler to cripple code running on non-Intel x86 hardware.
Intel compilers continue to cripple the code they generate for non-Intel hardware to this day[+]. At least for my codes, disabling the "Genuine Intel" test in the binaries it produces [*] often increases the performance on AMD systems, including Zen [#], by upwards of 20%. Happily, these shenanigans are rapidly becoming less and less relevant: for many - but not all yet - floating-point-intensive codes, recent gcc-based compilers can generate code which is at least as good, and sometimes faster. With any luck, in a couple of years we could stop paying the
ransom support fees for our Intel compilers altogether, and switch completely to gcc. When we do, I will miss the amazing profiling and tuning tools - but I'll be glad to be rid of the random bugginess of the compiler's front end.
[+] Ok, the last Intel compiler I looked at for this particular issue was 2018.4. It is conceivable, though unlikely, that they have stopped since then.
[*] Which seems to have no bearing on the correctness of the code on AMD systems, contrary to the usual explanation given by Intel for the presence of the vendor-ID check; the feature-flag tests appear to be quite sufficient and correctly implemented.
[#] Didn't have a chance to run the comparison on Zen2 yet; I am really looking forward to it.
An AMD spokesperson replied with a link to a webpage containing various Epyc benchmarks. "There are 107 World Records here last I checked," AMD's spokesperson said.
So Intel's claims are so erroneous that AMD needn't even bother checking whether it still holds those world records :-D
Funny you should say that, given how many times you've already been nailed to the post for misleading reporting on performance. So either you employ incompetent people to draw up your reports, or you don't do enough reviewing before publishing, or ... your marketing efforts are a bit too zealous (yeah, let's put it that way).
This kind of behavior is quite common in the industry, just look at the continual skirmishing between NVidia and AMD on the graphics side of things. AMD is always being forced to defend the performance of its processors in all domains, because AMD is a worthy contender and we need AMD to keep everyone else in line.
IT is the one domain where the numbers should not lie. Thanks to AMD for their continual efforts to keep it that way.
While the numbers may not lie outright (i.e. using the same hardware/software/configuration you will likely get the same or very similar results), how the benchmark results relate to the real world can vary significantly.
Regarding the NVidia vs AMD benchmarking, the focus has always been on the latest games with the latest drivers and latest patches - the ability to release a driver fix to address performance issues allows you to leapfrog the competition for a review or two. And occasionally there have been instances of straight out cheating on both sides.
With servers, Intel is applying the opposite logic because the market tends to favour stability over the latest releases - using the older "stable" releases that don't contain the enhancements for EPYC or hardware configurations that are similar on the surface but exacerbate a known weakness in the hardware or benchmark can go a long way.
And hiding the details away behind graphs showing an X% increase might just buy another year of refreshes before the customer seriously considers the competition.
"Not what an independent agency would setup for a fair test," he said, "but probably OK for marketing [after Intel clarifies everything]."
So this is about marketing?
On Tuesday, Intel issued a blog post on Medium – rather than its own website, oddly enough...
Intel puts out some marketing FUD and gets called up on it.
While the software version and "configuration typos" were the focus of this article, the original article also mentioned the memory configuration and OS. All of these choices tilted the playing field in Intel's direction. Now we just have to wait for someone to publish their own results on AMD, with software versions, configuration options, memory configuration and OS altered between "best" and "Intel's choices", to see what the truth was...
How many companies will even deploy the Platinum 9282, given that it is only available in a somewhat limited range of Intel OEM systems? Or is the market for these chips so small that they are only really being released as benchmark FUD, for the trickle-down effect of maintaining the performance crown?
Doesn't matter, they won't be hitting significant yields with it anyhow.
It's the same reason that a car manufacturer produces halo cars with insane power while knowing that they may wind up losing money on them--produce very limited numbers for bragging rights, and then the marketdroids can make comparisons which make zero sense to anyone with a whit of common sense. Sadly, common sense and purchasing power are seemingly inversely proportional, so this does work.
I'd love to see comparisons based on initial cost, TCO at a given load, or even per-cycle over time (or for a given wattage, both specified and measured). That's a lot, lot more difficult to do well, and can wind up putting you in the weird place that AMD was with their rating system a few years ago--the chips didn't clock for a damn, but they had great performance per-cycle, so an Athlon 3000+ clocked at 2167MHz, and average punters assumed 3GHz, the slightly knowledgeable were annoyed at the marketing, and enthusiasts only saw those speeds when their overclocks failed and the BIOS reverted to stock settings.
It's not quite that bad...
As they are custom, single-CPU modules with liquid cooling that can be installed at 4 modules per 2U, it's likely around 1kW/RU, giving about a 40kW rack once you add networking. Once you add cooling and power-delivery inefficiencies on top of that, it's entirely possible that they need to budget around 100kW per rack to use these.
Doable, but it takes pretty specific workloads to do that at any scale with any real benefit, given the cost is reportedly in the $50k/module range. You could likely put in a similar EPYC system for around a tenth the cost.
Talos doesn't have a blade solution, so it maxes out at 2 CPUs (2 x 22 cores/88 threads) in 2U, vs Intel at 4 x 56 cores/112 threads in 2U. An AMD EPYC solution would potentially be 4 x 64 cores/128 threads in the same space and avoid liquid cooling.
ARM isn't a serious HPC option at present.
Biting the hand that feeds IT © 1998–2019