There's only one benchmark worth using...
Ask yourself - is it fast enough?
If yes - great, take the rest of the day off.
If no - buy a bigger machine.
At SC11 I ran into Henry Newman, CEO of HPC consulting firm Instrumental Inc. After exchanging the usual pleasantries and deeply offensive personal insults, we got to talking about some of the recently released benchmark results – and how irrelevant most of them are to the real world. In the course of the conversation, Henry …
Ah, the good old "throw hardware at the problem" solution. Which is heavily flawed in a number of cases. Some systems degrade badly as they scale up due to lock contention, processor cross calls, etc, etc - making the box bigger simply adds to the problem. Making the box /faster/ will generally help, unless you're IO/network bound.
Someone once told me that performance tuning is about moving the bottleneck. If you're CPU bound and get faster CPUs, you'll then be IO bound by HBA bandwidth. So you get more/faster HBAs, then your disks are too slow, so you get faster disks/SSD. Then you're limited by network bandwidth. Lather, rinse, repeat until you get back to the start and are CPU bound.
Back on topic, is anyone surprised by the “(if it) isn’t clearly and explicitly banned by the customer, then it’s fair game” statement?
You need to add:
Is it shiny enough to impress people who don't know what it is?
Does it have a Green Computing badge somewhere on it?
Does it have some sort of Cloud stuff? (I really hate people who say our servers should be on the Cloud and then get annoyed when I say it's a stupid idea to put a sauna in the Server Room)
I remember when Apple announced that they'd got the fastest PC in the world (desktop PC that is) and how great the PPC architecture that they were using was, when compared to Intel architecture.
The trouble is that it turned out they'd used a piece of code compiled with a PPC-optimised compiler supplied by IBM, the manufacturer of the processor. They compared this to the same code compiled for the then-fastest Intel system with a generic, unoptimised compiler.
Hence e.g. Doom, PC, 1993: 320x200; Doom, Mac, 1993: 640x480. The problem for Apple/Motorola/IBM was that Intel have enough money to make any design problem go away. Starting from the superscalar integer/floating point pipelines of the Pentium, Intel have been able to make up whatever they lose in instruction set architecture through sheer feats of engineering, give or take the odd misstep.
Dodgy benchmarks or not, the PowerPC was pretty impressive for at least a couple of years.
"I want my mobile to be as fast and powerful as my PC..."
yes, a few years ago there was some guy using his mobile to download Linux CD images who was moaning about speed and provider costs... :(
I blame MS too, for adding so much 'showoff graphics' and network stuff that is not needed, unless you are the super-admin for a major network at a big company..
I usually have to go round and switch it all off, resulting in a speed boost of 50% or more!!
mobiles are going the same way, the geeks moan about the paranoid stuff etc, but they are totally outnumbered by the average joe wanting the PC experience - i.e. lots of music vids, PROPER YouTube (not limited by the cut-down app!), Facebook games, all needing Flash!!
face it, we won't need more until they somehow get a tiny mobile with a 4GHz CPU, 8GB of memory, 100GB of disk space, and a super fast graphics card with its own 2GB of memory AND small enough to hold in one hand while you're running for the bus!.... :P
You don't have to cheat to produce amazing benchmark results. Back in the 80s we were benchmarking a couple of systems for our university using FOPR12 from the Whetstone suite. One of them was slower than the other until it was allowed the highest level of optimisation, when it was (if I remember correctly) about twice as fast. The manufacturer was very helpful and gave me the assembly listings of the compiler output at the relevant levels of optimisation, from which it emerged that there was a subroutine whose execution formed a significant part of the computation required, but changed only variables local to itself; the compiler spotted this and simply optimised its execution out of existence. It was a tribute to the compiler, but rather unhelpful for comparison purposes.
If you were paying £300 just for a new graphics card every year then you were a mug. I've been PC gaming since the DOS days, and to be honest £300 is about what I spend on a new computer every 5 years or so. If I'm playing a brand new game and it lags, the AA and a few other fine-detail settings get disabled and I go on as normal.
Playing around with the graphics options to get the last erg of performance out of your system appears to be a lost art these days.
And that's what used to constitute a real "hardcore PC gamer" back in the heady days of LAN parties. Buy the components that existed at the sweet spots for price/performance and clock the nuts off of it. Then find all the tweaks for the drivers and the game settings to get the best without spending a fortune.
Then along came the alienbreed generation of "hardcore gamer" who thought gaming PCs needed to cost in their thousands every year.
Wow, maybe you're just baiting me, but I'm going for it. Your implication that I'm some kind of uber noob is perhaps the second best insult I've had today. You're suggesting I've never tweaked/overclocked/generally abused a PC to wring out every last ounce of performance from it? Wrong. Many a happy hour spent tweaking autoexec.bat & config.sys has been had, thank you very much. On the other hand, I do know that no matter what you do, there are always limits - you seem to imply you could get a 10 year old PC to run a current title at 100FPS. I'd love to see that and eat my words. And thanks for the thumb down - it's always great to get them from ACs.
Andy, you're missing the point.
You can get a 3/4 year old PC to run recently released titles at acceptable frame rates. My definition of "acceptable frame rates" is probably different to yours if you're running at 100FPS, but that's fine by me.
I'm satisfied in my own mind that I can't tell the difference between 60 & 120FPS. Evidently nor can some people I have seen who trumpet how much better their games look running at 100FPS, since at a passing glance their monitors were only rated to display 60Hz, but we'll sidestep that point for the moment.
You don't notice a game running slowly at under 50FPS. You're locked to 30(!)FPS on a console, for goodness sake, and nobody moans about it. My approach to setting graphics options pays little attention to benchmarking apps and much to "does this cause any perceptible lag" when playing around with the options. Generally, picking a sensible resolution (by pixels per inch displayed on your screen) as opposed to the highest possible, with high textures/effects and no AA/post processing, lets you have performance up with the Alienware crowd (and looks that are frankly far closer to theirs than you'll allow yourself to admit) for a minute fraction of the price.
On servers that interleaved RAM, it got swapped out with mismatched RAM, so the PII servers had 386-level performance...
Good times, good times...
I also remember a Foo webserver vs Apache head-to-head benchmark where Foo came out ahead, until it emerged that the study (which was posted on a website whose domain was owned by the webserver developer) was funded by the webserver developer.
Good times, good times...
I suppose there are many different benchmarks and equally numerous justifications for why a benchmark has been benchmarked and the motivations behind its benchmarking?
Then the marketing people get their learned, agile brains around things especially when someone notices that 3 cents worth of additional costs can make up for hundreds of dollars difference in finished goods?
Why??
Marketing of course?
Admission?
It would not surprise me at all if the flash/android thang and no-flash/iOS thang were mere marketing initiatives anyway.
I do a lot of numerical simulations, which can benefit from things like whether the machine has a GPU in it, how much RAM (to a point), etc. I find there's only one benchmark worth anything, namely, put one of my own simulations on the test machine, run it, and see if I like the performance in comparison to my existing hardware. Primitive, but what do I care about how well anyone else's stuff runs?
Still have the T-Shirt. It has "Real World Benchmarks" written across it.
I worked for a number of hardware manufacturers way back, and spent a lot of time in the labs of Ziff Davis, Future Publishing and the others. The dirty tricks used to win benchmark wars were scary. I even found one PC maker had soldered bigger cache chips to hard drives before submitting for review. Graphics card drivers would be created and distributed (to a select few) with the sole aim of being quick in a specific benchmark program (remember 3DMark, anyone?). Then all the 'optimisations' would be removed to get a driver through WHQL testing for shipping machines.
It's a game. The only thing you learn from running a benchmark is how fast that machine runs the benchmark.
Back in the day, we had a mainframe manufacturer turn up, with a loan machine and a benchmark.
We were looking at moving away from DEC VAX to the mainframe system. The rep said "compile this FORTRAN code on the DEC, with all optimisation, then compile it on our machine with standard settings."
He then said the mainframe would be finished by morning and we should call him; the VAX would probably be finished sometime next week.
The benchmark created a huge multi-dimensional array in memory and filled it with random numbers.
By the time the rep had returned to his office that day, there was a note asking him to call us. The VAX was finished.
It turns out, the rep was too smug and the DEC compiler too clever.
Whilst the mainframe was still chugging away, the VAX had worked out that there was no input into the program and it didn't do any output; therefore, it could optimise out the array filling and clearing part of the code, leaving an "empty" executable which finished immediately - and one very red-faced sales rep. :-D
DEC destroyed itself, and its corpse was purchased by Compaq. DEC in the mid 90s had the best hardware in the industry and some of the most stable operating systems ever produced, but alas their business model was to trap you into very expensive support contracts (you couldn't just buy hardware), which limited their spread into smaller enterprises. Commodity x86 boxes were starting to get good enough, and came a lot cheaper and with a lot fewer strings attached.