I just bought an eight core FX-8350 for my gaming system and I can clock that to 4.8GHz (stable).
Oh well, Christmas is coming. If it's a 125 W processor, I'll take a 5 GHz one.
AMD has announced what it dubs "the world's first commercially available 5GHz CPU," the FX-9590. "The new FX 5GHz processor is an emphatic performance statement to the most demanding gamers seeking ultra-high resolution experiences including AMD Eyefinity technology," wrote AMD client-products headman Bernd Lienhard in a …
"lol did the exact same thing when I built my PC a few months back. Either way I'm not gonna jump straight onto the new chip, I'd rather wait until next year and see what steamroller is like. What I have atm is more than powerful enough."
I've done this... don't let it get out of hand 8-). I see the next chip, wait for just that one new improvement, see the next one by then, and think that one looks much better. I let it get a tad out of hand and ended up with computers about 8 years old. 8-)
If what you have is "good enough" then there's no pressure to update.
It's a lot more expensive overall to do piecemeal updates in any case - at some point you update the CPU and find you suddenly need a new board/RAM/etc - the madness never really stops - and if what you have is old and not good enough, it's often cheaper to replace the lot than try to find increasingly rare (and expensive) parts.
E.g.: on my 6 year old Xeon fileserver, 16GB of FB-DIMM RAM costs 200 quid today (it was 100 last year). Each of the 2GB sticks draws 13W - at those prices and that overall power consumption (550W, idling. Take THAT, gamebois) it's cheaper to replace the entire thing if you're planning on keeping it for another 5 years. (Just need a suitable case to hold $unfeasibly_large_number HDDs.)
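As a quick sanity check on that replace-vs-run maths, here's a Python sketch; the electricity price is my assumption, not a figure from the comment:

```python
# Rough running-cost sketch for keeping an idle 550 W server for 5 years.
# The electricity price is an assumed figure for illustration.
IDLE_WATTS = 550
YEARS = 5
PRICE_PER_KWH = 0.15  # GBP per kWh, assumed

hours = YEARS * 365 * 24
energy_kwh = IDLE_WATTS / 1000 * hours
power_cost = energy_kwh * PRICE_PER_KWH

ram_cost = 200  # GBP for 16GB of FB-DIMMs, per the comment above

print(f"5-year idle power cost: ~£{power_cost:.0f} (vs £{ram_cost} for the RAM)")
```

At those assumptions the idle power bill dwarfs the RAM price, which is the point being made.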
5GHz is a good number for shits'n'giggles, but the "base clock" only tells half the story. What REALLY matters is how fast the CPU-RAM interface is, because that's what's going to have the most influence in real-world usage.
"What REALLY matters is how fast the CPU-RAM interface is, because that's what's going to have the most influence in real-world usage."
Depends on your application mix, obviously. In the vast majority of my "real-world usage", the bottleneck is disk I/O. (Occasionally it's network I/O, but rarely, because I have enough tasks going in parallel that waiting for the network on one doesn't adversely affect total work throughput.)
Yeah, if I replaced the conventional disks with SSDs, I might start to care about CPU-RAM speed. But in most cases, when I'm waiting for some slow I/O-bound job like a build or a test run, I just spend the time editing sources for a different project in a different vim window. (Avoiding grievously slow and greedy IDEs helps a lot in these situations.)
If you use a real OS (any of the *nixen), then the more RAM you have, the more of your disk is going to end up cached in RAM. This is one reason I put 16-32GB in desktop boxes for our researchers.
Choosing the "right" filesystem for the task in hand makes a big difference too.
If you're using *nix, then consider ZFS with a suitably sized SSD out front. It makes a hell of a difference to performance on even quite modest systems (mainly because it can convert random writes to sequential ones thanks to SSD write caching).
Finally, it's worth looking at cost-benefit. A 128GB SSD costs less than 100 quid. Would that save you an equivalent labour cost if you use one for your build area? (In our experience it's worth it for developers to have this kind of snappiness - but we do force them to use standard equipment for the finished product, to evaluate how real end users will feel.)
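That cost-benefit question is easy to sketch in Python; the minutes saved and the hourly rate below are illustrative assumptions, not figures from the comment:

```python
# Back-of-envelope SSD payback sketch. The time saved per day and the
# developer's hourly cost are assumptions, not figures from the thread.
SSD_COST_GBP = 100.0         # "less than 100 quid" per the comment
MINUTES_SAVED_PER_DAY = 10   # assumed: quicker builds and test runs
HOURLY_RATE_GBP = 30.0       # assumed fully-loaded developer cost

daily_saving = MINUTES_SAVED_PER_DAY / 60 * HOURLY_RATE_GBP
payback_days = SSD_COST_GBP / daily_saving
print(f"Saves ~£{daily_saving:.2f}/day; pays for itself in ~{payback_days:.0f} working days")
```

Even with conservative numbers the drive pays for itself within a month of working days.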
IBM indeed offered a 5 GHz CPU: the POWER6 processor in the P595 (and most likely some other) pSeries servers. See the IBM Redbooks and/or Wikipedia on POWER6:
"It was released on June 8, 2007 at speeds of 3.5, 4.2 and 4.7 GHz, but the company has noted prototypes have reached 6 GHz. POWER6 reached first silicon in the middle of 2005, and was bumped to 5.0 GHz in May 2008 with the introduction of the P595"
I was just about to say the same thing - IBM's had 5GHz clock speeds for years, as has been pointed out. I imagine you'd be hard pressed to shove a POWER (not a PowerPC, but a full-fat POWER) into a desktop, though.
But hey, aren't mainframe speeds measured in MIPS instead of clock speed? Or is the MIPS measurement for the entire system? Either way, I'll take one of these 5GHz chips, thank you very much. I just wonder what it's gonna take to cool it. My liquid helium budget is rather low.
"The current generation (zEC12) mainframe chips are 5.5GHz; 6 core/chip, 6 chips/module. Each core can be doing 6 things simultaneously (two integer units, two load-store units, one binary and one floating point/decimal unit)."
And it has a huge cache, right?
"there are 2 dedicated companion chips called the Shared Cache (SC) that each adds 192 MB off-die L4 cache for a total of 384 MB L4 cache. L4 cache is shared by all processors in the book."
So, what is a "book"? How many CPUs does the zEC12 mainframe have? 24 CPUs, with 4 of them dedicated to the OS?
Of 120 engines, up to 101 are available for processes to use (the others are redundant, run in lockstep with others, or used for IO or similar).
If you're interested in Z12 hardware, have a look at this, then click on "Product Demonstration" in the top right.
"..Of 120 engines, up to 101 are available for processes to use (the others are redundant, run in lockstep with others, or used for IO or similar)..."
So is one "engine" equivalent to a "core"? It would be much easier if IBM talked about "CPUs" and "cores" instead of "books" and "engines".
One book is one cpu? And one engine is one core? Is this true?
Actually, bunging a full POWER chip into a desktop wouldn't be difficult - remember IBM will cheerfully sell you POWER blades, so that's roughly ATX-sized. The footprint's similar to Intel chippery - in other words most of the space has to be given over to heatsinks. They're still air-cooled, but it's fair to say the fan noise wouldn't be living-room friendly...
A shame that PowerPC-based personal computers have mostly disappeared from the market, really. Everyone knows that a 5GHz Power CPU will run circles around a 5GHz X86 equivalent. Because Power CPUs just have higher MIPS counts due to their RISC architecture.
Don't get me wrong. I'm an AMD fan. However, I'm sold on RISC. I'm also still sold on the idea that MIPS > clock speed. High clock speed doesn't mean anything if the processor executes fewer instructions per second than a slower-clocked processor.
"Because Power CPUs just have higher MIPS counts due to their RISC architecture."
Intel CPUs are a RISC core with CISC translator bolted on. It'd be interesting to see what they could do if the RISC internals were directly exposed to the outside world.
"...Everyone knows that a 5GHz Power CPU will run circles around a 5GHz X86 equivalent...."
Well, here you see that an old Westmere-EX x86 CPU at 2.4 GHz is just ~10% slower than a 3.55 GHz POWER7 in a SAP benchmark. If you clocked the old Westmere-EX up to the same speed as the POWER7, the Westmere-EX would be 28.4% faster.
It seems that x86 has improved very fast. The latest x86 is now several generations newer, and faster, whereas POWER has only been upgraded one generation in the same time frame: to the POWER7+, which is only slightly faster than the POWER7. So, if you clocked the latest x86 up to the same speed as a POWER7+, the x86 would surely outperform the POWER CPUs.
It seems that your assertion is not valid in modern times. Back then, the POWER CPUs were indeed faster than x86. But today x86 has much more R&D resources behind it than POWER has, and x86 improves faster.
(We should not mix in Oracle SPARC, because SPARC gets 100% faster with every generation. This is far better than POWER or x86. Even the four-CPU T4 servers outperformed double the number of POWER7 CPUs in some benches. The eight-CPU T5 servers wipe the floor with the competition.)
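The clock-for-clock comparison being argued about can be sketched in Python. It assumes performance scales linearly with clock speed (optimistic, as other comments in this thread point out), and the scores below are placeholders, not the actual SAP benchmark results:

```python
# Clock-normalised comparison under the optimistic assumption that
# performance scales linearly with clock speed. The scores here are
# illustrative placeholders, not the real SAP benchmark figures.
def per_ghz_advantage(score_a, clock_a_ghz, score_b, clock_b_ghz):
    """How much faster A would be than B if both ran at the same clock."""
    return (score_a / clock_a_ghz) / (score_b / clock_b_ghz) - 1.0

# e.g. a chip scoring 10% lower overall, but at 2.4 GHz vs 3.55 GHz:
adv = per_ghz_advantage(90.0, 2.4, 100.0, 3.55)
print(f"Per-GHz advantage: {adv:+.1%}")
```

With these rounded placeholder inputs the linear-scaling assumption gives roughly +33%, in the same ballpark as the 28.4% claimed from the actual scores.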
Yeah, I'd agree, but I think El Reg meant first commercially available desktop CPU..
Also it's good to see some movement in this area; there's a lot to be said for 4-8 core CPUs, but let's face it, GHz grunt power has been stagnant for the past 4-5 years. I look forward to seeing Intel's response - we might finally see the CPU race hotting up again.
If you plot MIPS or FLOPS or whatever against processor speed you'll see there's no linear relationship unless the entire computational task can stay in CPU cache - which is not a real-world situation.
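A crude way to see this effect for yourself: run the same number of additions over a cache-sized buffer and a RAM-sized one. (This is Python, so interpreter overhead shrinks the gap compared with C, and exact timings vary by machine.)

```python
# Same amount of arithmetic, different working-set sizes: the small
# buffer stays cache-resident, the large one keeps missing cache and
# waiting on RAM. Timings are machine-dependent; the sums are not.
import array
import time

SMALL = 16 * 1024            # 64 KB of 4-byte ints: fits in L1/L2 cache
LARGE = 8 * 1024 * 1024      # 32 MB: bigger than most caches
PASSES = LARGE // SMALL      # equalise the total number of additions

def churn(buf, passes):
    total = 0
    for _ in range(passes):
        for x in buf:
            total += x
    return total

small_buf = array.array('i', range(SMALL))
large_buf = array.array('i', range(LARGE))

t0 = time.perf_counter()
small_total = churn(small_buf, PASSES)
t1 = time.perf_counter()
large_total = churn(large_buf, 1)
t2 = time.perf_counter()
print(f"cache-resident: {t1 - t0:.2f}s, RAM-bound: {t2 - t1:.2f}s")
```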
There are arguments for and against increasing cache size. One thing we've discovered with larger caches is increased susceptibility to cosmic ray events (main RAM is ECC; cache RAM usually isn't). Perhaps it's time to wrap systems in a lead sheet or a waterbag.
Oh, I have an old Asus motherboard+AMD quad core combo here, which has "unleashed" mode.. which you can trigger by pressing the power button.
Okay so it only adds about 1% or so to the speed, but I still have a Turbo button, and this time around it actually does something (as in, makes the computer sound like it's about to go VTOL).
The market segment that AMD is selling this toward really doesn't care about Intel being more efficient, whatever the merits of that claim (whether it's fact or Intel marketing spin). I don't buy Intel equipment for my PCs - mine, as in the ones that I own.
Maybe it's because I buy from Intel all the time at work - hell, my mother-in-law even works for Intel - but I'm pretty exclusive to AMD's products at home. Their Linux support seems better (even though I use Fedora, so using AMD's proprietary drivers is kind of asking to break X and the rest of the video subsystem until you learn how to do it correctly, and it's not exactly clear), and they perform better when it comes to Windows gaming in my experience, but YMMV as always.
Also both nVidia and AMD do well with BSD and Commercial UNIX support, but most here don't care about that.
Hmm. Browsing NewEgg, I can roll with AMD's top-of-the-line FX-8350 with 8 cores for $200... or Intel's "Core i7-3960X Extreme Edition Sandy Bridge-E 3.3GHz (3.9GHz Turbo) LGA 2011 130W Six-Core Desktop Processor" for only $1096, which has 6 cores. The AMD is clocked a little higher, and its thermal envelope is also slightly smaller. I wonder which I would choose... the one with slightly worse single-threaded performance - which, in a world that is pretty far into multi-threaded stuff, is almost a non-issue unless you're running an extremely poorly written application... pay out the ass for slightly better performance under certain circumstances where having an extra two cores would be useful, or not pay out the ass and never win the 3DMark benchmark for single-threaded performance... what to do... $896... that would be 572.89 pounds at the moment... hmm.
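The per-core arithmetic from those NewEgg prices, for anyone following along (Python; prices as quoted in the comment above):

```python
# Price-per-core comparison using the figures quoted in the comment.
amd_price, amd_cores = 200, 8
intel_price, intel_cores = 1096, 6

amd_per_core = amd_price / amd_cores
intel_per_core = intel_price / intel_cores
difference = intel_price - amd_price

print(f"AMD:   ${amd_per_core:.2f} per core")
print(f"Intel: ${intel_per_core:.2f} per core")
print(f"You pay ${difference} extra for two fewer cores")
```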
Is it true that putting the little blue sticker on your PC case makes it a whole 100Mhz faster? ;-)
IBM has had 5.0GHz CPUs in System p (which usually runs Linux or IBM's UNIX equivalent) for years. For oldies, these are the RS/6000. The 5GHz model was announced in 2008, and was around long enough that it was also withdrawn (in 2011). AMD beaten by, oh, 5 years?
In the IBM mainframe world (that is, System z, with z/OS or MVS or whatever non-UNIX, plus zLinux) the mainframe "engines" have actually been somewhat quicker than that.
The previous generation was 5.2GHz and the current (as of August 2012) is 5.5GHz - and those are NOT "turbo" modes; they are 24/7, with-better-reliability-than-you modes. For oldies, these are S/390.
They are what you use when the thing has *got* to work, as opposed to when you'd just like it to, most of the time.
AMD should probably caveat their claims with "for x86 CPUs"..
"for all the idiots still living in 2005 who think the Ghz wars are still going on."
Actually, for the most part the core expansion has topped out and we're starting to see a requirement for faster thread performance again, so GHz is very much a current issue on the server side as well as the desktop side. The vast majority of desktop software is still single-threaded, and since SSDs have resolved the I/O wait issue the processor is once again becoming the bottleneck. Not bragging rights, just the next step in performance improvement, which those who understand performance have been asking for.
"Actually, for the most part the core expansion has topped out and we're starting to see a requirement for faster thread performance again "
The current focus is on heat. Given I can buy 16-core CPUs if I want (several 4-socket boxes full of same purring away in the server room), I don't think it's topping out just yet - the hard part is programs making use of the cores in Windows or other single-user environments. In a multiuser environment, the more cores the merrier.
That 16-core CPU will make its way down to consumer space soon enough. There are plans afoot on servers for even more cores.
Having said all of that - even the slowest of today's crop of desktop computers are more than adequate for 90% of the tasks they're asked to do (office work, websurfing, minor gaming).
Gamers are a niche market which isn't economically viable to serve.
What's driving CPUs now is server requirements, which is why there's a big push on heat. (We dissipate ~55kW in the server room at the moment. Every reduction in power consumption equates to more machines in there or a reduced AC cost, both of which are strong drivers for using next-gen technologies.)
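For scale, a Python sketch of what 55kW costs to run; the electricity price and cooling overhead (PUE) are assumed figures, not from the comment:

```python
# What 55 kW of server-room load costs per year, including cooling.
# Electricity price and PUE are assumed figures for illustration.
IT_LOAD_KW = 55
PRICE_PER_KWH = 0.12   # GBP, assumed
PUE = 1.6              # assumed: total watts delivered per watt of IT load

annual_kwh = IT_LOAD_KW * 24 * 365 * PUE
annual_cost = annual_kwh * PRICE_PER_KWH
print(f"~{annual_kwh:,.0f} kWh/year, roughly £{annual_cost:,.0f}/year")
```

At those assumptions every kilowatt shaved off the load is worth well over a thousand pounds a year, which is why heat dominates the server-side roadmap.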
Idiots who think the GHz wars are still going on? Say that to IBM, who said that it is better to have 1-2 strong cores than several weaker cores (with a reference to SPARC Niagara). IBM also talked about future 6-7GHz POWER6 CPUs and even higher. Today IBM has no 6-7GHz single-core CPUs; instead, IBM has several lower-clocked cores. IBM was late to the many-core CPU party, arriving at last with the POWER7.
I've tended to buy what was necessary to get the job done: Intel 386, Cyrix 486, AMD 4/100, P133, Celeron 300, AMD 900, AMD 2000+, P4 3.0, AMD X2 4400, AMD B50 x4 unlocked, i5 2500.
I don't understand fanbois either; just get what you need at the price point you want to pay. If you are gaming then a GFX card is generally more important anyway.
They do this every now and then. The AMD K6/III and AMD64 kinda caught Intel with their pants down. AMD then decided to do an Intel and hike the prices, Intel released the Core series and stomped AMD back into second place again.
If this is another AMD blinder being pulled, I'd suggest grab it now before they hike the prices and give Intel some wiggle room to stomp on them again.
Not really. The Athlon was the only killer part, as the P2 was long in the tooth and the initial P3 rushed; the Celeron overclocking was stamped on and the P4 was initially too expensive and slow.
The K6/III wasn't great as it had a dead-end board (everyone knew about the forthcoming Slot A), and the Celeron range at the time overclocked 50% on air and was much cheaper (plus you were guaranteed to support P3s on the same slot board). Coupled with a Voodoo card you were king of the hill.
Not sure what you mean by a dead-end board. They were very cheap and perfect for re-engining an old Pentium board. Surprisingly, they also ran cooler than the Intels they replaced. They ran about 70% faster on integer ops than an Intel chip at the same clock speed (the K6/II could only manage 50% faster), so as a stopgap measure they were worthwhile installing. Everyone knew the K7 was on the way, and it bought us 2-3 years of not having to put up with some of the "issues" associated with early Slot A hardware.
I'm not a physicist by any measure, so this is all taken by me at face value: the quote below is taken from WikiAnswers, but I first heard this a couple of weeks ago in a keynote speech by Joe Baguley from VMware:
Electronic microprocessors are limited by the speed of light. Electrical current through the processor travels at the speed of light and that becomes the limiting factor. The speed of light is 29,979,245,800 cm/sec. A 1 GHz processor can theoretically perform up to 1,000,000,000 instructions/sec. Each processor instruction produces a result in the form of an electrical signal that needs to be stored or delivered somewhere. The time in between consecutive processor instructions becomes the limiting factor, since it determines how far the signal produced by the previous instruction can travel at the speed of light. The time in between instructions on a 1 GHz processor is 1/1,000,000,000 sec; if this figure is multiplied by the speed of light we get a distance of 29.9 cm. This means that no component that interacts directly with the CPU can be at a distance greater than 30 cm for a 1 GHz processor. For a 3 GHz processor the distance is 10 cm. It seems that we are currently near the limit of how fast our processors can run. The current "multiple core" trend we are seeing from processor manufacturers seems to support this case. Multiprocessing is currently the most cost-effective workaround to achieve improved performance if you cannot go any faster on a single processor.
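The distances in that quote are easy to check (Python; note that signals in real interconnects propagate slower than light in vacuum, so these are upper bounds):

```python
# How far light travels in one clock period, per the quote above.
C_CM_PER_SEC = 29_979_245_800  # speed of light in cm/s, as quoted

def distance_per_tick_cm(clock_hz):
    """Distance light covers in one clock period, in cm."""
    return C_CM_PER_SEC / clock_hz

for ghz in (1, 3, 5):
    print(f"{ghz} GHz: {distance_per_tick_cm(ghz * 1e9):.1f} cm")
```

This reproduces the ~30 cm figure at 1 GHz and ~10 cm at 3 GHz from the quote, and adds ~6 cm for a 5 GHz part like the FX-9590.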
I think the challenge is that the faster the frequency of the processor, the closer you need the rest of the components. Until everything ends up on the CPU die, I guess this will be an issue.
I think the issue is that the CPU ends up waiting for instructions, and therefore wastes cycles, as opposed to the distant components waiting for anything. As I say, I'm just taking this at face value, but it certainly explains the trend towards more cores and an increase in parallelism in modern workloads.
Biting the hand that feeds IT © 1998–2019