Intel has announced the latest in its 10-year line of "Extreme" desktop processors, and the consensus view from the geekerati who have put it through its paces in prerelease testing is a collective "Meh." At ExtremeTech, Joel Hruska's review of the new chip was headlined, "Core i7-4960X Ivy Bridge-E review: Intel's Great Limp …
The world really has changed. I remember when these releases were followed for months and people knew by heart the chipsets and sockets involved. It's been years since I cared. I game just fine on the Mac Pro desktop I bought in 2007, which is simply dual-core, dual-socket with an ancient ATI 48xx-series card. Of course, the only real shooter I play on the desktop is Counter-Strike, which isn't that taxing. Still, the days of new processors and new cards opening up new killer apps and games are past.
Upgrades have not been worthwhile since a Core 2 Duo/Quad, to be honest. The numbers might have changed a bit when benchmarking apps, but in the real world... not much has changed.
I'm going to disagree slightly here. Each new generation of Lynnfield/Sandy Bridge/Ivy Bridge/Haswell uses less power and runs cooler, the unlocked processors overclock dynamically, and the rise of native SATA 6Gbps and USB 3.0 is a thoroughly good thing.
I agree that quad core/six core is of marginal interest and feel that dual core with Hyper Threading will suit most mainstream users.
Sandy Bridge Extreme was an absolute hoot but no-one seriously proposed it was anything more than a biggest/fastest/more bandwidth exercise in bravado and it fulfilled that role superbly well.
Pretty much true Cyberhash
I noticed a nice improvement over a dual core Conroe when I upgraded to a Sandy Bridge 2500k quad.
The thing is, when performance reaches such levels as the 2500K, a 10% increase in speed isn't even noticeable unless one is doing very lengthy crunching/rendering tasks.
I don't see anything from Intel that makes me feel a need to upgrade. I suppose a Haswell instead of an Ivy Bridge in my laptop would improve power usage.
I am getting the impression that Intel has already got the best from the Core architecture and all that remains is shrinkage and minor tweaks, which will continue to eke out a few percentage points with each revision.
I am not a chip architect, I haven't a clue what may come next but I feel that until Intel come up with a true advancement of the Core architecture or indeed a whole new one, processor performance has hit a plateau.
Re: Pretty much true Cyberhash
You might be right, but until the likes of MS, Oracle, IBM etc. stop charging per core (other CPU ratings are available) for enterprise-level software and beancounters rule the roost, there won't be much uptake for serious grunt servers.
New upgrade "Quite good."
For Intel's wallet perhaps.
And using all 6 cores on the same job?
Moore's law is in abeyance.
frankly, chips are not getting much faster
the only sane upgrade I can see here would be a solid state drive.
Re: Moore's law is in abeyance.
Agreed. I found SSDs made a huge difference in comparison to a processor upgrade.
I would like the native SATA 6Gbps and USB 3.0 support, though. (Shame my WiFi can't be upgraded to match these kinds of speeds, though.)
high impact (on wallet) x86 gets faster (slightly)
This device is aimed squarely at the market for those who like to be able to check that their processor is top dog on a performance comparator chart, assuming there are still such people; presumably the same people who used to queue all night to get their mitts on the latest offering from Msoft. It's all so last year.
Bit of an insult to label that CPU as EXTREME when it's anything but. Intel does do Extreme, but no longer for 'enthusiasts', see http://www.cpu-world.com/news_2013/2013080801_More_details_on_Intel_Xeon_E5-2600_v2_lineup.html
3000 dollars (so that's 3000 quid) will buy you a pretty decent second-hand car.
Bit of a shame, I always enjoyed building myself crazy-powerful PCs, even if the most taxing software they ever ran was a game.
We have a dual-socket system at work, 6 cores and 12 threads per socket, so it reports 24 cores.
I ran some CPU-heavy stuff and it scaled linearly up to 12 (i.e. the number of real cores); at 13 it started dropping back, and at 24 it had dropped to about 3/4 of the performance of using 12 cores.
Put another way, 12 cores = score of 4, 12 cores + 12 threads = score of 3.
Now this surprises me, as I thought the extra threads would provide a little extra processing capacity (maybe a couple of tens of percent), especially after Intel's years of developing it, so what's happening?
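The sweep described above can be sketched as a simple timing loop. This is a minimal illustration, not the poster's actual benchmark: the toy workload (`burn`), the task counts, and the function names are all assumptions, and the interesting behaviour only shows up on a machine where the worker count exceeds the physical core count.

```python
# Sketch of a scaling test: time a CPU-bound task with an
# increasing number of worker processes and watch where the
# speedup flattens. Workload and names are illustrative.
import time
from multiprocessing import Pool

def burn(n):
    # Toy CPU-bound task: sum of squares up to n.
    total = 0
    for i in range(n):
        total += i * i
    return total

def timed_run(workers, tasks=8, work=200_000):
    # Run `tasks` copies of the workload across `workers` processes
    # and return the wall-clock time taken.
    start = time.perf_counter()
    with Pool(processes=workers) as pool:
        pool.map(burn, [work] * tasks)
    return time.perf_counter() - start

if __name__ == "__main__":
    # On a 12-core/24-thread box, sweeping e.g. 1, 6, 12, 18, 24
    # workers would show near-linear scaling up to the physical
    # core count, then the flattening (or regression) described.
    for workers in (1, 2, 4):
        print(f"{workers} workers: {timed_run(workers):.3f}s")
```

Past the physical core count, each pair of logical cores is sharing one set of execution units, so whether the extra threads help or hurt depends on how much of the pipeline the workload leaves idle.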
Re: threading question
It completely depends on who wrote the software and how competent they are. We use V-Ray as a renderer on exactly the same hardware you described; I did the exact same test, and render times with hyperthreading turned on were up to 30% faster and never slower.
Re: threading question
The extra hardware threads present, I understand, as logically identical to a physical core, so how does competence fit in here? What can they do to increase or decrease performance? I don't understand.
As to your timing tests, that is interesting, thanks. It's possible that mine was heavy on cache where yours is heavy on main memory access, so yours would benefit from extra processing while waiting on memory. Just a guess, though.