well....
If you're not happy with it, I could be convinced to ditch my AMD chip if you post it to me :D
It seems as if we waited an age for the latest unlocked "K" versions of the upgraded 22nm Haswell CPUs, but at long last they are here in the form of the Devil’s Canyon processors. Currently, …
Agree - it seems like the new TIM hasn't made that much of a difference in the early examples.
My Sandy Bridge CPU does a very stable 4.5GHz using a relatively cheap air cooler, and I was hoping this would be a significant improvement. Of course there are some other architectural improvements since Sandy Bridge, so I'm sure this would bench quicker than if I ran mine at 4.7GHz, but I was hoping for something more.
"So, extra 22C and nearly double the power consumption for less than 25% overclock"
Agreed. Also, that's "at the wall" power consumption, so it's reasonable to expect that the CPU's own power consumption more than doubled.
Still, for less than 200W on extreme computing? We've come a long way since kW PSUs for that sort of computing power.
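A quick back-of-the-envelope sketch of why a ~25% overclock can nearly double power draw: dynamic CPU power scales roughly with V² × f, and higher clocks usually need a voltage bump too. The baseline wattage and voltage/frequency ratios below are illustrative assumptions, not figures from the article.

```python
def dynamic_power(base_power_w, v_ratio, f_ratio):
    """Scale a baseline power figure by (voltage ratio)^2 * (frequency ratio)."""
    return base_power_w * (v_ratio ** 2) * f_ratio

base = 88.0  # assumed stock-ish CPU power in watts, purely illustrative
# +25% clock with a matching +25% Vcore bump (hypothetical settings)
oc = dynamic_power(base, v_ratio=1.25, f_ratio=1.25)
print(f"{oc:.0f} W, {oc / base:.2f}x baseline")  # ~172 W, ~1.95x
```

So a 25% clock bump with a proportional voltage increase lands at roughly double the baseline, which is consistent with the at-the-wall numbers quoted above.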
As a long-time user of AMD CPUs, I think the FX 8350 is incredibly good value, but I also think it's completely disingenuous to compare its performance to that of a top-of-the-range i7. The Vishera chips are cheaper, easier to overclock, offer better multi-tasking performance and come with hardware virtualisation support as standard, but when it comes to overall performance they're only really equivalent to the faster i5 processors.
It's at the lower, utility-system end that AMD seem to be strong at the moment. Good-performance, low-cost systems with integrated GPUs that aren't appalling are a good thing, and likely the reason that Intel's integrated GPUs are now usable as well.
AMD also seem to be ahead of Intel when it comes to making more general use of the integrated GPU as a specialised compute core rather than solely as a GPU.
"The Vishera chips are cheaper, easier to overclock, offer better multi-tasking performance and come with hardware virtualisation support as standard but when it comes to overall performance they're only really equivalent to the faster i5 processors."
And they're cheaper than i5 processors too. What's not to like? (You can always use multi-socket CPUs, and for some inexplicable reason the current crop of Xeons are slightly cheaper than their equivalent desktop parts.)
Rather than overclock these processors, oughtn't we concentrate on slowing down the humans?
Ideas:
*Message that pops up when things are taking a tad too long asking if you would like to pop the kettle on and make a nice brew.
*Replace Battlefield 4 with something a little more gentle, such as "Bulbquest: a game where one ponders which tulips to plant in October for a spiffing display next spring"
*Use the computer primarily as an exotic bar fire, concentrating one's leisure time on knitting an itchy undergarment for a little-loved relative
Windows 7, at least, does have a message telling the user that the current task is taking a long time. I know because I see it all the time at work when running such incredibly resource-hungry applications as Outlook and Internet Explorer, sometimes with up to 5 (yes, 5! I know) tabs open on a measly i5.
such incredibly resource hungry applications as Outlook
Well, Outlook is incredibly resource-hungry. On every conventional-disk system where I've seen it running, it ties up the drive for minutes at a time. Of course, its storage format takes far longer to "index" and search than, say, grepping a big mbox-format text file. Inefficient and ineffective!
On the other hand, if we didn't have Outlook, what would make Thunderbird look adequate in comparison?
Bulbquest: forget tulips, it's a thrilling game where the player waits until a bulb pops and then has to decide whether it's worth fitting a new one versus the reduced power consumption/darkness curve. Expert mode makes you factor in the TCO of CFLs vs tungsten.
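In the spirit of "expert mode", the CFL vs tungsten TCO comparison is just purchase cost (including replacements) plus electricity over the same burning hours. All prices, wattages and lifetimes below are made-up illustrative figures, not real market data.

```python
def tco(purchase_price, watts, lifetime_h, hours, price_per_kwh):
    """Total cost of ownership: bulbs bought over `hours` plus energy cost."""
    bulbs_needed = -(-hours // lifetime_h)  # ceiling division
    energy_kwh = watts * hours / 1000
    return bulbs_needed * purchase_price + energy_kwh * price_per_kwh

hours = 10_000
cfl = tco(3.00, 11, 8_000, hours, 0.15)       # hypothetical 11 W CFL
tungsten = tco(0.50, 60, 1_000, hours, 0.15)  # hypothetical 60 W tungsten
print(f"CFL: {cfl:.2f}  Tungsten: {tungsten:.2f}")
```

With these (invented) numbers the CFL wins comfortably on running cost, even though expert mode would also want you to haggle over the up-front price.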
Ready-made Corsair water coolers blow away bloated air coolers for cooling, are not that hard to fit, and can even be cheaper than top-end air coolers; they even do them for GPU cards now! I've used an early ready-made Corsair water-cooler model for years now, with push-pull fans on the radiator.
I'll have to see something a lot faster, or have my i7 920 / 24GB machine die, before I even consider a new i7, especially at the stupid money for a new Intel CPU, a new mobo, and over 16GB of RAM. AMD are much cheaper, as I know from building beefy FreeNAS boxes.
Actually NO. Your beliefs are totally in error.
There are numerous reputable websites that show the Corsair liquid coolers to be quite inferior to a quality tower HSF. Naturally the fanbois who bought into the CLC hype don't like objective scientific test data that shows how gullible they were to buy an inferior CLC, but it's true. In addition the quality tower HSFs cost less, cool better, make less noise and they never leak coolant to damage a PC as has happened countless times with Corsair and other CLCs. If you want a reality check read the threads at the Corsair forum where customers have had 2-3 and even 4 Corsair CLCs fail before they gave up and moved on from the gimmicks of CLC. Corsair must be laughing all the way to the bank that the sheeple are so gullible.
I can't help but think you've got the wrong end of the stick here. CLC was never really about achieving significantly cooler temperatures on the CPU - ultimately they are still using ambient air to cool the heat exchanger. It is much more about moving the heat directly out of the case. Most high-end air coolers are massive things with loads of metal fins and a couple of fans mounted right in the middle of the case. In comparison the heat block of a CLC unit is tiny and the main heat exchange is in a very efficient arrangement blowing the hot air straight out the back of the case.
CLCs take up far less case space than an equivalent air cooler and don't get in the way of other components (e.g. RAM). I've been running a CLC (single 120mm fan) cooled i7 920 ever since that chip first came out and have never had a problem with either the cooling performance or the ability to easily change other components in the case.
CLC ftw!
The difference between using basic paste and the latest gee-whiz stuff is about 1°C _if applied correctly_ (i.e., a few microns thick). The stuff is only there to fill air gaps, not get in the way of metal-to-metal contact.
You need phase-change coolers to get real effectiveness in heat transfer, and when combined with appropriate air cooling they work far better than water cooling (unless you want to spend $900+ on your cooling, which will buy a LOT of high-quality air cooling).
> Where are the 6/8 core beasties...? Still sitting on my splendid 3930K hexcore.
Me too, although the 3930Ks are less than 10% faster and use an extra couple of cores to get there. That's before you overclock, though (hello, water cooling!). Since the 3930s have a much lower base clock, you should also get a greater percentage increase going up by 1GHz.
I suspect the issue is that few people can use the extra cores. I got mine to run multiple VMs, so that works out nicely. I find that even with productivity software (I use the term broadly) such as Outlook, network/server latency is what makes it feel slow, as it tries to load all that social networking and IM presence rubbish. I'm not sure Word's rendering really takes advantage of multicores properly either. Transcoding on the other hand...
What makes me sad is all the architecture changes since the 3930k with so little performance to show for it. A cynic might say they might be using architecture changes to prevent piecemeal upgrades...
(Caveat - 3930k list prices are much higher - I just managed to get mine new for less than this new i7)
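The base-clock point above is just arithmetic: the same +1GHz bump is a bigger relative gain on a lower-clocked part. The stock clocks below are approximate published base frequencies (3930K ~3.2GHz, 4790K ~4.0GHz), used only to show the percentages.

```python
def pct_gain(base_ghz, bump_ghz=1.0):
    """Percentage speed increase from adding `bump_ghz` to a base clock."""
    return 100 * bump_ghz / base_ghz

print(f"3930K: +{pct_gain(3.2):.0f}%")  # +1 GHz on a ~3.2 GHz base
print(f"4790K: +{pct_gain(4.0):.0f}%")  # +1 GHz on a ~4.0 GHz base
```

So the hexcore gets roughly a 31% uplift from the same +1GHz that gives the 4790K about 25%, assuming the overclock actually holds on all cores.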
my current box (A8-3870k, 16gb RAM, 7770 2gb, 256gb SSD830) feels slow and pedestrian
My (personal) machines often feel slow, but they're rarely CPU-bound. I wouldn't swap them for faster CPUs at any price, all else being equal, because the insignificant performance difference wouldn't be worth the time it takes to set the new machines up.
Of course, you may be doing stuff that is frequently CPU-bound, particularly since you're using an SSD. My boxes are using good ol' 7200 rpm conventional drives. Even with SSD, though, I suspect my workloads would more often be constrained by network latency and bandwidth, and memory and bus bandwidth, than by CPU cycles. I'm also running numerous multithreaded processes, so I'd see more benefit from more cores than I would from a relatively small increment in cycles.