<- this
In the past decade or so, the only major trouble I have ever had when installing or updating systems has been crappy video drivers. On both Linux and Windows.
A pox on them all!
AMD says it will ship graphics chips using its next-generation "Polaris" architecture from mid-2016. Crucially, these processors will use 14nm FinFETs, which means they should have better performance-per-watt figures than today's 28nm GPUs. Let's be clear: today's announcement is timed to catch the hype building around the CES …
I am convinced that AMD will keep its new AMDGPU driver in the Linux kernel updated for the new chips, including open-source register include files.
Bronek -- what do you think about the state of AMD's GPL driver currently? I love Intel's thorough commitment to open source drivers, and combining this with AMD's hardware would be wonderful.
I have been wanting to try AMD again, but I'm holding out until they have a GPL version of VDPAU or equivalent. VA-API is not GPL and not nearly as good.
Thanks!
Mark
Hard to believe 14nm FinFET won't make a difference to power use, but AMD has failed to deliver promised power improvements for so many years now that I'm struggling to believe it will actually happen. I fear I'll be struggling on with my old GPU for a considerable time.
If forced to upgrade for work, I'm likely to end up back on nVidia despite their shitty attitude to bug fixes (both hardware and drivers) and the many times they just disable broken features instead of fixing them. My power bill will thank me, and I won't need AC to run the PC at full speed next summer.
I've never understood why someone who spends £300+ on a graphics card cares about it using an extra £10/15 per annum in electricity. I mean if you extrapolate it for a decade, maybe it starts to accumulate to the point you notice it, but you're probably going to upgrade the card after a couple of years anyway, if you're part of that market.
I only recall performance per watt becoming the big talking point after NVIDIA suddenly stole a march on AMD in this area. Suddenly it became the big differentiator of graphics cards in any online discussion. If the advance were used to reduce heat so you could ramp up the frequency, that would be more of an argument, but it's mainly used to reduce power consumption.
If we were talking laptops, I'd get it. But when the same thing is applied to desktops, I just don't see why it's such a big deal.
"I've never understood why someone who spends £300+ on a graphics card cares about it using an extra £10/15 per annum in electricity."
It's more selecting a card that's (for example) £40 cheaper, because it's cheaper, but over its life will be more expensive when you factor in increased power usage.
>>"It's more selecting a card that's (for example) £40 cheaper, because it's cheaper, but over its life will be more expensive when you factor in increased power usage."
Let's run the numbers, and let's use current technology. Here is power consumption at idle and at load. I'm going to use the Fury, which retails for around £455, and the GTX 980, which retails for around £410, so there's your "£40 cheaper". At idle there's almost nothing in it (about 2W). At full load, the difference is about 100W. Source:
http://www.anandtech.com/show/9421/the-amd-radeon-r9-fury-review-feat-sapphire-asus/17
Let's assume about 12p per kWh. So 100W at 8 hours per day is going to cost you around £2.88 a month. A whole year? £35.04.
So there you have it. Run your card at load for 8 hours every day, Mon-Sun, all year round, and you still won't make back your £40. And quite frankly, in that scenario you have bigger problems in your life than a small bump on your annual electricity bill.
That's why I call bullshit on this "power savings" lark. As I said, it only became the Big Talking Point when Nvidia suddenly found themselves able to beat AMD at it. If we're talking laptops, that's fine. But we're not - people keep using this to argue about desktop GPUs. I don't know anyone who would buy a high-end GPU and then freak out because it cost them £20 extra at the end of two years (a more realistic scenario). In fact, by that point such a person is probably itching to buy the latest new GPU.
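For anyone who wants to check the arithmetic above, here's a quick sketch. The 100W load difference and 12p/kWh tariff come from the post; the 8-hours-a-day usage and £40 price gap are the same worked example, not measurements.

```python
# Running-cost sketch for the ~100W full-load difference cited above,
# at 12p/kWh. Purely illustrative figures from the example in the post.

def annual_cost_gbp(extra_watts, hours_per_day, pence_per_kwh=12.0):
    """Extra electricity cost per year (GBP) for a given load delta."""
    kwh_per_year = extra_watts / 1000 * hours_per_day * 365
    return kwh_per_year * pence_per_kwh / 100

cost = annual_cost_gbp(extra_watts=100, hours_per_day=8)
print(f"Extra cost per year: £{cost:.2f}")      # £35.04

# Years of that (heavy) usage to claw back a £40 price difference:
print(f"Break-even: {40 / cost:.1f} years")     # ~1.1 years
```

So even flogging the card at full load 8 hours a day, every day, you're over a year from breaking even on that £40.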
It seems you're refusing to see that saving 100W of power from being converted to heat in your computer system is going to make for a less noisy, less annoying computer system.
Not to mention that all your components will live longer if your GPU isn't causing a 10 or 20 degree temperature rise for all other components.
Lastly, a LOT of people would be extremely happy if they didn't need a large metal box to play computer games or farm bitcoins anymore, but could do it on a reasonably sized laptop, like Razer's Blade 14. The GTX 870 in mine wasn't bad; GTA5 ran well on it. And it hasn't baked itself into oblivion yet. But I could easily see much better than this.
Nvidia hinted that, with Pascal, laptops may not require 'M' versions of their GPUs anymore.
Even if the power savings mean nothing to you, portability and noise do mean something to me.
Also, your calculation is flawed, because you're assuming everybody keeps their GPU for only one year. I keep mine for three years, so I'd be well ahead on the FinFET-adorned GPU that cost 40 bucks more.
>>"It seems you're refusing to see, that saving 100W of power from being converted to heat in your computer system is going to make for a less noisy, less annoying computer system."
I'm not "refusing" to see anything. The poster I replied to talked about cost savings. That was the argument I was addressing.
>>"Also, your calculation is flawed, because you're assuming everybody keeps their GPU for only one year"
No I'm not. My post explicitly referred to a two year lifespan and explicitly stated that the sort of person who buys a top of the line GPU is typically looking for the next latest greatest within two years. Someone interested in long-term value nearly always goes for mid-range where the depreciation is far, far less in absolute terms.
>>"If the power savings mean nothing to you, portability and noise does mean something to me."
Then you'll presumably love AMD's new 14nm chips, which are going to be available a long way in advance of NVIDIA's and already look to be far ahead of NVIDIA when it comes to like-for-like power savings.
>I've never understood why someone who spends £300+ on a graphics card cares about it using an extra £10/15 per annum in electricity.
Probably isn't the cost of electricity, but the reduced noise from running cooler, or increased power from cramming more in, for a desktop system.
I'd hazard a guess that the desktop is also the proving ground for mobile, where power consumption is important.
>>"Probably isn't the cost of electricity, but the reduced noise from running cooler, or increased power from cramming more in, for a desktop system."
Agreed. Noise is an issue. But that's not what I'm counter-arguing. It's when people start talking about the cost savings, like the person I replied to.
>>"I'd hazard a guess that the desktop is also the proving ground for mobile, where power consumption is important."
Yes, in mobile it matters.
Performance per watt is a measure of the possible total performance, as graphics cards are limited by the heat they generate. Hence why AMD stuck a water-cooling loop on the Fury X.
With AMD claiming 2x performance per watt and Nvidia already showing 10x performance, I'm suspecting it's not going to be a good year for AMD.
(Source: Nvidia just launched a mid-range 14nm core in its latest driving computer, and it's pulling more teraflops than the current Titan X, and that's still on GDDR5, not HBM.)
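The point about performance per watt being the measure of possible total performance can be sketched in one line: under a fixed thermal budget, achievable throughput is roughly perf-per-watt times the power the cooler can dump. The numbers below are illustrative assumptions, not real specs for any card.

```python
# Under a fixed board-power/thermal budget, achievable performance is
# roughly perf-per-watt times dissipatable power. Figures are made up
# for illustration, not real GPU specs.

def max_performance_tflops(tflops_per_watt, power_budget_watts):
    """Thermally limited peak throughput for a given efficiency."""
    return tflops_per_watt * power_budget_watts

# A hypothetical 28nm part vs a 14nm part with 2x perf/watt,
# both capped at the same 275W board power:
old = max_performance_tflops(0.031, 275)
new = max_performance_tflops(0.062, 275)
print(f"28nm: {old:.1f} TFLOPS, 14nm: {new:.1f} TFLOPS")
```

Same cooler, same power connectors, double the perf/watt, double the performance ceiling. That's why the metric matters even on desktops.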
Getting this half-size shrink to work well enough is probably going to be hard, and I can't wait another year or more!
I did like AMD/ATI, but my new main machine will not just have an Intel CPU, but a high end Nvidia GPU too (probably a 980 Ti) to properly drive 4K monitors, both with reasonable power consumption.
The sub-title to this article seems a little unfair. AMD produce good cards and are usually an excellent choice on the price-performance scale. Their high-end cards are also actually better for 4K. They got held back by the delay in reaching 14nm, which messed up their release schedule badly. I'll be really glad to see them start hitting their stride again.
I'm particularly interested in their new architecture to see if they have modified it much for HBM. Memory bandwidth is THE key thing you build a graphics architecture around. If you have a much higher memory bandwidth then you would want to do a considerably different design. So the two questions I'm most interested in are whether the new line-up will be focused on HBM with lower cards just being rebrands of older models and if so, how much the architecture is really changed to make use of the new technology.
As it gets harder and harder to push the boundaries of silicon, I think we're going to see a lot more emphasis and interest in being clever with what we have. Nvidia and AMD both got blindsided by the failure to decrease node size to 20nm. They both had plans they had to put on ice. Nvidia seems to have handled the crisis better.
But now we're moving again, there are a lot of interesting little details in this new architecture. They've further improved the compression they introduced in the last architecture, which eases the pressure on memory bandwidth a lot. They've significantly improved the ability to work out what doesn't need to be rendered, apparently. That's a big deal, because the problem AMD have had is the inability to keep their SPs working flat out - this helps feed them faster by passing down only what they actually need for the end result. They've improved the hardware scheduler (same benefit - lets the card get more from the same amount of work) and updated the video encoding and decoding (important to some) with H.265 in hardware.
I'm honestly pretty enthusiastic about this and looking forward to seeing it.
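To illustrate why that compression matters: the bandwidth the shader engines effectively see scales with the average compression ratio of the traffic, so even a modest improvement is like a free memory overclock. The ratios below are illustrative assumptions, not AMD's published figures; the 512 GB/s raw number matches an HBM1 card like the Fury X.

```python
# Effective memory bandwidth under lossless framebuffer compression.
# Compression ratios here are assumed for illustration only.

def effective_bandwidth_gbps(raw_gbps, avg_compression_ratio):
    """Bandwidth the GPU effectively sees when traffic compresses on average."""
    return raw_gbps * avg_compression_ratio

raw = 512.0  # GB/s raw, e.g. an HBM1 card like the Fury X
for ratio in (1.0, 1.2, 1.4):
    print(f"{ratio:.1f}x compression -> {effective_bandwidth_gbps(raw, ratio):.1f} GB/s effective")
```

Which is also why the "how much did they redesign around HBM?" question is interesting: better compression and better culling both reduce how much raw bandwidth the architecture actually needs.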
I like this too, but there is one small detail that irks me: they appear to be starting at the low-power end. I understand they do not want to inflict the Osborne effect on Fiji sales, so it only makes sense. But I would still appreciate it if they were a little more explicit about it. If I buy a Fury part this year, I'd like to know whether or not it's going to stay near the top, performance-wise, of AMD's cards, at least for the rest of this year.
The laptop-class and low-end GPUs (replacements for the 360, 370 and 380) shown initially, which are supposed to be manufactured at GloFo, will ship with GDDR5, which is more than enough and suits the purpose for this class of GPU.
The higher class of GPUs (replacements for the 390 and Fury), which follow later, would have the costlier and more powerful HBM2, and would presumably be manufactured on TSMC's proven high-performance process.
The Zen APUs expected to ship in volume next year (2017) might ship with first-gen HBM (expected to be affordable by then) as L4 cache/dedicated GPU memory, in addition to supporting plug-in DDR4 for the CPU.
And a special version of Zen for consoles might be manufactured with SSDs built into the SoC itself, I guess.
I can't help thinking that AMD and nVidia missed a trick by not incorporating silicon that could do bitcoin/litecoin etc. calculations. Think of all the coin farms they could have sold GPUs to.
Or even a separate line of low-watt, high-performance *coin chips to really worry the competition.
GPUs lost out to FPGAs and ASICs on performance per watt a long time ago. If they could nail the low power and ship massive giga/tera-hash products, people would be queuing up to buy.
A sideline of their engineers working on this could surely pull off something amazing.
I guess it's too niche.
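The reason GPUs lost that market comes down to joules per hash rather than raw hash rate: an ASIC does the same work for a tiny fraction of the energy. The figures below are rough, assumed ballpark numbers for a SHA-256 mining comparison, not specs for any particular product.

```python
# Why coin mining moved from GPUs to ASICs: energy per hash decides
# profitability. All figures below are rough illustrative assumptions.

def joules_per_gigahash(watts, ghash_per_sec):
    """Energy cost of computing one gigahash."""
    return watts / ghash_per_sec

gpu  = joules_per_gigahash(watts=250,  ghash_per_sec=1)      # ballpark GPU
asic = joules_per_gigahash(watts=1300, ghash_per_sec=4500)   # ballpark ASIC

print(f"GPU:  {gpu:.2f} J/GH")
print(f"ASIC: {asic:.2f} J/GH")
print(f"ASIC advantage: ~{gpu / asic:.0f}x")
```

At a gap like that, no amount of clever GPU sideline gets you back in the game; the economics are entirely in the electricity bill.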