AMD's new graphics architecture isn't merely about painting prettier pictures. It's about changing the way computers compute. As first revealed last month at AMD's Fusion Developer Summit, the chip designer has gone out of its way to ensure that its future APUs – accelerated processing units, which is what the company calls it …
As long as it runs DOOM5 at 100FPS, I couldn't give a toss.
As long as it decodes Blu-rays and Flash video, I couldn't give a toss.
So now all they need is a benchmark to compare AMD APUs to Intel. Or the world will say that Intel is better because AMD does not approve of SYSmark.
Seems to me that AMD needs to support some OPEN Benchmark.
Huh? What are you getting at?
It's commonly known that Intel's top end CPUs are a decent bit faster than AMD's top end offerings...
But Intel doesn't even compete in the same universe as the top end AMD GPU offerings...
So I could tell you exactly how those benchmarks would look. Anything that is heavier on the GPU, AMD would win, anything that is heavier on the CPU, Intel is likely to win. Shock!
I wonder if they will be unable to support Linux-based OSes with this.
This comment made me lol!!!
You must not be aware of how things are done in Open Source....
First, it is not "they" that need to support Linux-based OSes.
Second, Linux may very well get support (or at least better support) for IOMMUs before Windows.
If it's open, it's usually available. That's why nVidia drivers exist for Linux, but Radeon drivers are still "experimental": ATI didn't support an open driver interface.
When It Is They
It would be "they" if they chose to keep the critical details of the hardware secret, so that no one could program it themselves.
Fortunately, that's pretty unlikely if they intend the thing to be used for general-purpose programming, rather than just for one DirectX driver to be written for it, with everyone else just calling it (or complaining that there isn't also an OpenGL driver).
But since it's derived from video card technology, I can understand the fears, even if they are wholly unwarranted in this case, as you note.
English language + techno-babble
So in summary....
in AMD's FSA the VLIW-based SIMD is replaced with a GPGPU architecture, created with a CU + GPU by Scaling tasks across its Vectors on a per Wavefront basis. This gives a great QoS for the GUI, and will be called GCN, according to the AMD CTO.
Dammit cut that out
I haven't had coffee.
that's why your name is jesus: nobody except your father understands what you're saying.
you win by default.
Jesus Wins. Fatality!
Are you kidding?
Even I don't understand him most of the time!
(Heh. God Puncher. The militant atheist superhero...)
And like a good data backup guru ...
... Jesus saves.
they can design the best hardware in the world but there drivers are garbage. i sent my last ATI card back as im not waiting 8 weeks for them to fix drivers.
and how many decades ago was that?
Since AMD took over ATI the drivers have improved a vast amount, and since the days of the 3000-series cards they have been just as good as nVidia on Windows, and recently they have been better at just about everything.
In the last year or so they have provided Linux drivers on par with or maybe even better than NV. True, their latest cards may need the Linux crowd to wait a few weeks, but that is because they are NEW chips which need NEW drivers, as opposed to NV which has done naff all apart from re-naming old chips with the occasional speed bump due to process maturity.
"there" + "i"?
I could go further but why bother.
Yes you, boy, at the back of the class!
Oh dear, I've really got my mortarboard in a tizzy this morning...
"You program it in a scaler way" - that would be "scalar".
On a more relevant note, a company that has become all but irrelevant tries to claw its way back by designing things that nobody wants and that simply don't perform.
(Now sits quietly and waits to receive the ceremonial Inverted Thumb of Death from the AMD fanbois)
You, sir, are of the highest intelligence because you happen to have, with the unusual sharpness of a Moriarty-like mind, recognized, in a flash as it were, what capitalism is all about.
Except for the "all but irrelevant", "things that nobody wants" and "simply don't perform" parts.
Have a cookie. It's a bit mouldy.
No matter how good AMD chips are...
...Intel have a bigger marketing budget, so will ALWAYS be better in the public's eyes...
That's always the odd thing.
AMD moans about Intel but Intel does one thing that AMD never does...Advertise.
So when folks go to buy a PC they get a choice of Intel or AMD.
Well they've never heard of AMD but they get to hear the Intel jingle at least three times a day on TV so they buy from the one brand they have heard of.
AMD needs to sack their marketing team, get a new one and start spending on some jingles etc. After all, there is only one other company doing it in their field, so how hard can it be?
Even Acer has adverts in the UK.
As for no-one wanting AMD, I build my budget PC boxes with AMD CPUs in them. Why? The saving of using AMD enables me to put a 60GB SSD in the box. Customers don't notice the difference between an Athlon II and an i5, but they notice the SSD. They also get USB 3.0/HDMI/usable integrated graphics, which they don't get on budget Intel motherboards.
not strictly true anymore
The new Intel Celerons (E3400) are absolute crackers. Not pricey, and faster than a bunch of Athlons in similar brackets.
Budget Intel boards are not as well equipped in most cases as AMD-based ones.
A lot of them still have serial and parallel ports on them. To get the modern ports you have to spend £15 or more.
On a budget box that's eating into my profit for no benefit whatsoever.
Any Windows developer?
I heard game and multimedia developers choose the Intel compiler for its excellent optimisation characteristics, and the resulting executable tends to check for GenuineIntel to perform well. Otherwise, on AMD, it is horrible.
So, if it's true, how will AMD manage to convince developers to use their CPU/GPU optimisations while they sit idle with thousands of high-end apps ignoring their old-fashioned CPUs?
As a user who hates the Intel brand itself, I am almost convinced to go with an i5 + nVidia GPU on Windows 7 because of the sad fact above. Oh, I also decided to boycott ATI because it refuses a very easy Win7 recompile of the Vista driver.
There are many compilers that produce excellent code for both Intel and AMD. Devs do not have to use Intel compilers.
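For the curious: the vendor-dispatch behaviour described above keys on the CPU's vendor identification string. A minimal sketch of how software can read it, assuming a Linux `/proc` filesystem (the `cpu_vendor` helper is illustrative, not part of any real library):

```python
def cpu_vendor(cpuinfo_path="/proc/cpuinfo"):
    """Return the x86 vendor_id string, e.g. 'GenuineIntel' or 'AuthenticAMD'."""
    with open(cpuinfo_path) as f:
        for line in f:
            # Lines in /proc/cpuinfo look like: "vendor_id\t: GenuineIntel"
            if line.startswith("vendor_id"):
                return line.split(":", 1)[1].strip()
    return None  # field absent, e.g. on non-x86 hardware

if __name__ == "__main__":
    print(cpu_vendor())
```

A dispatcher that branches on this string (rather than on actual feature flags like SSE2 support) is exactly what the comment above is complaining about.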
We should all thank AMD.......
for without AMD, Intel would still be selling the 500MHz Pentium 5 at $1000.00 per unit.
AMD is far from irrelevant. Now VIA is irrelevant.
AMD is taking RADEON and changing the core of computing.
Intel has Sandy Bridge which is a core duo glued to crummy graphics.
Intel has no graphics capability. RADEON was an expensive purchase but just maybe they knew what they were doing when AMD overpaid for it.
They revolutionised CPUs. I remember how great my first Athlon was compared to the Intel chips of the time, and cheap too!
But let's ignore the K6s though. They were baaaaaad (as opposed to my OC'd Celeron, the mighty 300 -> 450).
Said it before, will say it again.
All that is needed for many tasks is an FPGA co-processor. Load an FPGA image that does exactly what you want, the CPU sends whatever data is needed to the FPGA, and the FPGA squirts the result out of the other end. For stuff like transcoding video this would be a PoP. You could have FPGA images in a repository, just like you do with Linux packages. This is not difficult to implement. You could have several versions in the repos, one for each vendor (synthesis and place-and-route are different for each vendor), just like you do with i386, x64, ARM, MIPS, PowerPC.........
"All that is needed for many tasks is an FPGA co-processor."
Indeed. I've wanted a motherboard (or add-in board on a fast bus) with a few smallish FPGAs to play with for a while too... load a transcoding image into these two, a decrypt into that one, code-generation ones into this guy - swap 'em around when you've finished one job etc etc. It wouldn't cost pennies, but shouldn't be too pricey either - there's probably more work in getting the memory access sorted and the IO traces properly matched than anything else on the hardware side, and that's just fiddly rather than expensive.
Obviously this would be mostly, though not entirely, just to play with - but surely there's nothing wrong with that!
Plenty of eval boards with CPU+FPGA. Tend to support Linux OotB but you didn't want any other OS did you?
You'll find the Linux kernel in many, many OSs. Certainly way more than Darwin or NT.
What was your point?
@the big yin
Errrrrrr.... that there are a lot of the hardware evaluation boards that are designed for people to experiment with CPU and FPGA hardware architectures - which sounds very much like the thing Tim is expressing an interest in. These boards usually ship with a Linux kernel and a GNU toolchain as these tools, and open source in general, are a very powerful and effective way of enabling the intended experimentation and evaluation.
Is that ok with you?
AMD's problem is getting major software developers to code for this new architecture. The vast majority of developers code with Intel/NVidia in mind, as they have the lion's share of the market.
Without market share and the marketing power that comes with it, AMD can build an amazing processor and still fail, as no-one will be interested in taking advantage of it, for fear of marginalising their software.
I am doing work with AMD GPUs, and OpenCL is what I currently use. There are pros and cons, but it provides me with the ability to run my code on/across a wide range of compatible systems. My devbase runs on the big servers at work and a ~200UKP desktop system with a ~70UKP GPU at home.
No, there is no catch-22
"The vast majority of developers code with Intel / NVidia in mind, ..."
You what? The only people who code with NVidia in mind are the folks who write drivers for their cards. A high-nines proportion of the programming community *can't* write NVidia-specific code in their usual coding environment. Similarly, although it is slightly easier to write CPU code that runs on Intel but not AMD hardware, hardly anyone does it because most of the compilers for most languages won't let you, and for the rest it is fairly tedious.
There is no catch-22 for this or any other radical architecture. Here's why...
If AMD can create a C compiler that generates decent code for the heterogeneous system (and by decent, I mean faster than just using the homogeneous part, even for algorithms that aren't embarrassingly parallel) then the Linux kernel will be ported within a month of the systems being available in shops. Others will provide cfront-style front-ends for all common languages. In many cases, this is already how compilers for such languages are written, so this will be trivial. Once you have a complete Linux distro running, faster than is possible on a homogeneous offering, Microsoft will announce a Windows port because if they don't then they'll spill oceans of blood in the server market, where Linux already has enough market share to be taken seriously.
If AMD can't create such a compiler, then their new architecture just isn't as great as they are claiming and no-one (outside AMD) will care.
@simbu - Huh?
AMD currently has a slightly larger market share than Nvidia for desktop/laptop graphics doesn't it? So is Intel / NVidia "the lion's share of the market"? Just as likely to be Intel / AMD I'd have thought... unless I'm missing the point and you're talking about something other than graphics.
Agree with the basic issue you raise though - the nuances of the dominant cpu (Intel) are far more likely to get an optimised code path compared to carrying out a major re-write to support a radical improvement (AMD) but with a smaller installed base. AMD need to make it very easy for developers to get a decent advantage from this new hardware.
Vast majority of developers...
There is Nvidia-specific kit in the high-end desktop computation market. These are expensive machines with multiple special-purpose GPU cards. They utilise a proprietary Nvidia API. However, they are by no means general-purpose machines. They are very specialised, much like a computing cluster.
This is a small niche. Seems to do well for particular problems. Not at all general purpose.
AMD seems to be trying to take that approach onto the desktop. Dunno if it will really matter there, though. Between Doom and XBMC, most people seem to be pretty well set with the current kit.
Timid doesn't topple
"If AMD can create a C compiler ..."
I agree with everything you said, however, playing along with the nonsensically exaggerated title of the blog post, your recipe can't/won't topple nuttin'.
It's high time AMD goes all in. Start from scratch. Throw out everything everyone assumes about how software and systems "have to" be developed and operate - forget the bad stuff, improve on the best stuff to create a top-to-bottom re-imagined stack and life cycle - complete with OS (not another clone of a clone of a clone of Unix pretending to be special), GUI and software development tools (compilers? There's a dinosaur) and apps, and an unprecedented "contract" with the user.
Make it run everywhere, but run best on AMD chips. Focus on mobile, but support old-fashioned non-mobile devices too. :-) "Sure, Android devices are good, but have you seen Android+ ?"
~$100,000,000 <5 years. Completely underground.
Risky, but toppling isn't for the timid.
Re: No, there is no catch-22
If I can just reply to myself...
"Microsoft will announce a Windows port because if they don't then they'll spill oceans of blood in the server market, ..."
I'm *assuming* that this heterogeneous thingy is fully virtualisable. Current GPUs are not, which is why you don't get bleeding edge games performance in a VM. However, it is obvious (to me!) that if you are depending on the GPGPU for performance on general benchmarks then you won't be considered AT ALL in the modern server market unless you preserve that performance advantage under virtualisation.
So, um, AMD could block their main route to market if they choose to omit virtualisation support in the first release.
Let us hope that AMD will produce a great processor and give Intel a kick up the butt. If the processor is good enough, it will become popular.
Round and round we go, where we stop, nobody knows!
Aren't we at the Itanium/x86_64 point again?
Surely the problem with all of these APUs or GPGPUs is that suddenly we will have processors that are no longer fully compatible, and may run code destined for the other badly, or possibly not at all!
The only thing that x86 related architectures have really had going for them was the compatibility and commodity status of the architecture. For a long time, things like Power, PA, Alpha, MIPS, Motorola and even ARM processors were better and more capable than their Intel/AMD/Cyrix counterparts of the same generation, but could not run the same software as each other and thus never hit the big time.
Are we really going to see x86+ diverging until either AMD or Intel blink again?
Transmeta were ahead of their time.
In their design the CPU or processing engine could be anything and the instruction set be a sort of software running on it.
But obviously the future of x86 is a little hazy, given that ARM-based devices may start to dominate the personal computer market.
And that, boys and girls, is why you should use a long password. Modern GPUs can calculate hashes at an extraordinary rate, making brute force attacks on 6, 7 and even 8 character passwords eminently doable.
I'd be pretty pissed if my password was being stored hashed without a salt. I'm not naive, I know it happens, but it should be illegal under the DPA for lack of due care.
Not saying this is an infallible solution, but I'm always amazed when a service warns users that their passwords have been compromised.
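For anyone wondering what "salted" means in practice, here is a minimal sketch using Python's standard hashlib. PBKDF2 stands in here for whatever slow, salted scheme a real service would choose (bcrypt, scrypt, etc.); the function names are illustrative, not from any particular library:

```python
import hashlib
import os

ITERATIONS = 100_000  # many slow iterations make GPU brute force far costlier

def hash_password(password, salt=None):
    """Return (salt, digest). A fresh random salt per user defeats
    precomputed rainbow tables and makes identical passwords hash differently."""
    if salt is None:
        salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def verify(password, salt, digest):
    """Recompute the hash with the stored salt and compare."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return candidate == digest

salt, digest = hash_password("correct horse battery staple")
print(verify("correct horse battery staple", salt, digest))  # True
print(verify("tr0ub4dor&3", salt, digest))                   # False
```

The salt does not need to be secret, only unique per user; it is stored alongside the digest. The iteration count is what pushes back against the GPU hash rates mentioned above.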
GCN is not a TLA as defined by those idiots at PCMag.com
Since I couldn't understand most of the rest of the article I dropped into Pedant Mode.
I know the Reg writers don't have time for things like correct spelling, grammar or semantics; but when you give links to definitions like TLA could you please use a correct (or more correct) definition?
In this case the Wikipedia definition allows for the fact that in most instances of TLA, the "A" stands for abbreviation, not acronym as PCMag seems to think.
GCN is a three letter abbreviation, not a three letter acronym.
FFS! Since when has Wikipedia been the definitive guide to English? Surely Wikipedia is only for lazy students.
TLA has always stood for "Three Letter Acronym" for as long as I can remember.
Wikipedia isn't necessarily the best authority on this.
TLA - don't like Wikipedia? How about a DICTIONARY?
LOL - you may not like Wikipedia - that's irrelevant.
My point is that the author of the article, and PCMag.com (and apparently you and AC) do not understand the difference between an abbreviation and an acronym. They are not synonyms.
Here is a link you can use... www.dictionary.com
Like I said - I'm in pedant mode, but TLA is not a TLA by your definition.
. . . and other misconceptions
Are you sure there's not an "i" instead of an "o" in your last name?