AMD gases up Bulldozers for Intel push back

Advanced Micro Devices is in a number of tight spots at the moment, but the company is optimistic that its future "Bulldozer" Opteron processors due later this year will let it dig in and grab some desperately needed - and profitable - server market share from archrival Intel. The first Bulldozer Opteron chips, the 16- …


Big Brother


Even if AMD were able to produce a chip that outperforms Sandy, AMD's market share would still be under Intel's direct control.

In RECENT ANTITRUST PROCEEDINGS, subpoenaed Intel documents showed that Intel secretly bribed and threatened EVERY major computer manufacturer around the world.

For example:

1. Paying $6 billion in bribes (combined with threats) to get Dell not to sell AMD-based products.

2. Bribing/coercing IBM with $130 MILLION not to launch an AMD-based product line.

3. Bribing/coercing HP with hundreds of millions to keep AMD's market share at 5% or less.

Intel is an unusual competitor - one that can't be beaten through innovation. Corrupt bribes and threats seem to be Intel's highest-margin product.

Intel apparently can't be stopped.


The day of the conspiracy theorist

The facts you listed are interesting and I don't doubt them. Intel is definitely known for their crappy business tactics. I have internal knowledge of several other organizations that have even worse tactics for getting sales. I always thought it was quite humorous that, as an engineer, I was forced to take an ethics course at university, but the salesmen who worked their way up through the ranks of major companies never did... possibly if they had, they would never have been able to make the climb.

But please don't confuse the facts here.

Pentium 4 was a major design disaster for Intel. It was a processor that failed on so many levels, it was just awful. Among its failings:

- They overheated like mad!

- They never had a proper chipset solution

- They had a very long pipeline (though we consider that part good these days) which wasn't tied tightly enough into the cache prefetch mechanism, so a mispredicted branch could easily flush the full pipeline and trigger an L1 cache flush.

- They had terrible bus performance

- They lacked an integrated memory controller to compensate for the terrible bus performance

- They used enough power that quiet systems were completely out of the question

so forth and so on. During the dark years after the Pentium 4 and before Core, AMD made MANY steady improvements to their processors. A tweak here, a pop there. Intel focused almost entirely on Itanium for 64-bit, and AMD introduced x64. If Intel could have sold the world on a new architecture (instead of charging $3000 a chip for it), Itanium was by far a superior architecture to more or less anything on the market. Even now, its technology is possibly the best there is. But technology doesn't sell computers. It never did. These days, a new instruction set would be fine in the same price range, but back then, we were locked into x86.

Well, Intel is really amazing sometimes. Thanks to being the massive multi-gazillion-dollar-a-quarter company they are, they had the luxury to fix what they did wrong with the Pentium 4. And that's why your statement is wrong. Intel invested 5 years into running (if I recall correctly) 7 independent development teams around the world to reinvent the x86 processor. They were all in competition with one another, and there were some basic requirements.

1) It had to be good enough that the world would forget the Pentium 4

2) It had to be good enough that AMD would not be able to catch up again anytime soon

3) It had to be solid, with a way to fix bugs in the field.

4) It had to be 100% compatible with existing software.

So the 7 teams started off competing against one another. Eventually the Israeli team won the competition and the next few years were spent integrating the best of the best from the other 6 teams into the new Israeli chip making it that much better.

The result was the Core series of chips.

What makes your comment truly just crap is your obscene ability to hold a grudge this long. From a technological perspective, if AMD isn't just full of marketing BS, it'll be the first time in years that they are competitive against Intel again. Intel's sales assholes definitely hurt AMD, but let's be frank: the Core 1 and later architectures have been clock-for-clock clear winners in performance and performance per watt over AMD for a long while now.

Since you don't seem to care about things like actual technology and numbers, I'll avoid explaining things like AVX and such to you. You'll just have to come up with a reason to dislike those on your own. But an intelligent person wouldn't discount them too quickly.

If you're interested in actual facts, let me point out something here.

Bulldozer has half as many floating point units as integer ALUs. This means that while it may perform really well for things like business apps and virtualization, it couldn't hold Intel's jock strap in high performance computing. So, if all you're doing is running a farm of web servers... great, Bulldozer might do wonderfully. In fact, I'd even go so far as to say that Nvidia will offer something quite a bit better when they get around to releasing their ARM chips. But when it comes to raw, brute number crunching performance, Intel will be king a while longer. No, don't bother talking about GPUs; they have a different purpose altogether. If you don't know how the code differs for them, don't go there.

BTW, you're the second idiot I've come across today peddling bullshit conspiracy theories that are so full of shit that someone has to respond just in case another person decides to read your post and believe it.

P.S. - I actually like AMD. I use them for lots of cases where I don't need great performance but just need a cheap CPU.



Intel is under much greater scrutiny, but don't kid yourself that the days of backroom "incentives", both financial and punitive, are over. It is the only way Intel knows how to operate, and their psychopathic, paranoid corporate culture is such that real change is probably impossible.

I think it is incredible how much praise you give Intel over their development processes. You say Intel has 7 development teams. Yet AMD, with a single development team, still produces a chip that has 95% of the performance, and even matches or exceeds the competition in some benchmarks and applications. If Bulldozer comes close to Sandy Bridge it will be a great achievement, because AMD's limited engineering resources have been working on THREE CPU architectures on two processes simultaneously. If it's faster, it would be incredible.

Bulldozer has AVX. In fact, the original design included the more advanced SSE5 instructions, but they had to be scaled back to be compatible with Intel's lesser set (you know, 'cause of a handy dominant market position). Bulldozer has more instructions available to users, and arguably more advanced ones, than the competition.

AMD has said that their analysis of the FPU (presumably in their Opterons) showed them that the FPU of a core is massively under-utilised for the number of transistors required. This analysis told them that if they could share the FPU between two cores, the FPU utilisation for a given number of transistors would go up and the saved transistors could be put to other uses. AMD feels that this trade-off will end up better overall, even with lower IPC than if you had a full FPU for each core, because those saved transistors can be used for additional or bigger cores. Another mitigating circumstance is that Bulldozer is designed (apparently) for higher clock speeds than the K8 and Phenom generations. Lower IPC, but higher clocks and more/bigger cores.

The Bulldozer FPU isn't a single entity where either one or the other core in a twinned "module" has access but not both. The Bulldozer FPU has TWO pipelines which each core can access. In addition, a single core can use both pipelines if the other core isn't using them.
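To make the sharing scheme concrete, here is a toy Python sketch of the idea described above. It is an illustration only, not AMD's actual design: one module has two 128-bit pipelines; with both cores busy each gets one, and a lone core can drive both (for example, as a single fused 256-bit AVX op).

```python
# Toy model of a Bulldozer-style shared FPU: one module, two cores,
# two 128-bit pipelines. All names and numbers are illustrative.

PIPELINE_WIDTH_BITS = 128
PIPES_PER_MODULE = 2

def fp_bits_per_cycle(active_cores):
    """Peak FP bits issued per cycle by one module.

    With both cores active, each core gets one 128-bit pipe.
    With one core active, it may use both pipes (e.g. as a single
    256-bit AVX op), so its per-core throughput doubles.
    """
    if active_cores not in (1, 2):
        raise ValueError("a module has exactly two cores")
    total = PIPELINE_WIDTH_BITS * PIPES_PER_MODULE
    return {"module_total": total, "per_core": total // active_cores}

print(fp_bits_per_cycle(2))  # each core: 128 bits/cycle
print(fp_bits_per_cycle(1))  # lone core: all 256 bits/cycle
```

The point the model captures is that the module's total FP width is constant; only how it is divided between the cores changes.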

Future Bulldozer-derived processors will have a Fusion graphics or graphics-like core integrated. I'll address this after the next point.

Your ignorance regarding the capabilities of GPGPUs was the point that finally convinced me just how silly your arguments are. Graphical cores, because of the massively parallel tasks required of them when rendering textures, are FP monsters that shame the capabilities of traditional CPUs. A GPGPU simply kills a CPU's FPU, single or double precision. A quick search reveals this in a nice table for you: http://www.sisoftware.net/?d=qa&f=cpu_vs_gpu_proc

So getting back to a point I made earlier: given a GPGPU spanks a traditional FPU under most circumstances, and a future Bulldozer will incorporate a very capable GPGPU, what do you think will happen to the FP performance of Bulldozer in the future? A further point: given Intel lacks a GPU capable of GPGPU tasks, or even the capability to make one in the short to medium term, what do you think future Intel FP performance will be compared to Bulldozer?

You are only partly right regarding the "different code", and to that I reply: for now. By the time 2013 comes around and AMD has integrated a GPGPU into their CPUs, GPGPUs and their associated APIs will be a mature and known proposition. Also, many of the disadvantages that current GPGPU implementations have compared to an FPU (see the "most circumstances" condition above) will not apply, as the GPGPU will be on-die instead of at the end of a bus (is Fusion the Rosa Parks of GPGPUs?).

It was with your GPGPU argument that I knew you had drunk the Intel "kool-aid", because you simply regurgitated Intel's arguments. What a shock that Intel dismisses GPGPUs after the massive failure of Larrabee. What a coincidence! I'm sure when Larrabee is finished (that is, never) Intel will embrace GPGPU and develop a half-decent GPU OpenCL implementation (instead of the half-baked, unsupported rubbish they currently push out).


Good God man! Get a clue and read something other than Intel press releases. You didn't know that Bulldozer fully supports AVX (a 30-second Google). You didn't know how the Bulldozer FPU actually works, or why AMD made these choices (another 30-second Google). You don't seem to understand how GPGPU works (facts often can't overcome a convinced mind). You come across as a snide, patronising ass (no one can help you with that). And yet you berate others.

I have no idea if Bulldozer is going to be good or not. I have no idea if the "module" approach is going to work in real life. But I have at least attempted to educate myself about the basic facts before putting down others with erroneous assumptions.


AMD is great at ...

talking .. less able at execution and production ..

seems a bit early to get excited .. 16 cores requires a lot of overhead for communication between the cores ... what are the chances AMD can get good yields with all 16 cores working ?

3.5GHz? .. I wouldn't count on that in quantity either ..


We are Intel of Borg

Resistance is Futile. Prepare to be Hyperthreaded.


@AC & @Cheesy... You both have valid points...

...but I believe the truth lies somewhere in the middle. Here's why.

Way back in 2000 AMD was executing extremely well, beating Intel to the 1GHz punch with the Athlon (K7), while Intel was woefully behind. Intel apparently rushed the Pentium III 1.13GHz CPU to market to counter the excellent-performing 1GHz Athlon with its high-speed on-die L2 cache. Unfortunately for Intel, the PIII architecture proved unable to run successfully at that speed and the 1.13GHz CPU was immediately recalled.

Intel continued to make mistakes by abandoning the PIII architecture altogether for the deep-pipeline Pentium 4. Yes, the P4 never performed as expected and ran as hot as Hades. To be fair, the P4 was designed to reach 5GHz-plus speeds as competition encroached, which was the reason for the deep pipeline. However, excessive heat prevented the P4 from ever getting anywhere near that threshold. It is over what Intel did next, when they realized that AMD was going to take the CPU performance crown away from them, that you both have valid points.

Intel did indeed decide to take the low road. Intel pulled out all the stops to try to keep the Athlon from coming to market by threatening and bribing virtually every manufacturer in the PC market it did business with worldwide, even the motherboard manufacturers. So here we had an excellent new CPU architecture, but no motherboards to install it in. I know. I was one of those waiting for the motherboards to hit the market. I remember that some of the manufacturers got around that by selling 'white box' unbranded motherboards. It took years for the depth of Intel's disgusting and unprofessional monopolistic anti-competitive practices to surface. Japan and the EU both found Intel guilty and fined them, but not nearly enough, in my opinion. Back in the US, Intel got a slight slap on the wrist and eventually had to pay AMD some mad money.

It's true that AMD lost an immense amount of income from the loss of sales due to Intel. But even still, once motherboard manufacturers stepped up to the plate, AMD did incredibly well, in spite of Intel's best efforts to thwart it. The problem was that AMD enjoyed the performance crown way too long and subsequently sat on their arses while Intel developed the excellent Pentium M (Core) line of CPUs that regained the lead from AMD, and Intel hasn't had to look back since.

I personally feel that Intel should have been slapped down substantially for taking such a cowardly way of trying to exterminate AMD to compensate for falling behind in the market. OTOH, AMD hasn't executed well in years, even though they enjoyed supreme success with the Athlon K7 architecture for a long time. This was, again, in spite of Intel's best efforts. AMD's Opterons kept them up even after the architecture began to get stale. AMD not only killed their own lead in CPU performance and market share, but have also managed to kill an excellent GPU competitor, i.e., ATI. I wonder how much more advanced the state of GPU development would be had AMD not purchased ATI and competition with Nvidia had been allowed to proceed as usual. It just boggles the mind that GPU architecture is still in the sub-1GHz, single-core days of yore.

The bottom line is that Intel, with its seemingly unlimited R&D spending, has indeed been producing excellent products since the P4 fiasco and has no reason to stoop to such disgusting tactics as before. I'm sure that had AMD not been cheated out of so much income, it would have had more money to invest in R&D and could possibly have brought new technology to market faster. The big question is, would it have made much difference, since AMD, having suffered years of incompetent leadership, has been more like a rudderless ship? As an example, both Bulldozer and Fusion have been in development for a long, long time. At least they're both due to be released this year. I wish AMD much luck, as competition in the marketplace benefits us all.

Silver badge

Not sure what you mean about GPU architecture

"It just boggles the mind that GPU architecture is still in the sub 1 Ghz, single core days of yore."

I'm not sure where you're getting your information from; to the extent that anybody cares about clock speeds, GPUs exceeded 1GHz a long time ago, and they're all in the hundreds of cores nowadays. The top-of-the-range workstation GPUs from Nvidia (such as the Tesla C2070) have 448 cores, and those, like the ones on the consumer cards, are fully programmable in various C-like languages via CUDA and OpenCL.

So I genuinely think I must have misunderstood your comment.


@ Jimbo

I'm the Coward who replied to The Clown (in a moment of less than calm weakness). I wish the Intel/AMD and ATI/Nvidia arguments were a bit more civilised, and I had more self-control.

To address some of your great points.

I may use Intel products, and I may like them, but I find Intel management offensive. I think an objective, historical examination of any monopolistic company (or country, for that matter) would find a management that felt entitled to maintain their position by means often unethical and sometimes illegal.

Although I think the AMD settlement was pathetic compared to the damage Intel did, AMD received significant concessions from Intel regarding the terms of the x86 licence, which would be worth many, MANY billions more to AMD into the future. It is also debatable whether AMD could even have capitalised on the superiority of the K8 over the P4 under free market conditions without Intel interference, given that AMD was supply-constrained during this period. AMD had to contract out additional production to Chartered Semi (now part of GloFo) because their own fabs were running at maximum capacity.

AMD's processor development is rather difficult to defend. Consider that since the K7's debut in 1999, the basic architecture has lasted AMD 12 years so far, has been shrunk from 250nm to 32nm (in Llano), and has been refined through generations with major changes ranging from an integrated memory controller to a full graphics core. Their smaller engineering resources might be a contributing factor, as well as AMD's insistence on so-called "proper engineering" (monolithic multi-core: X2 and Phenom) instead of getting a product quickly to market (MCM: Pentium D, C2Q). Such engineering changes that were firsts for x86 (IMC, dual-core, quad-core) proved quite a challenge for AMD, and one change that coincided with a process shrink was a disaster (Phenom). Along with the processor development, these process shrinks were getting harder and exponentially more expensive. In the meantime, Intel has gone through several architectural changes and develops along an ambitious (and very expensive) tick-tock strategy.

It is interesting that Bulldozer is showing P4 characteristics, with long pipelines and high clocks. Will Bulldozer fare any better?

You can't compare GPU and CPU clock speeds because they are designed for very different functions. GPUs have hundreds of "cores", although they are called shaders. Additionally, CPU clock speeds are calculated as the "base clock" x multiplier. In previous generations the base clock (the "real" clock speed of a CPU) was roughly 200 to 266MHz or so. GPUs don't have multipliers, so you can see that the 600-900MHz GPUs run at is a blistering pace.

You may ask why CPUs don't run at such speeds, and it has to do with the basic architecture. CPUs are extremely flexible (other than Intel's Atom!) in the type and order of calculations and so are very complicated internally. GPUs are inflexible and their shaders are very simple internally. It is for this reason that CPUs take so long to come to market while GPUs go through a major revision every year. A CPU is largely custom-built, while much of a GPU can be copied and pasted hundreds of times.
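The base-clock arithmetic described above works out like this (the figures are round illustrative examples, not measurements from any specific chip):

```python
def effective_clock_mhz(base_clock_mhz, multiplier):
    """A CPU's advertised frequency is its base clock times a multiplier."""
    return base_clock_mhz * multiplier

# An older CPU: roughly a 200MHz base clock with a 15x multiplier.
cpu = effective_clock_mhz(200, 15)   # 3000MHz advertised

# A GPU has no multiplier, so its ~850MHz engine clock IS the real clock.
gpu = effective_clock_mhz(850, 1)    # 850MHz

print(cpu, gpu)
```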

Your AMD/ATI point is debatable, but AMD hasn't destroyed ATI. In fact, I would argue that AMD has improved the ATI side. We may despair of AMD's CPU execution, but the independent ATI sucked at getting their products out without long delays. Since AMD has been in control, not only has ATI been executing flawlessly, they have been better than Nvidia in their engineering and scheduling. The last independent ATI series was the HD2xxx cards. They were hot (no, they were H*O*T), slow, and the GPU was huge. The architecture was partially fixed with a die shrink in the HD3xxx series, but it still sucked.

With the HD2xxx such a failure, AMD decided to re-evaluate the way GPUs were made. Every generation was bigger, hotter and packed more transistors than the last, with each side brute-forcing greater performance. With this evaluation, AMD engineers decided on a radical new approach; one that took MASSIVE, swinging balls to initiate. They decided to make GPUs smaller. O*M*G!

The result was the shocking HD48xx. Anandtech did a terrific write-up on the behind-the-scenes development of the HD48xx, with interviews with the development team. It is one of the best tech stories I have ever read, and I fully recommend it. You can find it here: http://www.anandtech.com/show/2679

So with AMD/ATI taking a new approach to GPUs, I might agree with your assessment of the state of GPUs if it weren't for the fact that for the last 2 or 3 generations AMD has been out-executing and out-engineering Nvidia. And making faster cards, if not GPUs. Nvidia has been making themselves look like Intel's GPU team for the last couple of years. Unthinkable!

It is interesting, because the new "module" approach AMD is taking with Bulldozer might be a CPU version of the strategy they took with the HD48xx graphics cards.



Dear Jimbo in Thailand,

I was with you for the most part, right up to the point you said AMD ruined ATI and GPU competition, which also seems to be the point at which you lost touch with reality...

The last time I checked, Nvidia was sucking wind trying to catch ATI. Please see the El Reg story titled "AMD claims 'fastest graphics card in the world'", dated just three weeks ago.


I will admit AMD has had some trouble executing in recent years, but what most everybody fails to take into account is the damage that intel did to AMD's R&D budget! This goes far deeper than AMD just failing to execute; it's more like AMD did the best they could considering the stranglehold intel put on their revenue flow. Considering the 800lb gorilla AMD had to wrestle with, I'd say they've done pretty damn good.

Through this entire saga, what appears to me is that AMD does its best design when its back is against the wall. From the beginning of time (in the computer age) AMD had been only a small, non-threatening player in the CPU world, having virtually no market share and even less revenue, seeming to have no chance whatsoever to prosper. What they did have was a man with a vision, the drive to realize that vision, and a few good engineers. With that vision and those few good engineers, AMD set out to accomplish one single goal: to change the face of modern computing for all eternity. And they did just that!

Whether you people realize it or want to admit it, AMD is responsible for all the luxuries we enjoy today in modern computing. That man with a vision, Jerry Sanders, and his lead engineer, Dirk Meyer, changed the world of computing forever! They did so in several ways. The first was bringing to market the venerable AMD Athlon (K7), a whole new design in processor architecture: it was fast, it was efficient and it was powerful! They took the world by surprise; nobody, including intel, ever figured AMD could bring to market a CPU that could not only compete with but outperform the mighty intel. But they did, and they didn't stop there: they beat intel to the 1GHz mark, which really chapped intel's hide and brought out their dark side, which was another win for modern computing (I'll explain that in a minute).

AMD continued to make a mockery of the PII and PIII architectures with the wonderfully refreshing Athlon XP(+), to which intel countered with the P4. AMD answered that with one of their biggest game changers: AMD64 and the Opteron. AMD did such a good job with AMD64 that intel copied it verbatim and baked it into their future chips. intel continued their dark practices of bullying OEMs and board makers, even stepping up their efforts. AMD, despite this, continued to bring game-changing technology to the table. Next up, multi-core computing, enter: the Athlon 64 X2! The face of modern computing had just changed yet again!

intel, realizing that their bully tactics were failing them, decided to change tack; they realized that brute force was not going to make AMD go away. What happened next was the second biggest game changer AMD brought to the table: intel started innovating again! intel dropped their plan to kill off x86 and force Itanic on the world, and began dumping money into x86 R&D, ultimately leading to the CPUs we're seeing today.

But through it all, the biggest game changer AMD brought to the table was competition for the 800lb gorilla. They brought products to the table that were good, that performed well, and that were reasonably priced, forcing intel to sell their products based on merit and at a fair price. This in turn also opened the eyes of regulators worldwide and began the process of bringing intel's dark side to a close, making the world aware of exactly how intel operated and bringing some accountability to their plate.

So whether you people are willing to admit it or even see it, AMD is a hero of biblical proportions in the world of modern computing! Without AMD and their drive, their willingness to fight against all odds, their will and desire to stand up to the brutal and unforgiving war machine that was the intel of days past, we wouldn't be enjoying the levels of computing we are today!

This is no bullshit.


and programmers are just learning to use multi-core processors?

Come on, dudes, we were writing multi-threaded programs that allowed multitasking on the Commodore Amiga in the 1980s, on single-core 68000-series processors with 2MB of RAM or less (a lot less on the original A1000/A500), and you're saying that programmers are now having to learn how to write software to multi-task on modern multi-core servers? LOL

E 2
Thumb Up


Yes indeed, I do agree! I guess my beard is not as venerable as yours must be though ;-)

The claim that programmers are only now learning multi-threaded programming (and that it is so terribly difficult) is laughable.

I wrote my first threaded program in 1998 using solely M$DN for reference, and produced a completely stable and problem-free Win32 service app that ran two work threads and a control thread on a dual-socket Pentium Pro box. It took me 1.5 working weeks to learn how to write a threaded app.

This stuff is not rocket science, it just requires attention to a few more details.
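For anyone curious, the layout described above (two work threads plus a control thread) can be sketched in a few lines of modern Python. This is just an illustration of the pattern, not the original Win32 service: the control thread hands out work, the workers drain the queue, and a lock guards the shared result list.

```python
import threading
import queue

tasks = queue.Queue()
results = []
results_lock = threading.Lock()
stop = threading.Event()

def worker():
    """Work thread: pull tasks until the control thread says stop."""
    while not stop.is_set():
        try:
            n = tasks.get(timeout=0.1)
        except queue.Empty:
            continue
        with results_lock:          # protect the shared result list
            results.append(n * n)
        tasks.task_done()

def control():
    """Control thread: hand out work, wait for it, shut workers down."""
    for n in range(10):
        tasks.put(n)
    tasks.join()                    # block until every task is done
    stop.set()                      # then tell the workers to exit

workers = [threading.Thread(target=worker) for _ in range(2)]
ctrl = threading.Thread(target=control)
for t in workers + [ctrl]:
    t.start()
for t in workers + [ctrl]:
    t.join()

print(sorted(results))  # the squares 0..81, in some completion order
```

The "few more details" really are just these: a safe hand-off mechanism (the queue), protection for shared state (the lock), and a clean shutdown signal (the event).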

E 2
Thumb Down


"Bulldozer has half as many floating point units as integer ALUs"

I call FUD! I accuse you of imprecision at best and shilling at worst.

The FPU in a Bulldozer can execute a single 256-bit op; or, simultaneously, a pair of 128-bit ops, four 64-bit ops, or IIRC eight 32-bit ops.

Which amounts to having an FPU for each integer unit, given that past FPUs did a 128-bit op, a pair of 64-bit ops, etc.

I do not think BD will be weak at floating point at all, unless the programmer chooses really stupid compiler options...

(Written by Reg staff)

Re: @CheesyTheClown

Stop accusing people of shilling (as per the comment guidelines). It is silly. Thanking you.

Paris Hilton

@ThomH...sorry dude...you're just way off track!

From AMD's website, their flagship single-chip graphics card, the 6970, has a max engine clock of up to 880MHz. Here's the link: http://www.amd.com/us/products/desktop/graphics/amd-radeon-hd-6000/hd-6970/Pages/amd-radeon-hd-6970-overview.aspx

From Nvidia's website, their flagship single-chip card, the GTX 580, does have a processor clock of 1544MHz, but is limited to a graphics clock of only 772MHz. The link: http://www.nvidia.com/object/product-geforce-gtx-580-us.html

When I mention cores, I'm not talking about CUDA cores, which are sort of the equivalent of adding more on-die memory cache to a CPU; I'm talking about complete multiple GPUs on one die. There are still no complete multi-core GPU chips sharing the same die to date. This is because current single-GPU dies are HUGE, expensive to make due to low yields from the HUGE die size, require HUGE amounts of electrical current, and produce HUGE amounts of heat. To get around this, both AMD and Nvidia resort to adding two separate GPU chips to a graphics card, resulting in double the power consumption and corresponding amounts of unwanted heat. If you really want to cause brownouts in your neighborhood, just add two of these dual-chip 'solutions' in SLI mode.

Both companies are indeed still flogging ancient GPU chip architectures by simply adding more 'cores' and faster memory. Had they gotten off their dead arses years ago, you would see technological advancements similar to those CPU chips have made, such as 2-4-6 cores running at 3-4GHz, sipping a fraction of the juice required to run a single flagship video card and running very cool as well. And soon we'll have 8-12-16 core CPUs. The fact that AMD hasn't improved on ATI's now-ancient architecture at all speaks volumes about AMD's piss-poor leadership and direction. Nvidia, unfortunately, has followed AMD's lead.

I hope that clears it up for you Sparky.


- Paris is crying because archaic GPU architecture is contributing to global warming.

Paris Hilton

To reiterate my previous points...

As I pointed out earlier, you have an erroneous concept of GPUs.

GPU frequencies - I addressed the issue in my previous reply. GPUs actually run faster than CPUs. In fact, the latest Sandy Bridge has a base clock of, wait for it...100MHz!

In the case of Nvidia, they have separate clocks for the core (the GPU as a whole) and the shaders, while on AMD both run at the same clock speed. In the example you gave, "Graphics Clock" refers to the core and "Processor Clock" refers to the shaders.

GPU "cores" - You haven't grasped what GPU cores are. There is no need to have multiple cores as you regard them. A GPU shader is functionally just a very simple processor, and because it is so simple, large numbers can be placed on a die. Having so many simple processors is the reason GPUs are so fast at FP calculations. Having "multiple cores" would not only make no sense, it would also make GPUs slower because of the unnecessary complication of the architecture. There are no longer fixed-function shaders, so they are all identical.

A similar argument might be that AMD and Intel aren't innovating because they are selling "archaic" quad-cores when they could stick "2-4-6" of these on a die and give us 8-24-48 core processors. That argument makes no more sense than yours, because there is nothing archaic about current Intel and AMD processors (other than the need to maintain x86, perhaps). Similarly, there is nothing archaic about today's GPUs with stream processors (shaders).

GPUs are hot and power hungry? Might that have something to do with the 3 BILLION transistors in the GTX580, the biggest current GPU? The latest Sandy Bridge quad-core CPU has 995 million. Add to which, the higher frequency they run at compared to CPUs means that they consume more power, which in turn generates more heat. So accusing a GPU of being "archaic" because it is hotter and more power hungry is silly. More transistors running faster = more energy consumption and heat.

GTX580 = ~320w. i7 2600 = ~95w. Look at that, three times the transistors and a bit over three times the power use!

New CPUs have extensive power gating to shut down unused portions of the processor. There's no use having power going to all cores if only one is in use. In the past, GPUs were either on or off, or rather had three modes: off, 2D or 3D. Recently both AMD and Nvidia have introduced methods to match power use to GPU load. AMD has done this in hardware, Nvidia in software.

As I said, you seem to have gotten the wrong end of the stick regarding GPUs.

Paris - Because she doesn't understand much either.

E 2

@All of you ppl

Just possibly GPUs have so many shaders (or cores or whatever you call them) because graphics workloads are embarrassingly parallel.

Throwing tonnes of little cores at a workload (graphics processing) that for the most part requires only MUL/ADD/SUB/DIV/MADD and LOAD/STORE - but not all the other crap found in an x86 - was the obvious way to improve performance.

No AMD/NV conspiracy here, no AMD/NV incompetence, nothing to see.
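A tiny sketch of what "embarrassingly parallel" means here: each output pixel depends only on its own input pixel, so the work can be farmed out to any number of lanes or threads with zero coordination. This is illustrative Python using a thread pool, standing in for a GPU's thousands of shader lanes:

```python
from multiprocessing.dummy import Pool  # thread pool standing in for GPU lanes

def brighten(pixel, gain=1.5):
    """Per-pixel op: scale and clamp to the 8-bit range.
    No pixel reads any other pixel, so every call is independent."""
    return min(255, int(pixel * gain))

pixels = [10, 100, 200, 255]
with Pool(4) as pool:               # each "lane" handles its own pixel
    out = pool.map(brighten, pixels)
print(out)  # [15, 150, 255, 255]
```

Because there are no dependencies between elements, the result is identical no matter how many workers run it, which is exactly why piles of simple shader cores pay off for graphics.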

Also, Sarah, I hear and will obey.


We don't need brute-force rendering from GPUs

This current trend of just adding more horsepower is crazy. All we get are hotter, noisier and bigger cards. That's not clever.

What we need is for AMD or Nvidia to take on PowerVR tech and use that system for rendering, as it's far more efficient.

If we had PowerVR-style rendering we could maybe have cards with the rendering power of the 6970/580 but with the power draw of a 5570.

It's all in the rendering.


Ah, PowerVR...

The graphical powerhouse behind such wildly successful devices as the iPhone and Intel's integrated graphics!

I'm certain that you are entirely correct, and that AMD and Nvidia don't do this in order to maintain their little duopoly over graphical rendering.

If only there was a company who had a huge amount of money and masses of engineering resources to devote to developing Tile-Based Deferred Rendering, then we'd have the most powerful graphics engine of all time. OF ALL TIME!!

Only... that is called Larrabee, and it's not going too well, is it? Not yet cancelled, but it has been quietly shelved while Intel hopes everyone will forget it ever existed. Like a drooling vegetable whose wheelchair has been turned to face the wall. No? Too cruel? You have to remember it is just a failed GPU and not a sentient being with feelings and dignity.

I'm thinking you are like The Clown above and have heard Intel or Imagination Tech (who own PowerVR, and a good chunk of which is owned by Intel) give a talk or interview about this great new technology that is going to revolutionise some aspect of your life. But you didn't get the whole story, I'm sure.

I'm no expert, but here's why computer graphics isn't going this way any time soon. TBDR is the answer to a question no-one has asked yet. The only reason you would do it is if the device was bandwidth limited, which is why it is used in mobile devices like the iPhone. It is the very pace of GPU development by AMD and Nvidia that is making it irrelevant for wider use. Every generation of GPU raises memory bandwidth higher and higher, making TBDR's only advantage useless. Larrabee must have been memory bandwidth constrained for Intel to be interested. If you want more information, you're going to have to research it for yourself because I don't know much more than this.
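The bandwidth-saving trick being described works roughly like this: fragments are binned by screen tile, depth is resolved per tile in (notionally on-chip) memory, and only the visible fragment is shaded and written out — so overdraw never costs external memory bandwidth. A minimal sketch with invented toy data (not any real API):

```python
# Minimal sketch of the tile-based deferred rendering (TBDR) idea:
# bin fragments by tile, resolve depth per tile, shade only winners.
from collections import defaultdict

TILE = 32  # 32x32-pixel tiles, typical of mobile TBDR hardware

# (x, y, depth, color) — two fragments overlap at pixel (5, 5)
fragments = [(5, 5, 0.9, "red"), (5, 5, 0.2, "blue"), (40, 5, 0.5, "green")]

# Pass 1: bin fragments into screen tiles
bins = defaultdict(list)
for x, y, z, c in fragments:
    bins[(x // TILE, y // TILE)].append((x, y, z, c))

# Pass 2: per tile, keep the nearest fragment per pixel, then shade it
framebuffer = {}
for tile, frags in bins.items():
    nearest = {}
    for x, y, z, c in frags:          # depth resolve in on-chip tile memory
        if (x, y) not in nearest or z < nearest[(x, y)][0]:
            nearest[(x, y)] = (z, c)
    for (x, y), (z, c) in nearest.items():
        framebuffer[(x, y)] = c       # one external write per visible pixel

print(framebuffer)
```

The occluded "red" fragment is discarded before shading, so it never touches external memory — which is precisely why the technique pays off when bandwidth, not compute, is the bottleneck.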

What TBDR doesn't do is reduce power consumption for graphics in and of itself. TBDR is used in devices that are small and weak and do not consume much power, so the fact that they use TBDR is not the reason they sip power. You see: correlation is not necessarily causation.

This comments thread has really turned me into a sarcastic bugger today, and I get worse with every comment I post. But there is an awful lot of FUD going around.


@AC & @Wallyb132 & @?

Thanks guys for bringing up some excellent technical points. I realize parallel processing requires a different approach and technique. All I'll say about that is speed is speed. If the end result is that multipliers accelerate the process from sub-1 GHz speeds to between 3 and 4 GHz, as opposed to 880 MHz max for AMD and 1534 MHz max for Nvidia's flagship, then so be it. It obviously is effective. My point is that GPU architecture is archaic. We're talking immense die sizes, even with die shrinks around or approaching the 32nm range, still requiring 300-400 watts peak power and disposal of the corresponding heat those watts generate. If it takes 3 billion transistors to provide acceptable high-end graphics performance from these friggin' beasts, then they've not been innovating like the CPU industry has, and a completely new approach is called for. When I look at even the latest flagship video cards, they're still enormous monsters. I was really hoping that AMD was going to redefine the architecture by blending CPU/GPU functionality with Fusion. I hope they do.

As far as AMD outclassing Nvidia during recent years, that wasn't much of an accomplishment. Jen-Hsun Huang seemed to have completely lost his mind for a time. I actually thought he was ready for the rubber room based on his incredibly nonsensical actions. After all, he was caught red-handed renaming 8xxx video cards to 9xxx with the same old chips. He apparently also couldn't make a successful DX10 card when M$ introduced it, and effectively sabotaged that API by crying to M$ to detune it. Then there was the 8xxx/9xxx series bump-material fiasco. But suddenly it appears he's changed Nvidia's focus to smart phones, tablets, etc. with Tegra. I still no longer trust the bugger, but he is showing some innovation and recognizes that a new market is exploding, unlike AMD under Dirk Meyer.

I stand by my previous statement that AMD has suffered immensely under poor management. They've been late to the game now in every endeavor for years. And under Dirk Meyer's blinders they missed out completely on the rise of netbook, smart phone, and tablet markets. No wonder the board sh*tcanned him. He may have been an excellent engineer, but he was obviously asleep at the wheel directing the company. If AMD has made some improvements to ATI's original GPU architecture that's great, but I've seen nothing earth shattering in GPU evolution in years. From the 1990's to around 2005 it seemed like there were new advancements galore. Since then what have we seen that is actually new? Oh yeah, more shaders and faster memory yielding incremental speed increases...yawn...yawn. And of course, that's at the expense of high power and high temperature penalties. At least you do get free space heater benefits for those folks in the cold regions.

As far as Intel goes, and just like you, AC, I've been impressed with their products and will continue to use them until AMD has a compelling alternative. That said, I will never forget the underhanded crap Intel did to AMD and I despise them for taking that low road. I used to be an AMD fanboi during the Athlon K7 heyday, and it has pained me to see them fall and stay so far behind Intel; maybe that's why I'm so hard on them. Like I said, we need AMD to keep Intel moving (can't use the word 'honest', now can I?). Had it not been for AMD we would probably still be chugging along on Intel 16 MHz 286/386 CPUs with separate math coprocessors. Yes, I would love to see AMD take the lead again.

There's another factor that we haven't touched on. The handwriting is already on the wall for these dinosaurs. The PC gaming industry gets closer to flatlining every day since the advent of gaming consoles, tablets, smart phones, and even netbooks to a tiny extent. Few people really need these behemoth graphics cards anymore except for hardcore gamers and professionals doing CAD, video production, or other graphics-intensive commercial work. Personally, I hate to see it, but it is what it is.

Whether you guys agree with me about technical points or not, I think you have to agree that, overall, there have been no SIGNIFICANT innovations in GPU development for years. I do feel that is going to change soon, but not for the PC. No, the smart phone, tablet, and other battery-operated multimedia devices will drive that innovation. It's already being done with Tegra, and hopefully a version of Fusion can play a part. If we're lucky, maybe some of that innovation will find its way back to the PC. One thing is certain: whoever manages to provide the best battery life while providing outstanding graphics without a heat penalty will reign supreme. And that's something sorely missing in the PC world!

You guys have a nice day!


Please, stop with ignorant PowerVR nonsense...

...for once. They tried the discrete desktop market and failed, so they retreated very early on to the mobile space, which is far less OPEX-intensive R&D-wise — in my book, a very smart move, and it's why they own the mobile performance crown now (and leave me alone with NV's power-hungry, big Tegras; they are pretty much a giant failure for Nvidia as far as the bottom line goes).

FYI, there was a good reason why their Kyro desktop cards died back then: they sucked in real-world tests, period. One can argue about the elegance of rendering and all sorts of mighty buzzwords, but the reality is that software dictates, period, and if the performance of all engines generally sucks on your architecture, then your architecture sucks (hw/driver etc.), it's that simple. In concrete terms: having TBR wasn't enough to make up for the lack of T&L, sorry.

This topic is closed for new posts.