AMD sued: Number of Bulldozer cores in its chips is a lie, allegedly

AMD lied about the true number of Bulldozer cores in some of its FX processors, it is claimed. Mini-chipzilla boasted that, depending on the model, the chips had either four, six, eight or 16 Bulldozer cores. A class-action lawsuit [PDF] alleges the real figures are half that. The troubled California giant is being sued in …

  1. NoneSuch Silver badge
    Facepalm

    Damn. Didn't see that coming and I've bought many AMD chips over the years.

    1. BillG
      Headmaster

So if the floating point units (FPUs) were removed from the chips, then AMD's advertising would be correct, correct?

      To me, the FPUs are a core enhancement. The Plaintiff has no argument. Case Dismissed.

      1. Stumpy

Indeed. In the days when I used to program down at the bare-metal level, we didn't even have on-board FPUs ... you had to buy a separate maths co-processor to get any sort of floating-point operations in hardware...

        1. Grease Monkey Silver badge

So, Stumpy, what you seem to be saying is that all processors should be judged by the standards of the 80286.

          1. Stoneshop
            FAIL

            80286

So, Stumpy, what you seem to be saying is that all processors should be judged by the standards of the 80286.

            If you're looking at x86 architecture, even the 486SX didn't have FP. But I've worked on systems that didn't just have an extra chip fitted, but had another half dozen boards added to the 25 that comprised the CPU, if you wanted floating point. For a price in the range of a family sedan, so you took a good look at your workload first to figure out if it was worth it.

            1. Anonymous Coward
              Anonymous Coward

              Re: 80286

The SX was the cheap version of the DX. Actually, the 487 was a full DX that disabled the SX. But today the FPU is used not only for the old 8087 instruction set, but by all the SSE and later instructions heavily used for graphics and signal processing. Good luck doing without... a lot of software assumes it's available.

              1. paulc

                Re: 80286

Back in them days, when testing the chips, if the co-processor was wonky it was disabled and the chip was sold as an SX chip; if the main processor was wonky, the main processor was disabled and the chip was sold as a co-processor. Only if both bits worked was the chip sold as a fully functioning unit, at a correspondingly higher price...

              2. Whistlerspa

                Re: 80286

                Re 487 did you mean 486? I don't remember a 487 chip.

                1. Steve Todd

                  Re: 80286

There was a 386 CPU and a 387 maths co-processor. The 486SX, like the 386, lacked an integrated FPU; the 486DX had the co-processor integrated on chip.

                  1. picturethis
                    Paris Hilton

                    Re: 80286

                    This is how I remember it as well.

In fact, I am looking at an 80387 math coprocessor sitting on top of my microwave oven. I keep it there as a reminder that I spent over $700 (USD) on this single chip, back in the day, just so that I could run AutoCAD (which required a math coprocessor) on my '386 box. It is the most expensive single IC I've ever purchased.

I keep it there as a constant reminder never to buy cutting-edge tech unless I'm prepared to regret it later....

                    (PS - I also had purchased a CDC Wren IV 700 MB (yes MB, not GB or TB) SCSI drive for $3000 USD, around that same time. I no longer have the drive though)

                    Tech: live without it, unless you really, really need it.

                2. Anonymous Coward
                  Anonymous Coward

                  Re: 80286

                  Here's an image https://en.wikipedia.org/wiki/Intel_80487SX

I don't really know how many people bought a 486SX instead of a DX and later bought an 80487SX to add the FPU, but it was made and available - including boards supporting it.

It was the last such "coprocessor" made. Since then, more and more software started to rely on an FPU being available, and switching to an emulation library would slow it down a lot.

            2. Deltics

              Re: 80286

Strictly speaking, the early 486SXs were simply 486DXs with the FPU disabled. To this you then added a 487SX, which was another full 486DX which, when installed, disabled the original CPU.

              Truly 1 for the price of 2.

          2. John Hughes

            80286?

            Or maybe a 1904?

            Kids of today, no historical perspective.

            1. Anonymous Coward
              Anonymous Coward

              Re: 80286?

              Kids of today, no historical perspective.

              Then they grow up and become politicians.

          3. ToddR

Actually, the 8086 had a sister chip, the 8087, which was the FP co-processor.

          4. EyePeaSea

Both the 386 and 486 came in variants that didn't have FP units. Or to be more pedantic, all 486 chips had FP units, but some were disabled. My understanding is that during the QA process, any full 486 chips with defects in just the FP part of the die had that part disabled, rather than the whole chip being discarded.

      2. td97402

        Reread the Article

        @BillG - Read the article. It is not just the FPU. Just a few paragraphs in you will find:

        "a single branch prediction engine, a single instruction fetch and decode stage, a single floating-point math unit, a single cache controller, a single 64K L1 instruction cache, a single microcode ROM, and a single 2MB L2 cache."

Seems that much, if not most, of a "module" is single-threaded. If you can only fetch one instruction at a time and you can only decode one instruction at a time, then a module is not really a two-core unit. It seems that only at the end of the instruction pipeline do you get a couple of integer execution units and a couple of load/store units. So, at best, I'd call it weird AMD hyper-threading.

        1. Paul Shirley

          Re: Reread the Article

If you want to go down that rabbit hole things will get very messy. AMD did mightily screw up its design, and modules typically perform like 1.5 real cores, but they do manage to issue multiple instructions per clock. The problem is more that those ops then get stalled waiting on the shared execution units.

If his argument is that it can't do 8 simultaneous ops, case over - no CPU guarantees to always achieve that anyway.

          1. John Hughes

            Re: Reread the Article

            So AMD "cores" are like Intel "threads".

            1. ben_myers

              Re: Reread the Article

              That's the way it reads to me. So the reputed 8-core Bulldozers would seem to be quad-core with Intel-style hyperthreading.

        2. P. Lee

          Re: Reread the Article

          Isn't the fetching part of a pipelining operation rather than a processing operation?

Also, going back to the must-have-an-FPU-to-be-a-core point, that implies that anything which doesn't have an FPU isn't a core... so the 486SX had no cores?

It may be a little tricky, but anyone who pays attention knows AMD chips don't match Intel on performance, and they don't cost anywhere near as much as Intel charges. Maybe that's because more features cost more. Surprise!

          1. Anonymous Coward
            Anonymous Coward

            more features cost more

But is it proportionate after everything, or are Intel users funding more marketing? And is that 'more' also proportionate to what you'd expect from a company that does more business?

        3. Sproggit

          Re: Reread the Article

          I'm not a chip architect, but doesn't the presence or absence of duplicate numbers of those components depend entirely upon things like chipset timing?

          Specifically, is it not possible to have a pre-fetch unit that is running at [in practical terms] double the clock speed of the cores? Or put another way, is it safe to assume that the throughput of the pre-fetch unit is tightly tied to that of the processors?

          Let's put this another way...

If you fire up any modern task manager on an Intel Core i7 powered machine, you will see that the "core count" is precisely double what Intel claim for the chip, thanks to Hyperthreading. But what most people apart from the nerdy aren't aware of is that Intel chips will typically "sleep" one or more of these "cores" in order to manage the temperature of the chip... So [being argumentative] we could argue that Intel can't claim the number of cores they do if the chip isn't designed to use them all simultaneously?

I'm not trying to pick an argument with you, I'm just trying to offer a view that says that modern chip design has become so hideously complex that this entire [and seemingly frivolous] case seems to be built entirely on semantics.

The irony here is that anyone truly concerned with this "nth level" of performance from their CPU is not actually going to count or measure things like this, but will instead review performance results from industry-accepted measurement and benchmarking tools like (I think) SiSoft SANDRA. [I might be a bit out of date with that example!] So for someone to come along at this point with an argument like this is not far short of spotting an ambiguity in the documentation for a 10-year-old car and thinking they can sue. Caveat emptor!

          1. bri

            Re: Reread the Article (@Sproggit)

You are a bit incorrect there - if the OS sees 8 runnable threads, an 8-core Intel CPU will execute them all in parallel, and all cores will run at the advertised clock rate. The trick of sleeping some cores in order to pull up the clock rate of others (up to the "Turbo" speed specific to the CPU and the number of running threads) applies only when there are fewer runnable threads than cores.

            This feature doesn't support your argument, nor does your mentioning of Hyperthreading. What were you aiming at?

            1. Sproggit

              Re: Reread the Article (@bri)

The original post from td97402 basically called out the fact that some of the AMD chip design involved sharing components between pairs of cores, seeming [to my mind] to imply that because the cores shared a branch prediction engine and both the fetch and decode stages, the chip didn't really have properly independent cores.

              I'll repeat for the record that I'm not a chip architect and that what follows may be factually incorrect...

              However, what I wanted to say was that it is entirely possible that the sharing of the fetch and decode units across multiple cores is entirely reasonable. For example [here comes the fiction] suppose that, on average, each instruction takes 4 clock ticks to execute. Suppose that the fetch and decode units can each retrieve and decode an instruction in one clock tick and then switch between different threads in a second clock tick.

              If this theoretical model were in any way reflective of the actual CPU, then AMD might have been able to determine that one fetch and one decode unit [with adequate state switching] would be sufficient to "service" two processor cores.

              Terrible analogy: I drive a car with a relatively simple 4-cylinder 2-litre engine. The car has one fuel pump. That pump is essential to the engine, since without it those 4 cylinders simply won't get the fuel/air mixture needed for combustion. But the engine only needs one pump, since that pump is plenty capable of supporting all 4 cylinders. In a similar way [again, I have no way of knowing if this is true] the "effort ratio" between what the fetch/decode units do and what the processor does could *easily* be such that these two components can be very effectively shared.

              I really didn't want to pick an argument with the original post, just to point out that there could be all sorts of design reasons [and, in modern CPUs, there are all sorts of examples] of sharing or time-slicing components across the broader system design.

To my way of thinking, in order to show that sharing single fetch and decode units between 2 CPU cores is deliberately misleading, someone would first have to show that having one fetch and one decode unit per core could actually produce more throughput. Without this, the plaintiff's case is based on conjecture and lacks a basis in fact. That presents a massive problem for the plaintiff, since the only way they could demonstrate it would be to have AMD build such a chip. Which I can't see AMD being inclined to do...
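That theoretical model is easy to put into a toy simulation - all the timings here (one-cycle fetch/decode, four-cycle execution) are the made-up figures from the post, not real Bulldozer behaviour:

```python
def simulate(cycles, n_cores=2, exec_latency=4):
    """Toy model of one shared fetch/decode unit feeding n_cores.

    Each cycle the fetch unit hands at most one decoded instruction
    to a core with an empty slot (round-robin); each instruction then
    occupies its core for exec_latency cycles. Purely illustrative -
    not real Bulldozer timing.
    """
    busy_until = [0] * n_cores   # cycle at which each core goes idle
    queued = [0] * n_cores       # decoded instructions waiting per core
    done = [0] * n_cores         # completed instructions per core
    turn = 0
    for t in range(cycles):
        # fetch/decode: one instruction per cycle, round-robin
        for i in range(n_cores):
            c = (turn + i) % n_cores
            if queued[c] == 0:
                queued[c] = 1
                turn = c + 1
                break
        # execute: an idle core starts its queued instruction
        for c in range(n_cores):
            if queued[c] and busy_until[c] <= t:
                queued[c] = 0
                busy_until[c] = t + exec_latency
                done[c] += 1
    return done

# With 4-cycle instructions, two cores only need an instruction every
# other cycle between them, so a single 1-per-cycle fetch unit never
# starves either core:
print(simulate(1000))  # [250, 250] - same per-core rate as one core alone
```

Under these made-up numbers a single front end services two cores with no loss at all; the shared unit only becomes the bottleneck if execution is faster than the post's hypothetical four cycles.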

          2. Anonymous Coward
            Anonymous Coward

            Re: Reread the Article

The difference is that Intel don't claim that a quad-core i7 chip (with hyperthreading) is actually 8 cores.

            1. Steve Todd

              Re: Reread the Article

Hyperthreading isn't based on having two cores; it's one core with an alternate register set that gets switched in when the active set stalls for some reason (like a cache miss). AMD are providing two complete integer cores that can execute simultaneously, but which share some of the logic that feeds them, plus an FPU. If memory serves, both integer cores are able to access the FPU at once, providing they limit themselves to 128 bits of math.
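For illustration only, the switch-on-stall behaviour described above can be caricatured in a few lines of Python; the stall probability and stall length are invented numbers, not measurements:

```python
import random

def smt_retired(cycles, stall_prob=0.2, threads=2, stall_len=3, seed=1):
    """Caricature of switch-on-stall SMT: one execution pipeline,
    `threads` register sets. Each cycle the running thread either
    retires an instruction or stalls (think cache miss) for
    stall_len cycles; on a stall the pipeline switches to another
    ready thread instead of sitting idle. Made-up numbers throughout.
    """
    rng = random.Random(seed)
    ready_at = [0] * threads   # cycle at which each thread's stall clears
    retired = 0
    t = 0
    while t < cycles:
        runnable = [i for i in range(threads) if ready_at[i] <= t]
        if not runnable:
            t = min(ready_at)  # every register set stalled: pipeline idles
            continue
        if rng.random() < stall_prob:
            ready_at[runnable[0]] = t + stall_len  # stall this thread
        else:
            retired += 1
        t += 1
    return retired

# A second register set soaks up cycles a lone thread would waste stalled:
one = smt_retired(100_000, threads=1)
two = smt_retired(100_000, threads=2)
print(one, two)  # the two-thread count is noticeably higher
```

The contrast with the AMD approach is that here the two "threads" share one pipeline and never execute in the same cycle, whereas a Bulldozer module's two integer cores can.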

          3. The First Dave

            Re: Reread the Article

            @Sproggit

            I'm not a chip architect either, but I do know that there is not a one-to-one relationship between clock cycles and complete instructions, so not every component needs to be one-to-one either.

        4. Marcelo Rodrigues

          Re: Reread the Article

          "Seems that much, if not most, of a "module" is single threaded. If you can only fetch one instruction at a time and you can only decode one instruction at a time then a module is not really a two-core unit."

Not quite. You see, the time used for fetching is far less than the time used for computing. I am not saying that AMD chips are the best thing on earth (they aren't), but:

          1) Usually a single fetch unit can keep busy two integer pipelines

2) We can argue that an 8-module (to use AMD's terminology) CPU would be better than a 4-module CPU. No doubt about it. But that's not the question. The question is whether AMD is right in calling the 4-module CPU an 8-core processor. I believe so, but that's just my opinion.

3) It is true that it shares a single FPU between two integer pipelines. But it shares a more powerful FPU. Check it: the Athlon FX FPU is more powerful than the Athlon FPU. If memory serves me right, it is 256 bits versus 192 bits.

Now, some real-world information: I ran the standard POV-Ray benchmark on an FX-6300 against an i5 of equivalent generation and clock. The total computing time was about 30% worse for the FX - it used that much more CPU time. But that computing time was spread across its six cores, so it completed the benchmark about 25% faster than the i5.

And I ask: would it be justifiable to call this 3-module chip a six-core?

          This looks to me like someone is just trying to get some easy money.
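Assuming (unrealistically) perfect scaling across all cores, the POV-Ray comparison above can be sanity-checked with back-of-envelope arithmetic; the inputs are just the figures quoted in the post, and the idealised model lands nearer 13% than 25%, so the real runs presumably differed in clocks or scaling:

```python
# Back-of-envelope check of the POV-Ray comparison above.
# Assumption: the render scales perfectly across all cores, which
# real workloads never quite manage.
i5_cpu_time = 1.0          # normalise the i5's total CPU time to 1
fx_cpu_time = 1.3          # "about 30% worse" for the FX-6300
i5_cores = 4               # typical desktop i5 of that era (no HT)
fx_cores = 6               # FX-6300: three modules, six "cores"

i5_wall = i5_cpu_time / i5_cores          # 0.25
fx_wall = fx_cpu_time / fx_cores          # ~0.217
speedup = 1 - fx_wall / i5_wall
print(f"FX finishes {speedup:.0%} sooner")  # ~13% under these assumptions
```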

      3. Mage Silver badge
        Boffin

        FPU

There are other important shared bits, not just the FPU. These modules are single CPUs with some internal parallel parts to speed up processing. They certainly can't work as two CPUs.

      4. Anonymous Coward
        Anonymous Coward

Today a lot of software relies heavily on the floating-point unit and the FPU registers, including the SSE (and successor) instructions. The FPU is no longer an optional component of a modern CPU.

        1. Anonymous Coward
          Anonymous Coward

          FP

          "FPU is no longer an optional component of a modern CPU"

Why? Does Windows itself have any FP code?

    2. PleebSmash

Didn't see that coming? AMD's Bulldozer "modules" have long been known to be ineffective - similar to, or worse than, half the number of Intel cores. They are abandoning the CMT/module concept for Zen, and claim to be able to improve IPC by 40%.

  2. Disko
    Trollface

    A bit of a Dickey move

to sue over what amounts to marketing-speak or, at best, terminology. Not too long ago FPUs were an optional co-processor - an expensive add-on that only the most demanding users needed. So what amounts to a processor core? My i5 registers as quad-core even though it is apparently physically a dual-core unit that can run two threads per core. I think people who really need the oomph generally know how to figure out which processor they need, regardless of branding, labels or hype.

    It doesn't look like AMD lied about the actual architecture of the chip, more like they marketed it as "teh shiny". A suspicious person might suspect Big Chipzilla might have had a hand in this one.

    1. Jim Mitchell

      Re: A bit of a Dickey move

      As far as I know, Intel does not sell hyperthreading as more cores.

      1. bazza Silver badge

        Re: A bit of a Dickey move

        As far as I know, Intel does not sell hyperthreading as more cores.

        But they do drop very heavy hints that it makes things faster, which is not necessarily so.

        It'd be crazy if this case is allowed to get to court, never mind win. If a sharing of resources means one cannot count all the cores sharing them, where does that end? Cache? Intel have shared L3 cache. Power? Memory? AMD actually do quite well there. PCIe bus? Not likely to be more than one of those. It's plain nuts.

        SMP as we know it could be outlawed by a court case, which would be the most ridiculous thing ever. So it's probably guaranteed then.

        1. Mage Silver badge
          Facepalm

          Re: SMP as we know it could be outlawed

          Nonsense. It's not about forbidding anything other than misleading marketing.

The cores share parts (excluding cache and FPU) normally exclusive to a CPU.

          It's certainly nothing to do with SMP either

        2. Anonymous Coward
          Anonymous Coward

          Re: A bit of a Dickey move

          "It'd be crazy if this case is allowed to get to court, never mind win."

          Since this is in the US, there is really no question of this case being allowed to go to court.

          As to winning: I wouldn't expect there to be any correlation between the merits of the case and the outcome.

      2. joed

        Re: A bit of a Dickey move

AMD has never made a secret of this "news". Everyone that cared knew this from the beginning, everyone understood the potential shortcoming, and everyone else purchased the more expensive Intel chip. The idea was that in a mixed load (real life or not) 8 threads could be executed simultaneously. I can't recall whether this best-case scenario was more likely to happen than on a hyperthreaded Intel, or what the estimated performance impact of having the extra FPUs present would have been (disabling HT was easy to do and test).

        As usual, no love to lawyers and please, do not represent me in your class action or I'll sue you for damages to my preferred chip supplier.

        1. bazza Silver badge

          Re: A bit of a Dickey move

          I remember that the early Sparc T CPUs from Oracle / Sun had eight cores that shared an FPU. Never heard of anyone suing over that.

The first Nehalem architecture Xeons were "deceptive" over how many SSE registers they had. In 64-bit mode all the registers were there as billed, but the number mysteriously dropped by half in 32-bit mode. It made 64-bit code look terrific compared to 32-bit, but only because of this artificial limitation. Of course this was all written down in the data sheets...

          1. Steve Davies 3 Silver badge

            Re: A bit of a Dickey move

Larry E has a bigger set of lawyers to fight it than AMD does. Less chance of success, so any good general (lawyer) would go for the enemy's weak points.

            This case should get tossed though.

            1. toughluck

              Re: A bit of a Dickey move

Larry may well have them, but Jonathan Schwartz might or might not have. SPARC T chips date back to 2005, which was well before Oracle took over.

    2. td97402

      Re: A bit of a Dickey move

I've always considered AMD to be selling a weird, kind of weak variation of hyper-threading as two physical cores. I always divide by two when looking at AMD desktop processors. I think they've been misleading in selling those modules as two cores. Whether it's worthy of a lawsuit, well, that is another matter.

      1. h4rm0ny

        Re: A bit of a Dickey move

>> "I always divide by two when looking at AMD desktop processors."

Then you are estimating processor performance badly. FPU operations are a minority part of most use cases. The reasons why Bulldozer is slower than, say, Haswell are complex - scheduling problems, weaker branch prediction and other things... Not, in most cases, the shared FPUs. Even with FP operations, the FPUs are 256 bits wide and can actually work on two operations simultaneously (one for each core), provided they are 128-bit operations. Which is common enough.
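A throughput caricature of that shared-FPU point (widths as given in the post; this is only an illustration, not a model of the actual hardware):

```python
FPU_WIDTH = 256  # bits, as described in the post

def fp_ops_per_cycle(op_width, cores_issuing):
    """Toy throughput model: the shared FPU splits into independent
    lanes of op_width bits, and each core can use at most one lane
    per cycle. Purely illustrative."""
    lanes = FPU_WIDTH // op_width
    return min(cores_issuing, lanes)

print(fp_ops_per_cycle(128, 2))  # 2 - both cores issue 128-bit ops at once
print(fp_ops_per_cycle(256, 2))  # 1 - a 256-bit op hogs the whole FPU
```

In other words, on this toy model the sharing only halves per-core FP throughput when both cores are issuing full-width 256-bit operations.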

Basically, your logic is flawed. Sun make a 16-core chip that has one FPU shared between all of them. Does that mean each of those cores is really only 1/16th of a core? Perhaps they should be sued.

        This is a stupid lawsuit, by people suing over their own ignorance.

        1. Ken Hagan Gold badge

          Re: A bit of a Dickey move

          We live in a world where ISPs frequently sell "unlimited" connections without getting sued. It would be utterly perverse if AMD lost this case, given the wide range of published benchmarks that a buyer might use to "estimate" the performance of an architecture that has been openly published in detail.

          From the lack of market research on display here, it is clear that the buyer didn't give a stuff about performance until the weasels started whispering "no win, no fee" in his ear.

      2. Marcelo Rodrigues

        Re: A bit of a Dickey move

        "I've always considered AMD to be selling a weird, kind of weak variation of hyper-threading as two physical cores."

        Weird, but wrong. The AMD cores are far worse than the Intel ones - but the "dual thread" technology AMD uses is far better than Intel's HyperThreading.

Let me explain: HyperThreading can gain you a 30% boost - at best. In some weird cases it is better to have it disabled (Intel says so, in the case of virtual machines). In a real-world scenario it will gain you something like 15% speed.

AMD's dual-thread technology does not have this problem: it is recommended to keep it always on, and there is no situation where it would harm the speed.

The difference in speed is not because of the HyperThreading/dual-thread technology: it is because the AMD CPU architecture (today) is inferior.

    3. Eddy Ito

      Re: A bit of a Dickey move

So the boy didn't read the cut sheet from AMD when he bought it. The question of cores will also have to address what kind of cores, now that heterogeneous chips are increasingly becoming the norm. What does it mean to say something has X cores when the chip can combine CPUs (x86 and/or ARM), FPUs, GPUs, DSPs or even the occasional FPGA? Do FPUs matter if software is optimized for GPUs?

"Of course, such a ruling will be avoided if AMD proves its eight-core Bulldozer processors do not drop to four-core performance in multithreaded FPU benchmark tests."

That's another problem entirely, as it may well depend on the bit width employed by the benchmark. What happens if it doubles the performance of a four-core at 128 bits but not at 256 bits?

Regardless, this kind of "I didn't understand what I was buying" crap needs to be bounced out. Would he sue because he bought a Tegra 4 and the five cores didn't work like he thought?

    4. a_yank_lurker

      Re: A bit of a Dickey move

      @Disko,

Having built computers, I have learned not to be dazzled by the marketing fluff but to try to find out which chip has the best balance of features and price for my needs. I have found through the years that sometimes the best is from Big Chipzilla and at other times Little Chipzilla is best. The specs and architecture are rough guides to actual performance (more real cores, bigger cache, cache type, etc.) when the box is fired up.

Also, most chips are probably overkill for most users, including office drones, particularly when you get beyond 2 cores. And the only reason for 2 cores is they allow more RAM to be installed.

      1. Anonymous Coward
        Anonymous Coward

        Re: 2 cores .. allow more RAM to be installed. (a_yank_lurker)

        "the only reason for 2 cores is they allow more RAM to be installed."

        With insight like that, you'd be better sticking to lurking.

      2. Naselus

        Re: A bit of a Dickey move

        "And the only reason for 2 cores is they allow more RAM to be installed."

        Yes, and the only reason to include a motherboard is that it provides something for the CPU fan to attach to.

  3. LosD

    That lawsuit is as American as it gets.

    1. Destroy All Monsters Silver badge

      Post eagle images!

  4. asdf

    AMD gets it both ways

Remember when AMD tried to accuse Intel of not having true quad-core CPUs, and then looked like asshats when their "true" quad cores came out and were significantly slower?

  5. Anonymous Coward
    Anonymous Coward

It's not the number of cores that's the problem for AMD, it's the performance per core that's crap. They are so far behind Intel I don't see them ever catching up, and if Zen delivers I'll eat my shorts.

    1. asdf

Neither company will matter in a decade unless they learn how to get by on high-volume, low-margin manufacturing, which is where the market is headed (hence the layoffs both are doing). ARM, for example, is now fast enough for most things, including games (check mobile game sales these days).

      1. Anonymous Coward
        Anonymous Coward

        @ asdf

        Yes, would very much like to get a Tegra X1 board and play some Linux on it.

    2. Zola

      Indeed, AMD do need to significantly improve their IPC. This is what Zen promises, so let's hope they deliver (and you eat your shorts) as a completely dominant Intel in the x86 space doesn't bear thinking about.

      1. asdf

Odds are decent that in a decade there is hardly an x86 space at all. Good riddance, as that instruction set and its bastard offspring should never have made it into the new millennium.

        1. Mark Quesnell

I am not too sure about a statement like that, about an architecture that powers what, 90+% of the current computer landscape?

          1. asdf

            >90+% of the current computer landscape?

Including all those (now billions of) handsets, huh? Not to mention that probably 90% of the code running in the wild actually runs on microcontrollers. "A typical home in a developed country is likely to have only four general-purpose microprocessors but around three dozen microcontrollers. A typical mid-range automobile has as many as 30 or more microcontrollers." The future is the general-purpose CPU market looking more like the microcontroller market of today. The stupid IoT money grab is actually already blurring the two.

            1. asdf

              sorry

              Basically general purpose CPU = commodity already for most people.

            2. td97402

              You Know What He Meant

              The OP was referring to desktop computers and perhaps laptops. I don't think most people conflate smart phones and desktops into a single "computers" category yet. It is also true that desktops/laptops still run big-boy games and applications that would not be possible on a phone, at least not for a while yet.

            3. Suricou Raven

              My house: 13 general purpose, including phones and tablets. Micros... radio, other radio, three TVs, the STB, DVD player, microwave, cooker, washing machine, dishwasher, heating timer, 3d printer, scanner, printer, alarm clock, another alarm clock, bathroom fan...

              Not the fridge though. That's good old-fashioned electromechanical logic.

          2. tom dial Silver badge

            90+% of the current computer landscape?

            Depends a lot on how you define "computer landscape." Every automobile has several computers and has had for some years. Every smart phone contains a computer, as does every tablet. For some years, nearly every disk drive (either rotating or SSD) has had a computer, not to mention every router and most ethernet switches. Until quite recently, essentially none of those has been x86 architecture, and probably a relatively tiny fraction are even now. The "internet of things" is also unlikely to be built on x86 architecture. And there are quite a few Raspberry Pis free in the world.

          3. jonathanb Silver badge

There are way more ARM chips out there than Intel chips. When Intel shipped their billionth chip a few years back, ARM licensees were shipping 1 billion chips every year.

          4. Anonymous Coward
            Boffin

There are almost certainly more ARM cores in your PC than Intel ones.

        2. Ken Hagan Gold badge

Instruction set architecture hasn't mattered for about 20 years. Software compatibility, on the other hand, will continue to matter as long as closed source is commercially significant. (In this context, I note that x86 emulation has been tried several times and has yet to catch on. I see no fundamental reason why it has failed, but merely note the experimental fact that it has, to date, done so.)

          Promises of the imminent demise of x86 (and x64) sound about as convincing as promises of commercial fusion power. Both will almost certainly happen eventually, but it is anyone's guess when (and, indeed, which will happen first).

          1. Anonymous Coward
            Anonymous Coward

            re: the demise of x86

            "Promises of the imminent demise of x86 (and x64) sound about as convincing as promises of commercial fusion power"

Upvoted for the rest, but with this bit I disagree: the demise of x86 is on the way for sure - the only questions are when it happens and how quickly - whereas only a fool would guarantee the arrival of commercial fusion power.

Contrary to the earlier comment from td97402, who said most people don't conflate smart phones and desktops into a single "computers" category yet, most of today's and tomorrow's end users don't even want a "computer"; they just want a device that does their emailing, web browsing etc. They don't care whether it's Intel Inside or some other chip. Outside the IT departments people don't even care about software compatibility (e.g. whether it runs Windows or not) these days, as the collapsing sales of desktop PCs (and laptops) illustrate only too clearly. And where device manufacturers can choose something other than Windows, usually they choose something other than x86 too.

            x86 can't compete on its own two feet where software compatibility (mostly meaning Windows) is unnecessary. There are basically no volume embedded x86 systems, no x86 volume smartphones, no volume x86 tablets, no volume x86-based consumer or professional electronics (TVs, routers, test equipment, whatever)... you get the gist.

            So, the volume market for x86 client computers is already a dying market and non-x86 computers massively overtook x86 long ago. So what? x86 development will carry on, right?

            Don't be so sure about that. The volume x86 revenue stream that has made ongoing x86 development possible and affordable isn't going to be there that much longer.

            Elsewhere in the market, in the high performance numerical computing sector there are little things like high end GPUs being used in certain markets instead of high end x86.

            So where's the money going to come from to pay for top end x86 development (the nicely profitable stuff, once development costs have been recovered)?

            With a much smaller revenue stream from the volume market, and having lost some of the high performance revenue stream, etc, Xeons for the IT department are the only marketable option. But Xeons are going to have to be even more expensive to pay for the development costs. Some readers may recognise this as the Alpha challenge - chip development costs are the same whether you sell chips in the hundreds of thousands or hundreds of millions.

            I wouldn't want to bet on x86 being significant (in the way it is today) in seven (maybe even five) years time.

            And that's not good for Intel, as they don't seem to be able to get anything right in the last decade or two, except x86. As for the implications for Microsoft - you can work that out for yourself, I'm sure. I suspect they maybe already did.

          2. Alan Brown Silver badge

            "I note that x86 emulation has been tried several times and has yet to catch on"

            x86 emulation is _exactly_ what both Intel and AMD do.

            The cores haven't been native x86 for a _very_ long time (486 days or earlier)

            1. roytrubshaw
              Pint

              "The cores haven't been native x86 for a _very_ long time (486 days or earlier)"

              Have an upvote as it saved me from having to say exactly the same thing!

              However: <pedant>I think the "big break" occurred between the Pentium and the Pentium Pro.</pedant>

              It used to make me smile that the Pentium was marketed as this huge change, when it was basically just two 486DXs on the same die, but the Pentium Pro which was a HUGE change in basic architecture (micro instructions, out of order execution, pipelining, branch prediction etc etc.) was marketed as just a "better" Pentium.

      2. Alan Brown Silver badge

        "AMD do need to significantly improve their IPC"

        And their thermals.

  6. Zola
    FAIL

    Frivolous legal case, should be tossed out

    Can't see this case succeeding, nor should it. There is no doubt that Bulldozer has the AMD-stated number of cores, and the fact that some aspects of the design are shared between paired cores is well known. On top of that, if your workload is heavily FPU-based you'd have to be an idiot (or a cheapskate) to choose AMD. I selected an 8-core/4-module FX-8350 specifically for kernel and OS builds, mainly because there is so little FPU action (and there is no doubt it has 8 cores).

    Unfortunately the guy bringing this case failed to do his homework and is now bringing a frivolous legal action - I hope he loses, and I'd like to think it will cost him a fortune (but it probably won't, which might be part of the problem).

    1. asdf

      Re: Frivolous legal case, should be tossed out

      Can't see this case succeeding before AMD goes tits up regardless.

      FIFY.

      1. Destroy All Monsters Silver badge
        Paris Hilton

        Re: Frivolous legal case, should be tossed out

        > I selected an 8-core/4-module FX-8350 specifically for kernel and OS builds

        But what about memory bandwidth?

        1. Zola

          Re: Frivolous legal case, should be tossed out

          > But what about memory bandwidth?

          At the time of purchase (about 3 years ago) I considered over 14GB/s of DRAM bandwidth to be perfectly adequate, and considering it consistently outperforms Intel i7 quad-core systems of a similar vintage the AMD memory bandwidth (or shared FPU) hasn't proved to be a handicap.

    2. Martin an gof Silver badge

      Re: Frivolous legal case, should be tossed out

      There is no doubt that Bulldozer has the AMD-stated number of cores, and the fact that some aspects of the design are shared between paired cores is well known. On top of that, if your workload is heavily FPU-based you'd have to be an idiot (or a cheapskate) to choose AMD.

      It's a difficult case. To "Joe Bloggs", more is almost always better. The case probably rests on whether a typical non-technical punter would have bought an AMD chip rather than an Intel chip purely because it claimed to have "more".

      It's a stupid thing to claim. Your average punter shops first on price (unless they have a fetish for some particular fashion icon) and at the lower end of the price range for desktops, laptops and indeed bare processors(*), AMD has been cheaper than Intel for some time. And how many Joe Bloggses actually have workloads that require loads of independent, floating point-capable cores working simultaneously?

      I know the article doesn't actually mention the A-series processors, but the other thing at this end of the price scale of course is graphics and it's fairly well accepted that Intel is still playing catch-up with AMD's integrated graphics. After a bit of web browsing, watching some movies and maybe writing an email, some light gaming is often on the cards for Joe Bloggs.

      Oh, and doesn't the current version of Bulldozer double up on additional stuff (instruction decode? something to do with the FP unit being in two halves for some kinds of calculations?), so less stuff is shared than is shown in the diagrams in the article?

      M.

      (*)Just out of interest, Dabs currently lists 16 processors under £50. Only three of them are Intel. In the £50 - £100 bracket, 13 of 24 are Intel; the best Intel offerings being dual-core (four thread) i3 chips while AMD has the two module (four "core") A10 and three module (six "core") FX-something. I'm no expert but I would still choose an A10 over an i3 at the same price point, and if I'm building to a really tight budget, AMD is pretty much the only choice.

      1. h4rm0ny

        Re: Frivolous legal case, should be tossed out

        >>It's a difficult case. To "Joe Bloggs", more is almost always better. The case probably rests on whether a typical non-technical punter would have bought an AMD chip rather than an Intel chip purely because it claimed to have "more"

        Great, so I can sue Intel because I bought an i3 that runs at 4.2GHz and it's not more powerful than the i7-5930 that runs at 3.8GHz. I mean it should, right? Because we thought that this higher number means it's more powerful so Intel owe me money for deceptive advertising. I mean what they advertise is true, but they didn't protect me from my ignorance about CPUs so that means they're guilty in my book!

        1. Martin an gof Silver badge

          Re: Frivolous legal case, should be tossed out

          so I can sue Intel because I bought an i3 that runs at 4.2GHz and it's not more powerful than the i7-5930 that runs at 3.8GHz. I mean it should, right? Because we thought that this higher number means it's more powerful

          That is pretty much the point I'm making. In "the real world" most people, at first glance, probably would think that. Or indeed they might see that "i3" < "i7" (the number is bigger so it must be better). However, what they will also see is that the cost of the laptop containing the i7 is about twice that of the laptop containing the i3 (I'm guessing here) and since my theory is that the first thing people consider is usually price, that cost will probably trump the difference in numbers.

          The problem isn't that people don't understand the differences, the problem is that people don't want to understand the differences and therefore they have to rely on marketing.

          Bother. That almost makes it sound as if I'm agreeing with this idiot who's trying to sue AMD. I'm not. What I'm trying to say is twofold. Firstly, it's a non-case. The information was all there. The implication is that the bloke bought on price alone and so technical specifics weren't realistically part of his buying decision. Secondly, in bringing the action he implies that he "knows a bit" about these things, but the very first thing you learn when you start to look at the differences between the current crop of Intel and AMD chips is that the system AMD uses can be thought of in some ways as "hardware assisted Hyperthreading" so the very fact that he's brought the case proves that he does not know anything!

          Personally, as I said, I quite like the AMD chips. I don't like hyperthreading as a concept and in my everyday practice I "feel" (entirely subjectively) that a 2-module, 4-"core" AMD system is just fractionally snappier than an otherwise equivalent 2-core 4-thread Intel offering, and it's usually slightly cheaper too.

          Rambling. Sorry.

          M.

      2. Anonymous Coward
        Facepalm

        Re: Frivolous legal case, should be tossed out

        I sell a car with a 16-cylinder engine, but it's only 1 litre and only does 0-30 in half an hour.

        I publish all these stats.

        Customer buys 100 thinking it must be the fastest car in the world because "it's common knowledge more cylinders means a faster car"! Is it my fault or theirs?

  7. Crazy Operations Guy

    So like Intel's Hyper-Threading bullshit

    Seeing this architecture, I am having a difficult time differentiating between the Bulldozer chips and Intel's HT-enabled chips.

    So if AMD wins this one, could Intel start labeling their 12-core Xeons as having 24 cores?

    1. Zola

      Re: So like Intel's Hyper-Threading bullshit

      Unlikely because Bulldozer does actually have the physical cores (although as many as half of them may not always be fed with data/instructions, depending on the workload, and depending on who you believe, plaintiff or AMD) whereas hyperthreaded cores are entirely virtual, all of the time.

    2. oldtaku Silver badge

      Re: So like Intel's Hyper-Threading bullshit

      As you sort of pointed out here, a big difference is that Intel does NOT market their 4 core HT chips as 8 cores, even though Windows shows it as 8.

      1. Steve Evans

        Re: So like Intel's Hyper-Threading bullshit

        But it does have 8 cores in the 8 core model...

        Each core is capable of symmetric integer mathematics.

        Sure, there are some shared bits like cache and FPU, but only a few years back most of those weren't even internal to the processor, if fitted at all.

        Yes, the shared bits could cause bottlenecks, but now you're arguing about the performance of the chip, not the definition of what is inside (the claim).

        Just because "8 core" was read as meaning "will be 200% the speed of this 4 core intel chip we have" does not make AMD wrong... It makes the plaintiff naive.

        For an encore they could try going after GM, and claiming the V8 model isn't a V8 because it doesn't reach twice the speed of the 4 pot.

    3. Sorry that handle is already taken. Silver badge

      Re: So like Intel's Hyper-Threading bullshit

      Why do you think hyper-threading is bullshit?

      I get 100% speedups in some tasks.

      1. Crazy Operations Guy

        Re: "Why do you think hyper-threading is bullshit?"

        Virtual Machines.

        On the hosts for my company's VDI infrastructure, the VMs are supposed to get a single full core and its matching HT core, which works great; but many times it ends up with another full core's HT core which doesn't do so well, or ends up on two HT cores and becomes almost completely unusable.

        The problem we have is that while we can get the configuration correct on the VMs when they rest on a single host, the processor affinity goes right out the window when the machines get migrated to another host. We can prevent the mess by disabling Hyper-threading, but only get half as many VMs per-host, or just accept that users will be angry at the random slow downs when their VM gets mis-configured.

        1. collinsl Bronze badge

          Re: "Why do you think hyper-threading is bullshit?"

          Most VM activity (at least in our environment) is not CPU-heavy, so we can get away with having HT turned off.

        2. Sorry that handle is already taken. Silver badge

          Re: "Why do you think hyper-threading is bullshit?"

          Bugger!

  8. Frumious Bandersnatch

    It's hard to see how this can succeed

    1. I'm sure that technical documentation was available, both from AMD and review sites

    2. Marketing bumf (not direct from Intel) often counts hyperthreaded "cores" as full cores

    3. I'm sure that he had the opportunity to return the part as not matching what was offered, but declined to do so

    4. As the article points out, what constitutes a core isn't well defined (and perhaps main ALU and supporting stuff does count)

    5. Probably hard to prove AMD intended to mislead (though perhaps plaintiff doesn't have to go that far)

    6. Balancing actual loss versus what he's claiming, did the loss of half an FPU core per "real" core really affect him that much (who saturates their FPU units in desktop/laptop PCs anyway?)

    It's not nice when you buy something that doesn't live up to expectations, but seriously, I think this guy doth protest too much.

    1. Jim Mitchell

      Re: It's hard to see how this can succeed

      Regarding point 6, these AMD "cores" share a lot more than just the FP unit.

      1. Dr. Mouse

        Re: It's hard to see how this can succeed

        There is no need, whatsoever, to put the word cores in quotes.

        Bulldozer has 2 complete integer cores. If you are doing integer work, they work as advertised (albeit slower than Intel cores due to the lower IPC, not because of the shared bits). As the vast majority of workloads are integer, the shared bits make little difference in the real world. Many others have already stated this, and benchmarks have shown that they operate as complete cores on such workloads.

        This contrasts with Hyperthreading, which does not have any extra cores, just an alternative set of registers which the core can flip to if one workload stalls (waiting for data etc.)
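
        To make the distinction concrete, here's a toy cycle-count model (purely illustrative, nothing like real silicon, and it deliberately ignores the memory stalls that make SMT worthwhile): two integer-only threads on one shared pipe serialise, while two real integer cores retire in parallel.

```python
def run_shared_pipe(threads):
    """One execution pipe with two register contexts (SMT-style):
    each cycle retires one instruction from the first thread with
    work left; the pipe never does two threads' work in one cycle."""
    remaining = list(threads)
    cycles = 0
    while any(r > 0 for r in remaining):
        for i, r in enumerate(remaining):
            if r > 0:
                remaining[i] -= 1
                break  # only one instruction per cycle through the pipe
        cycles += 1
    return cycles

def run_two_cores(threads):
    """Two independent integer pipes, one instruction per cycle each."""
    return max(threads)

# Two integer-only threads of 1000 instructions each:
print(run_shared_pipe([1000, 1000]))  # 2000 cycles
print(run_two_cores([1000, 1000]))    # 1000 cycles
```

        In practice SMT earns its keep precisely when one thread stalls waiting on memory, which this sketch leaves out - hence real HT speedups land somewhere between the two extremes.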

  9. Pompous Git Silver badge

    He's a dickhead

    If he didn't check out what he was contemplating purchasing at Anandtech, Tom's Hardware, Ars Technica etc...

    1. asdf

      Re: He's a dickhead

      You could say maybe he's a dickhead for suing, but not visiting those web sites (which even I haven't done in quite some time as current kit is good enough, see posts above) would also make 99% of the world dickheads, including probably some of your relatives.

      1. Doctor Syntax Silver badge

        Re: He's a dickhead

        "You could say maybe he's a dickhead for suing but not visiting those web sites etc"

        Fair enough comment for that. But did he download and read the spec sheet from AMD before he bought? If the manufacturer's spec matches what he's bought then how can he complain? I bought the 1.6 turbo - why wasn't I given the 2.6 V6 4-wheel drive? Should I sue?

        1. Pompous Git Silver badge

          Re: He's a dickhead

          I take it then that when I am in the market for new computer bits I am the dickhead for checking out what a variety of commentators have to say about the hardware, especially performance. Since when can we rely on manufacturers and salesdroids to reveal all that we need to know about their product before making an informed purchase?

          Example: I recently purchased an ASUS R7250 video card when the fan died on my previous card. It supposedly is capable of 2560 x 1440 pixels. Nothing I could do to make it work at that resolution so I contacted ASUS support who told me that resolution is "only available on the digital interface". The interface I was using is DVI using a dual-link cable. Apparently that's not a digital interface these days.

          I glued a case fan onto the old card and put it back.

          1. Anonymous Coward
            Anonymous Coward

            Mr PG

            That is a limit of the cable, IIRC, not the graphics card. It's effectively like advertising that a car can do 120mph: you cannot sue if the road is limited to 60mph.

            Most companies will offer a swap for a fitting model though. It's not really either of your fault as it's down to some of the changes in spec and usage between VGA-DVI-HDMI etc.

            Though I agree it's annoying when one card will do certain resolutions over one socket, and not another, even when both support it.

          2. asdf

            Re: He's a dickhead

            >I take it then that when I am in the market for new computer bits I am the dickhead for checking out what a variety of commentators have to say about the hardware, especially performance.

            No, of course not. Research is good, and only rational when spending serious dosh, but where nerds on here do their research and where other people's grandma gets her information may be very different. Granny is probably not reviewing AMD white papers.

  10. Sven Coenye

    Mini-Chipzilla

    Shirley you mean Chimpzilla?

    1. Gordon 10

      Re: Mini-Chipzilla

      I thought we had previously established their nickname was Chipzooki?

  11. Brian Miller

    Once upon a time...

    (not so) Long ago, CPUs came without an FPU. That's right, you had to buy a separate chip for all of that floating point math. When I worked on the Celerity minicomputer, the 1260 model could have two processor boards in it with, get this: one integer coprocessor and two floating point coprocessors. Yes, that's right, there were three Weitek coprocessors per CPU!

    And of course, there were Weitek coprocessors for 386 and 486 CPUs, too.

    So: does the lack of an FPU coprocessor for each CPU mean that people were ripped off? If I had bought one, I wouldn't feel ripped off unless I was doing a lot of scientific work. The real question is, how flexible is the execution scheduling? For instance, say there are two processes that do heavy FP math. If they wind up on the same Bulldozer module, is the chip (or OS) smart enough to put them on different modules, or are they stuck on the same module?
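
    As a purely hypothetical sketch of that scheduling question (the function and layout are my own invention, not how any real OS scheduler works): with 4 modules of 2 cores each, where cores 2k and 2k+1 share an FPU, FPU-aware placement would spread FP-heavy tasks one per module before doubling up.

```python
def assign(tasks, n_modules=4):
    """tasks: list of 'fp' or 'int' tags. Returns a core index per task,
    placing FP-heavy tasks on the least FP-loaded module first."""
    fp_load = [0] * n_modules   # FP-heavy tasks already on each module
    used = [0] * n_modules      # cores occupied per module (max 2)
    cores = []
    for t in tasks:
        free = [m for m in range(n_modules) if used[m] < 2]
        if t == 'fp':
            m = min(free, key=lambda i: fp_load[i])
            fp_load[m] += 1
        else:
            m = min(free, key=lambda i: used[i])
        cores.append(m * 2 + used[m])  # next free core in that module
        used[m] += 1
    return cores

# Two FP-heavy processes land on different modules (cores 0 and 2),
# so neither queues behind the other's shared FPU:
print(assign(['fp', 'fp']))  # [0, 2]
```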

    If someone were doing heavy FP and expected 16 FPUs for 16 cores, then I would say they were ripped off. Otherwise, I don't think it's that big of a deal.

    1. Martin an gof Silver badge

      Re: Once upon a time...

      (not so) Long ago, CPUs came without an FPU. That's right, you had to buy a separate chip for all of that floating point math.

      Even less long ago, lots of things were different. The first '386 computers I built came without any processor cache. You could buy a bunch of little DIL cache chips as an option for the motherboard. The '386SX came without half its data lines, the '486SX came without an FPU and the FPU "add on" was actually a full '486 with FPU, which disabled the original chip.

      And all the while there were companies such as AMD and VIA and NEC and others whose names escape me, who made "clones" or "compatibles" that were cheaper, faster and had more facilities than the Intel originals. Aah! Dallas. They made an 8086 clone which had a real-time clock and battery on board. Might even have had some general purpose RAM and ROM IIRC.

      The first computer I really got my hands on was an RML-380Z. Everything was optional on that, apart from the case and the backplane to plug your processor card, memory card, tape controller card, disc controller card, mono graphics card, colour graphics card, serial card, parallel card etc. etc. into.

      People think it's amazing that I build my own computers, but compared with back then (how many jumpers and DIP switches were on those 286/386/486 motherboards?) it's as easy as slotting together a piece of Ikea furniture these days. If you have the right number of screws, it (mostly) "just works". The most complicated part (just as with Ikea) is choosing between hundreds of almost-identical components!

      M.

      1. Sean Timarco Baggaley

        Re: Once upon a time...

        Ah, the RM 380-Z. The only computer I've ever used that came with a "Cassette Operating System". (I am not kidding: it really did say that on the screen.)

        It was a cheap Z80-based microcomputer, sold for insane amounts of money to schools because it came in a ridiculously over-engineered case that was designed to take a direct hit from a British schoolchild, never mind a nuclear missile.

        1. Anonymous Coward
          Anonymous Coward

          Re: Once upon a time...Ah, the RM 380-Z

          Ah indeed. I had a call from a headhunter about a design job with RM at the time, so I read up on their products first, and then declined to go for interview.

          It was as big, heavy and expensive as the industrial computer my company was building at the time. But that had space for up to 3 16-bit processor boards, not one weedy little Z80.

      2. Naselus

        Re: Once upon a time...

        " it's as easy as slotting together a piece of Ikea furniture these days. "

        Plumbing's just Lego, innit? Water Lego.

  12. This post has been deleted by its author

  13. Cameron Colley

    Colour me ambivalent.

    I tend to agree with the posters above that the AMD documentation and, even, marketing does explain about the "cores" being in "modules", and that, historically, no FPU is required for something to be designated a core, so I think the case ought to be dropped.

    I actually bought an FX-8120 (I know, somebody had to) and performance-wise it lives up to expectations, though it seems prone to overheating (but no more so when overclocked). I really have no idea which Intel CPU and motherboard combination I could have bought at the time which would perform better for the price since Intel seem determined to this day to produce so many overlapping products with this and that feature enabled or not that it's almost impossible to decide which combination will even support virtualisation never mind will perform the best for a specific workload. Well, until one starts looking at CPUs which cost more than two AMD CPUs and motherboards.

    Oh, the reason I am looking at Intel is that I know they do currently make better CPUs than AMD -- I just wish they'd make it easier to choose one.

  14. Tsunamijuan

    Single Precision vs Double Precision

    I am a little rusty on my architectures currently, but I believe the single FPU is configured with a bit width wide enough for either a full-bore double-precision calculation from one core, or single-precision calculations from two cores at once - which is similar to Intel's Hyper-Threading. Or there was something along those lines in the reasoning.

    Any time you do double-precision floating point the load is going to be higher regardless; FPUs have always been a pricey and complex part of CPUs. Used efficiently, other logic and integer math can make better use of cycles than leaning purely on the FPU for some things.

    1. Shane McCarrick

      Re: Single Precision vs Double Precision

      It's not really similar to hyperthreading, which is a single core masquerading as two, to try to convince software optimised for multicore use that there are multiple cores present.

      The only legitimate argument is that the FPU is a bottleneck - however, it's only a bottleneck in certain limited circumstances (which might include intensive gaming or video encoding), and even then, to try and argue the design is flawed is a bit mindless, as anyone with a modicum of intelligence will have checked out how the various chips benchmark before purchasing - unlike the fool who is now suing the company.

      Hyperthreading itself has a very chequered past - think back to the original P4 and its limitations (I benchmarked a 1.4GHz hyperthreaded P4 with 2GB of Rambus against a 1GHz P3 with 1GB of far slower memory, running video encoding software - the P3 won the race hands down, despite the extra memory in the P4 box).

      The big issue here is that the fool spent 299 each on two processors without apparently doing any research whatsoever, save looking at the marketing blurb on the box.

      As the saying goes, a fool and his money are soon parted...

  15. dan1980

    "The lawsuit . . . claims it is impossible for an eight-core Bulldozer-powered processor to truly execute eight instructions simultaneously – it cannot run eight complex math calculations at any one moment due to the shared FPU design, in other words."

    Okay, so let's try to understand what this is saying. I am not really qualified in the area of processor design but I can read a sentence good.

    It seems to me that the plaintiffs are using the term 'instruction[s]' in a very specific, restricted sense to mean a "complex math calculation" and therefore one that must engage the shared FPU. Unless I am gravely mistaken, however, there are plenty of instructions that would not need to engage the FPU.

    In essence, the plaintiffs appear to be attempting to exclude such 'instructions' by definitional fiat. So, one suspects that this case will end up involving rather a lot of complex expert testimony regarding exactly what the definition of a 'core' is.

    I have an AMD proc in one of my home PCs, an Intel in the other, an Intel in my laptop and HTPC, and an Intel in my work PC. I manage a shed-load of servers and there is a mix there (though slightly favouring Intel). So I really don't support one or the other and, while the chip business is a bit shaky at the moment, I think AMD is vital for a healthy industry - many here will remember the increase in power of Intel chips that came from the competition of AMD.

    In the end, I think it would be exceptionally arrogant for any judge to rule that the Bulldozer 'modules' do not in fact contain two 'cores', because to do so would be tantamount to legally defining exactly what constitutes a 'core'. Perhaps some might think that's not so dreadful an idea, but I doubt a judge would be inclined to do that - especially considering that, as a poster above mentioned, there are plenty of historical processors that had no FPU at all, and there is always the possibility that future architectures will structure this relationship differently, as AMD have done.

    If you have a legally bound definition of a 'core' then that means processor manufacturers will be forced to design within that definition or risk having their processors viewed as inferior due to having fewer 'cores'. And that has the potential to stifle innovation.

    We are seeing a big rise in the importance of GPUs and one can imagine that the future will bring us architectures that meld these two together. What if these new parts don't fit neatly into a legal definition of a 'core'?

    Perhaps at that point we may need new terminology anyway, but the point is there - in such a highly technical, continually evolving field, legally defining what some technology or term is runs the risk of more-or-less forcing vendors to fit their research and production into that box.

    1. Marcelo Rodrigues

      "It seems to me that the plaintiffs are using the term 'instruction[s]' in a very specific, restricted sense to mean a "complex math calculation" and therefore one that must engage the shared FPU. Unless I am gravely mistaken, however, there are plenty of instructions that would not need to engage the FPU."

      Yes, but it is worse than that.

      The Athlon FX FPU is a 256-bit part. It can process ONE 256-bit operation at a time, BUT it can process TWO 128-bit operations simultaneously - or even FOUR 64-bit operations!

      So, it is a little more complicated than the "one FPU equals one processor" argument. Would he be happier if each core used a 128-bit FPU?
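
      A back-of-the-envelope check on those lane counts (illustrative arithmetic only; the real unit's issue and scheduling rules are more involved than simple division):

```python
import struct

FPU_WIDTH = 256  # bits, as described above

def lanes(op_bits):
    """How many operations of a given width fit side by side in the unit."""
    return FPU_WIDTH // op_bits

print(lanes(256), lanes(128), lanes(64))  # 1 2 4

# The same 256 bits viewed as four packed 64-bit doubles:
packed = struct.pack('<4d', 1.0, 2.0, 3.0, 4.0)
assert len(packed) * 8 == FPU_WIDTH
```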

  16. James Loughner
    Pint

    Everyone knows

    That if you want to do serious math you use a GPU

    1. dan1980

      Re: Everyone knows

      That's, in a way, my point above - that perhaps we are moving towards some hybrid architecture where the idea that a 'core' necessarily must contain a dedicated FPU is not useful anymore.

      As a layman - and corrections are very, very welcome - I wonder if such a hybrid architecture might have elements of this AMD part, which is to say that the FPU as a component of the CPU could be shared between several cores and used just for those functions and instructions that can't be efficiently offloaded to a GPU-style processor.

      The plaintiffs are essentially asking the judge to legally define a 'core' such that it must contain a full, dedicated FPU. To me that sounds a bit restrictive.

      1. Mark Honman

        Re: Everyone knows

        There is also the precedent of classic SIMD machines such as the Connection Machines CM-2.

        In the day it was always referred to as a 64K processor machine (or maybe to the pedants, 64K PEs), had one Weitek FPU per 32 processors, and being SIMD I'd assume there were no per-CPU instruction fetch/decode units.

        Oh, and BTW each processor was 1 bit wide...

        1. Destroy All Monsters Silver badge

          Re: Everyone knows

          But the instruction set for the 1-bit processors was not too refined. The idea was to have a "computing memory", IIRC. Well, the monograph is still in print... (also: review).

    2. Anonymous Coward
      Anonymous Coward

      Re: Everyone knows

      "That if you want to do serious math you use a GPU"

      Because an MBA is totally useless for even basic math.

  17. Sam Adams the Dog

    The lawsuit and charge are a lot of crap

    At one time, it was common to build computers without any floating-point units at all. By this lawsuit's argument, these computers had no cores at all. And by the way, there are lots of operations that don't require floating point, and it appears that each of what AMD calls a core can independently perform integer operations. So, for such loads, one can, at least in theory, get 8-core performance.

    1. Tom 7

      Re: The lawsuit and charge are a lot of crap

      It's a separate specialised core that does maths - the other core does binary. I'd imagine the two are asynchronous (even the 8087 used to go away, work on its own, and send an interrupt when it had finished, IIRC).

  18. joed

    what about i7

    for the desktop and mobile versions? And what about the less knowledgeable decision-makers picking equipment based on the sticker alone?

  19. Anonymous Coward
    Anonymous Coward

    "cores" is overloaded

    Consider the 512 "CUDA cores" stated on a recent NV GPU and what it really means:

    "The idea is that the CPU spawns a thread per element, and the GPU then executes those threads. Not all of the thousands or millions of threads actually run in parallel, but many do. Specifically, an NVIDIA GPU contains several largely independent processors called "Streaming Multiprocessors" (SMs), each SM hosts several "cores", and each "core" runs a thread. For instance, Fermi has up to 16 SMs with 32 cores per SM – so up to 512 threads can run in parallel."

    "Only one flow path is executed at a time, and threads not running it must wait. Ultimately SIMT executes a single instruction in all the multiple threads it runs – threads share program memory and fetch / decode / execute logic."

    If someone didn't read up and make sure what they were getting is what they thought, then it must not have been important enough for a lawsuit either, eh?

  20. DCFusor

    Not just the FPU

    At least Nvidia calls it what it is... to the confusion of many, I might add. And let's not start on their double precision speed. Joe Six-pack might not care about floats or doubles, but you can bet any scientist does - especially if they're doing neural networks or simulations. Even audio/video editing software often uses floating point extensively these days as the internal format.

    At any rate, didn't I read above that in fact there was only one branch prediction unit, one instruction decoder, and so on, per module? (That's what set me off as an old and sometimes forgetful CPU designer) Forgetting the dubiously shared FPU - that to me would mean that most of the time it was really only one thread per module, unless somehow that stuff - which is required to execute ALL instructions, runs at double the speed so it can keep both pipes full....The single copy of the "support" stuff would have to do one, then the other and so on regardless. Maybe it's smart enough to keep feeding one if the other is currently busy for more than a cycle - I don't see that even being discussed.

  21. Zola

    APUs and "Compute Cores" muddy the water even more

    AMD have recently taken to counting the combined number of CPU and GPU cores as "Compute Cores" when describing their APUs, so for example the A10 PRO-7850B has 4 CPU cores and 8 GPU cores, or 12 "Compute Cores" in total.

    Although I'm a little uncomfortable with this marketing-motivated move I do understand the distinction, but I'm not entirely sure it's necessary or helpful (which is not to suggest that AMD tries to hide the number of actual CPU cores, as they don't). However our clueless, dickhead plaintiff would no doubt sue on the basis that he thought he was buying 12 *CPU* cores - after all, he did overhear someone speaking about CPU cores once upon a time.

  22. Herby

    Who says that a core needs to compute independently?

    One could argue that there is lots of silicon and how it executes is another story. The fact that they execute independently is a secondary feature. Sure it is important, and most of the time they do execute independently. But consider that there is only one path to external memory (yes, there are caches). If you always miss in the cache, you will hardly execute instructions "simultaneously". Granted this won't always be the case, but it could happen.

    This reminds me of the time when marketing droids touted the number of transistors in a radio (as if more was better), even though they were only being used as diodes, and then of dubious value. In previous incarnations the droids mentioned "tubes" where some were only dropping resistors to work on the AC line voltage (but they did light up!).

    1. Anonymous Coward
      Anonymous Coward

      Re: Who says that a core needs to compute independently?

      I actually have a microcontroller (to a toy :P ) that has a diode for one special use. To hold the charging cable in place. :D

      So I totally agree, number of objects does not guarantee their use is put towards what we want.

  23. Anonymous Coward
    Anonymous Coward

    I've done tests using AIDA64 Extreme's FPU Julia, Mandel and SinJulia benchmarks, with parameters set to use 4 cores for one test and 8 cores for the other, and the 8-core result was 2x the 4-core result. Running HWBOT wPrime on 4 threads vs 8 threads also showed a significant difference: 8.641 seconds on 8 threads vs 12.688 seconds on 4 threads. You may run these tests yourself on a Bulldozer or Piledriver CPU, and you should get similar results. The CPU tested was an FX-8320 overclocked to 4.2GHz. This should prove that the CPU does indeed have 8 physical cores, which share resources.
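    For anyone wanting to repeat that kind of scaling experiment without AIDA64, here's a rough sketch using Python's multiprocessing. The workload, job count and sizes are invented for illustration; on a chip with genuinely independent integer cores, the 8-worker run should finish in roughly half the 4-worker time:

    ```python
    # Compare wall time for the same CPU-bound work on 4 vs 8 workers.
    import time
    from multiprocessing import Pool

    def burn(n):
        # Integer-heavy busy loop; deliberately FPU-free, since Bulldozer's
        # integer cores are the ones claimed to be fully independent.
        acc = 0
        for i in range(n):
            acc = (acc * 31 + i) % 1000003
        return acc

    def timed_run(workers, jobs=8, n=200_000):
        start = time.perf_counter()
        with Pool(workers) as pool:
            results = pool.map(burn, [n] * jobs)
        return time.perf_counter() - start, results

    if __name__ == "__main__":
        t4, r4 = timed_run(4)
        t8, r8 = timed_run(8)
        assert r4 == r8  # same work, same answers, regardless of worker count
        print(f"4 workers: {t4:.2f}s  8 workers: {t8:.2f}s")
    ```

    The absolute times depend entirely on the machine; it's the ratio between the two runs that tells you how well the extra cores scale.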

  24. cpvalence

    Strawberry Fields Forever

    There's a number of reasons why the lawsuit is total baloney:

    #1: A single core is no single core any more, and has not been for a long time.

    You see, AMD, Intel and ARM designs all share the same feature: they are superscalar. This means that one core is divided into many "pipelines", up to 8 in the case of Intel Haswell and AMD Bulldozer. Each of these pipelines is designed for a specific purpose: memory access, branches, floating point ops, integer ops etc... In many cases there is more than one floating point pipeline or integer pipeline. So in theory, one core can do up to 8 ops per clock. In practice it is more like 2, for various technical reasons. Here is a good read on the subject: http://www.lighterra.com/papers/modernmicroprocessors/

    #2: Somebody has to define what would be the performance of a legit 8 core CPU.

    AMD K10, which is the architecture prior to Bulldozer, was very competitive for its time (2007-2010). Bulldozer is an evolution of K10, because AMD doesn't just start from scratch with every CPU generation. When Bulldozer was released in 2011, it had 90% of the performance of K10 when compared at the same frequency and number of cores. K10 was limited to 6 cores, but Bulldozer can have up to 8 cores (4 modules). And the newer-gen Bulldozer derivatives (Piledriver, Steamroller, Excavator) all perform better than the "real" K10 cores.

    Trust me. If you buy a $$$$ 18-core Intel Xeon and think it's gonna game like hell, good luck with the 2.3GHz core frequency. But afterwards you can always sue Intel... right?

    #3 The AMD CPUs work and behave just like real x number of cores (2-4-6-8).

    On ANY real-world or synthetic benchmark, the Bulldozer family of CPUs behaves just like Intel's CPUs do, thread for thread. One Bulldozer module performs as well as 2 equivalent discrete cores. If you put a workload on one AMD core and it takes 8 seconds to complete, it'll take more or less 1 second to complete with all 8 cores active. These CPUs perform just like legitimate 2-4-6-8 core parts.

    #4 What is really happening

    Right now, if you buy a high end (high end in 2012) AMD FX 8350 8-core @ 4.0 GHz, you will pay $169 for it:

    http://www.newegg.com/Product/ProductList.aspx?Submit=ENE&DEPA=0&Order=BESTMATCH&Description=8350&N=-1&isNodeId=1

    Now try to find an Intel CPU for $169 that actually doesn't suck. Nope, sorry, nothing there for you.

    Unless you want an i3 4340 dual core... which the same price AMD CPU just obliterates.

    AMD never diddled anybody with its Bulldozer CPUs. These CPUs always delivered more performance per dollar than any Intel option anyway. I hope the lawyers there are also part-time microelectronics engineers. This lawsuit is a total joke. Like a guy in his living room buys two AMD FX 9000s and suddenly becomes an expert in computer science. There are billions of transistors on a single chip. You cannot just create a definition of what a CPU core is. In 5 years it's gonna be obsolete.

    Sorry for the mistakes. I'm not a native english speaker!

  25. Shane McCarrick

    I can dig out both Intel and AMD FPU boards from around 18-19 years ago.

    I also have working boards (museum pieces) which I could theoretically demonstrate both boards working sans the FPU units.

    If the plaintiff spent 299 each on two processors- as the article suggests- without researching what he was buying- he was a tool and a fool.

    I've probably bought a couple of thousand AMD chips in the past 2 decades- over half of which were the legendary K6-2 500s. People buy AMD chips- as a compromise between price and performance- even these days. Life is a series of choices- when you buy something you similarly have choices. If you choose to buy something without elementary research- I'm sorry- but why the hell is the vendor at fault?

    If AMD had hid its chip architecture or blatantly lied about it- he might have a case- they didn't.

    Anyone who mindlessly spends hundreds of dollars (or Euros or Pounds- whatever)- without looking beyond marketing blurb- is a fool..........

    Let's guess- hard-drive makers and the 'reduced capacity' rather than advertised capacity- are next on his radar......? What a tool.

  26. Will Godfrey Silver badge
    Unhappy

    Call me cynical

    I can't help thinking somebody is backing this guy. I wonder who might want to distract AMD at a critical time?

    1. Anonymous Coward
      Anonymous Coward

      Re: Call me cynical

      If you are not familiar with the American legal system: There is a thing called a class-action lawsuit.

      I expect the lawyers behind this will ask to have this certified as a class action lawsuit, and the damages would be for anyone who ever bought an AMD chip.

      As a plaintiff, think of it as a lawyer you never heard of filing a suit on your behalf, without your knowledge, and keeping the damages.

      As a defendant, think of it as extortion.

  27. Tom 7

    What constitutes a core?

    I was using a high-speed bipolar process at the end of the '80s and found a design for a 16-bit processor that used either 600 gates (or was it (CMOS) transistors). I re-modelled it using the bipolar process and it would have been the fastest CPU in the world at the time (we could do 2.4GHz with ease then). Couldn't find any RAM for it thou...

  28. chris 17 Silver badge

    No chance of a court case like this in the UK.

    AMD could have marketed the FPUs as additional cores, turning the 8 core into a 12 core. Our advertising standards agency probably would have asked them to add the GPU cores too & then banned Intel and others from marketing likewise. It's what they did with ADSL vs Cable, permitting cable to be advertised as fibre whilst banning FTTC being advertised as fibre, when the cable offering is essentially FTTC. Virgin have never and have no plans to roll out fibre to the home; their system uses coax cable from the cab in the street.

    1. Anonymous Coward
      Anonymous Coward

      @Chris17

      Virgin are not perfect, but I regularly get 100Mbit/s from our router, whereas from our BT business line we get 8 (and it costs more). If I can get that without having a 30M trench dug in my drive, I won't complain over technicalities.

  29. Anonymous Coward
    Anonymous Coward

    Blackadder?

    Isn't there a Blackadder Goes Forth where Blackadder is before a Court Martial, and the prosecution brings a private case against the Defence for wasting the courts time? "Granted, Defence counsel is fined for turning up".

    In this case, should be plaintiff's .....

  30. silent_count

    This guy could win.

    So they're going to pull twelve regular people off the street and get them savvy enough about CPU architectures and designs to be able to make an informed ruling on this case. And then I'm going to flap my wings and fly to the moon.

    This case does have a chance of being successful. Not because it has any merit but because the jury won't have a clue about who is telling the truth.

    1. Spoonsinger

      Re: This guy could win.

      Nope. He might win before twelve regular people, good and true, but then there will be an appeal. A bit of bad press for AMD for the first bit and a bunch of dosh to pay for the costs on the appeal, but not really a case. (If it did remotely win I suspect Intel would be quaking in their boots because of their varied and somewhat misleading nomenclature)

    2. Naselus

      Re: This guy could win.

      Nah, they'll bring in an expert witness or two, and after a couple of hours (once he's stopped laughing) he'll set them straight. There's simply no case.

  31. Peter Nield

    Caveat Emptor

    Someone expecting floating point performance should confirm the performance of what they are buying.

    Buying individual CPUs (not computers with those CPUs in them) indicates a certain level of ability that the general public doesn't have - replacing or matching a CPU with a motherboard is not something I would expect the general public to be able to do. And to make his position worse, he bought two CPUs - he MUST have been certain about what he was buying.

    I'd also relate this to buying packaged food - if you want to know what is in an item you buy at the store, the only relevant information on the packaging is the ingredients and nutrition information - everything else is marketing to induce you to pick that specific product over a competing one. Buying a food item and expecting it to have X in it (or, more commonly nowadays, not have X in it), and not reading the ingredients to confirm that X is present (or not present), is caveat emptor in action.

    When I have a specific application in mind, I review relevant benchmarking information prior to making a purchase. I wouldn't expect that of someone who's going to do their e-mail, browse the web and play Farmville, though.

  32. Anonymous Coward
    Anonymous Coward

    core has no real meaning in computing only in marketting

    Core at best would be read as CPU, and as far as I am aware "cores" has never been agreed to relate to FPUs, caches or any other integrated electronic "organic" system blocks.

    The best view of the plaintiff's position is that he thought AMD cores = Intel cores, and as neither has officially published their own definition, his whole argument is that oranges are not apples. Unless he can prove that AMD said their oranges were exactly the same as Intel's apples, then he has no case.

    He might find they were spoken of as equivalent but never identical; by the same token, if you want some fruit then either is applicable, but both have additional benefits outside the definition of fruit.

    In summary: if he wanted apples, then that is what he should have bought.

  33. John Savard

    Not That Bad

    The single FPU can still perform one floating-point instruction for each of the two cores it serves simultaneously, as long as that instruction isn't a maximum-width vector instruction. So the only time the matter comes up is when doing AVX instructions - not regular floating-point, not MMX or SSE.

    Initially, when I heard about these chips, that wasn't clear from what I read, so I was disappointed, but on learning this detail, I don't think there's a reason to find fault with AMD.

  34. CarbonLifeForm

    This smells like an Intel-financed tactic

    Full disclosure, I worked at AMD during Bulldozer.

    If we get into misleading terminology, how often does Intel make clear that a hyperthreaded core count is double the "real" core count?

    The FPU is shared if there are two threads, and is then 128 bits wide per thread. If not shared, it is 256 bits wide. For integer workloads, the smaller FPU doesn't matter. For vector workloads it does, and you schedule every other core.

    1. DropBear

      Re: This smells like an Intel-financed tactic

      Wrong. I always knew that Intel hyperthreaded "CPUs" showing up under Windows represented only half as many actual cores, while AMD always had as many actual, physical cores as claimed. This development certainly blurs the line, but what it also does is shatter my confidence that at least I actually get what it says on the tin with AMD. As a rabid lifelong AMD supporter, this might just be the point where I turn my back and no longer care - something they might have done well to consider before going down this route...

  35. CarbonLifeForm

    Will red hat and Microsoft be sued next?

    Both report AMD's core count. Conspiracy!!

  36. Anonymous Coward
    Anonymous Coward

    Ignorance is bliss

    For the clueless, AMD's claims are 100% accurate as to core count. This subject has been beaten to death for years and the fact is the core count is correct and yes, all cores can process concurrently, as has been proven over and over. Bulldozer is not the best architecture in the CPU world but it has its advantages and disadvantages.

    Anyone who doesn't believe that the core count is correct can run Prime 95 or other apps to see all cores loaded and processing individual instructions concurrently. This case will die an appropriate death when it reaches court. In fact this is a frivolous case and the plaintiff should pay all of AMD's legal fees plus a punitive fine as this is a perfect example of technical ignorance leading to a meritless lawsuit.

  37. Anonymous Coward
    Anonymous Coward

    You don't buy cores, you buy compute

    The consumer is responsible for selecting a machine for their workload. "A core" is not a unit of measurement when it comes to how long something will take to run. Benchmarks, reviews and personal profiling should dictate a purchase. The architecture of a Bulldozer core was never hidden from the public. The plaintiff will have to come up with an industry standard definition of a core, and they won't be able to do this.

  38. Unicornpiss
    Meh

    Well, what exactly is a "core" anyway?

    It's implied by the lawsuit that a core is a completely separate processor, but that's not true in anyone's chip, now is it? If you're going to draw the line and equate a core with a complete CPU, then each core should have all of the guts it needs to operate utterly independently of each other core, including power regulation, thermal management, cache, etc. Nobody does this. Perhaps in some ways Intel's chips are somewhat more independent than some of AMD's. But you can't use all of the features of every core simultaneously unless each one has separate buses for everything and enough intelligent management (including software and the OS) designed to utilize everything in a true parallel configuration. And again, no one in our world does this yet. The way this guy is screaming that he's been victimized, you'd think he bought 4 computers and discovered that 2 of them were just cases filled with sand. (I know silicon is basically sand)

    Intel chips have been proven faster in multiple benchmarks. But AMD chips often 'feel' faster to me in real-world use, though this is just one man's subjective opinion.

  39. lsces

    8x64bit processors or 4x256bit processors

    Is it not the case that these chips are basically 64-bit processors, in which case they are 8-core processors? I had thought the FPUs were 128-bit, but it seems from comments here that they are 256-bit, so the chip is potentially a quad-core 256-bit processor, or an 8-core 128-bit unit? It's sold as 64-bit and in that mode it runs 8 parallel streams ... end of story?

  40. E 2

    Complete BS law suit. The structure of these chips (1 module = 2 int/logic cores + shared 256 bit FPU and shared decode/cache) was made very plain from word go. This structure was a selling point.

    Let caveat emptor prevail: the suit is either malicious or put up by people too negligent to understand what they were buying.

  41. tabman
    Unhappy

    Confused - help please

    I hope you will all allow a very basic question from someone who doesn’t know too much about the differences between AMD and Intel processors.

    I have always bought Intel model chips for my PCs. I am a believer in the model which states that you buy the most expensive processor and motherboard and then build your machine around those components. I have never bought or knowingly used a PC with an AMD processor chip inside it (and I doubt that I would be able to tell if I did). I have nothing against AMD but that is the way it has worked out for me. We use Intel machines at work, I use an Intel desktop at home and on the go I have an i5 laptop and an Intel Atom tablet.

    Ever since I have had MS Vista I have used a gadget which showed me the loading on each processor (not for any other reason than I thought it looked good). When using these gadgets, I noticed that on a dual core machine it showed 4 processors. On the system tab in settings it showed two processors but 4 processors with hyper-threading. I had no reason to believe otherwise so I always expected that an AMD chip would do the same (i.e. if it had 4 processors, I would see eight cores due to hyper-threading).

    Is this article saying that on an AMD processor, if it states 8 cores, the gadget I use would report 8 processors? Do Intel hyper-threads=AMD cores or have I missed the point of this entirely?

    I know that this is probably a dumb question but that is how it seems to read to me. Could someone try and explain in plain English for me?

    Example:

    Is a 4-core 4.0 GHz Intel i5 (for example) = an 8-core 4.0 GHz AMD?

    I know this is a bit of a TL:DR but any help would be much appreciated.

    1. Marcelo Rodrigues

      Re: Confused - help please

      "Is this article saying that on an AMD processor, if it states 8 cores, the gadget I use would report 8 processors? Do Intel hyper-threads=AMD cores or have I missed the point of this entirely?"

      HyperThreading is quite different from what AMD has.

      Roughly (very) speaking:

      HyperThreading uses parts of the CPU pipeline that are unused, to process another instruction. It doesn't double the performance, but it helps.

      AMD went a different road: it made two CPUs, and stripped some parts from one of them. Then both were glued together. It's more complex than this, but...

      So. One AMD unit (sold as two cores) has:

      One decode unit

      Two integer pipelines

      One FPU unit

      One assemble unit

      The catch: it is faster to decode than to process. In most (not all, but most) cases there is no difference between having one or two decode units feeding two pipelines.

      The FPU is 256 bits wide. It can process one 256-bit instruction per clock. But it can also process two 128-bit instructions per clock, or even four 64-bit instructions per clock. And not all FPU work is 256-bit.

      If memory serves me right, the assemble unit goes the same way as the decode unit.
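      You can see the distinction in how the OS enumerates cores. A minimal sketch (Python, parsing Linux's /proc/cpuinfo; the function name is mine): an FX-8350 reports 8 logical processors and 8 distinct cores, while a 4-core HyperThreaded Intel chip reports 8 logical but only 4 physical.

      ```python
      # Count logical processors and distinct physical cores from Linux's
      # /proc/cpuinfo: each "processor" entry is one hardware thread, and
      # each unique ("physical id", "core id") pair is one physical core.

      def count_cores(cpuinfo_text):
          logical = 0
          physical = set()
          phys_id = None
          for line in cpuinfo_text.splitlines():
              key, _, value = line.partition(":")
              key, value = key.strip(), value.strip()
              if key == "processor":
                  logical += 1
              elif key == "physical id":
                  phys_id = value
              elif key == "core id":
                  physical.add((phys_id, value))
          return logical, len(physical)

      # On a real box: with open("/proc/cpuinfo") as f: print(count_cores(f.read()))
      # Two hyperthread siblings on one core count as 2 logical, 1 physical:
      sample = ("processor\t: 0\nphysical id\t: 0\ncore id\t: 0\n"
                "processor\t: 1\nphysical id\t: 0\ncore id\t: 0\n")
      print(count_cores(sample))  # (2, 1)
      ```

      On a Bulldozer chip the two integer cores of a module show up as two distinct core ids, which is exactly why AMD counts them as two cores.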

      1. tabman

        Re: Confused - help please

        Thanks, I think.

        time for wikipedia

    2. Naselus

      Re: Confused - help please

      "Do Intel hyper-threads=AMD cores or have I missed the point of this entirely?"

      Absolutely not. But for your purposes, basically yes.

      In AMD's case, they're actual cores with piss-poor architecture. In Intel's case, they're virtual cores with very good architecture. Intel gives better performance because of the architecture differences. Generally speaking, if you have an AMD machine with 8 cores, it'll be in the same class as an Intel chip with 4 cores (the Intel chip will likely perform about 25% better on most workloads and cost about 50% more).

  42. Robin 12

    FX-8320 user and happy.

    This case should be thrown out.

    Writing this from an AMD FX-8320. The CPU load monitor is showing 8 cores working at various loads, balanced across the processors. Before purchasing this processor this summer, I did some research. I knew that there was only one FPU per two cores - the issue is published, so there is no surprise about it. I quit worrying about FPUs because much of what I want to do can be offloaded to the GPUs, which are much faster.

    Looking at a different machine for gaming and debating between an i7 and an AMD. I am leaning towards AMD for cost purposes.

    All I know is I am going with an EVGA Nvidia Titan X Hybrid graphics card due to Linux support, and got a fantastic buy direct from EVGA.

  43. Paul Shirley

    Is he an uninformed buyer?

    Seems unlikely anyone would buy an FX-9590 without knowing more than an average buyer. Beyond checking it fits his motherboard, this CPU has insane power and cooling requirements, far beyond what many boards can support, and beyond the PSU in typical consumer PCs. He admits visiting amd.com, presumably for research. His case will likely fail on the basis that he should have known exactly what he was buying, having done research. It's not much of a secret how the FX series uses core pairs, or the performance problems that causes.

    There's something very fishy going on. Is he really just suffering buyers remorse after *not* properly researching the purchase?

  44. David 14

    Open the floodgates then...

    Okay... I get the point of the complaint... and to be honest, it is not an issue to most Reg readers. We understand, or at least comprehend, the concepts behind processor design. We know that it is a very complex process that balances a multitude of items, minimized to fit ACTUAL USE to keep costs down.

    The legal discussion mentioned in the article is that the "average consumer" is being duped.

    Okay then, we are in a world of hurt. Most technology things presented to the consumer market are "dumbed down" quite a bit. Consumers still believe that GHz = speed = performance, which we all know is over-simplistic and often wrong. One would presume that a "core" for marketing purposes would equal the number of the same-named components in a processor, would you not?

    For history, many have noted the old intel 286/386/486 chips and floating point processing... you do not need to go back that far. Oracle (formerly SUN) SPARC T1 chips, which were announced in 2005 had a single Floating Point Unit for every 8 processing cores. Yes, that was a decade ago, but still half the time since the intel 486 stuff being discussed. My point here... it is not uncommon in chip design to trade off more cores for less FPU.

    What other things are commonplace in technology that would be suspect to the masses? How about the size of HDDs? That one has been around the block a few times... the number of bytes versus the number of GB/TB... we all know of the 1000 vs 1024 conversion, but it is not a concept that many outside of technology are aware of. How about how mobile phones seem to not bother talking about the RAM in the phone anymore? They always report the amount of Flash Storage, and the number of "cores"... and even battery capacity... but we all know that more RAM will make a world of difference in overall performance.
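    The HDD arithmetic is easy to check. A quick sketch of the decimal-vs-binary gap (the function name is mine, for illustration):

    ```python
    # Drive makers count 1 TB = 10**12 bytes; OSes have traditionally
    # reported size in 2**40-byte units (TiB) under the same "TB" label,
    # so a "1 TB" drive appears to shrink by about 9%.

    def decimal_tb_as_tib(tb):
        return tb * 10**12 / 2**40

    print(f"{decimal_tb_as_tib(1):.3f}")  # 0.909
    print(f"{decimal_tb_as_tib(3):.3f}")  # 2.728
    ```

    Same bytes, two unit systems - much like "core" meaning different things to different vendors.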

    Will be interesting what happens in this case, as I do not trust courts with always understanding what they are judging... I hope this is NOT that!

  45. 404

    My current rule of thumb is pretty simplistic

    i7 = Gaming, engineering, programming, data manipulating

    i5 = Business

    i3 = Consumer

    AMD = Consumer, get_the_intel_instead_if_possible

    It wasn't always that way - I was an AMD fanboi for a long time. But let's face it, intel came back. Is that intel cpu thermal patent still valid? I've seen the magic smoke when an overclocked AMD or Cyrix processor popped, usually because the motherboard thermal sensor was bad or gave incorrect readings. Intel never had that problem due to internal thermal protection and would shut down rather than go 'pop'. When the Athlons came out, that scared the pee out of intel and they had the resources to pursue then surpass AMD. I mean AMD has done it twice to intel, back in the 486/586/686 (Cyrix) days, then with the Athlon 64. Fast forward to today and AMD needs to pull a third rabbit out to stay in the game. I just replaced my dev workstation with an i7 Acer Predator I juiced up about 6 months ago, gave my wife my old one, an HP AMD Phenom II Black X4 945 3.0Ghz, and my work laptop is an i5 - major difference in speed between processors and I was pretty proud of my Phenoms.

    Enough. Wife is looking at me funny because she doesn't believe I can listen to her and type at the same time (I don't care for the Kardashians, honestly. But I nod and make appropriate noises)

    1. Robert Jenkins

      Re: My current rule of thumb is pretty simplistic

      AMD are _still_ doing it, if you look at performance vs cost rather than performance per core.

      The top systems in the Passmark v7 multi-CPU league were AMD Opteron based - each has more CPUs than the near-competitor Intel systems, but if you work out the actual component costs the AMD setups are cheaper than the Intel ones.

      https://www.cpubenchmark.net/multi_cpu_pt7.html

  46. Henry Wertz 1 Gold badge

    Jeez it's tricky...

    Jeez this is a tricky one. (I'll call the module "2 cores" here to keep the sentences readable -- I'm not sure which side of the fence I'm on.) So, the AMD design has separate integer and load/store units per core, but shares almost everything else, and calls them separate cores.

    A point toward them NOT being cores... hyperthreading. When this was added to the P4s, what happened was Intel added additional execution units to each core compared to older P4s, but found the scheduler was very frequently unable to keep a reasonable fraction of these execution units busy. So, they added the hyperthreading, where it would show an extra "CPU" that would only utilize execution units unused by the real CPU. They did not refer to a single-core with hyperthreading as a dual-core.

    A point toward them being cores... with hyperthreading, it was possible for well-optimized code to keep all (or nearly all) execution units busy on the "real" CPU, so the hyperthreading CPU would make little or no forward progress. With this setup, each core *does* have some dedicated resources so neither will stall.

    As for performance... again tricky. I mean, if they touted a certain per-core performance, and it regularly only gets 1.5x that performance on dual cores (instead of more or less 2x) that's not good. But, in this modern era, you've got CPUs allowing some cores to run faster if others aren't running, power gating, throttling to limit to a given TDP, and so on. Shared cache between cores is common; sharing branch prediction units is highly unusual but (perhaps) smart... most software is not branching constantly, so sharing a branch prediction unit between 2 cores shouldn't slow things down much. The shared FPU is odd; but it's possible they used a single faster FPU over giving each core a smaller, simpler, slower FPU, and although this means FPU performance can vary somewhat (depending on what is happening on the other core), that FPU performance is overall better than it would be otherwise.

    Really, for this "the devil is in the details".

  47. Henry Wertz 1 Gold badge

    OK I've decided

    After reading John Savard's post that both cores using FPU only stalls one core when running AVX instructions, I think it's pretty clear that the AMD design can reasonably be said to have 2 cores per module.

  48. Anonymous Coward
    Anonymous Coward

    Where is the misleading info?

    Where is the misleading info? The AMD specs, and even the pre-release slideware for Bulldozer, made this architecture quite clear to anyone who bothered to read the material. (AMD got a lot of flak for the design even when only slideware was available.) The module / core design was spelled out in sufficient detail to understand the shortcomings.

    One thing I don't recall reading about...is the FPU pipelined or not? A pipelined FPU can have several instructions working their way through it concurrently.

    Also, Intel never advertised hyperthreading as another core. I've seen it described as another register set that can execute like another core if the first hyperthread is blocked on some slow operation.

    1. Anonymous Coward
      Anonymous Coward

      Re: Where is the misleading info?

      Good heavens, I do recall, that my Aunt Mabel vouchsafed me the startling revelation that the absolutely most modern thing for a well-situated young blade in the Metropolis, is to build himself one of those new-fangled PC contraptions.She was positively swooning over my dear friend Captain Fitzgerald when he whipped his out after a jolly afternoon session at the Lyon's tea shop near my darling cousin Jemima's digs .... he is such a splendid Bridge player, a true Colossus, but a deuce of a fellow after a few slices of Victoria Sponge and G&Ts ....

  49. ChubbyUnicorn

    So the guy bought a 9590 and felt cheated because of its architecture? The 9590 is a Vishera CPU, not Bulldozer...

  50. Anonymous Coward
    Anonymous Coward

    Even a damn fool gets his day in court in the land of litigation

    Paid liars in the U.S. reap fortunes from frivolous lawsuits as those in the judicial system take care of their own. Just because an ass clown filed a lawsuit does not mean it has any merit or a snowball's chance in Hell of winning in court. These Hail Mary frivolous lawsuits are intended to get a company to agree to an out of court settlement. That isn't going to happen in this case, as AMD has not misled anyone on core counts, which is easily proven. The fact that the plaintiffs are so technically challenged as to not know how to test and confirm that the AMD CPUs do in fact process independent threads concurrently, equal to the core count, should be grounds for immediate dismissal of the case with prejudice and punitive damages paid to AMD for the foolishness of the clueless.

  51. Pseudonymous Diehard

    Wheres Apple in all this?

    Ok guys. I've eaten 18 miles of strudel, and in making said strudel I must have peeled and cored about 20,000 apples. Not a single one had more than one core. Yet the Apple Store says their various products are multicore.

    What the actual fuck.

  52. dave 81

    is there a result yet?

    Is this still being litigated?
