Inside Intel's Haswell: What do 1.4 BEELLION transistors get you?

Intel's Haswell processor architecture - formally called the fourth-generation Intel Core architecture, which is what the chip giant prefers we call it - has been in development for at least five years. It first appeared on the company's product roadmap in the summer of 2008 merely …

COMMENTS

This topic is closed for new posts.
  1. Nigel 11

    No bigLITTLE

    Am I the only person surprised that Intel hasn't used a bigLITTLE design? (ie, one with a much-simplified core for housekeeping when there's very little going on, sharing state with a much faster core to which it would hand over when things get too busy). Can they dynamically shut down so much of a core that they don't actually need to use silicon real-estate for a separate housekeeper-core architecture?

    1. Malcolm 1

      Re: No bigLITTLE

      I saw an analysis somewhere (might have been AnandTech or SemiAccurate - can't remember now) indicating that Intel were of the opinion that their new power management tech was now good enough that this type of approach was unnecessary: "hurry up and go to sleep". Ramping up to maximum power, doing the work in the minimum amount of time, and then ramping down again to practically zero power drain is more efficient than powering up a lower-powered core for longer.

      It remains to be seen whether this is true in practice, but it sounds plausible.
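      The "hurry up and go to sleep" argument is easy to sanity-check with a back-of-envelope sum. The power and timing figures below are invented purely for illustration - they are not Haswell (or any real chip's) numbers:

      ```python
      # Race-to-idle vs slow-and-steady: same work, same 10-second window,
      # illustrative power figures only.
      WINDOW = 10.0  # seconds

      def energy(active_power, active_time, idle_power):
          """Joules over the window: run flat out, then idle for the rest."""
          return active_power * active_time + idle_power * (WINDOW - active_time)

      # Big core: 8 W flat-out, done in 1 s, then a near-zero 0.05 W idle.
      race = energy(8.0, 1.0, 0.05)   # 8.45 J
      # Little core: 1.5 W, takes 8 s for the same work, idles at 0.2 W.
      slow = energy(1.5, 8.0, 0.2)    # 12.4 J
      ```

      With these (made-up) numbers the race-to-idle strategy wins, but only because the big core's idle power is assumed to be tiny - which is exactly the claim Intel's power management tech has to make good on.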

  2. John Smith 19 Gold badge
    Happy

    sandy -=> Ivy bridge. Should that not be

    Stonyey- bridge?

    1. Giles Jones Gold badge

      Re: sandy -=> Ivy bridge. Should that not be

      Absolutely :)

  3. Spoonsinger
    Coat

    Re :- What do 1.4 BEELLION transistors get you?

    Something like an AMD A10-5800K? (but not as good).

  4. Anonymous Coward
    Anonymous Coward

    Name for the new power state

    Since we are creating this new power state, wherein the CPU is not running much of the time but periodically awakes just enough to process a message, I suggest we need a name for it.

    We have hibernate, off, suspend, sleep, and awake.

    May I suggest this new state be called "apnea"?

    1. paulc

      Re: Name for the new power state

      Chillaxed would seem more appropriate

  5. Karlis 1

    Microsoft shill?

    May I humbly suggest that Intel will probably be more than pleased to allow Apple and the likes to use the new chips to run non-Microsoft software as well.

    Would only make business sense - associating CPU with Windows 8 would be a commercial suicide - what is the point in investing untold beelions in R&D to only sell 3 copies? ;)

    1. Anonymous Coward
      Anonymous Coward

      Sad how you have to bring this type of comment when the intelligent people that posted before you actually had something to say.....back to the cave pls k thx troll.

    2. Mark .

      Re: Microsoft shill?

      Not sure who you are responding to?

      Yes, it's well known and obvious that Intel would like their CPUs to be used as often as possible, whatever the OS. Though your last comment is a bit silly, with 100 million Windows 8 licences sold.

      1. Destroy All Monsters Silver badge
        Facepalm

        Re: Microsoft shill?

        > 100 million Windows 8 licences sold.

        HERP DERP!

      2. paulc

        Re: Microsoft shill?

        "Though your last comment is a bit silly, with 100 million Windows 8 licences sold."

        if Windows 8 was really successful, then they'd be crowing about the number of activations by users. As it is, they're reduced to bleating about licences sold to OEMs who have to buy huge quantities to take advantage of the cliff-tier pricing discounts for volume purchases.

  6. Anonymous Coward
    Anonymous Coward

    I would rather have two more cores than any of that graphics crap

    I prefer to get my GPUs from Nvidia or AMD.

    1. Mark .

      Re: I would rather have two more cores than any of that graphics crap

      So do I, but even with a dedicated GPU, laptops can now use the Intel GPU most of the time, switching to the faster GPU when needed, saving battery life and reducing overheating.

      And that's before considering how Intel HD is good enough for the majority of people, or that a separate GPU isn't feasible on ultra-portables unless you want poor battery life.

    2. h3

      Re: I would rather have two more cores than any of that graphics crap

      They won't give you the max amount of cores they can do unless you can get a Xeon E7.

      It will continue to be that way unless AMD comes up with something very good.

    3. Anonymous Coward
      Anonymous Coward

      Re: I would rather have two more cores than any of that graphics crap

      Did you not read the "tablet" part of the article?

      When you're building a small thin low power device then you want low component count. Therefore SOC designs are the order of the day. Two chips instead of one in a tablet is rather daft. We all know that much of a CPU or GPU package is just to allow wires from the die to the board anyway. It's all wasted space.

  7. redniels

    RE: no BigLITTLE

    "Am I the only person surprised that Intel hasn't used a bigLITTLE design? (ie, one with a much-simplified core for housekeeping when there's very little going on, sharing state with a much faster core to which it would hand over when things get too busy). Can they dynamically shut down so much of a core that they don't actually need to use silicon real-estate for a separate housekeeper-core architecture?"

    in short: yes they can. we are talking about intel, the biggest chip company in the world.

    bigLITTLE is a strange concept to begin with. You only need it when the power management of the "big" core is not up to snuff. In Intel's case, it seems it is. The bigLITTLE concept always seemed very inelegant to me: you bolt on a small core, and waste silicon there, only because you can't power down the big core sufficiently. In my eyes bigLITTLE is the poor man's solution to a big problem.

    my 2 cents..

    1. GettinSadda
      Boffin

      Re: RE: no BigLITTLE

      Actually I see other uses for bigLITTLE. If you have a system that spends much of its time doing menial tasks, but then needs to do lots of beefy stuff, bigLITTLE makes sense. Like a smartphone.

    2. IGnatius T Foobar
      Boffin

      Re: RE: no BigLITTLE

      "You only need that concept when you 're power management of the "big" core is not up to snuff."

      Not true. One of the most popular ARM SoCs, Nvidia's Tegra 4, uses a 4+1 core design. The "baby core" is used for idle time, including when there's nothing going on other than video being played.

    3. Paul Shirley

      Re: waste silicon

      When your core is inherently tiny and a low performance version can dispense with many 'go faster' transistors by cutting back on cache, pipeline depth or fat register files, there's not much silicon being wasted. Even less waste if you can avoid complex power control logic and any extra logic needed to safely bring logic units up or down.

      In exchange you get a much easier chip to design, test and get working - something a small company without Intel's resources can manage. You also get designs that better fit the ARM licensing model of cut-and-pasted modules that any chip foundry can successfully manufacture.

      Intel have a harder problem because their cores are now so complex even extra, cut down cores will waste too much silicon. So they have no option but to deep dive into the architecture and try to tame its power sucking greed by adding even more complexity.

      1. Anonymous Coward
        Anonymous Coward

        Re: waste silicon

        Indeed, they could have focussed on getting instruction execution times down but instead keep adding stuff like MMX, SSE and the like.

    4. John Smith 19 Gold badge
      Boffin

      Re: RE: no BigLITTLE

      "in short: yes they can. we are talking about intel, the biggest chip company in the world."

      So how come, when it comes to design time, manufacturers find ARM designs use less power?

  8. h3

    Intel should make an extreme version of the Galaxy Tab 3 (in limited quantities) with one of these, undervolted and underclocked. Probably with a bigger battery, too.

    Intel's technology is light years ahead of anything you can get a foundry to do for ARM.

    (It is all very well designing it for someone else's process that doesn't exist.)

    1. Anonymous Coward
      Anonymous Coward

      Light years? I don't see the market at all, given your average tablet user is a web surfer, Facebooker, etc.

      Intel is for the desktop where you need 3D rendering, digital audio workstations, CAD etc. Tablets aren't that market.

  9. TwoWolves
    Unhappy

    More complexity in those caches

    Looks like my atomic locked threading primitives are going to need a whole new level of benchmarking again.

    Damn.

    More desktop cores please Intel, less tricks.

  10. Anonymous Coward
    Devil

    Didn't AMD miss out

    Seeing the reviews and how well AMD does with its combined CPU/GPU parts compared to Haswell, I would expect AMD to come out with a 7950-capable GPU/CPU combination that could run all future PS4/Xbox One games for $100 over a console's price.

    Compared to Haswell, their two-year-old GPU/CPU tech is still ahead, which is great for competition.

  11. Shades

    Either I'm confused, or Tony Smith is?

    From the article: "So when laptop’s lid is closed, the system will drop to a power consumption level existing machines reach only when hibernating - the S3 state - but the system is nonetheless sufficiently awake to be ready to use by the time the user has lifted the lid"

    Unless I'm very much mistaken, current machines, when hibernating (the S4 state, not S3 as stated in the article), use zero power, or at most the same as G2/S5; so that's no power to the CPU, RAM or anything else that isn't directly related to powering the machine on.

    How on earth is it possible for a future (Haswell-equipped) laptop to drop to the same power consumption level as an existing machine in hibernation (zero!) yet still be "sufficiently awake" to return to a usable state "by the time the user has lifted the lid"?

    Or did I miss the bit in the article that said Haswell equipped machines will be using non-volatile RAM?

    1. Brewster's Angle Grinder Silver badge
      Headmaster

      Re: Either I'm confused, or Tony Smith is?

      Smith's "mistake" is referring to S3 as hibernation. But, frankly, only pedants call suspend to RAM "sleep" and suspend to disk "hibernate". The rest of us tend to say hibernate when we mean S3.

      1. DrXym

        Re: Either I'm confused, or Tony Smith is?

        Hibernate is the term Windows uses for suspend to disk. Arguably it's a bit of a misnomer, but it's the definition people use, so it is confusing to apply it to some other power-saving state that also has its own term - Sleep.

        Maybe we also need to add Torpor, Doze, Snooze, Sleep, Persistent Vegetative State, Coma, Brain Death, Reanimated and Undead for other power saving modes.
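        For anyone keeping score in the pedantry above, the ACPI sleep states being argued over can be sketched as a simple lookup. This is a hedged summary of the ACPI spec's S-states (S2 is omitted as it's rarely implemented), and the helper below is purely illustrative:

        ```python
        # ACPI sleeping states and the colloquial names this thread is juggling.
        # Descriptions paraphrased from the ACPI spec's state definitions.
        ACPI_STATES = {
            "S0": "working (fully awake)",
            "S1": "light sleep - CPU halted, RAM refreshed",
            "S3": "suspend to RAM ('sleep' - RAM stays powered, CPU off)",
            "S4": "suspend to disk ('hibernate' - RAM written out, near-zero power)",
            "S5": "soft off (no state saved; full boot required)",
        }

        def state_for(nickname):
            """Map a colloquial name back to its ACPI state (illustrative)."""
            nicknames = {"sleep": "S3", "hibernate": "S4", "off": "S5"}
            return nicknames[nickname.lower()]
        ```

        By that mapping, Shades has it right: hibernate is S4, not S3, and a machine in S4 draws essentially nothing.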

  12. Anonymous Coward
    Joke

    What do 1.4 BEELLION transistors get you?

    A headache thinking about the nearly endless switching capabilities.

  13. Anonymous Coward
    Anonymous Coward

    My Core i7 laptops already average 0.1% CPU.

    OK perhaps a little bit of an underestimation, but they're sitting waiting around, a lot, on the SSDs therein.

    The next generation of CPUs will need to address a helluva lot more than 128GB of RAM to get my attention.

    There's simply no excuse for anything under a terabyte addressable, and even that's too small really.

    Regards,

    Hekaton Harry.

  14. Metrognome

    The last table tells the whole story....

    It's so sad to see that, despite the delays, there has been so little progress from 2008's Nehalem to today.

    Take the top of the range i7 from late 2008 and there's precious little to eclipse it in the Haswell range.

    I can't remember Intel ever showing so little progress over two complete generations.

    Pity

  15. Anonymous Coward
    Anonymous Coward

    Nice

    1.4 BILLION transistors?!

    That's impressive - it's comparable to the density on each chip of a 4GB SLC microSD card.

    Wonder how many low-end chips with one or more bad cores will end up in netbooks etc?

    This approach was also used on the Cell, IIRC; in fact some units had cores disabled on purpose to even out the thermal profile.

    Also an interesting note: some newer military chips use graphite evaporated directly onto the bare die in order to form a thermal interface superior to any paste on the market. Wonder if Intel have used this approach yet?

    AC x472

