And here's Intel's Epyc response: Up-to 56-core, 4GHz 14nm second-gen Xeon SP chips, Agilex FPGAs, persistent mem

In a highly orchestrated global maneuver, Chipzilla today launched, to much of its own fanfare, its second-generation Xeon Scalable Processors for servers – chips previously codenamed Cascade Lake. A while ago, executives at Intel-rival AMD, which made a big splash of its own with its 32-core Epyc server-class CPUs, told us …

  1. dnicholas
    Mushroom

    Is that 400 Intel Watts? That's about 1kW in real money

    1. Bitsminer Silver badge

      Jet speed

      And approximately Mach 2 airflow across the heat sink.

      1. werdsmith Silver badge

        Re: Jet speed

        And approximately Mach 2 airflow across the heat sink.

        That should help warm it up very nicely.

    2. Kevin McMurtrie Silver badge
      Terminator

      Step 1 - Invest in ultracapacitors. Lots of CPUs cycling between 20 and 400 watts is going to mess with the low frequency mains transformers unless there's a big capacitor bank on the intermediate power lines inside each server (rough sizing sketch below).

      Step 2 - Use wealth of investment to prepare for The Rise of the Machines.
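
      A very rough sizing sketch of what such a capacitor bank might need to look like, assuming (my numbers, not Intel's or the commenter's) a 12V intermediate bus, a 1V permitted droop, and a 10ms ride-through of a full 20W-to-400W load step:

      ```c
      /* Hypothetical hold-up capacitor sizing for a 380 W CPU load step.
       * Assumptions (illustrative only): 12 V bus, droop to 11 V allowed,
       * 10 ms ride-through.
       * Energy to bridge: E = dP * t;  capacitance: C = 2E / (Vmax^2 - Vmin^2). */
      #include <stdio.h>

      int main(void) {
          const double delta_p = 400.0 - 20.0; /* worst-case load step, watts */
          const double t       = 0.010;        /* ride-through time, seconds */
          const double v_max   = 12.0;         /* nominal bus voltage, volts */
          const double v_min   = 11.0;         /* minimum acceptable bus voltage */

          double energy = delta_p * t;                                    /* joules */
          double cap    = 2.0 * energy / (v_max * v_max - v_min * v_min); /* farads */

          printf("Energy to bridge:   %.2f J\n", energy);
          printf("Capacitance needed: %.2f F (about %.0f mF)\n", cap, cap * 1000.0);
          return 0;
      }
      ```

      On those assumptions you land at roughly a third of a farad per server - which is indeed "a big capacitor bank".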

      1. Anonymous Coward
        Anonymous Coward

        "Lots of CPUs cycling between 20 to 400 watts"

        It's likely to provide a very similar load to existing systems - it appears to be Intel's take on two dies, one package, targeting HPC. Intel haven't released socket information as far as I can tell (strange...) and Intel are suggesting that systems will be liquid cooled and compute focused (i.e. not supporting maximum RAM capacities).

        On top of that, DCs are rarely space limited. They are either power limited (if designed correctly) or cooling limited (if it's not economic to upgrade cooling to match available power). If the DCs are power limited, you're likely just stuffing fewer boxes into a rack.

    3. richardcox13

      And a million air conditioning units cried out and melted.

  2. Duncan Macdonald
    Flame

    So - 56 cores instead of 64

    The EPYC Rome processors go up to 64 cores (128 threads), unlike the 56 cores, which will be available in only one SKU (the 9282), or the 48 cores available in another (the 9242) - all the other processors have the same or fewer cores than the current first-generation EPYC, which reaches 32 cores.

    As the previous commentator mentioned, a 400W Intel power consumption rating implies a much higher peak power draw. A PSU with over 1200W output is needed for each 9282 chip (an 8-socket system would need over 10kW of power supply - BEFORE peripherals!!!). Rough arithmetic sketched below.

    Icon for the heat dissipation ->
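
    For the curious, a quick sketch of the arithmetic behind that claim - the 1200W-per-socket figure is the commenter's rule of thumb for peak draw plus headroom, not an Intel specification:

    ```c
    /* Back-of-the-envelope PSU sizing for a hypothetical 8-socket box built
     * from 400 W TDP parts. The 1200 W per socket is the comment's estimate,
     * not an Intel number. */
    #include <stdio.h>

    int main(void) {
        const double tdp_per_socket = 400.0;   /* watts, from the article */
        const double psu_per_socket = 1200.0;  /* watts, commenter's estimate */
        const int    sockets        = 8;

        printf("CPU TDP alone:          %.1f kW\n", tdp_per_socket * sockets / 1000.0); /* 3.2 kW */
        printf("PSU capacity suggested: %.1f kW\n", psu_per_socket * sockets / 1000.0); /* 9.6 kW */
        /* Add normal PSU headroom/redundancy and you clear 10 kW before RAM,
         * NICs or storage are even counted. */
        return 0;
    }
    ```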

    1. diodesign (Written by Reg staff) Silver badge

      "The EPYC Rome processors go up to 64 cores"

      Yeah - OTOH Rome isn't out yet. Will add it to the piece anyway.

      C.

    2. phuzz Silver badge

      Re: So - 56 cores instead of 64

      A 1200W PSU isn't that unusual in the server world. A blade enclosure will typically have multiple PSUs of that power level, so 15kW for a single enclosure is entirely do-able (although that's spread across 6U). eg

    3. muhfugen

      Re: So - 56 cores instead of 64

      There are massive architectural differences. The number of cores which share L3 cache, for one. The ability for large VMs (or thread pools) to do work without having to span NUMA boundaries and incur the associated latency penalties, for another. And the number of sockets to which they can scale.

    4. SNAFUology
      Devil

      Re: So - 56 cores instead of 64

      hmmm with 64 cores it might melt or require a hurricane for cooling

  3. druck Silver badge

    Patching nonsense

    “They can’t patch all of it, because the only way to completely get rid of it is to completely get rid of speculative execution in caching, and if you do that, your shiny modern Core i7 performs as well as a ‘286.”

    What nonsense. Speculative execution didn't even come in until the Pentium Pro. An i7 without it would work more like the older non-speculative Atoms, which is bad enough, but still orders of magnitude faster than a 286.

    1. Lee D Silver badge

      Re: Patching nonsense

      And you don't need to completely remove speculative execution.

      You just need to make sure that when you do speculatively execute, you completely apply the same memory security principles as when you don't.

      The problem Intel had was not "You're trying to think ahead", it was "When you think ahead, you're doing so by bypassing all the security".

      It might still mean a change in chip design, rather than a software fix, obviously, but it's not as drastic as "you can't speculatively execute".
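
      A minimal illustration of the pattern being described - the classic Spectre variant 1 shape, with hypothetical function and array names:

      ```c
      #include <stdint.h>
      #include <stddef.h>

      /* Illustrative Spectre v1-style gadget. If the branch predictor guesses
       * "in bounds", both loads below run speculatively even when idx is out of
       * range. The architectural result is discarded, but the cache line touched
       * in probe[] is not, so the out-of-bounds byte can leak via a timing
       * side channel. */
      uint8_t leaky_read(size_t idx,
                         const uint8_t *data, size_t data_len,
                         const uint8_t *probe /* 256 * 4096 bytes */) {
          if (idx < data_len) {            /* bounds check the CPU may predict past */
              uint8_t value = data[idx];   /* speculative out-of-bounds read */
              return probe[value * 4096];  /* cache footprint encodes 'value' */
          }
          return 0;
      }
      ```

      Applying "the same memory security principles" during speculation means the out-of-bounds load either never issues or leaves no observable cache trace behind.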

    2. Roo
      Windows

      Re: Patching nonsense

      I suspect the killer for Intel is the cost of the validation. It can't be cheap (or quick) to validate changes to access validation and speculative execution with a huge ISA like x86.

  4. Shadow Systems

    But can it run Crysis?

    I'll get my coat. It's the one with the pockets full of memes. =-)P

    1. Totally not a Cylon
      Alert

      Re: But can it run Crysis?

      I would guess so,

      but can it run Crysis in VR?

      1. Timmy B

        Re: But can it run Crysis?

        If you keep one eye shut

    2. Philippe

      Re: But can it run Crysis?

      Crysis? That's nothing. Try and run Vista on this thing.

    3. Anonymous Coward
      Anonymous Coward

      Re: But can it run Crysis?

      I'm wondering if it'll run Raspbian and, if so, how many lines would be usable on a typical console after all the raspberries were displayed.

  5. YourNameHere

    Die Size

    56 cores, so that's 23 cores per die. Wonder what the die size is and the yield? I bet the reason for 23 core is one for redundancy at test...

    1. Duncan Macdonald
      FAIL

      Re: Die Size - ERROR

      If it is 2 dies then 28 cores per die - check your maths 56/2 does NOT equal 23.

    2. Anonymous Coward
      Anonymous Coward

      Re: Die Size

      Seem to recall being told the die size is the same as Skylake?

      1. TeeCee Gold badge

        Re: Die Size

        Would make sense. IIRC Intel's planned die-shrink has been canned as a) AMD are already fabbing at smaller sizes than their target[1] and b) they couldn't get their current architecture to sample in quantity.

        [1] ...and if you really must run the Red Queen's Race it's bad form to come second.

  6. Mikel

    Look at the thing

    It looks like you could fry bacon on it.

    1. Ogi

      Re: Look at the thing

      > It looks like you could fry bacon on it.

      I was thinking that it could be a good method of keeping my tea warm. It looks exactly the right size to rest the base of a mug on.

      1. Anonymous Coward
        Anonymous Coward

        Re: Look at the thing

        I think it would do more than keep it warm, might be a good replacement for your kettle.

  7. tcmonkey

    BGA? On a $$$$$ 400W TDP part? Gross, no thanks. Prepare for a billion RMAs due to failed solder joints.

    1. phuzz Silver badge
      Flame

      I suspect that'll be a lot less of a problem than you might think, because these will be used in servers and will probably only be powered down a handful of times in their entire lives. Also, BGA is fine if it's manufactured well.

      It's much more of a problem if a cheaply built chip is in a games console that's going through big thermal cycles every day.

      1. tcmonkey

        True, although you will still get thermal stress induced by changing loads on the chip and the sudden energy burnt when the workload puts its foot down.

        It also has the hugely negative downside of not being able to replace/upgrade the two components separately, which does sometimes happen, even in server-land. We did CPU upgrades on some VM hosts last year, for instance.

  8. cb7

    "They can’t patch all of it, because the only way to completely get rid of it is to completely get rid of speculative execution in caching, and if you do that, your shiny modern Core i7 performs as well as a ‘286"

    A slight exaggeration, but I'll say it again. There's merit in developing cheaper memory that doesn't need 16 clock cycles to get dressed every time it's asked to go fetch some data.

  9. Old Used Programmer

    Where I'd like to see Optane go...

    If Optane is anywhere near as good as they claim, I'd like to see microSD cards using it.

    1. TeeCee Gold badge

      Re: Where I'd like to see Optane go...

      I wouldn't. If you've ever used anything with SDIO (SD for peripherals) you'll know how piggin' slow the SD presentation is. It'd be like putting a Cosworth DFV in a Trabant.

    2. Anonymous Coward
      Anonymous Coward

      Re: Where I'd like to see Optane go...

      As far as I'm aware, the Optane secret sauce (for performance anyway) isn't in the memory cells; it's in the interface and the position in relation to the CPU to reduce latency, which is why it requires CPUs that support Optane.

      So no, you won't see it in microSD.

  10. John Smith 19 Gold badge
    Unhappy

    Optane. Sounds impressive. Is proprietary

    So hoping to lock in enough customers before they discover it's not quite as good as it's claimed?

    Of course it might really be as good as they say it is.

  11. Robinson

    Price?

    I'm guessing these will be way more expensive than the AMD equivalents.

    1. Wade Burchette

      Re: Price?

      And I am thinking that because of AMD's design, they could sell a 64-core Epyc for half this and still make a large profit. Sooner, not later, Intel is going to have to go AMD's chiplet route.

  12. IGnatius T Foobar !

    dozens of cores and oodles of memory...

    ...it's basically turning into a mainframe, which makes sense because that's what a cloud data center really is. With a chip like this, a hosting provider (a real one, not AWS) can fit into a rack what used to take up the entire room. Commoditization is a wonderful thing sometimes.

    1. _LC_
      Alert

      Re: dozens of cores and oodles of memory...

      Bear in mind that those "56 cores and 112 threads" are usable in single user environments only. Thanks to the multitude of Spectre bugs this chip cannot separate users (Intel is affected much more than others are as they cheated the most with "speculative execution"). In other words, if you are running a big box with various compartments, this isn’t for you as your users would be able to access each other’s data. ;-)

      1. Korev Silver badge
        Big Brother

        Re: dozens of cores and oodles of memory...

        The article says Intel are claiming hardware fixes, although there are probably more vulnerabilities yet to surface.

        1. _LC_

          Re: dozens of cores and oodles of memory...

          They are claiming fixes for only a few. Others have already been described as "not fixable" by the researchers. That is, they would require a change in hardware design in order to mitigate the problem. The change would have to be more drastic than what Intel wants to put itself through.

          1. doublelayer Silver badge

            Re: dozens of cores and oodles of memory...

            First, at least some of those have been patched in software. Second, it doesn't really impact the main point, because people are currently doing multi-user environments on the existing Intel chips with the same vulnerabilities. For all of those people, the security landscape is the same as it is right now. If you don't care about the vulnerabilities enough to stop using a multi-user system with the old chips, the consolidation allowed by all these cores could be useful. It might also help if you have a relatively large datacenter, as you could consolidate multiple internal servers onto a smaller number of VM hosts running on these. I'm not sure it's worth the investment, but it makes some sense.

  13. Korev Silver badge
    Boffin

    Memory bandwidth

    There doesn't appear to be a significant increase in memory bandwidth; it'd be interesting to see if the massive core count translates into good throughput in real applications. There's also a high likelihood that network, storage etc. will just become more of a bottleneck.

    Bring on the benchmarks :) -->

  14. Will Godfrey Silver badge

    Does anyone have a motherboard for this?

    I would imagine it would have to be made of ceramic to withstand the heat.

  15. BugabooSue
    Thumb Down

    Buying Intel?

    Nope, still not buying.

    If it were not for the likes of AMD, ARM, and others providing some competition, Intel would not even be selling processors at this level. They would still be strangling end users for every damn dollar they can, using lesser silicon.

    I’m not saying other firms are any better, but competition obviously works. I will continue supporting the ‘underdogs’ as my long-term future in computing depends on it.

    If Intel had got their way, I truly believe that we would not be above 2GHz dual-cores on the desktop, let alone the 3GHz+ Ryzen monster I am running today.

    I’m all for making a profit, but stifling innovation (and milking the dumb users) to do it - that really sucks.

  16. zanginator
    Meh

    It's in the Swiss bank account!

    Skip the wallet, my dudes; these puppies are going to require you to hand over your life savings to Intel.

    Intel may want to be competitive in terms of core count (and "gluing" stuff together) but I bet they aren't competitive on price.

  17. johnnyblaze

    All I can say is, go AMD. EPYC will offer far more performance for the buck, and it wouldn't surprise me if AMD's 64C/128T EPYC is half the price of the high-end Xeon Platinums. AMD are on track to secure 10% of the lucrative server market - and growing. Intel's monopoly days are over, and they're now actually having to do some work.

  18. Lusty
    Flame

    Heat

    At 400W per socket, a measure I will call "Wockets" from now on, it's a wonder that cloud providers are not cashing in on the heat. My hot tub only needs 3kW to heat it, so at 400 wockets we'd only need a couple of beefy servers to run it. Extend that out and we could have Azure health spas next to every data centre, with hot tubs, saunas, steam rooms, pools etc. all heated by the DC. With the right setup and enough wockets you could probably have a bakery running using some clever heat exchanger. Or a pizza joint. Yes, the more cores, the more pies you get cooked for free. This is the future!

    1. Steve Kerr

      Re: Heat

      Actually not a bad idea to build spas and the like next to datacentres to use the excess heat.

      Most DCs are in pretty awful locations though, so there would need to be a lot of work done to hide the industrial nature of the areas.

      1. smot

        Re: Heat

        Some datacentres already provide heat to local communities: https://www.eniday.com/en/technology_en/warming-swimming-pools-data-centres/

        or

        http://www.bbc.com/future/story/20171013-where-data-centres-store-info---and-heat-homes

  19. Anonymous Coward
    Pint

    True story*

    The toilet cubicles at Intel all have glass walls, floors and ceilings. When someone complained that they felt uncomfortable with this, they were told by marketing and HR not to worry about privacy because, despite initial appearances, the cubicles are fully and properly segregated by walls, and most of the staff have learned to look straight ahead only.

    (* allegedly).

    1. msroadkill

      Re: True story*

      Maybe it's just me, but this seems profound - a kinda Rosetta stone of Intel management logic. If reality doesn't suit their model, assume it isn't there and proceed.

  20. Paul Shirley
    Coat

    encrypted DRAM

    Optane memory also features hardware-based encryption – something no DRAM device is capable of

    If you go dumpster diving for DRAM there won't be much data there to decrypt...

    Presumably persistent DIMMs are a problem for encrypting data before it leaves the CPU, if you ever have to read the DIMM somewhere else. Have Intel opened a whole new set of security 'opportunities'?

  21. SNAFUology
    Meh

    White Hats NOW

    I just wonder what kind of bugs they've hidden in this one - hand it over to the white hats before public release

  22. zb42

    Am I the only one cynical enough to think that persistent memory is inevitably going to lead to situations where you power the computer off and back on and it remains stuck in an unintended dysfunctional state?

    I'm sure there are unusual cases where it is really useful; I just can't see it being worthwhile for typical computer use.

    1. Anonymous Coward
      WTF?

      If I played the lottery and won, I'd like to have one with almost entirely NVDIMM in it so I could take my Design and Implementation of BSD book, reset to ground, and build as near as I can get to an ACID-compliant OS.

      I differ from a lot of my contemporaries and their successors in that I've always thought of any system that stores a value somewhere/somehow as having a database, and I build accordingly. This would be taking it down to the silicon level, immediately or eventually. One of my Holy Grail projects, and not in the Monty Python-esque sense. [Which is still one of my top favorite films.]

      Weird? Yep. That's me! General reaction to above? {See icon}
