Who's been copying AMD's homework? Intel lifts the lid on its hip chip packaging to break up chips into chiplets

With Moore's so-called Law pretty much dead for now, and the shrinking of transistors proving more difficult, the name of the game today is packing multiple dies into chip packages rather than cramming more and more smaller transistors into the same area of silicon. Single dies packed with large numbers of cores and …

  1. Pascal Monett Silver badge

    So now we're going to have "chiplets"

    Since everything in the computer industry tends to get more and more user-friendly, I wonder how long it will take for me to build my CPU out of the various chiplets I think I need.

    I mean, I remember the first time I upgraded the video card in my PC, the first time I replaced motherboard/CPU/RAM, the first time I put in water cooling, etc. I look at how it works now, and it's a piece of cake today compared to two decades ago. So how long before you'll be able to say that you want 3 floating math parts, sixteen thread parts and six GPU units ?

    1. E_Nigma

      "Back when I was young..."

      Maybe you meant to say "three decades ago", but I'm not sure about that either. Two decades ago, building a new PC or performing CPU and RAM upgrades was as trivial as it is now and GPU upgrades were fairly similar to what they look like today as well (uninstall driver, swap cards, install new driver, cross your fingers). Replacing everything but the hard drive on a live system would have probably required an OS reinstall, whereas now Windows pretty much sucks it up, but it wouldn't be rocket science, it would just take an afternoon. And heck, I'm not even sure about three decades ago. In the early days, you often just popped expansion cards into the PC and software that was designed to support it just worked with it, without any drivers being required. Sure, maybe you had to set a few jumpers here and there, like finding an IRQ/DMA/IO combination that was available, but that wasn't as difficult a task as some suggest today, and with that you were often good to go.

      In short, while I'm not denying some improvement has occurred since those early days, PCs have always (by design) been fairly approachable and upgradeable, it's just that they used to be new and people were more mystified by them than they are now.

      1. Peter2 Silver badge

        Re: "Back when I was young..."

        I think that it has been very easy since MS DOS died off.

        I still remember writing config.sys by hand and having to set IRQs manually. Since Windows, it's actually been pretty easy, as long as the drivers actually work: just plug devices in, turn the PC on, install the OS, then install the drivers and you're done.

        Admittedly, being able to just download all of the drivers does help there. I have fond memories of a Dell PC that needed specific network drivers, and whose CD drive also needed drivers, meaning you needed to use a floppy to get the CD drive working, at which point you could use the Dell firmware disc to get the network drivers, USB etc. installed.

        OK, I'm lying. Not fond memories at all.

        1. nematoad Silver badge
          Happy

          Re: "Back when I was young..."

          "...the CD driver also needed drivers, meaning you needed to use a floppy to get the CD working..."

          Ah yes, good old Win 95.

          Irritating, but not too bad when you only had one to do. On the other hand, when you were doing it on a daily basis, as well as all the other break/fix on a huge site, it was a real pain in the arse. In the end the people on my team just used Ghost and dumped the whole image in one go. Not strictly by the book, but a lot easier than faffing around setting IRQs, sorting out config.sys and so on.

    2. Anonymous Coward
      Anonymous Coward

      Re: So now we're going to have "chiplets"

      "Since everything in the computer industry tends to get more and more user-friendly, I wonder how long it will take for me to build my CPU out of the various chiplets I think I need."

      I'd expect the opposite to happen. To maximise profit, I'd expect all major components to be put in one package, and PCs with discrete components to rapidly become an expensive niche. I'm fully expecting cheap (and expensive) laptops with no upgrade path, with almost everything in a single package on the PCB... i.e. the CPU, an inadequate amount of RAM, an annoyingly small SSD, a slightly rubbish GPU and a naff audio chip, all in a single part.

      1. Korev Silver badge
        Joke

        Re: So now we're going to have "chiplets"

        You could call those laptops Macbook Pros

      2. eldakka Silver badge

        Re: So now we're going to have "chiplets"

        To maximise profit, I'd expect all major components to be put in one package, and PCs with discrete components to rapidly become an expensive niche.

        As is already the case - what do you think consoles like the PS4 and Xbox One are?

        And laptops. Many laptops are practically, if not literally, un-upgradeable. Laptops that do offer limited upgradeability - RAM, storage devices - tend to be either more expensive or bigger/heavier than the non-upgradeable ones. And the select few that allow GPU and/or CPU upgrades are even more expensive and/or sacrifice even more in size and weight.

  2. Mephistro Silver badge

    Who's been copying AMD's homework?

    Nobody. This is a very obvious solution to the issues caused by miniaturization in big dies. IMHO it's as obvious -at least for chip designers- as "rounded corners" is for mobe designers.

    Yeah, I know this is just a joke in the context of El Reg's "mock tabloid" style, but some naive/young/Intel-hating readers may not get the joke, so...

    And Intel has enough troubles already! ;-D

    1. Bronek Kozicki Silver badge

      Re: Who's been copying AMD's homework?

      Anyway, the difficulty is in the protocol design connecting the chiplets - not in the physical separation alone.

      1. defiler Silver badge

        Re: Who's been copying AMD's homework?

        Is this the point where I should be waving around articles on the Pentium Pro? I know it was just the cache, but it was still a separate chip within the same package.

        Or go further back to the discrete floating-point coprocessors of the day, and you're on to the problem of making them talk to each other.

        I suppose if you really wanted to stretch it, you're pretty much looking at the CPU interconnects in old supercomputers, just made very short and much faster. (For given values of "just".)

        Nothing new in the world. Get off my lawn.

        1. Brewster's Angle Grinder Silver badge

          Re: Who's been copying AMD's homework?

          "Or go further back to the discreet floating point coprocessors back in the day, and you're on to the problem of making them talk to each other."

          FWAIT

      2. cornetman Bronze badge

        Re: Who's been copying AMD's homework?

        Indeed. Obviously making separate bits of silicon and plopping them onto a common substrate is pretty trivial.

        The really hard part is conceptually "glueing" them together with a connectivity fabric that is fast and efficient enough, and that is where AMD's real innovation lies.

        There are numerous advantages to the chiplet design for product options and yield as well, which have been discussed aplenty elsewhere. One interesting point made by YouTube's AdoredTV in one of his presentations was the possibility that you could end up with substantially faster CPUs through the ability to selectively bin chiplets. In any CPU, the fastest you can run it is determined by the slowest core. In a monolithic design, you are saddled with the core mixes that come out of production. If cores (or at least core complexes) can be separately binned, then you can mix and match core complexes to get the fastest CPUs, and the slowest for the budget end. A fast core complex on an otherwise shit CPU is a waste. There are so many possibilities.
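
        The binning argument is easy to sketch with made-up numbers. Purely illustrative - the frequencies, distribution and chiplet sizes below are invented, not AMD's real figures:

```python
import random

random.seed(42)  # deterministic illustration

CORES_PER_CHIPLET = 4
CHIPLETS_PER_CPU = 2
N_CPUS = 1000

def core_fmax():
    """Hypothetical per-core maximum stable frequency in GHz."""
    return random.gauss(4.0, 0.3)

# Monolithic die: all eight cores are fixed at fabrication, so the
# slowest core caps the whole part.
monolithic = [
    min(core_fmax() for _ in range(CORES_PER_CHIPLET * CHIPLETS_PER_CPU))
    for _ in range(N_CPUS)
]

# Chiplets: bin each four-core chiplet by its slowest core, then pair
# the fastest chiplets with each other rather than with slow ones.
bins = sorted(
    (min(core_fmax() for _ in range(CORES_PER_CHIPLET))
     for _ in range(N_CPUS * CHIPLETS_PER_CPU)),
    reverse=True,
)
paired = [min(bins[i], bins[i + 1]) for i in range(0, len(bins), 2)]

print(f"best monolithic part: {max(monolithic):.2f} GHz")
print(f"best binned part:     {max(paired):.2f} GHz")
```

        With binning, the top parts are built only from chiplets whose slowest core is still quick, instead of hoping all eight cores on one die came out fast.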

        1. Aitor 1

          Re: Who's been copying AMD's homework?

          That would mean an OS that is aware of the loads and capacities of different cores... as in big.LITTLE, but with different clock speeds for each group of cores instead.

          You could have 2 high-bin cores and 10 low-bin... for most people this would be almost as fast as 12 fast ones, and certainly cheaper... yet badly programmed/sequential tasks that are high priority could still run at full speed.

          Of course, the problem would be that all programs would claim "high priority", etc.
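
          A toy sketch of what such scheduling might look like - the core speeds, task names and priorities are invented, and no real OS scheduler is this naive:

```python
# A hypothetical mixed-bin part: 2 high-bin cores at 4.8 GHz and
# 10 low-bin cores at 3.2 GHz (made-up figures).
cores = [4.8, 4.8] + [3.2] * 10

# Tasks as (name, priority); higher priority deserves a faster core.
tasks = [
    ("background-sync", 1),
    ("game-main-thread", 10),
    ("video-encode", 5),
]

# Hand the fastest free core to the highest-priority task first.
free = sorted(range(len(cores)), key=lambda i: cores[i], reverse=True)
assignment = {}
for name, _prio in sorted(tasks, key=lambda t: t[1], reverse=True):
    assignment[name] = free.pop(0)

for name, core_id in assignment.items():
    print(f"{name} -> core {core_id} ({cores[core_id]} GHz)")
```

          And, as noted above, the weak point is that in practice every program would simply claim the highest priority.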

    2. Chris the bean counter

      Re: Who's been copying AMD's homework?

      Might be obvious but I bet there are several hundred related patents filed

      1. defiler Silver badge

        Re: Who's been copying AMD's homework?

        True, but the multiple-dies-on-a-CPU-package thing has been done before by Intel themselves. The multiple-modules-bolted-together-to-create-a-complete-CPU thing has been done by ARM for decades.

        That covers the two biggest components, and they'll have been going so long that the patents will have expired by now.

    3. Anonymous Coward
      Anonymous Coward

      Re: Who's been copying AMD's homework?

      But Intel made a point of rubbishing AMD over the use of multiple chiplets when EPYC was announced, didn't it? The headline is fair comment.

    4. Anonymous Coward
      Anonymous Coward

      Re: Who's been copying AMD's homework?

      Who does Jim Keller work for at the moment, and why does it matter here?

      https://en.wikipedia.org/wiki/Jim_Keller_(engineer)#Career

      https://venturebeat.com/2018/07/16/why-rock-star-chip-architect-jim-keller-finally-decided-to-work-for-intel/

      and it's even reported at

      https://www.theinquirer.net/inquirer/news/3031235/intel-hires-former-amd-cpu-architect-jim-keller-from-tesla (April 2018)

      Did El Reg cover it in much depth? Dunno, can't quickly find anything other than two short sentences at the end of

      https://www.theregister.co.uk/2018/04/27/intel_q1_2018/

      but maybe I'm searching it wrong.

    5. eldakka Silver badge

      Re: Who's been copying AMD's homework?

      Who's been copying AMD's homework?

      Nobody. This is a very obvious solution to the issues caused by miniaturization in big dies.

      What is old is new again.

      In the days of many custom CPU architectures, the '70s and '80s, when process nodes were still large and therefore bleeding-edge designs couldn't fit onto a single piece of silicon, most high-end computers (mainframes, supercomputers) were built around MCMs, Multi-Chip Modules. The Pentium Pro was an MCM: a package that contained two separate silicon dies. I believe, but am not sure, that IBM continued using MCMs throughout the '90s, and possibly still does in its mainframes today, as it tended to bond a lot of custom silicon for mainframe-specific tasks with more commodity (POWER) silicon.

      So, today they are calling them 'chiplets', but they are just a variation on the MCMs.

  3. Electronics'R'Us
    Holmes

    Not a new concept

    The (then) Motorola SPS (semiconductor product sector) had their CSIC (customer specified integrated circuit) line of parts which were made from multiple die in the early 1990s.

    I was working with the really old 68HC05 series, and when I went to download the assembler and linker, a questionnaire was presented to define the perfect microcontroller (type of core, peripheral set); they took that information to fabricate what most of the customers apparently wanted. The programme worked - 68HC05/08/12/16 parts are still available today and were really popular in automotive ECUs.

    The interconnect was not particularly fast (well, it was reasonable for the time) but interconnect has been the driver in the HPC space for many years. An on-chip interconnect is always going to have speed and signal integrity advantages over a second level (PCB) interconnect.

    There have been other products that took this approach (such as Tilera) but they really never hit the mainstream (mostly used in comms now).

    1. Down not across Silver badge

      Re: Not a new concept

      I was working with the really old 68HC05 series and when I went to download the assembler and linker

      That takes me back a bit. I did some designs based on 68HC11 and loved working with that MCU. Had some fun with BUFFALO and as11.

      1. Electronics'R'Us

        Re: Not a new concept

        The first project I did with that device was a state-of-charge analyser for NiCad batteries (the idea was pooh-poohed, but it actually is possible, as I managed to prove).

        To aid in that, the 68HC805 I was using had a ratiometric A/D converter (it converted between two arbitrary voltages rather than between 0V and some Vref), so I was able to get the measurement to 'zoom in' for more accurate readings.

        Fun times indeed (and I loved the instruction set - perfect for jump tables among other things).
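
        The ratiometric trick is simple to illustrate. A sketch with hypothetical reference voltages, not the actual battery design described above:

```python
# Sketch of a ratiometric conversion: an 8-bit A/D that converts between
# two arbitrary reference voltages (v_rl, v_rh) instead of 0V and a fixed
# Vref. Narrowing the window "zooms in", trading range for resolution.
def adc_code_to_volts(code, v_rl, v_rh, bits=8):
    full_scale = (1 << bits) - 1  # 255 for an 8-bit converter
    return v_rl + (code / full_scale) * (v_rh - v_rl)

wide = (5.0 - 0.0) / 255    # ~19.6 mV per step over a 0-5V window
narrow = (1.4 - 1.1) / 255  # ~1.2 mV per step over a 1.1-1.4V window
print(f"wide window step:   {wide * 1000:.1f} mV")
print(f"narrow window step: {narrow * 1000:.1f} mV")
```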

  4. Anonymous Coward
    Anonymous Coward

    "With Moore's so-called Law pretty much dead for now"

    Not quite yet: we have 5nm and 3nm next.

    1. Mage Silver badge
      Facepalm

      Re: Not quite yet

      It's been dead nearly 10 years. It was never a law, only an observation. Atom CPUs perform worse than many older CPUs. Also, it originally referred to doubling the number of transistors every year (not performance), back when Intel was a DRAM maker. Later it became every 18 months, and then every two years. It was always an aspiration, never a "law".

      14nm and 7nm don't mean what 90nm meant. They now refer to the smallest important feature, not the general geometry. A 14nm part on the same size chip as a 28nm part, ignoring interconnect area, ought to have 4x the transistors. It doesn't.
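
      The "ought to have 4x" figure is just geometry - halving the linear feature size quadruples the ideal transistor count per unit area:

```python
# Ideal geometric scaling: density gain is the square of the ratio of
# linear feature sizes. Real "14nm" vs "28nm" parts fall well short,
# since node names no longer describe the general geometry.
def ideal_density_gain(old_nm, new_nm):
    return (old_nm / new_nm) ** 2

print(ideal_density_gain(28, 14))  # the "4x" figure above
print(ideal_density_gain(90, 14))
```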

      1. Baldrickk Silver badge

        Re: Not quite yet

        It was an observation, that then became a target, which kept the growth on track until recently.

        1. Mage Silver badge

          Re: kept the growth on track until recently

          Depends on the definition of "recently". Also rarely mentioned are chip size and the failure/defect rate in manufacturing - hence chips sold with fewer cores activated than fabricated, a good idea proposed in the 1970s.

          What size chip area is a full-fat 2019 i7 part vs a 2007 ARM?

          Flash storage is now amazing, but a lot of that is down to changes in how the bits are stored, as well as vertical structures and packaging.

          1. Electronics'R'Us

            Re: kept the growth on track until recently

            In the 80s, early Atari consoles had 8K of internal RAM.

            As memory device yields weren't that good, one of the manufacturers was selling 4108A and 4108B parts which were really 4116 parts where either the upper 8kbit or lower 8kbit had a defect.

            As I recall, there was a jumper on the board by each chip to select either the A or B part.
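
            A hypothetical model of that trick - the jumper effectively hard-wires the top address bit, so only the good 8 kbit half of the die is ever decoded:

```python
# Model of a half-good 16 kbit (x1) DRAM sold as an 8 kbit part: the
# board jumper selects which half of the address space is decoded.
HALF = 8 * 1024  # 8 kbit halves of a 16 kbit device

def decode(addr, jumper):
    """Map an 8 kbit address into the working half chosen by the jumper."""
    assert 0 <= addr < HALF
    return addr if jumper == "A" else addr + HALF  # "A" = low half good

print(decode(0, "A"), decode(0, "B"))
```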

      2. Daniel Garcia 2

        Re: Not quite yet

        I bet that in 20 years it will be called "Moore's Age" and will appear in (digital-only) history textbooks.

        1. Anonymous Coward
          Anonymous Coward

          Re: Not quite yet

          >I bet that in 20 years it will be called "Moore's Age" and will appear in (digital-only) history textbooks.

          Assuming man doesn't do something silly and cart himself back to the stone age or further.

  5. Def Silver badge
    Coat

    Chiplets?

    Or as we Brits like to call them, crisps.

    1. nematoad Silver badge
      Joke

      Re: Chiplets?

      Ah, I see where you are coming from here.

      The US calls them chips and we call them crisps.

      Therefore, if they want to call them chiplets, then perhaps we should call them crisplets.

    2. Chris the bean counter

      Re: Chiplets?

      Intel chips run so hot that crisps is a good name for them

      1. brainyguy9999

        Re: Chiplets?

        Like a good "Your momma's so fat" joke.

        Entertainer: Intel chips run so hot.

        Crowd: How hot do they run?

        Entertainer: Intel chips run so hot we call them crisps.

        Crowd: Boo.

  6. Mage Silver badge
    Coffee/keyboard

    Samsung SC6400 family

    RAM, Flash, ARM SoC in three layers in a regular size SMT package for phones.

    TWELVE years ago. Intel couldn't use the tech on x86-64 because the heat would have meant the RAM and Flash wouldn't work.

    It allowed basic smartphones to be designed more quickly, without any extra RAM and Flash, as only the I/O needed to come out of the chip. BTW, the Wikipedia article on the original iPhone is very inaccurate and credits Apple with developing stuff that was bought in or already existed. The "innovation" of the iPhone that made it a success was partly Samsung's chip allowing fast development, but mainly the data plans - until then, only businesses could afford mobile data. Phones and PDAs had used resistive rather than capacitive touchscreens to allow business users to annotate and input. The iPhone was about consumption, hence it could use an updated version of 20-year-old capacitive touch input.

    Samsung's innovative three-chips-in-one package, barely taller than a regular ARM SoC, was what allowed the iPhone to reach market rapidly after the concept was chosen - hence the original not having the well-established 3G.

    Anyway, the Pentium II was multiple chips in a box. There isn't really anything new here, just incremental development and better packaging. See Wikipedia images.

  7. steelpillow Silver badge

    Bit slice

    Back in the day, bit-slice technology was the only way you could build a chip-based CPU anyway. Kludging together a bunch of discrete logic chips and similar was so much better than soldering them up transistor by transistor that it enabled the development of minicomputers the size of a mere filing cabinet, such as the DEC PDP-11 (fond memories). When the "microprocessor" first arrived, things like floating-point arithmetic still had to be done either in software or in a second ALU - Arithmetic Logic Unit - soldered alongside it, with its cache held in working memory on the motherboard. When the Pentium II (I think) came along, Intel bundled the whole lot up in a sealed module the size and weight of a lead ingot. If you had a tower PC with the motherboard upright, the darn thing had a notorious habit of falling out under its own weight, and Intel quickly followed it up with strap-down packs. Similar games have long been played with I/O circuitry, such as the old analogue TV modulators. But hey, today's a new day, a new dawn, and new hype.

    1. John Savard Silver badge

      Re: Bit slice

      Putting a floating-point unit on the same die as the CPU dates back to the 486DX. The Pentium II is noteworthy for having an out-of-order floating-point unit and a level 1 cache on the same die, thus being a monolithic microprocessor microarchitecturally comparable to the System/360 Model 195.

      Since the Pentium II, there's been relatively little room for improvement; it's been diminishing returns, instead of the low-hanging fruit of making microcomputers run many times faster simply by multiplying 16 bits at a time instead of 8.

      1. Mage Silver badge

        Re: Pentium II

        The earlier Pentium Pro was better than a PII, as long as you didn't need any native 16-bit code. Windows 95/98 killed the Pentium Pro - well, that and the expensive on-package cache.

        The PIII caught up with P Pro, for same clock and RAM with NT4.0 or Unix.

  8. HamsterNet

    When

    And so, after years of Intel sitting on their asses, price gouging and not innovating, they are now several years behind the benefits of the Ryzen architecture. Whilst AMD is accelerating away, with Ryzen 3 this year and Ryzen 4 due next year, all before Intel has anything close to competing with Ryzen 3. How many years will it take for Intel to bring the connecting fabric to market and design and fab chips onto it? 5 years?

    Chiplets make a huge difference to the overall cost, as you get vastly better yields making lots of small chiplets on a new fab process (AMD's approach) than you do making single monster-sized chips (Intel's approach).
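
    The yield argument can be sketched with the classic Poisson die-yield model; the defect density and die areas below are invented for illustration, not any fab's real figures:

```python
import math

# Classic Poisson die-yield model: yield = exp(-D * A), with defect
# density D (defects per mm^2) and die area A (mm^2).
def die_yield(area_mm2, defects_per_mm2):
    return math.exp(-defects_per_mm2 * area_mm2)

D = 0.002                 # hypothetical defect density on a new process
mono = die_yield(700, D)  # one monolithic ~700 mm^2 die
chip = die_yield(75, D)   # one ~75 mm^2 chiplet

print(f"monolithic die yield: {mono:.1%}")
print(f"per-chiplet yield:    {chip:.1%}")
```

    And the per-chiplet figure understates the advantage: a defective chiplet wastes 75 mm^2 of silicon rather than 700 mm^2, and the good chiplets from a partly bad wafer still end up in products.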

    1. Chris the bean counter

      Re: When

      Chiplets are less risky too, as you can overproduce them and then assemble them into whichever SKU is selling fastest.

    2. Anonymous Coward
      Anonymous Coward

      Re: When

      As someone pointed out, there is already that Kaby Lake/GPU two-dies-in-one-package part.

      But that may have been a special.

      One of the things that Intel acquired with Altera was a chiplet technology - the Stratix 10 is available in a variety of packages in which the same FPGA die is accompanied by different combinations of chiplet I/O tiles.

      This is good, except that even the in-package links are slow compared to on-die communication.

  9. Benson's Cycle

    Back to hybrid circuits, in fact

    These things come in cycles. Back in the 70s/early 90s, hybrids were a big thing - a ceramic substrate with ICs and discretes, packaged in a suitable box. IIRC some parts of System X (sign against evil eye) were implemented this way.

    Presumably it will continue until the next generation of fabrication techniques - if there is one - get us back to single dies again.

    1. Mike 16 Silver badge

      Re: Back to hybrid circuits, in fact

      Precisely my thought. Perhaps we should call this latest incarnation "dis-integrated circuits".

      Also a time to remember that, at least as I was told back in the day, Seymour Cray had more patents for packaging, cooling, and power distribution than for the more "CS-ish" stuff.

  10. LeahroyNake Bronze badge

    Makes me think of

    SLS being the monolithic chip and Falcon being the chiplet design. There are similarities in both cost and performance metrics.

    I know what version my money is on.

  11. Anonymous Coward
    Anonymous Coward

    Custom?

    "These are going to China they have the NSA chiplet.

    These are going to the USA, they have the Chinese chiplet.

    This is the low cost range. The package is bigger because we include ALL the spy chiplets."

  12. NeilPost

    ARM?

    Isn’t this what ARM has been successfully doing for years??

  13. Steve Todd
    FAIL

    "AMD is well on its way in using chiplets in its processors"

    You mean "you can buy one in the shops"? They have real products on the shelves now (Ryzen 3000 series processors) that use them.

  14. devTrail

    Three core CPUs

    Will it mean the end of CPUs getting to market with one or more defective cores switched off?

    I mean, will it be easier to get rid of the defective bits that come out of the process?

  15. JLV Silver badge
    Headmaster

    >Our vision is to develop leadership technology

    What kinda buzzword-heavy English language hell spawned this ill-considered abortion? How can you take anyone seriously after they say something at that level of marketing-speak?

  16. a handle

    No more Moores law -> maybe wafer-scale integration growth?

    Will wafer-scale integration become popular? "...The technique, christened Catt Spiral, was designed to enable the use of partially faulty integrated chips (called partials), which were otherwise discarded by manufacturers."

    80, 160 or 240 MB SSDs, 15 kg, the price of a house in 1982:

    Transfer rate: asynchronous 3.0 Mbytes/s maximum, 2.5 Mbytes/s sustained

    Physical size: 8" disk drive form factor, 215.9 x 127 x 616 mm (8.5" x 5" x 24.3")

    Weight: 15.0 kg

    http://www.computinghistory.org.uk/det/4619/Anamartic-Wafer-Stack/


Biting the hand that feeds IT © 1998–2019