Intel TOCK BLOCK: 10nm Cannonlake delayed to 2017, bonus 14nm Kaby Lake to '16

Intel has surprised the IT world by changing its plans. Let's break down what's happened. At IDF in 2013, Chipzilla boasted that it could get processors fabricated using a groundbreaking 10-nanometer process onto the market by 2015. It later pushed that date to 2016, and on Wednesday of this week it pushed the new chips back …

  1. This post has been deleted by its author

    1. Destroy All Monsters Silver badge

      Is it even economically feasible to reach 7nm?

  2. PushF12

    AMD tripped and fell

    Intel is taking an engineering breather, and might take higher margins, because AMD lost its CPU roadmap and is withdrawing from the PC market.

    Expect schedules to slide until 64-bit ARM alternatives begin eating into Intel's server market revenues (or they feel some other competitive pressure).

    1. P. Lee Silver badge

      Re: AMD tripped and fell

      Perhaps AMD will see this as an opportunity. If fab improvements are running into diminishing returns due to physics, AMD may look to play catch up.

      AMD need a USP. They've already got some ARM expertise - I'd suggest integrating ARM onto x86 hardware. A little hardware switch to suspend the x86 side to RAM and let you play with Android, coupled with a big screen and a humongous battery; or perhaps a video overlay to run ARM on top of the x86 display. Add a couple of Bluetooth controllers (paddles [remember them?], joysticks, Wii) and you have a trivial games machine.

      1. Anonymous Coward
        Anonymous Coward

        Re: AMD tripped and fell

        If AMD can get a family of nice coherent unified A72 parts with ATI GPUs out of TSMC's 10nm process a year or so before Intel gets around to getting its act together, it could stir things up a bit. Which would be nice.

    2. Anonymous Coward
      Anonymous Coward

      Re: AMD tripped and fell

      PushF12 +1

      My thoughts too - Intel doesn't need 10nm yet, so is sweating the assets. It will be interesting to see Skylake's new architecture benchmark numbers when compared to Broadwell.

  3. PleebSmash

    we're all doomed!

    Moore's law is over! Intel is still king of the hill!

    It's hard to be surprised. Shifts to Si-Ge or nanotubes are expected to keep chips shrinking. Cheaper EUV is needed to make these nodes easier.

    Forget about when we will shrink to 3 to 7 nm. The real questions are about 3D STACKED CHIPS THAT AREN'T TOO HOT. If we can do it with NAND, maybe we can do it with CPUs.

    1. Brandon 2

      Re: we're all doomed!

      I, for one, welcome our neural-net CPU overlords...

      1. Daniel von Asmuth Bronze badge

        Re: we're all doomed!

        "Broadwell parts are branded fifth-generation Intel Core chips (Core i3, i5, and i7) "

        You mean the commercial name is Intel Core Pentium?

        If they can't get the EUV process working, maybe we'll have to switch to Pentaquarks. For the moment the market is not demanding huge numbers of Windows 10 PCs.

    2. schlechtj

      Re: we're all doomed!

      Only a few generations of shrinkage to go. All the technologies we hoped would save us from this are not even close to being ready. So, no more packing in more transistors soon. Chip designers and programmers are going to have to re-learn efficiency in order for us to make progress.

      1. John Smith 19 Gold badge

        Re: we're all doomed!

        "Only a few generations of shrinkage to go. "

        If I've got the math right, 14nm is about 60 atoms wide.

        But normally the oxide is 1/10 that.

        So about 2 generations unless someone finds a really clever way to make high aspect ratio conductors, like 20 atoms high by 1 atom wide.

        But I'm not sure how good insulators can be when they are 1 atom thick.
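
        A quick back-of-the-envelope check of that 60-atoms figure in Python (the ~0.235 nm silicon-silicon bond length is my assumed number, not the poster's):

```python
# Sanity check of "14 nm is about 60 atoms wide".
# Assumes a Si-Si bond length of roughly 0.235 nm (my figure, an assumption).
si_bond_nm = 0.235   # approximate silicon-silicon spacing, nm
feature_nm = 14      # the 14 nm node feature size
atoms_wide = feature_nm / si_bond_nm
print(f"about {atoms_wide:.0f} atoms wide")  # ~60 atoms
```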

  4. gregthecanuck

    Tick, tock, tank

    I suspect at least 50% of the reason for the delay is cost: sales are down, and they need to cut costs to keep pumping out those dividends.

    1. picturethis

      Re: Tick, tock, tank

      So, are you saying that the "clock is ticking" for Intel?

  5. ilmari

    It's hard to believe Broadwell was 2014 when availability is still so poor.

  6. Paul Westerman

    At worst, it is losing its lead over its chipmaker rivals

    Intel has rivals?

    1. Gideon 1

      Re: Intel has rivals?

      Yeah, the smartphones.

  7. Bartholomew

    If you can't go smaller, make more and/or build skyscrapers.

    My prediction is that the next step will be 64, 128, 256, 512, 1024, 2048, 4096, 8192, 16384, 32768, ... cores on a single piece of silicon. Keep each core small to keep the power usage low and the switching speed high. Small also means less complex clocking: when the entire circuit is much, much smaller than a wavelength of the clock, the clock level can be approximated as constant over an entire core.
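
    To put a rough number on the clock-wavelength point, here's a sketch assuming a 3 GHz clock and on-chip signals propagating at about half the speed of light (both numbers are my assumptions):

```python
# Electrical wavelength of the clock vs the size of a small core.
# Assumed: 3 GHz clock, on-chip propagation at ~0.5c.
c = 3.0e8                    # speed of light, m/s
v = 0.5 * c                  # assumed on-chip propagation speed, m/s
f = 3.0e9                    # assumed clock frequency, Hz
wavelength_mm = v / f * 1e3  # clock wavelength, mm
core_mm = 5                  # assumed edge length of a small core, mm
print(wavelength_mm, core_mm)  # a ~50 mm wavelength dwarfs a 5 mm core
```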

    Another option is to build the chips more in the Z direction. We used to have mostly single-sided PCBs 40+ years ago (still used in PSUs); these days PCBs have anything from 4 to 38 layers. Each extra manufacturing step adds a risk of defects, so what really needs to happen is new methods for defects to be repaired, tolerated or bypassed.

  8. Andy Tunnah

    Name suggestion

    They should call it tack. It fits in and it's tacked on; everybody wins.

  9. Richard_L

    Tick, tock, tick, tick, tock

    So, just like everyone else, Intel's had to add a leap second this year.

  10. imanidiot Silver badge

    Patterning, overlay and EUV

    Working inside the lithography industry, I suspect the problem for Intel is that they didn't expect to need triple (or more) patterning to produce chips below the 14 nm node, and didn't really put in the effort. They were fully expecting to roll out with EUV litho. Once that plan fell through, they went back to full-bore development on multi-patterning, but are struggling to reach the needed overlay accuracy and throughput while having to catch up to where they would have been had they gone that route in the first place. From the info I'm getting, ASML is very close to reaching a production-worthy throughput on its EUV tools. I suspect Intel is taking the breather to allow itself to catch up with its multi-patterning on DUV tools, and to allow ASML to catch up with its EUV tools.

    EUV, however, is no trivial matter. It's all the fun engineering challenges of DUV immersion with the added joys of working in a vacuum, with MUCH stricter geometry and patterning constraints.

    If anyone wants I can do a "short" explanation on just what it is that makes EUV so hard to do. (I'm an engineer actually working on parts of these EUV tools at a supplier).

    1. Anonymous Coward
      Anonymous Coward

      Re: Patterning, overlay and EUV

      Please do!

    2. Alan Brown Silver badge

      Re: Patterning, overlay and EUV

      ISTR reporting about x-ray lithography being on the drawing boards 20+ years ago.

      Given the features in question (14nm) are much smaller than the wavelengths of light being used (violet = 380nm, DUV is 248 or 193nm), it's not surprising that it's bloody hard to do lithography at this scale.
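
      Just to put numbers on how sub-wavelength that is:

```python
# Ratio of exposure wavelength to feature size at the 14 nm node.
feature_nm = 14
duv_nm = 193      # ArF DUV exposure wavelength, nm
violet_nm = 380   # shortest visible (violet) light, nm
print(duv_nm / feature_nm)     # features ~14x smaller than the DUV wavelength
print(violet_nm / feature_nm)  # ~27x smaller than visible violet
```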

      The miracle is that it's possible at all.

      1. imanidiot Silver badge

        Re: Patterning, overlay and EUV

        EUV is not quite x-ray lithography (soft x-rays start at roughly a 10 nm wavelength; EUV is at 13.5 nm. Close, but not QUITE x-rays), but it has a similarly long gestational period. The problem with EUV is that until quite recently nobody had a clue how to do it in an economically viable way. Even now, the boundaries of what is possible need to be pushed in a LOT of areas to get things to work.

        I'll try to type up that "why is it so hard" post this evening, but no promises.

        1. imanidiot Silver badge

          Re: Patterning, overlay and EUV

          Gahhh, I've started this explanation about 3 times now and my brain keeps screwing me over, talking about all those little details that are completely unimportant. I apologise in advance for any spelling or grammar errors, I'm typing this on a way too old laptop without a spellchecker and I'm kinda tired of proofreading after starting again for the third time.

          So, lets try again.

          Lets start by thinking about what makes lithography itself hard.

          First thing is the size. Computer chip feature sizes are measured in nanometres (shortened to nm). That's thousandths of thousandths of a millimetre, which is hard to even comprehend. Take your own hair: it grows at a leisurely pace of 0.3 mm a day. That's 3.47 nm per second! By the time you have read this sentence, your hair has grown more than the distance between two features on an Intel processor.
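
          (If you want to check the hair arithmetic yourself:)

```python
# Converting 0.3 mm/day of hair growth into nm/s.
mm_per_day = 0.3
nm_per_mm = 1_000_000          # 1 mm = 1,000,000 nm
seconds_per_day = 24 * 60 * 60
nm_per_second = mm_per_day * nm_per_mm / seconds_per_day
print(f"{nm_per_second:.2f} nm/s")  # ~3.47 nm/s
```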

          Quick term explanation: the silicon bits of a computer chip are produced many at a time on/from a wafer - a slice of monocrystalline silicon, usually circular, 300 mm in diameter and 0.775 mm thick.

          What we are trying to achieve is to project a pattern of lines onto a resist-covered wafer. Easy peasy, you'd say; all you need is a fancy slide projector. At its core, that is sort of what a litho machine is: it shines a bundle of light through a slide (called a reticle), then shrinks that image with some lenses and projects it onto a wafer.

          The earliest litho machines were called steppers/repeaters: project an image of the whole die, move the wafer a bit, expose another die, and so on. All well and good, but at some point someone decided they wanted to do it faster. And faster. And faster still. Eventually, stopping the wafer every time to expose an image starts to take too long. So a clever guy came up with the scanner: project a slit of light, move the reticle underneath it and move the wafer the other way simultaneously. Now you can keep the reticle going back and forth and the wafer moving in a constant SSSSS pattern, and you can expose even faster. Modern DUV tools are now producing over 100 wafers per hour. That means that in under a minute, a 300 mm diameter wafer is completely filled with exposed dies (something like 6x6 or 10x10 mm each) and ejected again. You can imagine that keeping that scan synchronised between the wafer and reticle is extremely important, and with ever-smaller structures the accuracy needs to get better and better. The chucks holding the wafer move at incredible speeds - accelerations over 100 g and top speeds over 25 m/s - and the reticle has to keep up.

          So how accurate does this alignment have to be? Well, given the feature size and the speed of the chuck, the alignment has to be well below 1 nm (and keep in mind we are talking about two physically separate items, not mechanically connected in any way). Scaled up, it's like flying two jumbo jets at 700 km/h within 0.003 mm of each other. Again, and again, and again, scan after scan.

          The projection itself is also a challenge. Remember that demonstration with the laser and the fine grating of parallel lines, creating a diffraction pattern? All those lines on a reticle do the same thing. What you get is not a clean image; what you get is a diffraction pattern. So the lens system has to eliminate all the diffraction orders except the ones you want.

          Then there is the matter of vibration. You're trying to project something very, very small very, very accurately. No matter how thick you make your foundations, vibrations are going to happen: earth tremors, trucks moving outside, or fat Mike from Accounting walking down the hall. So you have to keep those out. The usual solution in any industry is air bearings and voice coils, but you can imagine that all that shaking-a-wafer-and-reticle-around business is going to cause a fair bit of shaking in and of itself. ALL of that needs to be compensated.

          A further bit of trouble is temperature. Materials shrink or expand as they cool down or heat up, and silicon is no different. But what happens if I have a wafer that is cold on one side and hotter on the other, expose a die, and then let the wafer change temperature? Well, simply put, the next layer you expose on top is going to end up in entirely the wrong place, and you have a very expensive bit of useless silicon on your hands.

          Now here come a few of the challenges of combining all of these things. We want to expose wafers faster. Pumping more light onto the wafer means the resist hardens faster, so I can move the chuck faster. Move the chuck faster and I get more vibrations (not to mention I still have to make sure the damn thing doesn't move relative to my chuck under all that load). More light also means more heat generated in the wafer, so now I have to cool it more. Faster exposure also means I have to get the wafer into and out of the machine faster. At 100 wafers an hour, that's a load and unload every 36 seconds.
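
          (The load/unload arithmetic, for anyone checking:)

```python
# At 100 wafers per hour, the time budget per wafer.
wafers_per_hour = 100
seconds_per_wafer = 3600 / wafers_per_hour
print(seconds_per_wafer)  # 36.0 s to load, expose and unload each wafer
```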

          The last thing I want to touch on is overlay: the accuracy of alignment between the different layers of a chip. All those layers have to connect together, so every time you expose a new layer, it has to be accurately positioned relative to the previous layer(s). But to process that layer you have to remove the wafer from the machine, do a whole lot of processing to it, and then feed it back in. Then the new layer has to be within a few nanometres of the old one. Time and time again. This is achieved using special alignment marks, exposed and etched in the first layer. The problem then is that you still have 30 or 35 or maybe even more layers' worth of exposing and etching and processing to do. You can't redo those alignment marks, because you can't be sure you'd get them in exactly the same place, but you still have to keep track of a mark that keeps fading and fading and fading into oblivion. (And somehow they manage to do this.)

          So now let's look at EUV. All the fun we talked about previously, with some added bonus hurdles.

          First off: how do you get the light? EUV is a weird sort of photon - not quite X-ray, not quite UV any more. Some bright spark somewhere found out you get these photons at a nice, usable 13.5 nm wavelength if you convert very pure tin into a plasma by blasting it with a lot of energy, like a CO2 laser or a pulse of high voltage. Aside from the fact that air would make creating and maintaining this plasma difficult, EUV light does not travel through air very far. If your screen were emitting EUV light right now, it wouldn't even reach your eyes. So you replace the air, right? With what? Not many gases are transparent to EUV. One of the few that are, and that you COULD use, is pure hydrogen. Good luck with that - I'll be WAY over there taking shelter if you ever try it. So you remove all the air and do it in a vacuum: seal the whole thing in a nice sturdy jar, pump it down and blast away... (I'll get back to why this is not easy in a minute.)

          But how do you get the light from that plasma onto the wafer? You have to somehow focus it. Big problem number 2: EUV light doesn't really do lenses, or ordinary mirrors for that matter. There is no known lens material that is transparent to EUV, has a usable refractive index, and is economically viable for production. That leaves mirrors. The standard single-surface mirror we all know doesn't cut it; it just doesn't do anything. EUV can be reflected by a so-called multilayer mirror: lots and lots of layers of alternating materials. It's still not a perfect bounce, though - only part of the light gets reflected. I'll get back to this in a minute.

          Only, how do you shoot tin with a laser and form a plasma? What happens to the tin after that? How do you direct the light? Here comes the next challenge. Several crazy and/or smart people have gotten involved in the matter: Cymer (US) and Gigaphoton both went with the laser-produced plasma method, shooting droplets of tin with a high-power CO2 laser, while Xtreme (Germany) went with a high-voltage method.

          In the end, I believe Xtreme didn't quite make it, and Gigaphoton is still working on it. Cymer got acquired by ASML, and their source seems to be the main option right now. Having only ever been up close and personal with the Cymer system, I'll focus on that one here.

          So how does an LPP EUV source work? Well, take a big vacuum pot. Shoot tiny, tiny (micrometre-size) droplets of tin across it with a high-pressure gas, and use a very accurate targeting system to blast each droplet with a CO2 laser beam as it passes. The droplet superheats and explodes, producing a tiny bit of tin plasma giving off EUV light, and a lot of tin debris. Catch any un-hit droplets on the other end, and let the debris condense on the walls, where it flows down to a collection drain from which you can pump it up again to repeat the process.

          Then you stick a nice shiny multilayer mirror behind it to focus the light and Bob's your uncle... right? One problem with multilayer mirrors: they don't really like tin. Or plasma, fingerprints, hydrocarbons, moisture, acetone, getting hit with EUV light (yes, really - though only a little bit), etc, etc. So you have to get this mirror really close to the tin without actually getting tin onto it. Then you have to catch ALL the tin debris flying around before it can hit any of the other mirrors you need to project that light onto the wafer. How this is done unfortunately starts veering into NDA territory, so I'm going to leave it here.

          1. imanidiot Silver badge

            Re: Patterning, overlay and EUV

            Part 2 due to character limit:

            Back to the mirrors. So now we have light. It's been nicely collected and concentrated by a collector mirror in the light source, and now we have to project it onto a wafer. Remember that slide projector analogy for DUV tools? A DUV reticle is transparent, and light is passed THROUGH the reticle to create the image in the light beam. Now look back at the bit about the lenses: that applies to reticles too. The solution is the same as for the mirrors. You create a reticle that is basically a multilayer mirror with the pattern etched into it, and then reflect the light beam off it. ASML has not come up with a way of doing this that needs fewer than (I believe - I don't know the exact number) 14 mirrors. The problem is that these mirrors are not 100% reflective; they only reflect about 79% of the light. Not a problem if you have just one mirror and plenty of light power, but you only get maybe 120 or 130 watts of EUV light from the source. That means that at the end of the line you have 120 × 0.79^14 = 4.4 watts (!) reaching the wafer. Not a whole lot.
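
            (Putting those figures - 120 W out of the source, ~79% reflectivity, 14 bounces - into a few lines of Python:)

```python
# EUV power left after 14 bounces off ~79%-reflective multilayer mirrors.
source_watts = 120      # EUV power out of the source, W (figure from above)
reflectivity = 0.79     # per-mirror reflectivity
mirrors = 14
at_wafer = source_watts * reflectivity ** mirrors
print(f"{at_wafer:.1f} W at the wafer")  # ~4.4 W
```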

            Then there is the wafer positioning. On the DUV tool we could use air bearings to eliminate shaking while moving things around. Air bearings in a vacuum are not possible. Running into NDA territory again, I'm just going to say this involves magnets. Lots and lots of magnets.

            Then there is the challenge of building all this. The traditional method for building very clean (ultra-high) vacuum systems is to build all of it out of nice sturdy non-porous materials like stainless steel, make sure everything is resistant to a little heat, and then bake the whole system at 120 degrees C for a few days while sucking out all the contamination. This is not an option if you have a vacuum vessel the size of a luxury saloon car with lots and lots and lots of electronics crammed into it. You can bake parts out before you put everything in, but not after. ASML had to define a whole new category of vacuum cleanliness for this, called Ultra Clean Vacuum: modules are built clean and kept clean, because any contamination introduced at assembly can no longer be removed after assembly is done. You'd think this is easy, but let me give a small example of what this entails for me, the technician.

            First off, I'm in an ISO class 6 cleanroom, fully suited up: anti-static coveralls, hood, socks, gloves and shoes, and a surgical mask. Before I even touch ANYTHING on the machine, I have to put nitrile gloves over the anti-static ones (there's a possibility of sweat seeping through and leaving stains). Then I clean the area around where I'll be working with cleanroom wipes soaked in isopropyl alcohol, and put down a piece of ultra-clean plastic (doubled up) on which I can place/prepare my vacuum tools and the parts I'll be needing. Only then is it time to open the hatch I'll be working through. Before I touch anything that will touch something inside the vacuum chamber (like tools, my own hands or parts), I put on ANOTHER pair of nitrile gloves OVER the first pair, taking special care not to touch the palm or fingers. With those gloves on, I can work on the vacuum parts. If I touch anything else - supporting myself on the edge of the chamber, scratching my nose, or just idly leaving my hand hanging by my side and touching my coveralls - I have to put on a fresh pair of those second gloves. You can imagine this gets tedious, especially since parts come nicely packaged in plastic, but that packaging is NOT vacuum-clean. So I can use 8 pairs of gloves JUST for swapping a part with 3 bolts. I've had days where 2 of us went through an entire pack of gloves (50 pairs) in a single day. Anything cleaned for vacuum can never touch anything that is not.

            As this is getting quite long, I'll just leave some of the engineering challenges as one-liners and let you figure them out.

            The multilayer mirrors are extremely accurate and flat: blown up to the size of Germany, the largest bump would be a few millimetres.

            Holding a wafer down in air is easy: just use suction. How do you do this in a vacuum, where suction doesn't work?

            How do you get wafers into and out of the vacuum, positioned accurately enough that you can then expose them?

            How do you make things move in a vacuum if you can't use grease (normal vacuum grease contains Fluorine, which wreaks havoc on the mirrors) and steel parts touching directly will instantly cold-weld and fuse together?

            How do you keep the inside of the system clean?

            How do you keep the wafer temperature stable in a vacuum? Any water leak inside the vacuum could potentially destroy millions of dollars of equipment.

            Then also keep in mind that these systems are VERY complex. You can't just drop a box on your customer's doorstep, hand them the manual and tell them to have at it. Even basic operation requires months of training; basic maintenance and troubleshooting add another few months, etc, etc. All that training has to be prepared and thought up by someone. Any problems the customer cannot solve must be escalated to a support team, who must be able to escalate to even brighter minds. Once a machine is in the field, someone needs to work out which parts might be needed as spares in the field, then make sure they are available. Someone must write the manuals and instructions. Someone must build the control software that keeps everything running. There are hundreds of thousands of tasks involved with this type of machine that many people never even start to contemplate.

            All of this put together means it's quite an achievement that ASML has systems running at customer sites - systems that solve all of those engineering hurdles (and then some, with more still to go) in a box barely larger than a 60 ft shipping container. Keep in mind that many said EUV was simply impossible. Time will tell whether all of this works out or whether it was a swing and a miss. The shrink will have to stop at some point.

            1. PushF12

              Re: Patterning, overlay and EUV

              Somebody at El Reg should apply some editing grease to this comment and post it as a guest article.

              1. imanidiot Silver badge

                Re: Patterning, overlay and EUV

                I wouldn't necessarily be opposed to that, but I'd want to run it past those responsible for press contact first. A comment is slightly different from a full article. (I'd also have a few more things to add. :-)

                Even if they don't want to publish this particular stuff, it might be interesting for The Reg to get in contact with ASML and see if they can have a chat with some of the IT folks about what it takes to run the IT side of the business. It's not an easy feat. Some of the hardware running is already 10 years old, but still under extended warranty/support. Keep in mind that the hardware chosen when one of these systems is designed is already "proven and reliable" by the time the design starts, let alone when it hits the market - and THEN it still has 7 to 10 years of warranty and support left.

                Also remember that a lot of this support is done remotely, meaning there has to be a secure connection from the customer site to the support team. And we're talking about a business where an hour of downtime equates to hundreds of thousands of dollars in cost, and where a single wrong press of a button can cause that downtime - so that connection had better be VERY secure, yet available whenever it's needed.

                There are hundreds of service techs running around worldwide to provide support, all with laptops, secure connections, phones, etc that need to be kept running at all times. (Just try telling your client: yeah, sorry, you're going to have another day of downtime while I get my laptop replaced...)

                And that's just some of the stuff I can come up with as a non-IT-trained "outsider". I'm sure the IT folks there have a few war stories of their own (if they're allowed to disclose some of it).

  11. John Smith 19 Gold badge

    At the end of the day though it'll still just implement the same Intel ISA we all know

    and Microsoft seem to love.

    8086 inside.(not TM)


  12. Alan Brown Silver badge

    8086 JIT compiler inside. x86 hasn't existed as a piece of silicon for decades.

    The actual chip is different. It'd be "fun" to expose the real internals to direct programming rather than have to go through an interpreting interposer at all times.

    (FWIW, the Chinese x86 chips are MIPS inside with an interposer, and AMD's original K5 series were 29000-series RISC chips with an interposer - the AMDs ran 50% faster clock-for-clock than Intel on integer operations.)

    1. Destroy All Monsters Silver badge

      JIT compiler

      More like an interpreter.

      Warren Abstract Machine in hardware when?

    2. John Smith 19 Gold badge

      "8086 JIT compiler inside. "

      So like the "Machine Level Interface" used by the IBM AS400 and later iSeries machines.

      But there you could see the swap from CISC to PowerPC inside at work.

      But as others noted, with the complexity of the 8086 ISA I think it's more an interpreter than a compiler.

      And exposing it would of course mean you'd freeze the architecture.

      So the code museum runs on.
