Now, does anybody still believe they're not frightened of ARM (and ARM licensees)?
See title.
Intel has rewritten its product roadmap to drastically cut the power targets for its notebook processors and to expand those targets for its mobile-device processors. "We've decided, looking forward, that our roadmap was inadequate, and that we need to change the center point," Intel CEO Paul Otellini said during his keynote …
Anorexic-margin Atom SoCs to replace adipose-margin Xeons in the data centres? Sounds like the prelude to a round of cost-cutting and a trimming of chip transistors so that a Supermicro blade chock full of Atoms with attached power station can at least compete with a <insert name here> blade even chocker full of A15 SoCs with hardware vector units powered by gnats' chuffs.
Set your shares to sell, or is it time to embrace and extinguish ARM in an offer the owners simply cannot refuse?
See there is a use for that number.
While Intel might actually be able to make SoCs, the real challenge is going to be handling diversity and customer demand.
ARM do that by making cores and letting others take those cores, design their own chips around them, and get them fabbed wherever. Doing this requires an infrastructure that supports such customization.
Unless Intel have the ability to let their customers specify custom chips, they will never play this game.
Intel are also built around the idea of making really high-margin chips. If margins come down to a dollar or so per chip, will they still be able to keep their heads above water?
I think they're seriously worried. Paul's keynote looks very much like a "stick with us, partners! We can do low power, too!" effort. I mean, that last Atom core is even named after a lot of hot air, isn't it? And that lovely thin laptop-cum-tablet slide… What on that slide can only Intel provide? In fact, what on that slide can competitors do right now that Intel can't?
If I were him, I'd be worried, too.
When your phone has a 32GB SSD, removable storage, always-on fast internet, 2GB RAM, HDMI and a dual-core processor with a meaty (think PS3+) GFX processor, you'll wonder what your desktop PC is meant for.
Windows on ARM. That's the fourth shift.
Windows on ARM? Think about all the extra hardware bits that the ARM would need, then ask yourself what the advantage would be.
ARM is not a processor. It's not a competitor for the latest silicon from Intel, AMD or anybody else. It's a fundamental processor design that is available to anyone who wants to pay the licence fee and then incorporate it into their hardware. Intel can use it (they own a licence) but said yesterday that they have no plans to do so.
Don't get me wrong - ARM is wonderful technology, but asking whether the ARM can replace the x86 is like asking whether the Ferrari can replace the Bus.
>ARM is not a processor
Where have you been during the smartphone revolution? Things like Android phones, iPads, etc. have shown that whilst an ARM might not be the fastest chip out there, it's certainly plenty fast enough for browsing, email and some simple amusements, which is all that most people want to do. The operative word there is 'most'. It shows where the majority of the market is. It shows where the money is to be made. Companies are interested in making money, end of. Any bragging rights over having the fastest CPU are merely secondary to the goal of making money.
So clearly compute speed is not as big a marketing advantage as all that. The features that allow one product to distinguish itself from others are power consumption and size. And that's where ARM SoCs come in streets ahead of Intel.
Intel at last seem to have realised this and have been caught on the hop by the various ARM SoC manufacturers and decisions by Microsoft and Apple to target ARM instead of / as well as Intel. So they're responding with their own x86 SoC plans, and will rely on their advantage in silicon processing to be competitive. And they may become very competitive, but only whilst everyone else works out how to match 22nm and 14nm.
It's a mighty big task for Intel. They have to completely re-invent how to implement an x86 core, re-imagine memory and peripheral buses, choose a set of peripherals to put on the SoC die, the lot. There's not really anything about current Intel chips that can survive if they're to approach the power levels of ARM SoCs.
Also a lot of the perceived performance of an ARM SoC actually comes from the little hardware accelerators that are on board for video and audio codecs, etc. There's a lot of established software out there to use all these little extras, and the pressure to re-use those accelerators on an x86 SoC must be quite high. So there's a risk that an x86 SoC will be little more than a clone of existing ARM SoCs, except for swapping the 32,000ish transistors of the ARM core for the millions needed for an x86.
And therein lies the trouble: the core. The x86 instruction set has all sorts of old-fashioned modes and complexity. To make x86 half decent in terms of performance, Intel have relied on complicated pipelines, large caches, etc. These are things that ARMs can get away with not having, at least to a large extent. So can Intel simplify an x86 core enough to make the necessary power savings whilst retaining enough of the performance?
The 8086 had 20,000ish active transistors, but was only 16bit and lacked all of the things we're accustomed to in 32bit x86 modernity. Yet Intel have to squeeze something approaching today's x86 into little more than the transistor count of an 8086! I don't think they can do that without changing the instruction set, and then it won't be x86 any more. They'll have to gut the instruction set of things like SSE anyway and rely on hardware accelerators instead, just like ARM SoCs. If Intel's squeezing is unsuccessful and they still need a few million transistors, then as soon as someone does a 14nm ARM SoC, Intel are left with a more expensive and power-hungry product.
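A quick back-of-envelope puts numbers on that gap. The ARM and 8086 counts are the ones quoted above; the Atom figure is my assumption, roughly the published transistor count for the 45nm Silverthorne die (which includes its caches), so this is an illustration, not a like-for-like core comparison:

```python
# Back-of-envelope: how far does Intel have to shrink the x86 transistor
# budget? The Atom number is an assumed whole-die count (Silverthorne,
# ~47M including caches), not just the core logic.
arm_core = 32_000          # "32,000ish" ARM core, as quoted above
intel_8086 = 20_000        # "20,000ish" active transistors in an 8086
atom_die = 47_000_000      # assumed 45nm Atom (Silverthorne) die

ratio = atom_die / arm_core
print(f"Atom die vs ARM core: roughly {ratio:,.0f}x the transistors")
```

Even allowing that a whole-die count is unfair against a bare core, the gap is three orders of magnitude, which is the nub of the argument above.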
The scary thing for Intel is that the data centre operators are waking up to their need to cut power bills too. For the big operators the biggest bill is power. So they should be asking themselves how many data centre applications actually need large amounts of compute power per user? Hardly any. Clearly there's another way to slice the data centre workload beyond massive virtualisation. If some clever operator shows a power saving by having lots of ARMs instead of a few big x86 chips, that could be game over for Intel in the server market.
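A sketch of that power argument: for lightweight, I/O-bound server work the metric that matters is watts per unit of throughput, not peak compute. Every figure below is a made-up illustrative assumption, not a benchmark of any real Xeon or ARM server:

```python
# Hypothetical numbers only -- chosen to illustrate the shape of the
# argument, not measured from real hardware.
xeon_watts, xeon_reqs_per_sec = 130.0, 10_000   # one big x86 socket
arm_watts, arm_reqs_per_sec = 5.0, 600          # one small ARM SoC node

# Watts needed per 1,000 requests/sec of lightweight web serving:
xeon_w_per_kreq = xeon_watts / (xeon_reqs_per_sec / 1_000)
arm_w_per_kreq = arm_watts / (arm_reqs_per_sec / 1_000)

print(f"Xeon: {xeon_w_per_kreq:.1f} W per 1k req/s")
print(f"ARM:  {arm_w_per_kreq:.1f} W per 1k req/s")
```

With these assumed figures the farm of small nodes comes out ahead per request; whether that holds in practice depends entirely on real idle power and per-node throughput.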
In a way it's a shame. Compute performance for the masses is increasingly being delivered by fixed task hardware accelerators. Those few of us (e.g. serious PC gamers, the supercomputer crowd, etc) who do actually care about high speed single thread general purpose compute performance may become increasingly neglected. It's too small a niche for anyone to spend the billions needed for the next chip.
Intel had ARM. They won it in an IP spat with Digital. It was the Digital StrongARM, and it was way ahead of its time. We used it, and it was unbelievable what we could do with it for so little power (although the software boys had a problem when they tried to do everything in floating point!).
Then Intel decided that it needed changing. A lot of the Digital people left, the StrongARM morphed into something much bigger and more power-hungry (and too expensive), and then Intel spun it out into what is now Marvell because they wanted something that was still an 8086 at heart.
I think Marvell has managed to make from ARM what Intel couldn't, and I would be very surprised if Intel manage to come up with something that can compete with ARM in terms of performance/watt.
Obviously, doubling the pace at which the number of transistors per chip increases - when people were thinking Moore's Law was about to run out of steam - requires Intel to be successful in its ambitious technological plans.
But being able to make chips for notebooks that have very low power requirements requires that the advances in technology be put to work reducing power consumption instead of increasing CPU power. That requires Microsoft to cooperate in limiting the resource requirements for new versions of Windows. We could already have very convenient notebook computers if Microsoft were still selling Windows 3.1 for them.
Even if ARM ends up being superior to Atom in every way, you still have the inertia of all that software written for x86. People won't switch to the new hardware if the software isn't there and people won't develop the new software or port the old software until people switch to the new hardware.
Not this hoary old chestnut again?
People have already shown they are more than willing to forgo using their ancient and clunky old x86 (non-touchscreen) applications on their mobile devices.
See Exhibit A: Apple iOS
And for the very few, really ancient legacyware crapps that you simply can't do without there is always emulation.
See Exhibit B: "Linux Kernel runs in web browser"
If it's so old that it can't be ported over, then it won't be terribly resource-hungry (because it was written to run on a 486 or some such) and can therefore easily be run under full x86 emulation on a modern CPU.
"the Pentium in 1995, which introduced multimedia capabilities to the PC"
In '93 and '94 I was working for a small, hole-in-the-wall PC manufacturer that I will only refer to as "Chinaman Joe's PC Emporium" and we was sellin' multimedia PCs with 386 and 486 processors. It was sound cards, CD drives and video cards with more than 1MB of video RAM that introduced multimedia capabilities to the PC.
In those days the sound card's default IRQ setting (typically IRQ 7) was shared with the printer port, and the speakers were not powered, so you had to change the IRQ or every time you printed something your PC would make a farting noise.
It reminds me of the Cold War race where spending was increased to bankrupt the competition. I don't see this winning for Intel, since they've been pushing Atoms onto the newest die shrinks and still they trail ARM, which is on older/larger processes. All I see here is, at best, Intel finally meeting ARM, who'll be 2-3 shrinks behind them and still much cheaper, and all this in 3 years?
I think we might see a processor market change. I was hopeful for Alpha but that didn't last and then it was PowerPC. Way to go ARM. Now we just need to see if we're talking Android+, ChromeOS, or even Linux in good numbers before Windows marketing promises suck the intelligence out of all of the press and everyone waits for a version of Windows that's worth anything on ARM hardware.
[ In his 37 years at the company, Otellini said, Intel had made such a fundamental shift only three times. ... the Pentium in 1995, which introduced multimedia capabilities... low-power, mobile Centrino... will be embodied by Intel's 22-nanometer Ivy Bridge and Haswell microarchitectures in 2012 and 2013. ]
These are junk statements.
There were two major moves, to my mind:
- copying AMD's x64 architecture, instead of forcing Itanium upon the market
- copying Sun's throughput initiative of multi-thread and multi-core on a single die
Intel did very well with their catch-up work, leading the market again once they got their act together!
... when the maths co-processor was swallowed up and became integral to the main CPU. IIRC this was around the time of the DX (as compared to SX) processors. There wasn't really much (PC) CPU competition about at the time and even for larger systems everything had discrete maths units.
It's not packing 40 billion transistors rather than 10 billion into a CPU that counts any more. It's packing everything into a cheap, low-power chip that counts.
Intel needs a CPU with 500,000 transistors instead of 1 billion to 40 billion.
Then they'd have roughly 1/2,000th the core power consumption, and space for RAM, Flash, GPU and I/O on the chip.
Moore's Law is pointless beyond a certain point for the CPU; it's the RAM, Flash and everything else that count.
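Spelling out that arithmetic with the transistor counts quoted above (the crude assumption being that core power and area scale roughly with active transistor count):

```python
# Ratios implied by the transistor counts in the comment above.
small_core = 500_000                          # the proposed lean CPU
big_low, big_high = 1_000_000_000, 40_000_000_000

print(big_low // small_core)     # -> 2000
print(big_high // small_core)    # -> 80000
```

So a 500,000-transistor core is somewhere between 1/2,000th and 1/80,000th of the quoted budgets, which is where the claimed power and die-area headroom would come from.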
Let's see a graph of how many CPUs shipped, not transistors, compared with ARM, MIPS, Microchip, Atmel, Samsung, Texas Instruments etc.
Andy Grove's book 'Only the Paranoid Survive' is a great read. In it he explains that whenever there's a 10-fold change in some technology metric you have a 'strategic inflexion point'. Basically he's saying 10X is disruptive. ARM has that 10X improvement in MIPS/mW and some. So long, Intel, and thanks for all the fish.