>ARM is not a processor
Where have you been during the smartphone revolution? Android phones, iPads and the like have shown that whilst an ARM might not be the fastest chip out there, it's certainly plenty fast enough for browsing, email and some simple amusements, which is all that most people want to do. The operative word there is 'most'. It shows where the majority of the market is, and where the money is to be made. Companies are interested in making money, end of; any bragging rights over having the fastest CPU are secondary to that goal.
So clearly compute speed is not as big a marketing advantage as all that. The features that allow one product to distinguish itself from the others are power consumption and size, and that's where ARM SoCs come in streets ahead of Intel.
Intel at last seem to have realised this, having been caught on the hop by the various ARM SoC manufacturers and by Microsoft's and Apple's decisions to target ARM instead of (or as well as) Intel. So they're responding with their own x86 SoC plans, relying on their advantage in silicon processing to be competitive. And they may become very competitive, but only whilst everyone else works out how to match 22nm and 14nm.
It's a mighty big task for Intel. They have to completely re-invent how to implement an x86 core, re-imagine memory and peripheral buses, choose a set of peripherals to put on the SoC die, the lot. There's not really anything about current Intel chips that can survive if they're to approach the power levels of ARM SoCs.
Also, a lot of the perceived performance of an ARM SoC actually comes from the little hardware accelerators on board for video and audio codecs, etc. There's a lot of established software out there that uses all these little extras, and the pressure to re-use those accelerators on an x86 SoC must be quite high. So there's a risk that x86 SoCs will be little more than clones of existing ARM SoCs, except with the 32,000ish transistors of an ARM core swapped for the millions needed for an x86.
And therein lies the trouble: the core. The x86 instruction set has all sorts of old-fashioned modes and complexity. To make x86 half decent in terms of performance, Intel have relied on complicated pipelines, large caches, etc. These are things that ARMs can, to a large extent, get away with not having. So can Intel simplify an x86 core enough to make the necessary power savings whilst retaining enough of the performance?
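To give one concrete flavour of that complexity (my illustration, not something from the post above): x86 instructions are variable length, from 1 byte up to 15, so a decoder has to do real work just to find where the next instruction starts, whereas classic 32-bit ARM uses a fixed 4-byte encoding. A tiny Python sketch with a few real x86 encodings:

```python
# Illustrative sketch: x86's variable-length encoding is one source of the
# decode complexity discussed above, versus classic 32-bit ARM's fixed
# 4-byte instructions. The byte sequences below are real x86 encodings.

x86_examples = {
    "nop":                 bytes([0x90]),                            # 1 byte
    "ret":                 bytes([0xC3]),                            # 1 byte
    "mov eax, 0x12345678": bytes([0xB8, 0x78, 0x56, 0x34, 0x12]),   # 5 bytes
}

# Classic (pre-Thumb) 32-bit ARM: every instruction is exactly 4 bytes,
# so instruction boundaries are known before any decoding happens.
ARM_INSTRUCTION_LENGTH = 4

x86_lengths = sorted({len(enc) for enc in x86_examples.values()})
print("x86 instruction lengths seen:", x86_lengths)   # more than one length
print("ARM instruction length:", ARM_INSTRUCTION_LENGTH)
```

The spread of lengths is the point: the transistors spent resolving instruction boundaries and prefixes are transistors an ARM core simply doesn't need.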
The 8086 had 29,000ish active transistors, but it was only 16-bit and lacked all of the things we're accustomed to in 32-bit x86 modernity. Yet Intel have to squeeze something approaching today's x86 into little more than the transistor count of an 8086! I don't think they can do that without changing the instruction set, and then it won't be x86 any more. They'll have to gut the instruction set of things like SSE anyway and rely on hardware accelerators instead, just like ARM SoCs. And if the squeezing is unsuccessful and they still need a few million transistors, then as soon as someone does a 14nm ARM SoC, Intel are left with a more expensive and power-hungry product.
The scary thing for Intel is that the data centre operators are waking up to their need to cut power bills too; for the big operators, power is the biggest bill. So they should be asking themselves how many data centre applications actually need large amounts of compute power per user. Hardly any. Clearly there's another way to slice the data centre workload beyond massive virtualisation. If some clever operator shows a power saving by running lots of ARMs instead of a few big x86 chips, that could be game over for Intel in the server market.
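The operator's sum is simple enough to sketch. Every wattage and throughput figure below is a made-up placeholder, not a benchmark; the only point is that for light, I/O-bound workloads the metric that decides the purchase is requests per watt, not peak single-chip speed:

```python
# Back-of-envelope sketch. ALL figures are hypothetical placeholders chosen
# to illustrate the requests-per-watt comparison, not measurements.

big_x86 = {"watts": 130.0, "requests_per_sec": 50_000.0}  # one big server chip
arm_soc = {"watts": 5.0,   "requests_per_sec": 3_000.0}   # one small SoC

def requests_per_watt(chip):
    """Throughput per watt: the figure a power-bound operator compares."""
    return chip["requests_per_sec"] / chip["watts"]

print(f"x86: {requests_per_watt(big_x86):.0f} req/s per watt")
print(f"ARM: {requests_per_watt(arm_soc):.0f} req/s per watt")

# How many small SoCs match the big chip's throughput, and at what power?
n_socs = big_x86["requests_per_sec"] / arm_soc["requests_per_sec"]
total_arm_watts = n_socs * arm_soc["watts"]
print(f"{n_socs:.1f} SoCs drawing {total_arm_watts:.0f} W "
      f"vs one chip drawing {big_x86['watts']:.0f} W")
```

Under these invented numbers the farm of small SoCs does the same work for well under the big chip's power draw; whether real parts land that way is exactly the bet the clever operator would be making.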
In a way it's a shame. Compute performance for the masses is increasingly being delivered by fixed-task hardware accelerators, and the few of us who actually care about high-speed single-thread general-purpose compute performance (serious PC gamers, the supercomputer crowd, etc.) may become increasingly neglected. It's too small a niche for anyone to spend the billions needed for the next chip.