Apple may - and we emphasise that last word - have decided to transition its laptops from Intel processors to ARM-based CPUs. Intel certainly has a fight on its hands in the media tablet market, currently dominated by ARM chippery, but does it need to worry about the laptop space too? It will if the allegation about Apple, made by …
I don't buy it either.
The major pain point is performance. This becomes readily apparent if you try to use unsupported video formats with an iPhone or AppleTV. The claim in the article that no one would notice the difference is just mindless fanboyism.
The iPhone and the AppleTV are sufficiently locked down that most people are unable to run up against these performance limitations. That's not the case with a general purpose machine and applications that will try to exploit every cycle that the platform has to offer.
Compatibility is also a major issue, especially on proprietary platforms where most of the common tools are not available as source code that anyone is free to start porting.
Well, transistor-for-transistor and clock-for-clock comparisons do count. The ARM core, even today, is still about 32,000 transistors. Intel won't tell us how many transistors there are in the x86 core (just some vague count of the number on the entire chip), but it's going to be way more than 32,000. So if you're selling a customer N mm² of silicon (and this is what drives the cost and power consumption) you're going to be giving them more ARM cores than x86 cores.
Then you add caches and other stuff. On x86 there is a translation unit from X86 to whatever internal RISCesque opcodes a modern x86 actually executes internally. ARMs don't need that. X86 has loads of old-fashioned modes (16-bit code, anyone?) and addressing schemes, and all of that makes for complicated pipelines, caches, memory systems, etc. ARM is much simpler here, so fewer transistors are needed.
What ARM are demonstrating is that whilst X86s are indeed mighty powerful beasts, they're not well optimised for the jobs people actually want to do. X86s can do almost anything, but most people just want to watch some video, play some music, do a bit of web browsing and messaging. Put a low gate count core alongside some well chosen hardware accelerators and you can get a part that much more efficiently delivers what customers actually want.
That has been well known for a long time now, but the hard and fast constraints of power consumption have driven the mobile devices market to adopt something other than x86. No one can argue that the x86 instruction set and all the baggage that comes with it is more efficient than ARM, given the overwhelming opinion of almost every phone manufacturer out there.
On a professional level, needing as much computational grunt as I can get, both PowerPC and x86 have been very good for some considerable time. ARM's approach of shoving the maths bits out into a dedicated hardware coprocessor will do my professional domain no good whatsoever! It's already bad enough splitting a task out across tens of PowerPCs / x86s; I don't want to have to split them out even further across hundreds of ARMs.
Yes, you are correct, and indeed users of other sorts of phones don't run into performance limitations either.
What the market place is clearly showing is that most people don't want general purpose computing, at least not beyond a certain level of performance. After all, almost any old ARM these days can run a word processor, spreadsheet, web browser and email client perfectly well, and hardware accelerators are doing the rest.
Intel are clinging on to high performance for general purpose computing, and are failing to retain enough of that performance when they cut it down to size (Atom). ARM are in effect saying nuts to high performance and are focusing only on those areas of computing that the majority of people want.
Those of us who do want high performance general purpose computing are likely to be backed into a shrinking niche that is more and more separated from mainstream computing. The high performance embedded world has been there for years - very slow updates to Freescale's PowerPC line, Intel's chips not really being contenders until quite recently, and even then only by luck rather than judgement on Intel's part. It could be that the likes of nVidia and ATI become the only source of high speed maths grunt, but GPUs are currently quite limited in the sorts of large scale maths applications that work well on them, and aren't pleasant or simple to exploit to their maximum potential. Who knows what the super computer people are going to do in the future.
"The ARM core, even today, is still about 32,000 transistors. "
That's no FPU and no SIMD instructions then.
"So if you're selling a customer Nmm^2 of silicon (and this is what drives the cost and power consumption) you're going to be giving them more ARM cores than x86 cores."
No-one sells square millimetres of silicon. They sell CPUs and these days they sell CPUs with multiple cores, but not too many because you simply can't get the data on and off fast enough to make it worthwhile. Look at Larrabee or Cell. These remain niche products because the bottleneck hasn't been CPU speed or size for some time.
"Then you add caches and other stuff."
Indeed. A modern desktop computer is a cache with an ocean of slow memory on one side and an excess of processing power on the other side. Your 32000 transistor core is going to be clocking at a few tens of megahertz (DRAM speeds) unless you spend about a million transistors on L1 and L2 caches.
"On x86 there is a translation unit from X86 to whatever internal RISCesque opcodes a modern x86 actually executes internally."
Actually this is an urban myth. There *are* a few x86 instructions that bail to microcode, but apparently CISC-y things like "add eax,[ecx+edx*8]" are implemented fully in the processor pipeline. The address generation stage has its own ALU and the argument fetch stage can talk to the L1 cache. In effect, x86 *is* the internal RISCesque opcode set.
"ARMs don't need that."
But if they are to get close to x86 performance, they'll need out-of-order execution, which will blow your 32000 transistor budget all the way to Pluto. This is particularly true because the ARM would require multiple instructions to accomplish the "add" instruction mentioned earlier. That's multiple live (architected) registers and multiple trips down the pipeline. If those aren't allowed to run OoO, you'll need to clock at some multiple of the Intel chip to keep step, and power consumption goes with the square of clock speed.
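An editor's aside, not the poster's: the cost of chasing clock speed can be illustrated with the classic CMOS dynamic-power relation P ≈ C·V²·f, since a higher clock usually also demands a higher supply voltage. All the numbers below are made up purely to show the shape of the curve.

```python
def dynamic_power(c, v, f):
    """Classic CMOS dynamic-power estimate: P = C * V^2 * f."""
    return c * v * v * f

# Hypothetical baseline: unit capacitance, voltage, and clock.
base = dynamic_power(c=1.0, v=1.0, f=1.0)

# Double the clock at the same voltage: power doubles (linear in f).
same_v = dynamic_power(c=1.0, v=1.0, f=2.0)

# Double the clock AND bump the voltage 20% to make timing:
# power grows by 2 * 1.2^2, i.e. much faster than the clock itself.
higher_v = dynamic_power(c=1.0, v=1.2, f=2.0)

print(same_v / base)    # 2.0
print(higher_v / base)  # ~2.88
```

Whether the exponent works out nearer two or three depends on how far the voltage has to climb, but either way the penalty for matching a faster chip's clock is steep.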
"X86s can do almost anything, but most people just want to watch some video, play some music, do a bit of web browsing and messaging. Put a low gate count core alongside some well chosen hardware accelerators and you can get a part that much more efficiently delivers what customers actually want."
Which is great until the world starts using different codecs, which it does every few years. Then you start wondering if it wouldn't have been smarter to spend the same transistor budget on making your general purpose CPU a little faster. Or smarter still to save on the R&D of those units altogether (which you'll have to claw back by selling the final product at a premium) and buy an off-the-shelf solution from Intel.
"No one can argue that x86 instruction set and all the baggage that comes with it is more efficient than ARM given the overwhelming opinion of almost every phone manufacturer out there."
Phones are a very specialised segment. You can get away with a fixed number of codecs, hard-wired, and there's very little other processing to do, so a feeble ARM core is a good design choice. A feeble x86 core would be good too, but Intel simply don't offer one and so we arrive at the present market segmentation for largely historical reasons.
OTOH, for a desktop core, instruction decode is a few percent of chip area these days, so in *that* market, what you describe as "baggage" is actually lost in the noise.
Within living memory, Intel have tried to replace x86 with something they designed to be intrinsically better. It didn't make enough of a difference to be measurable. They've also made ARM chips so if there was anything intrinsically better in *that* ISA, they'd presumably know about it. The evidence suggests that x86 just isn't bad enough to measure, let alone matter, except at the absurdly low end of the market and with "devices" getting more and more powerful each year, that's an end of the market that is disappearing.
In fact, you could say that ARM is moving up-market simply to stay in existence. Perhaps in 5 or 10 years' time we'll look at tiny ARM chips the same way that we look at the 8042 chip. The ARM started as the CPU for a full-blown computer and then found its niche for a decade or so in less powerful products, eventually fading out of existence as even those products evolved to require increasing amounts of processing power.
Or maybe it is the desktop (and the x86) that will be replaced by tablets (with ARMs in them for largely historical reasons).
Silly fanboy nonsense.
> What the market place is clearly showing is that most
> people don't want general purpose computing, at
So people have stopped buying all of those cameras that have video formats that will make an iThing choke? They're giving up Blu-ray and DVD too? Don't think so.
While there are plenty of people willing to buy limited SUPPLEMENTAL devices, there's no real indication yet that people are willing to completely give up some means to deal with whatever content and devices are out there.
Sometimes you want to do something there isn't speciality silicon for. You don't even have to be that geeky to want such a thing either. Apple shills are trying to redefine "geeky" while ignoring Apple's own marketing history.
"But if they are to get close to x86 performance, they'll need out-of-order execution, which will blow your 32000 transistor budget all the way to Pluto."
I don't think bazza was arguing for 32000 transistor CPUs. The point was that the modest transistor budget can be spent on other things: more cores per die, more cache, specialised stuff, whatever. Steve Furber was last seen putting thousands of cores on a die - something that you'd struggle to do with any recent x86 offering.
You can argue about whether any of the above is sensible, but the point is that you get to do it if you haven't had to commit millions of transistors already. And does that flexibility matter? Of course it does: why do you think ARM has been so successful in the embedded space?
And, RISC isn't the be-all, end-all...
...well, at least if you use the, you know, "reduced instruction set complexity" definition, rather than the "load-store" or "all instructions are the same length" definition.
The fastest "RISC" processors nowadays aren't RISC by any definition relating to how complex the instruction set is.
If you're dispatching micro-ops, your ISA is no longer RISC, in my opinion.
But, that's not a bad thing - an instruction that turns into several micro-ops (assuming that instructions are the same length, which is true on ARM (except for Thumb), or that the longer instruction isn't too much longer, which is almost always true on x86) uses less memory than the same task implemented as multiple instructions that translate directly to micro-ops. Using less memory means that the instruction gets loaded into the caches quicker, and it uses less cache (except for micro-op decode cache).
All of this means that you don't need absurdly fast memory bandwidth to get good performance out of a CPU that uses these techniques, and you can use a little less RAM. This is why x86 machines could be fast in real-world use, despite atrocious synthetic memory benchmarks compared to various absurdly expensive RISC workstations.
Fun fact: ARM Cortex-A8 is no longer RISC, by my definition - the ARMv7 ISA includes some multiple load and multiple store instructions that are broken up into individual load/store instructions in the CPU. The micro-op instruction set is still ARM in an ARM CPU that uses micro-ops, and most instructions do still map 1:1 with an internal instruction, but there are a few that are broken up.
(Also, the Thumb decoder dispatches ARM instructions, but I believe every Thumb or Thumb-2 instruction maps 1:1 with an ARM instruction, so it's not really micro-ops, there.)
Ars seems to disagree...
... if you want to come close to Intel's performance, there's no "magic dust" that ARM can sprinkle, and power consumption will go up regardless of ISA.
All of this matches my biased expectations. If ARM want to get performance that even comes close to what Intel has now, they will have to implement everything that Intel has already stuffed into their chips for years, among other things out-of-order execution (isn't the A9 or the A15 out of order already, while the Atom is in order?), and at that point they will lose all their power advantages, regardless of how wonderful their instruction set is.
Many of you questioned the usability of a Mac App store, but if this thing goes through, imagine the possibilities.
Apple is already hard at work forcing developers to go through the App Store. Design awards are only handed out to App Store programs, boxed software will soon disappear from retail. If Apple pushes OSX 10.7 through the App Store, not even Microsoft or Adobe could claim that it would be unsuitable for delivering 'mature' applications.
Say that you are on a MacBook Pro x86, and you buy a new MacBook Air running ARM. Simply log onto the App Store, and all your apps compiled for ARM will download. Because Apple already has all Apps on their servers, a re-compile will not cost them anything. With Castle, presumably even your preferences will carry over.
So yes, if Apple were to do this, any software issues would be non-existent.
That doesn't make any sense...
...unless you assume all software is written by Apple.
Even if Adobe and Microsoft sold their software via the App Store--and the App Store rules were changed to make that even possible--that wouldn't give Apple the ability to re-compile anything.
And anyone who says that "all you have to do is re-compile" is a blithering idiot who knows absolutely nothing about software development--and, yes, I include Mr. Jobs in that. Oh, and has a really short memory too, apparently forgetting how long it took to make the 68K to PPC and PPC to Intel transitions.
So, Apple are going to recompile old versions of their software for no profit? Microsoft and other software companies are going to do the same? Somehow, I doubt it..
No. Assuming Apple do this (and bearing in mind the boost switching Macs to x86 gave to Mac sales, I'd be staggered if they did), they would need to write an emulation layer which would almost certainly cripple performance, regardless of the perceived advantages of ARM.
Long game going on here
Even if it does not make sense in the short term, it is not surprising that Apple is looking seriously at ARM. Not because it is obviously better for the users, but because it is better for Apple.
Apple is still primarily a hardware company, but the ongoing trend in hardware is for more and more of the important hardware to be combined into a single device package. Any company that sticks with Intel is going to end up putting a cosmetic case around the Intel package, just like all its competitors. That makes it hard to be different enough to charge much of a premium for your hardware.
Things are much easier if you license ARM and build your own processors. Even if most of the hard stuff comes from ARM, you still have plenty of input into the design, and you can take that opportunity to make sure your software won't run on 3rd party hardware.
Now I know that Apple are the world leaders in getting people to pay a premium for hardware that is not very dissimilar to everyone else's, but there must always be a risk that they might get landed with a legal ruling unambiguously legalising the hackintosh. The more markets they can move to the iPhone/iPod model, the better from their point of view.
Size and battery life
Two things Apple cares deeply about with regards to portable computers and in which ARM solutions beat x86 hands down.
So long as the ARM chip provides somewhere close to the performance of the ageing C2D chips in the Air line-up this is a bit of an obvious play, with the App store taking care of the relevant compiled binaries being delivered for differing architectures as others have argued.
Not ANY time soon
Even with 64bit arm chips (next year), including quad cores and higher end GPUs capable of handling what people expect from Apple (iLife), Apple is not in the business of compromising the performance of their machines to fit a niche market. ARM doesn't support TB, doesn't support Display Port, doesn't have a SATA interface, and so much more.
Can they make OS X run on ARM? Yeah, iOS is a port of OS X itself... Can they do it in a compelling form factor, with compelling performance, comparable to their MacBook base model or Air? Well, since the price offset between the ARM and the i3 is about $25, and they might remove 10-20% of the battery in the process, saving maybe an additional $10, I don't think they could reasonably shave more than $100 total off the price of the machine, and it would fall far short of full MacBook performance. This might be usable for a dual-boot tablet sometime in late 2012/mid-2013, giving some limited access to basic OS X apps, but again, it might ride that price up enough to not be relevant, especially if still limited to tablet resolutions on the screen. At the same time, MacBook prices are falling, and it's reasonable we could see a $700 MacBook by that point, if not less.
Yeah, they're pursuing it; Apple keeps their options open. We found out at the Intel launch that they had from DAY ONE built and compiled every single piece of code on at least 3 platforms (Power, Intel, AMD, and a hint there were others too), so going strong on ARM is a given, but will it result in a product? Not on the current ARM architecture. When ARM is significantly more powerful than Atom, and can still be cheaper and more power efficient, it might replace the lowest end machines, but only if Xcode can cross-compile ANY app with a few clicks; otherwise those using ARM OS X may have a dramatic disadvantage in software availability, and emulating x64 on ARM64 just isn't going to work well.
Re - not anytime soon
.... or apple could make sure they have enough ARM cores and encourage developers to use them, maybe dedicating them to specific tasks. They could be driving change here.
Increased battery life is always useful, particularly if your competitors don't have it.
Running cooler means less energy wasted getting rid of waste heat, and therefore longer battery life.
Any cash saved could be put towards some flash memory (increasing access speed and economising on power). If you still need an HD, it could be switched off, as it would be a lower level of storage than the flash memory.
This sounds to me like quite an attractive machine.
"doesn't support TB, doesn't support <pointless list>"
"ARM doesn't support TB, doesn't support Display Port, doesn't have a SATA interface, and so much more."
Er, you do realise that ARM defines a chip architecture and that ARM licensees are free to put whatever peripheral frippery they wish alongside it, don't you? And ARM licensees have been doing exactly that for such a long time that Intel are unlikely to EVER catch up in the low-wattage market sector with any x86-based product, neither on performance (speed, wattage) nor price (except that Intel can loss-lead on price, though that on its own may not particularly hurt Intel).
"it might ride that price up enough to not be relevant"
This is Apple. If Apple build it, the punters will bow down and buy it, even if it sucks.
No need for an emulator. The App Store model means that the correct binary will be delivered to the machine. You'll be able to switch from x86 to ARM machine and when it connects it will download all the same apps you had bought from the Apple App store previously onto your new ARM machine. That is the great thing about the App Store, it removes the requirement for binary compatibility. By then most apps will be coming from the App Store. For those that bought outside the App Store though you're SOL probably!
I don't expect Microsoft Office and Adobe to deliver anything usable on non X86 chips.
I have a laptop for Adobe Lightroom and Microsoft Office, and an iPad for mail, calendar, contacts, notes and the occasional game.
App store Schmapps store.
You don't need a poor copy of apt-get to manage different hardware architectures for the same app. You can simply package them together and let the installer logic sort things out. Or you could even use fat binaries, but that wastes a lot of space on devices that don't really have any to spare. Either way, it's not a terribly difficult problem.
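For what it's worth, the "let the installer logic sort things out" approach is simple enough to sketch. Everything below - the file names and the mapping - is hypothetical, purely to show the dispatch step:

```python
import platform

# Hypothetical mapping from the host machine type to the bundled binary.
# A real package would ship all of these side by side and install one.
BUNDLED_BINARIES = {
    "x86_64": "myapp.x86_64",
    "arm64": "myapp.arm64",    # what macOS reports
    "aarch64": "myapp.arm64",  # what Linux reports for the same silicon
}

def pick_binary(machine=None):
    """Return the bundled binary matching the host architecture."""
    machine = machine or platform.machine()
    try:
        return BUNDLED_BINARIES[machine]
    except KeyError:
        raise RuntimeError(f"no binary bundled for {machine}")

print(pick_binary("x86_64"))  # myapp.x86_64
```

The fat-binary alternative just moves the same lookup from install time to load time, at the cost of shipping every architecture's code to every device.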
One application, many architectures
I have come up with a way to make Ubuntu / Debian .deb packages which are installable on multiple architectures with minimal bloat. (The technique probably could be modified for .rpm packages too, but my familiarity is with Debian.)
In order to make a convincing Declaration of Prior Art and so prevent Apple or anyone else from patenting this, let's just say that it relies on using postinst in a rather *ahem* creative way -- but one that should, nonetheless, be blindingly obvious to anyone who understands the gory details of .deb packages.
Is there a problem with x86?
I'm pretty sure this is the whole hardware industry not wanting to be at the mercy of Intel. It doesn't really have much to do with the chips being good or bad.
With ARM being an IP company licencing designs it means each computer manufacturer can build integrated systems around an ARM design.
Okay, x86 is a mess of legacy backward compatibility but it has worked until now.
Yes, there is.
X86 is shit, has always been shit, and the computing industry as a whole has taken a huge step back by sticking to the damned architecture. PPC, ARM, and basically every single RISC arch outperformed x86 by a lot; Intel then began jacking up the CPU clock till it matched the other processors. Of course, this means that current x86s suck a lot of juice, and run really hot compared to their RISC brethren.
The computing industry is now running stuff on the computer equivalent of a VW Beetle running on aviation fuel. Sure, it can run real fast, but it's taxing on the engine!
A new architecture?
I'll believe it when I see it, but I do not believe there are ARM chips out there that quite match Intel's, apart from the Atoms.
I would think ARM would need a better reference design (probably 64 bit, probably out of order, probably multi-core/threaded) to compete with intel.
Doesn't POWER already have all this?
Is it easier to get something like a POWER or a G5 design adapted for whatever Apple have in mind than scaling up an ARM chip?
AFAIK no out-of-order ARMs exist yet... So it'll have to be a totally new chip anyway.
Why not bring back PowerPCs? Didn't PA Semi work on PowerPCs before too?
IBM stays the hell out of end user market
Of course it's not that IBM couldn't deliver a 3GHz G5 (check POWER5+ speeds); it's simply that IBM didn't care about end-user things. Their focus is their corporate systems.
Consoles are a really different business, but you can check how easy it was for MS to steer their own chip (similar to the 970) to excellent performance for the Xbox 360. Or Sony with the Cell.
The POWER ISA (much like ARM) is currently focused on enterprise servers, advanced scientific computing and game consoles. I think IBM learned their lesson with the G5 and figured they could never compete with the x86 idiocy on desktop/laptop. I mean, ship a light POWER7 running at 5GHz to market now and people/media will still talk about the buggy Sandy Bridge.
For similar reasons, ARM will have problems on the desktop too. Some people think Adobe will ship a Photoshop CS5 with just a recompile. Well, we (PowerPC users) learned that it doesn't work that way.
just need to match perf on laptops
the ARM solutions only have to match the performance of the Intel based laptops, not all of Intel's CPUs. With ARM chips already hitting 1.5GHz, the current die shrink methods suggest they'll hit 2GHz in no time. Pair that with how small and power efficient multi-core ARM chips are, as opposed to the best Intel can do, and you should see why Apple might go this route.
And if I were Apple, and had seen demos of what these multi-core high speed chips can do, and had tablet hardware I wanted to merge with the desktop over a few years, then this is a no-brainer. ARM is already scaling up, and it's pretty obvious that the future of CPU design is the multi-core method; it makes sense and cents. I'm not much of a fan of Apple, but they don't seem to be on the wrong path for profits and efficiency very often. Add their App Store for desktops and laptops and you also have a way to get ARM or x86 based apps to customers as needed, without the customer knowing which one they need, since the App Store infrastructure will figure that out and deliver the correct version for your hardware.
I also recall recent news of Apple teaming with Intel to build their ARM chips instead of Samsung doing it. If that means using Intel's new processes (32nm, 22nm or even 3D 22nm) then there are lots of things the faster, smaller ARM chips will do.
It would be great to finally start to see ARM laptops as long as their boot systems are not locked to the OS. I also wonder how much Google ChromeOS has to do with all this since it could be what's triggered Microsoft Windows for ARM and that's triggered all the new interest in ARM in PC sized devices.
Fanboys arguing against an outdated view of the opposition.
> the ARM solutions only have to match the performance of the Intel based laptops,
Which includes Sandy Bridge.
Intel doesn't exactly stand still either.
ARM can't even match Atom, and that's the stuff that other PC users snicker at. ARM has a very narrow area of appeal. Beyond that, it has no hope of competing against the PC on its own terms.
Tablets are slow
Tablets are wonderful and amazing (I have an iPad) but come on, fast and powerful they are not. Compare a tablet objectively with any laptop made in the last 4-5 years and it's no contest.
Don't be fooled by GHz--2GHz for an ARM is like 1GHz for Intel (non-Atom). Those 2GHz you're hoping for from ARM will get them to the point of being slightly faster than a basic Atom netbook.
Much too slow
Look at the benchmarks. Per MHz, per core, ARM is ~1/2 as fast as a Core 2 at integer workloads and ~1/5 as fast at floating point.
Remember that many people consider the performance of the 11.6" MacBook Air (1.4GHz Core 2 Duo) unacceptable. Switching to ARM would make it less than half as fast. How would that work?
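Spelling out the arithmetic behind that claim (an editor's worked example; the per-MHz ratios are the ones quoted above, the clock is the Air's):

```python
# Per-MHz ratios quoted in the post above (ARM relative to Core 2).
int_ratio = 0.5   # integer workloads
fp_ratio = 0.2    # floating point

clock_mhz = 1400  # the 11.6" Air's 1.4 GHz Core 2 Duo

# A hypothetical ARM at the same clock would deliver the equivalent of:
print(int_ratio * clock_mhz)  # 700 "Core 2 MHz" of integer grunt
print(fp_ratio * clock_mhz)   # 280 for floating point
```

Hence "less than half as fast": even integer work drops to half speed at the same clock, and anything floating-point-heavy fares far worse.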
Intel already fought the CISC vs. RISC war 15 years ago and won. There is nothing magical about ARM, it's just 20 year old processor technology manufactured with a modern process. If they want competitive performance with Intel they will have to increase transistor count and power consumption to the point where there's no difference between ARM and Intel. Just like IBM, DEC, HP, SGI, etc. did a decade ago.
Not saying anything bad about ARM, it's great to have in a phone or a tablet, but why would you want one in your desktop or laptop?
Actually, I've rarely heard people complain about the speed of a MacBook Air. They do rave about battery life, weight and portability. And, because it's so light, they can also take an iPad along for browsing and e-mail, for which the iPad is powerful enough. These customers generally don't need to worry about the price of devices.
But when it comes to power: nVidia have publicly announced that they expect to match x86 chips for performance with their summer releases. They have fabs and can offer GPU integration for SoCs that will definitely outperform Intel's own SoCs. Even adding a hardware x86 emulator to the chip, so that existing apps will continue to run, isn't a problem, because the ARM designs excel at hardware specialisation, and this is where most of the power and performance gains against Intel's silicon can be made.
64-bit, apart from addressing more than 4GB of RAM, is a red herring for consumer devices. Again, it is the hardware extensions that will make things zip along, and Apple already supports off-loading calculation intensive tasks to nVidia's CUDA architecture. ARM also makes multi-core more interesting: multi-task programs on different cores running at different speeds. An Apple with its own chip designers can probably contribute some expertise here, which would mean easier to assemble systems - Apple TVs with a screen, big batteries and profit margins.
Apple now has several years' experience of cross-compiling the OS X core and applications (mail, browser, etc.) across x86 and ARM but as there have been no indications of ARM builds of Lion I guess we are unlikely to see a "Mac" branded product using ARM chips this year. However, we may well see an ipad pro or an ARM-based ibook for people who like the ipad but want to be able to do a little bit more than word-processing on it.
Apple will still want to segment the market so that any device it releases does not cannibalise the still very successful notebook line until it feels it has the chippery for a full migration. Though downward pressure on tablet pricing should help here.
It's not the speed...
It's the speed per watt.
Wikipedia tells me that the most efficient C2D chews up 10W at 1.6 GHz clock. Google tells me that a dual-core A9 can offer a 1.6 GHz clock with a 2W power draw. Make that a quad-core A9 and you have something with roughly the same integer performance as a C2D and less than half the power requirement.
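Running the quoted figures through (the wattages are the ones cited above; the naive doubling for four cores is my assumption, and real quad-core parts won't scale quite that cleanly):

```python
c2d_watts = 10.0                    # dual-core Core 2 at 1.6 GHz, per the post
a9_dual_watts = 2.0                 # dual-core Cortex-A9 at 1.6 GHz, per the post
a9_quad_watts = 2 * a9_dual_watts   # naive doubling for a quad-core part

print(a9_quad_watts)                # 4.0 watts
print(a9_quad_watts / c2d_watts)    # 0.4, i.e. less than half the C2D's draw
```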
As long as the end result is snappy enough for the user, you can build a computer that's lighter, smaller, and runs longer. If you can keep that ARM cool without a fan, you can save even more space and power.
Quad core != twice as fast as dual core
"Make that a quad-core A9 and you have something with roughly the same integer performance as a C2D"
Yes, when running a quad core benchmark. But not when doing most user-facing computer tasks (single threaded). I'm sure you know that if you sat down in front of a 3GHz dual core it would seem much faster than a 2GHz quad core unless you were doing something like video encoding or 3D rendering etc.
ARM has a long way to go to catch up to the perceived performance of an Intel-based laptop, and by the time they get there their chips will be big and hot too.
Better check the Top 500 supercomputers list sometime. Or check a real, big enterprise and see what they use on their servers.
ARM, and RISC architectures in general, are 10x bigger than Intel by, for example, CPUs per person. Computing is way beyond laptops/desktops now, and in the new way of doing things it is just ./configure && make to move from one arch to another. That is why Intel is in panic mode and why such crazy rumours can appear: the "Win" in the Wintel scheme sold them out.
I've got news for you too: check the very inner workings of the chips you assume are CISC and 'won', and you will be surprised. There isn't pure CISC or RISC anymore. Things have become much more hybrid (including kernels, especially Darwin).
Not dissing the arm...
I hope to see the ARM grow...
But like I said earlier, there's probably an architecture already mature enough for this market that perhaps just needs to be adapted/refreshed a little to fit in... POWER/ppc.
IBM has to keep this line up to date eventually (I'm sure its console sales are not small). Bit of pressure on them, perhaps?
@AC 18:49: "Look at the benchmarks. " (etc)
"Look at the benchmarks. "
Once you give people some specific examples of specific benchmarks on specific ARM and x86 implementations, we can have that discussion.
"Intel already fought the CISC vs. RISC war 15 years ago and won"
RISC didn't exactly lose the technical battle although it lost the marketing war. Inside any modern x86 is a RISC-like core being used for executing an x86 instruction set.
"old processor technology manufactured with a modern process."
That (leading edge process technology building a trailing edge architecture CPU) is a perfect description of Intel's recipe for success (to date).
ARM do not build processors. ARM licensees do, in whatever process technology fits the job.
There are efficiency features in the basic ARM architecture which x86 cannot incorporate without abandoning basic x86-compatibility (things like code predication for smaller faster code).
"why would you want one in your desktop or laptop?"
Because sensible people have no inherent interest in Windows/x86 and the baggage it now brings, only in what they can do with it (or an acceptable substitute). As energy prices go up, we have a significant interest in boxes which are low cost to buy and low cost to operate, and (where applicable) have long battery life. If I can cut these lifetime costs by (say) 30% over three to five years, I'll happily sacrifice the rarely-relevant performance advantage which modern x86 boxes admittedly have; where high end x86 performance (and maybe >>32 bit addressing) is needed, there'll still be Dell.
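The lifetime-cost argument can be made concrete with a back-of-the-envelope calculation. All the figures below (wattages, usage hours, electricity price) are illustrative assumptions, not measurements:

```python
def lifetime_energy_cost(avg_watts, hours_per_day, years, price_per_kwh):
    """Electricity cost of running a machine over its service life."""
    kwh = avg_watts * hours_per_day * 365 * years / 1000.0
    return kwh * price_per_kwh

# Assumed figures: a 60W x86 desktop vs an 18W ARM box,
# 8 hours a day for 4 years at £0.15/kWh.
x86 = lifetime_energy_cost(60, 8, 4, 0.15)
arm = lifetime_energy_cost(18, 8, 4, 0.15)
print(f"x86: £{x86:.2f}, ARM: £{arm:.2f}, saved: £{x86 - arm:.2f}")
```

Under these assumptions the ARM box saves roughly 70% of the electricity bill; whether that matters depends entirely on how many boxes you run and what electricity costs you.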
So, let's see those benchmarks.
It is amazing to me that technically inclined people like the readership of El Reg can have spirited debates about processors without knowing anything quantitative about their relative performance other than what clock speeds they run at, which is almost worthless.
Personally I develop processor intensive software and I can't tell you much about it without revealing the identity of my employer, but it runs twice as fast per MHz on a Core 2 vs. an iPhone 4 (integer) and five times faster (floating point).
I'm sure you will cry foul on these numbers but I'm confident that they will be confirmed by other benchmarks--right off the bat a Google search reveals "Geekmark" and "EEMBC CoreMark" which both show Intel having at least a 60% integer advantage per MHz per core vs. ARM and it looks like usually more.
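A per-MHz comparison of the kind quoted above is just a benchmark score divided by the clock. A minimal sketch of the normalisation (the scores here are made-up placeholders chosen to reproduce the ~60% figure, not real Geekmark or CoreMark results):

```python
def per_mhz(score, clock_mhz):
    """Normalise a benchmark score by clock speed so different
    parts can be compared clock-for-clock."""
    return score / clock_mhz

# Hypothetical single-core integer scores (NOT real measurements).
core2 = per_mhz(score=3840, clock_mhz=2400)  # 1.6 points/MHz
arm = per_mhz(score=800, clock_mhz=800)      # 1.0 points/MHz
advantage = core2 / arm - 1                  # fractional per-MHz advantage
print(f"per-MHz advantage: {advantage:.0%}")
```

Note that per-MHz numbers deliberately ignore clock ceilings, power and price, which is why they only tell part of the story.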
As for Intel having "trailing edge architecture CPU" I think you need to reevaluate--you are right that the instruction set is relatively stupid, but you are also right that since the Pentium Pro that has been [almost] irrelevant because the x86 instructions get translated to something else internally and executed like any other modern RISC core. And Intel has been as competitive as anybody at making those RISC-like cores, doing stuff like trace caches and microop fusion and SMT etc. etc.
As for energy prices etc., my quad core desktop computer with monitor on and CPU maxed out only needs as much power as a bright incandescent light bulb (~100W). If anything I am impressed with how little power it requires and wish it was somewhat faster--so if you told me I could reduce power consumption by making it 50% slower, well, that's the opposite of what I want.
The iPhone4 uses the now ancient Cortex-A8, so it is not surprising it is slower than Core 2. The Cortex-A9 is significantly faster, especially on floating point. So if you benchmark your app on the latest ARM CPU, you'll find the difference is now much smaller. Note also that much of the difference in Geekmark is due to memory bandwidth - mobile phones typically have a fraction of the bandwidth found in PCs. In a larger form factor one can use the same memory system.
You're quite right that modern out of order CPUs all work similarly (although not at all like RISC CPUs). However, despite claims to the contrary, the x86 ISA still carries a considerable penalty. One can fit several Cortex-A9s in the same space as an Atom core, and each of them runs faster while using far less power. If there were no cost to x86 at all, then surely Intel, with all its technical expertise and the best process technology, could trivially beat the Cortex-A9 on power, performance and die area?
ARM doesn't need to outperform high-end PCs to make sense in laptops and desktops. It just needs to be fast enough for most people (which might not include you) while showing a significant cost and power reduction. And I would say that we have already reached that state with the A9. I've seen Windows 8 run on dual core Cortex-A9's - it runs pretty well. And Cortex-A15 is due out soon...
Didn't realise ARM had got that fast
"[benchmark] runs twice as fast per MHz on a Core 2 vs. an iPhone 4 (integer) and five times faster (floating point)."
To be honest that's better than I expected. Now compare size, transistor count, onboard cache and power draw in your benchmark. How long would the battery last for an iphone 4 that used a Core 2?
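That last question can be given an order-of-magnitude answer. The battery capacity and power-draw figures below are rough public ballpark values (an iPhone 4 battery is around 5.25Wh; a Core 2 TDP is in the tens of watts), so treat this as an estimate, not a measurement:

```python
def battery_minutes(battery_wh, draw_watts):
    """Runtime in minutes for a battery of the given capacity
    discharged at a constant power draw."""
    return battery_wh / draw_watts * 60

IPHONE4_BATTERY_WH = 5.25  # approximate iPhone 4 battery capacity

arm_soc = battery_minutes(IPHONE4_BATTERY_WH, 0.5)  # ARM SoC, ~0.5W assumed
core2 = battery_minutes(IPHONE4_BATTERY_WH, 25)     # Core 2, ~25W assumed

print(f"ARM SoC: ~{arm_soc:.0f} min, Core 2: ~{core2:.0f} min")
```

On those assumptions a Core 2 would flatten an iPhone 4 battery in roughly a quarter of an hour, which is the whole point of the comparison.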
That the ARM architecture is suitable for HPC is confirmed by the interest in nVidia's Fermi line.
The A9 is not that much faster than the "ancient" A8. Even ARM's web site puts it at only 25% more DMIPS/MHz. I have tested on an iPad 2 and for my software, per core, it only seems to be a couple percent faster than an iPad 1. Please tell us how you're evaluating performance?
Also, yes, there is an ISA penalty at the Atom level and the Cortex-A9 is better in basically all respects than an Atom. But at the high end, notice that Intel has LONG had similar if not better performance (and usually simultaneously smaller die size and lower power consumption) vs. all high end RISC chips of the same era, e.g., UltraSPARC, MIPS, HP-PA, Alpha, POWER, etc.
"There are efficiency features in the basic ARM architecture which x86 cannot incorporate without abandoning basic x86-compatibility (things like code predication for smaller faster code)."
The jury's still out on predication, at least for general instructions. (It's clearly a winner for data movement, but x86 had CMOV about 20 years ago.) If AMD had wanted to add it as a general feature for x64, they could have defined a range of prefix bytes. (They probably still could. It's not like the x86 instruction has stood still in recent years.)
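For what it's worth, the effect of predication or CMOV can be mimicked in any language by computing both arms and selecting one without a branch. A toy sketch of the idea (plain Python, not ARM assembly):

```python
def select_branch(cond, a, b):
    """Ordinary branching select: the CPU must predict the branch."""
    if cond:
        return a
    return b

def select_predicated(cond, a, b):
    """Branch-free integer select, in the spirit of ARM predication
    or x86 CMOV: both values are computed, a mask picks one."""
    mask = -int(bool(cond))          # all-ones if cond is true, else zero
    return (a & mask) | (b & ~mask)

assert select_predicated(True, 7, 9) == select_branch(True, 7, 9) == 7
assert select_predicated(False, 7, 9) == select_branch(False, 7, 9) == 9
```

The branch-free form trades a possible misprediction for always doing both computations, which is exactly the trade-off that makes the jury's verdict on general predication so workload-dependent.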
ARM also has basic inefficiency features, like fixed-size instructions, which mean the transistors (and power) you saved on instruction decode are replaced by the transistors (and power) you need for the larger instruction cache. Fortunately for ARM, the ISA doesn't actually matter anymore.
Not to mention you have to look at this as a natural progression of the computer: originally computers were huge and filled rooms, and now you can hold one in your hand.
As computers evolve they become more mobile and severed from the mains.
Using ARM can double the battery life of a mobile device compared to x86.
Can you imagine a world where you could only listen to music at home on a hi-fi? Probably not, so why limit yourself to only using a computer in one place?
Obviously they want 30% of everything that ever goes on it.
Users get the GPS and battery life and Apple get 30% of your Apple lifetime.
What about Bootcamp? And games? And business software?
What about the folks who went along with Apple notebooks because, at the end of the day, they could run their games natively under Windows? Or run their business apps? I know one Sharepoint consultant who was doing just that - he claims his MBP is the best Windows notebook he's ever had.
What happens when you tell them that, no, no more Wintel? No, don't tell me Windows 8 will run on ARM, because that doesn't tell me that older programs, like Total War or Sharepoint-related crud, will run on ARM+Windows 8.
Methinks this rumour should have been reported on 35 days ago.
Let me tell you a secret
One of the reasons for the Mac sales boom after the Intel switch is the ability to run Windows, either booted natively (gamers) or in a virtual machine (engineers, business). Companies also liked the possibility of re-using existing code and easily optimising for a single arch.
Windows 8 will run on ARM, but don't forget that the Windows scene is a closed-source haven, so you must wait for all the companies to release and optimise for ARM. It is not Linux or BSD.
Price matters as well
As someone who switched to a Mac when the x86 ones came out, I agree with you up to a point: the ability to run Windows in a virtual machine was important to me. However, more important was that the price premium of a MacBook over a comparable Windows notebook was around €1,000 less than in the PowerPC world; still more expensive, but "acceptably" so.
Today I saw the first advert for an Android-based notebook for less than €100. Tiny and unergonomic though it may be, this and other such devices will start setting price expectations for notebooks, with people happier to settle for Android-based systems which remind them of their phones than they were with the Linux-based netbooks. Apple and Microsoft will, at some point, have to respond to this market.
I'm currently very happy with my 13" MacBook Pro, due for replacement in summer 2012. Will be interesting to see what happens between now and then.
Google sure helps Linux market share
The Google marketing machine sure helped Linux expand out of its traditional behind-the-scenes role towards media-consumption devices (smartphones, tablets, and part of the netbook market), and that certainly is a plus for Linux. However, I fail to understand what you mean by "people happier to settle for Android based systems [...] they were with the Linux based netbooks."
"twice as fast per MHz "
What the **** has "per MHz" got to do with it? Or "per anything"? Why not "per dollar" or "per watt"? Who (other than ubergeeks) needs all that speed anyway?
"Core 2 vs. an iPhone 4"
Now that's more like it, real products (albeit with not much detail). But again, exactly who needs all that speed anyway?
"my quad core desktop computer with monitor on and CPU maxed out only needs as much power as a bright incandescent light bulb"
You mean, of course, a now prohibited in the EU incandescent light bulb :) Now stop thinking geek and start thinking energy getting much more expensive and corporates wondering why they're using so much electricity on PCs sitting idle most of the time, and then using even more electricity for aircon to dispose of the heat from those PCs. Now stop thinking geek and wonder why the main constraint in *most* folks desktop PCs is not CPU but disk, and typically then only while Windows is running an AV scan. Not everyone spends a significant amount of time ripping BluRays or whatever, but for those that do, there's x86.
My favourite low level benchmark is CoreMark. No ARM at the top end, almost all x86. So what. The available performance on ARM is more than enough for the vast majority of people the vast majority of the time. The rest can look elsewhere, especially once even MS have accepted that Windows/x86 is no longer the only option in the game.
A vibrant Mac App store is key
With over a year to go until this possible transition, Apple and its ISVs have enough time to recompile their apps for the ARM instruction set. I agree with those who have mentioned the importance of a Mac App Store. Assuming Apple has trained its users and its ISVs that the Mac App Store is the primary application distribution conduit, then a transition from Intel to ARM for some of its lower-performance, cheaper laptops would be largely painless for its users. I've written more about it here: http://nickager.com/blog/Mac-app-store-enables-future-ISA-switch/
... unlikely that I follow this link.
First I don't like click-baiting in general,
Second, your writing style. You're losing me with your writing issues. It's worse than I can take.
Third, I think that you're wrong. The Apple App Store has nothing to do with a possible architecture switch. There was no such store for Apple's previous 5 architecture switches, and it didn't seem to be a problem. Plus, I don't see any major software vendor giving in to Apple's ridiculous App Store policies without a serious fight: think Adobe and Photoshop, for example. The vendors of niche / expensive software (MathWorks with Matlab, Wolfram with Mathematica, etc.) are going to be even harder to bring on board. A walled app store taking a 30% cut might be viable for small low-cost, high-volume apps, but not for low-volume, high-value ones.
depends on what the os is doing
Coz my lovely new SheevaPlug runs:
tor, nfs, squid, iptables and dansguardian, yet it performs as well as the 64bit machine it replaced.
I'll add more services when I see fit
18:07:13 up 1 day, 31 min, 1 user, load average: 0.35, 0.21, 0.22
For sure, I wouldn't be number crunching on it, but the point is, not every task NEEDS floating point ops.
I'm not a Brit, but I definitely dig ARM, as battery life is critical for me, not floating point operations!
Genuinely, who does such things on a laptop?