Surprise! Intel smartphone trounces ARM in power trials

The industry analysts at ABI Research pitted a Lenovo smartphone based on Intel's Atom-based Clover Trail+ platform against a quartet of ARM-based systems, and Chipzilla's system not only kept pace with the best of them, but did so using less power. "The benchmarks were impressive but the real surprise was the current …

COMMENTS

This topic is closed for new posts.


Bronze badge

first AMD now Intel

Those people that hate x86 must be fuming. First AMD dramatically extended the life of x86 with x86-64 a decade or so ago (almost single-handedly quashing Itanium along the way), and now Intel seems to be extending it even further with this stuff.

Interesting to see, assuming the tests were balanced.

7
3
Silver badge
Boffin

True

Indeed, x86 should've died long ago. Though these tests might have had some "special sauce" tweaking so that the Intel chips would show up as better, given the "ABI Research provided no details on the content and construction of their benchmarks" part.

17
5
Silver badge

Re: first AMD now Intel

If true, then Intel succeeded big time in spite of x86, not because of it. It would be hard to come up with a worse instruction set for milliwatt mobile computing than x86. Intel had to throw in a whole lot of money, engineering talent, and, most important of all, a generation's lead in fab technology to compete. That is assuming this is all true in general usage. A big question not answered by this article is how much more the Intel part costs than the multiply sourced, mass-produced ARM parts.

17
2
h3
Bronze badge

Re: True

The Orange San Diego seems to have better battery life than any Android smartphone I have seen / used.

(Can get 3 days with light use out of it.)

These results are not at all surprising. If Intel starts building Android with their superior compiler then they will get significant improvements (or gets the optimized functions from eglibc / uclibc). It would be much harder for ARM to do the same thing with RealView.

4
2
Silver badge

The waste of talent

"If true then Intel succeed in spite big time of x86"

Too true. Imagine what amazing chips Intel could make if they took all that engineering resource and put it into making their flavour of an ARM chip? The x86 must be giving them a 2x penalty. If they used a reasonable architecture they could knock other players out of the water.

Intel: why, oh why, did you sell off PXA to Marvell?

2
1
Silver badge
Devil

Re: first AMD now Intel

How did they measure the results? Internal phone "power draw" measurement as used in Android for the "what is using my battery" stats? That is waaaaaaay buggy and off.

I will believe this once I see the battery taken out, a current meter inserted, and the current measured and recorded - with pictures demonstrating how they did it, as some of the devices in question have a soldered battery. While at it, all phones should run the same Cyanogen build to ensure that it is a CPU benchmark and not a "how much bloatware did I stick in the build" benchmark.

In any case, we can expect major suckage in a few years' time. Not to worry. This Intel phone has a proper Imagination Tech GPU. Watch the show when it gets an Intel one.

11
2
Silver badge

Re: The waste of talent

But I doubt that's true - if they'd be better off making an ARM or even a brand-new CPU type, why wouldn't they do that? Why wouldn't they take the chance to "knock other players out of the water"? Do you know better than the people making the decisions at Intel?

The advantage of x86 compatibility isn't that much for mobile (there's no Windows compatibility to make use of, and in fact x86 rather than ARM harms them due to lesser compatibility for native Android software). And I don't think it would help give them an advantage either against ARM (since even if x86 does well in mobile, Android will still support ARM for a long time yet).

No, the reason they do x86 is likely because that's what they do well - their engineers, their software, their manufacturing, is all geared up for it, and you can't just magically turn that into "making an ARM chip". This situation comes up all the time, I know it does in software - "If only I wrote this old stuff from scratch, I'd do a much better job", but the problem is the effort in doing something new is more than simply improving or even hacking the old stuff.

4
3
Bronze badge
Linux

Re: first AMD now Intel

Surprised?

The real world is about "Practical" and "Pragmatic", with a dash of luck and sense of opportunity.

0
0
Gold badge

Re: The waste of talent

"The x86 must be giving them a 2x penalty."

Uh? It's 2013! Out of order execution and micro-ops mean that the ISA's only impact on performance is instruction decode, and instruction decode is about 1% of die area. On desktop-sized chips, ISA hasn't been relevant since the last century. It would be fair to assume that it hasn't mattered on mobile-sized chips for quite a few years either.

What matters is where you choose to invest your development budget. Intel are now putting theirs into mobile.

7
0
Anonymous Coward

Re: The waste of talent

" the ISA's only impact on performance is instruction decode, and instruction decode is about 1% of die area"

Careful Ken. You don't think the code density of the ISA has any impact?

I don't know enough about x86 to comment about x86 vs ARM in this respect, but when comparing ARM vs classic RISC, ARM code tends to be smaller. This means more of the application fits in ARM cache, and you get more performance per MB/s of instruction memory (main or cache) bandwidth. And for a given memory size (ie cost) you can fit more "stuff" in on ARM. More performance per MB/s means a slower clock on ARM gets the same performance as a faster clock on a classic RISC, which in turn may mean cheaper batteries or longer battery life or...

The reasons ARM code is smaller include the Thumb instruction subset and the general predicated instruction capability.
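A back-of-the-envelope sketch of the density argument (the instruction count and cache size are hypothetical; the encodings reflect Thumb's mostly 16-bit instructions versus a classic RISC's fixed 32-bit words):

```python
# Hypothetical program of 12,000 instructions; sizes are illustrative only.
N_INSTRUCTIONS = 12_000
THUMB_BYTES_PER_INSN = 2   # most Thumb encodings are 16-bit
RISC_BYTES_PER_INSN = 4    # classic RISC fixed 32-bit encoding
ICACHE_BYTES = 32 * 1024   # a common 32 KB L1 instruction cache

thumb_size = N_INSTRUCTIONS * THUMB_BYTES_PER_INSN   # 24,000 bytes
risc_size = N_INSTRUCTIONS * RISC_BYTES_PER_INSN     # 48,000 bytes

print(thumb_size <= ICACHE_BYTES)  # True: fits in cache
print(risc_size <= ICACHE_BYTES)   # False: spills, costing memory bandwidth
```

Same program, half the bytes: the denser encoding stays resident where the fixed-width one thrashes.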

ISA *does* matter in cost/size/power-constrained embedded systems, and it may well matter more than you seem to think in mobile phones.

The Coremark benchmark (sourcecode freely available) might shed some light on stuff like this. Couldn't see any recent Intel results last time I looked. Anyone else seen any?

"What matters is where you choose to invest your development budget. Intel are now putting theirs into mobile."

Intel have repeatedly put their development budget outside the world of x86 IT for many years, and repeatedly failed. Maybe they'll find a winner this time; after all, it is closer to their traditional x86 comfort zone than (say) iAPX 432, i860, I2O, IA-64, embedded graphics, WiMAX (add your own to the list of Intel's non-x86 flops).

4
0
FAIL

Re: The waste of talent

Maybe because they were incapable of doing it.

Go on, google the performance issues of Intel's StrongARM replacement - the first XScales - and the other issues they couldn't fix with the later revisions.

As far as that table is concerned, it's meaningless without the test schedules, so that someone else can independently verify the results.

Besides, Intel themselves have said that they can't beat ARM with current fabrication processes, just get close.

2
0
Thumb Up

Re: The waste of talent

For Coremark the most recent certified scores are 9.36 Coremark/MHz for a dual A15 and 6.61 for a dual core / 4 thread Atom N2800.

In addition to what you said, check the Atom die size: http://chip-architect.com/news/2013_core_sizes_768.jpg It seems to me Atom is a bit more than 1% larger than A15.

That's the "ISA doesn't matter" myth debunked once and for all.
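Taking the certified scores quoted above at face value, the per-clock gap is easy to put a number on:

```python
# Certified Coremark/MHz scores quoted above (dual-core A15 vs dual-core Atom N2800)
a15_per_mhz = 9.36
atom_per_mhz = 6.61

# Per-clock advantage of the A15 on this benchmark
ratio = a15_per_mhz / atom_per_mhz
print(f"A15 does ~{ratio:.2f}x the work per MHz")  # ~1.42x
```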

1
0

Re: first AMD now Intel

If I'm correct, Intel is not only a generation ahead in the incremental sense of the word, but has a fabrication technology that allows transistors to scale down power consumption linearly (I don't remember the exact details and source; I wonder if it was on The Reg). This is the big reason behind this achievement, and it makes concepts such as big.LITTLE unnecessary. Which keeps me wondering: what would ARM processors be like with that production process? I guess even better. So unless the competition fails to replicate that process, it's just a matter of time before ARM reigns supreme again.

1
0
Silver badge

Re: first AMD now Intel

>How did they measure the results? Internal phone "power draw" measurement as used in Android for the "what is using my battery" stats? That is waaaaaaay buggy and off.

>I will believe this once I see the battery taken out, current meter inserted, the current measured and recorded.

You want multimeter readings? From six months ago:

http://www.tomshardware.com/reviews/atom-z2760-power-consumption-arm,3387-5.html

"...tore down tablets and identified critical points where microsoldering leads to a fancy version of a Fluke multimeter yields power consumption data for specific SoC and platform subsystems.

"Our own benchmark data, extrapolated, is consistent with Intel's. At idle, Nvidia's Tegra 3 imposes similar draw as the Atom. But as workloads become more demanding, Intel's lead increases.

"I encourage you to do the same arithmetic we just did when it comes time to comparing platforms. In the meantime, seeing how Intel does its power consumption measurements by soldering wires under a stereo microscope has given me an idea."

2
0
Facepalm

Re: True

x86 is definitely not the ideal instruction set, but RISC vs. CISC or VLIW was never really what it was made out to be. There were just as many disadvantages to RISC as there were to CISC. Code size on RISC was huge. Then we ended up with the bastard stepchild of RISC being Thumb, which was somewhere in between.

These days, the instruction set means nothing in reality. It's all about efficiency in processing itself. It's about things like how the CPU handles cache coherency, how the CPU manages passing code between cores, how to handle multiple ring-0 contexts... effectively making Ring 0 the new ring 0.5. It's about handling SLATs. These are all things which matter. Then of course what matters is the ability to power down major parts of the chip. This is something which doesn't work well in a single die environment where 99% of the chip is synthesized from a common VHDL/Verilog code base which doesn't allow for the analog nature needed between units.

Intel's chips make use of the x86 and x64 instruction sets, but no decent processor today will make use of them directly when executing code. Now the next generation of Atom is also doing away with x86 and x64 in the core and replacing them with an instruction-set-agnostic architecture. The CPU will instead attempt to recompile the code when it receives it in order to handle tasks out of order. In fact, to a certain extent, the nature of x86 and x64 lends itself better to this design, since RISC groups everything into a single instruction wherever possible. Intel's instruction set is relatively granular and will be easier to recompile on the way in and manage dependencies for. I can very easily in my head design algorithms for out-of-order execution of x86 instructions where ARM instructions require a second phase altogether to manage the instruction dependencies... though it's not particularly difficult either... it just takes more transistors.

If you also give me a chip with AVX2 instructions, then I'm really happy. AVX2 is just damn sexy in everything regarding mobile phones. It would allow me to vectorize my code and make use of two-in-one-out instructions. If they make a new set which allows a single instruction for a 16x16 16-bit hardware transpose operation or an extra flag to access registers vertically instead of horizontally, I'd be in love. At the moment, a 16x16 transpose is the last missing instruction in AVX2 in my opinion.

2
1
Anonymous Coward

Re: first AMD now Intel

Point is, we would all be using Itanium if Intel could design a decent CPU properly.

To continue to tweak x86 is obviously much easier. You can do a before/after comparison of whether the change made any difference.

So rather than say x86 is good and well done Intel, we should be questioning why it is that nobody can design a new processor family that is miles ahead of both?

1
1
Anonymous Coward

Re: True

The Reg's recent review of an Asus Padfone thingy was a perfect illustration of how you need to compare like with like.

The Padfone has a much bigger battery than any smartphone available right now. More comparable with an iPad.

0
0

Re: The waste of talent

That's not quite right. The x86 architecture needs a lot of memory interlocks, like checking for writes into the code being executed, and has much stronger coherency requirements, besides being saddled with all sorts of strange operations. AMD's general manager of the server business unit said it took them more than ten times the money and twice as long to design an x86 chip as an ARM one. That's down to all the messing around, and it will tell more and more now that the business of designing is getting more standard while the actual designs are getting more complicated.

Those figures are interesting, though as others said one can't say anything definite as no real details are provided. The one that struck me most was them saying the Intel chip had four times better memory performance, if I read it right. That would probably explain most anything else, and I'd really like to know how it was achieved.

I'm fairly sure Intel will be able to cream off some of the high-end market when they get their new mobile chips out, whatever the truth of these figures one way or the other. Personally, though, I think the more worrisome Intel strategy as far as ARM is concerned is its push for a better presence in the chip foundry business. If Intel could knock out the high end of the other foundries, it could then start causing real trouble for high-end competitors and properly protect that market together with its high margins, which all this sort of work doesn't really do.

0
0
FAIL

"ABI Research provided no details on the content and construction of their benchmarks"

Whatever happened to the 'death of the Reg' icon with the tombstone?

30
1
Silver badge

Just one question...

Is that Chocolate or Maple Fudge?

0
0
Silver badge

I thought a more appropriate question might be: who paid for it? It could be completely genuine and they did it for giggles or publicity. They could also do a lot of work for Intel, or want to impress Intel. It's a very interesting set of results, especially if it's legit. Intel could spur ARM on even more, which is good for all of us. Then again, if there was nothing to hide they should really have given more details on their methodology and any links to the companies involved.

3
0
Silver badge
Thumb Up

One swallow doesn't make a summer

While these tests are ostensibly impressive, I think we need to see a few more independent trials before we can say Intel has cracked it.

As long as Intel don't get a monopoly in smartphone CPUs like they did with desktops, as a consumer, I'm glad both institutions (Intel and ARM) are trying to out-compete each other's technology.

The last thing we want is another couple of decades of a Microsoft-like era where real innovation was stifled by pure economic (monopolistic) power.

28
1
Anonymous Coward

Re: One swallow doesn't make a summer

Intel's track record of flying straight isn't good. For US businesses it isn't about competing, it is about destroying the competition.

1
0

"which means that we, the consumers, will be the ultimate victors ..."

Don't think so .... We're about to enter an era where "The competition" isn't competitive enough and that will have 2 effects:

1) Price of tier 1 mobile devices will rise as Intel extracts its "pound of flesh" in order to recoup its zillion dollar capital investment.

2) The EU bureaucrats will declare Intel a monopolist citing their dominant market share as evidence and will issue fines for some yet-to-be-devised infraction of competition rules.

Strange old world: I'm off to build a 22nm fab in my back yard... Oh wait... You can't! It's really hard.

Oh never mind - I'll just build my superfast quad-core Krait thing on TSMC's new 16nm process.

Nope, can't do that either because it doesn't work.

How about Samsung .... Nope, lost the plot 2 years ago.

And you're left with ....... an x86 chip in your iPhone.

1
16
Silver badge

The competition is embeddable cores. Samsung, Apple, et al pick the ARM core, the GPU and everything else, lay out the silicon (or let a computer do it) and hit print. It's very mix and match. As a result, competition is healthier than it has been in years. ARM is likely to persevere both on momentum and because you don't have to go begging cap in hand any time you want a custom fabrication.

4
0
Silver badge
Boffin

What about standby power consumption?

Phones spend a lot of time heating the pockets of their owners.

3
0
Silver badge

Re: What about standby power consumption?

While I'm surprised at this test (and frankly want proof before I believe), the power saving changes we already know about should have fixed standby performance, if it needed fixing.

Another aspect is how significant CPU power is in overall consumption. If you're gaming all day or playing video it's going to be important, but the screen is still likely to top power use. My elderly and not too efficient phone with its ancient Qualcomm MSM8255 Snapdragon shows battery usage of 43% screen, 19% WiFi, 23% standby+idle. That's just 15% power used for the couple of hours it was actually working hard (gaming+browsing). As screens get bigger, an efficient CPU becomes even less important.
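A quick sanity check on those numbers (the percentages are the ones quoted above; the CPU share is just the remainder):

```python
# Battery usage shares quoted for the MSM8255 phone above
screen, wifi, standby_idle = 43, 19, 23  # percent

# Whatever is left is the "couple of hours working hard" share
cpu_and_rest = 100 - (screen + wifi + standby_idle)
print(cpu_and_rest)  # 15
```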

5
0
Silver badge

Re: What about standby power consumption?

> If you're gaming all day or playing video it's going to be important but the screen is still likely to top power use.

This Atom uses PowerVR-designed graphics like many of its ARM competitors, so playing video might not be the area where the biggest differences are seen.

0
0
FAIL

Intel sponsored "research"?

The benchmarks appear to show that a dual Atom can beat a quad A15 on CPU performance. That's quite a feat considering Atom is a 5-year old 2-way in-order CPU while the A15 is a modern 3-way aggressive out-of-order CPU!

However independent benchmarks show a completely different picture: a Galaxy S4 leaves the K900 in the dust as you'd expect from the microarchitecture comparison: http://browser.primatelabs.com/geekbench2/compare/1979365/1970335

So that suggests something is going on with the chosen benchmarks. There are a million ways to cheat with benchmarketing. If the quad A15s somehow have to do more work, then it is no surprise they burn more power doing so...

Also the results only show current, which means nothing. For a power efficiency comparison you'd have to measure Watts, and even more importantly Joules (ideally just the CPU, not the whole phone as in this case). Total energy to complete a given task is what matters.
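The Watts-versus-Joules point is worth making concrete. With made-up numbers, a chip that draws more power but races to idle sooner can still consume less energy for the same task:

```python
# Hypothetical race-to-idle comparison (illustrative numbers only)
# Energy (J) = average power (W) x time to complete the task (s)
fast_chip = {"watts": 2.0, "seconds": 5.0}    # burns hotter, finishes sooner
slow_chip = {"watts": 1.2, "seconds": 10.0}   # sips power, runs longer

fast_joules = fast_chip["watts"] * fast_chip["seconds"]  # 10.0 J
slow_joules = slow_chip["watts"] * slow_chip["seconds"]  # 12.0 J

print(fast_joules < slow_joules)  # True: higher wattage, lower energy
```

A table of instantaneous current readings, taken alone, can't distinguish these two cases.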

40
2
Silver badge

Re: Intel sponsored "research"?

I have no opinion on the rest of your comment, but your point about age is incorrect - Clover Trail was introduced in 2012, not five years ago (sure, the original Atom is a lot older, but that's like saying ARM is even older).

3
1

Re: Intel sponsored "research"?

CloverTrail is a 2012 SoC indeed, however all Atoms are based on the Bonnell microarchitecture which apart from a few minor tweaks is essentially unchanged since 2008.

4
1
C 7

P=I*E

And assuming they tapped in between battery and phone to measure I, E should be 3.7V for all of them, as that's standard for a Li-ion phone battery. Granted, that's making a few assumptions given the sketchy details of the testing, but they'd have to be real amateurs (or shysters) to use I as their benchmark if E wasn't consistent across the board.
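Under that fixed-3.7V assumption, converting a current reading to power is a one-liner; this sketch just applies P = I * E:

```python
# P = I * E with the nominal Li-ion voltage assumed constant
NOMINAL_VOLTS = 3.7

def power_watts(current_amps: float, volts: float = NOMINAL_VOLTS) -> float:
    """Instantaneous power for a measured current draw."""
    return current_amps * volts

print(round(power_watts(0.8), 2))  # 0.8 A at 3.7 V -> 2.96
```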

0
0

Re: P=I*E

Lies, damn lies and statistics.

0
0
Boffin

Re: P=I*V

No, 3.7V is not standard. There are different kinds of Li-ion batteries, and commonly used ones vary from 3.6V to 3.8V nominal. It's important to understand what nominal voltage means - it is simply the average between the minimum and maximum voltage. Actual voltage varies from ~4.2V when full to ~3V when empty. Also, battery age, temperature, and current draw affect the voltage.

So no, one cannot just measure the current and assume voltage remains a constant. I'd say showing just current is admitting you're an amateur. To measure power consumption accurately you need thousands of samples per second.
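A minimal sketch of what high-rate sampling buys you: summing synchronized volts x amps samples over time yields energy in Joules with no constant-voltage assumption (the sample data here is invented for illustration):

```python
# Energy from synchronized voltage/current samples (illustrative data).
# Each sample is (volts, amps), taken at a fixed rate.
SAMPLE_RATE_HZ = 1000
samples = [(4.1, 0.50), (4.0, 0.55), (3.9, 0.60), (3.8, 0.65)]

dt = 1.0 / SAMPLE_RATE_HZ
joules = sum(v * i * dt for v, i in samples)  # integrate P(t) = V(t) * I(t)
print(f"{joules:.6f} J over {len(samples) * dt:.3f} s")
```

Note how the voltage sags across the samples; a fixed-voltage shortcut would misreport the energy.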

3
0
Silver badge

Re: Intel sponsored "research"?

>Total energy to complete a given task is what matters.

That is the methodology that Intel have been pushing:

http://www.tomshardware.com/reviews/atom-z2760-power-consumption-arm,3387-5.html

1
0
Anonymous Coward

no mention of the elephant in the room

On Android, Intel chips are poorly supported for apps built using the NDK.

4
1
Bronze badge

Re: no mention of the elephant in the room

According to the NDK website

" These requirements mean you can use native libraries produced with the NDK in applications that are deployable to ARM-based devices running Android 1.5 or later. If you are deploying native libraries to x86 and MIPS-based devices, your application must target Android 2.3 or later."

That doesn't seem like a show stopper in 2013.
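For context, targeting both ABIs in a 2013-era NDK build is a one-line setting; this Application.mk fragment is a hypothetical example:

```
# Application.mk -- build native libraries for both ARM and x86 devices
APP_ABI := armeabi-v7a x86
APP_PLATFORM := android-9   # Android 2.3, per the requirement quoted above
```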

0
2
BXL
Happy

Re: no mention of the elephant in the room

Not any more. Our Marmalade SDK is now supporting native x86 C++ apps on Android. It was relatively painless to convert from using the ARM tool-chain to the x86 ones.

0
0

All well and good...

...but what happens to the comparison when ARM devices start getting fabbed at 22nm?

7
0
Boffin

Re: All well and good...

firstly .. ARM chip makers can't use intel's trigate / 3D FinFET tech .. which is why intel's power draw is low .. so 20 - 22nm might be hard to achieve .. despite IBM lab examples .. most new ARM and GPU chip processes are 32nm today and have greater voltage leakage .. especially when cranked up to 1.5GHz or more

secondly .. TSMC is having difficulty in scaling up 28nm production .. GlobalFoundries has barely ramped its 28nm line .. both had claimed last year to be going to 20nm by the end of this year .. but that seems unlikely in quantity until well into 2014 .. Intel .. which has shown itself a *bit* more reliable on process roadmaps .. will be at 14nm in quantity by mid 2014

those that think intel won't make significant inroads in the SoC business are just not paying attention to history .. Intel does not enter a market unless it sees a profitable future in it ..

perhaps intel likes to control the architecture of its chips ... not license and therefore be dependent .. or felt it couldn't compete with a profit against established players .. Qualcomm .. Samsung .. Xscale .. Apple .. TI .. Intel .. if you are paying attention .. doesn't bother competing in GPU either .. except as integrated in the chip die .. why compete with nVidia .. when they can co-operate and keep their common competitors .. like AMD .. suppressed together ?

1
0
Stop

Re: All well and good...

how is that relevant? the Intel chip in question is a 32nm part. Intel will be releasing 22nm mobile chips in the near future as well so if you want to see what happens at 22nm let's see what happens at 22nm for both platforms.

1
0

Re: All well and good...

"Intel does not enter a market unless it sees a profitable future in it .."

Maybe so, but the future is hard for even Intel to predict, whereas Intel's historical track record outside their x86 comfort zone speaks for itself - a lengthy list of failures, including those I just listed in my reply to Ken Hagan.

Is an x86 "SoC that isn't an SoC" in their comfort zone?

0
0

Re: All well and good...

"Intel does not enter a market unless it sees a profitable future in it"

Not quite true - Intel does occasionally try things for which there isn't even a predictable market let alone profit.

Look at the Larrabee / many-core / Phi path - the latest Chinese supercomputer is looking good, but who would have known all those years ago when Intel started down that path?

Also don't rule out the value of the bigger branding issue. Even a relatively poor product with their name on it can influence buyers for their main product line. E.g. Integrated graphics. No-one would say there's a profitable market for feeble graphics but it did keep Intel's low-end CPUs selling more than AMD's in the not too distant past.

1
0
Sil

No surprise there

Sure these facts need to be confirmed by other testers but it's not like it's a real surprise.

AnandTech came to basically the same conclusions a few months ago.

Also, while not the same product, most tests of Haswell laptops have surprised independent testers with outstanding battery life (such as the MacBook Air).

And the ways to improve processing power such as increasing MHz or say implementing out of order execution are known to take a big toll energywise.

So while it's not difficult to make a super-slow, low-power processor, the faster you want it to become, the harder it is to be energy efficient.

5
1
Silver badge
Thumb Up

Re: No surprise there

".....So while it's not difficult to make a super slow low power processor the faster you want it to become the harder it is to be energy efficient." Anyone else remember how slow and much less power-hungry x86 used to be? I have an old 386 desktop in the cupboard that has a 40W PSU, whereas my current desktop has an 800W one! ARM may have started out as the low-power champion, but adding cores and speed has fattened it up, just as it has done with the x86 desktop CPUs. I just don't see why people are so blinkered by hatred of Intel not to take note of the fact that Intel have spent decades shoe-horning x86 into laptops, and they have experience from their own phone CPUs as well as experience with their own ARM designs. The inevitable growth in power requirements for ARM has allowed Intel to be competitive.

2
8

Re: No surprise there

There is no arguing that intel have done a good job getting performance up and power down in their x86 processors.

Some of us remember that intel got the contracts (ratified) to build the x86 stuff for PCs on the backs of others. IBM demanded a second source.

Since the 386 when intel believed they could "go it alone" they have tried to sue the competition out of the business (486 time), used anti-competitive practices (buy the competition out of the market by bribing system manufacturers), they have used skewed benchmarks (read anandtech and toms hardware around the time of the athlon / p3).

Anything can be proven using the "right" benchmarks.

As for hating intel, naa, not really.. I just don't buy from convicted monopolists.

8
2
Silver badge

Re: No surprise there

>Sure these facts need to be confirmed by other testers but it's not like it's a real surprise.

>Anandtech came to basically the same conclusions a few monthes ago.

Thank you Sil, I'm glad someone has been paying attention to recent developments. "ARM is more power efficient" has become near dogma, when the reality is actually more interesting. Another benchmark-heavy site, Tom's Hardware, has been looking at this too.

I don't care what my next phone is built around, and I'm not saying Go Intel: I'm saying let's have more data.

1
0
Anonymous Coward

Knowing what benchmark test they used would be helpful. The test could have been rigged to make the Atom look better. Also, why did they list amps? Watts is a better measure than amps, given how chips turn components off and change their voltage. Some of the Atom processors range from 0.75 to 1.1 volts.

There has to be a reason why they listed Amps.
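The objection is easy to quantify: at identical current, the quoted core-voltage range alone swings power by nearly 50%:

```python
# Same current draw, different core voltage (range quoted above for some Atoms)
current_a = 1.0
low_v, high_v = 0.75, 1.10

low_power = current_a * low_v    # 0.75 W
high_power = current_a * high_v  # 1.10 W

print(f"spread: {high_power / low_power:.2f}x")  # ~1.47x for identical amps
```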

18
1
Silver badge

Testing a range of mobile phones using software benchmarks, you can probably only measure battery current, unless you do some tricky attachment of current probes onto very fine PCB tracks or chip pins. As far as the user is concerned, the only thing that matters is battery current, which determines life before a recharge is needed.

Actual chip power measurements would be useful for high power desktop and server applications where heat production (and dumping) is a very important factor.

1
3
Boffin

Easy To Measure

If you are doing this kind of testing on consumer electronics, soldering flying leads to the battery terminals is no big deal. Wire the battery in instead of inserting it into the compartment, and splice your current-measuring devices into the cable. Easy.

8
0
