Yes, but when the screen is off and it's doing stuff in the background...
ARM's new energy-efficient Cortex-A7 processor will bring computing to a billion more people, its CEO claimed yesterday. Which may or may not be a good thing. For the rest of us, harnessed to one of their most powerful processors, the A15, it will also form the basis for a System on a Chip that will make two processors work as one …
Doing stuff in the background?
As in sending Apple all my location/financial/etc. data? Granted, Apple isn't listed in the article, but I'm shocked at the liberties taken by iOS 5.
How about a plug-in co-processor
for my laptop?
8 cores of that with a hefty splash of RAM should be able to quadruple the power of my laptop for less than a ton, and not drain the battery too much more.
Great for non MS users.
Great for non-MS users?
Until Windows 8 is released for ARM next year, you mean? ;-)
Beowulf cluster in a laptop
These devices are physically small, low-heat and low-power. You could fit a good 50 processors in a laptop case and make a nice portable cluster.
Written while waiting for a dual core Intel to take approx 1.5 hours to build a Linux rootfs image from source....
...like turning your display brightness down by about 10%.
We are living in amazing times, aren't we? At some point you will hold the compute power and memory storage of a Cray Y-MP in your pocket.
At which point it becomes possible to run MS-Office 2010
Marry this CPU power with that of the GPUs these SoCs will have in two years' time and you easily get a Cray from just a few years back.
Makes a lot of sense
... a dual core processor with unbalanced cores - one for high and one for low power tasks.
Software support will be the key marker of whether this scheme succeeds.
My reading of the article is that software is not really affected.
That is easy enough: just make sure that both processors have the same basic feature set (number of registers, etc.), and then switching from one to the other is just a case of copying the state across. Perhaps the low-level OS may get involved to hint which core to use depending on likely processing requirements, but otherwise I don't see why it would need to.
I guess they have made this more efficient than just having a single processor with accelerators like pipelining, multiple-issue and HW FP, and simply turning off all the accelerators and dialing back the clock when you don't mind running slowly.
"However, the main energy cost on any smartphone will almost always be the display, which consumes around 80 per cent to 90 per cent of a handset's available power depending on usage, meaning that energy savings on chips, while beneficial, will have minimal battery impact from a consumer perspective."
While true, energy efficiency is only part of the intended appeal of the chip; another plus point being the tiny die size - hence an increase in manufacturing yield in addition to the intrinsic per unit cost saving. Not all the phones for which this core is an option will be running 9" IPS screens.
The problem, it seems to me, is that phones are getting improved performance and power management, but this is then seen as an excuse to slim the phone down further, reducing battery capacity and leaving battery life much the same.
It's not just Apple doing it; Samsung are at it too. This sort of unhealthy competition to produce slim phones is rather annoying.
Correct me if I'm wrong.
I believe Apple is doing the opposite of this - the battery in the 4/4S is actually physically bigger than in the 3G/3GS.
Using the radios (wifi or UMTS) will give the screen a run for its money when it comes to running down the battery. And, while there is hope that screens may at some point be able to use ambient light to reduce their power draw, I think it's well-nigh impossible to reduce the power consumption of the radios. Isn't there some direct relationship between data transfer rates and power consumption? There is definitely a direct (square) relationship between the distance the signal has to travel and the power required to send it. Unless it turns out that CERN's neutrinos have something to teach us on this!
IANARFE*, but I think there is a relationship between data rate and required raw (RF) power; there definitely is between distance and TX power (for a given RX sensitivity).
However, it's not that easy. Recovery of the data stream from the RF is quite difficult now, and the processing cost of this is a big part of the power draw. If you look at the first generation of 3G chipsets, the power usage was HUGE compared to now, even though the data rate now is higher!
*I Am Not A Radio Frequency Engineer
I Am A Communications Engineer. "How much data can I push per unit time" is governed by signal-to-noise ratio (Shannon's Law). Increase the signal-to-noise ratio and you can push more bits per second. So, for a given set of channel impairments (noise, signal attenuation, antenna gain, etc.), every doubling in TX power gives you 3dB more signal - roughly doubling the bits you can push at low SNR, though at high SNR each extra 3dB only buys you about one more bit per second per hertz. Of course, there are things other than increasing power that can help - better modulation schemes that more closely approach the Shannon limit, better antennas (including more antennas, such as spatial diversity systems like BLAST and MIMO), better receivers, better coding schemes that reduce the overhead of forward error correction, etc.
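To put some numbers on Shannon's Law (my own back-of-envelope here, nothing from the article - the channel width and SNR figures are just illustrative): capacity is C = B·log2(1 + S/N), and what a 3dB power bump buys you depends heavily on where you start.

```python
import math

def shannon_capacity_bps(bandwidth_hz, snr_linear):
    # Shannon-Hartley: C = B * log2(1 + S/N), SNR as a linear ratio
    return bandwidth_hz * math.log2(1 + snr_linear)

b = 5e6                  # 5 MHz channel, roughly a 3G carrier (illustrative)
low, high = 0.1, 1000.0  # -10 dB and +30 dB SNR

# Doubling TX power doubles the linear SNR (i.e. +3 dB)
print(shannon_capacity_bps(b, 2 * low) / shannon_capacity_bps(b, low))    # ~1.9x at low SNR
print(shannon_capacity_bps(b, 2 * high) / shannon_capacity_bps(b, high))  # ~1.1x at high SNR
```

So "double the power, double the bits" is a good rule of thumb only down in the noise; up at smartphone-near-a-mast SNRs the returns are much thinner.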
As I understand it ...
... there is a direct channel for transferring processor state from one to the other, so switching is pretty fast. As for knowing when to switch, I can imagine that the OS monitors processes: if a process running on the A15 spends most of its time idle, the OS switches the task to the A7, and if there is little or no idle time while running on the A7, it switches it back to the A15.
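Something like this toy heuristic, say. To be clear, this is purely my own sketch - the thresholds, the hysteresis, and the idea of per-task idle sampling are my assumptions, not anything ARM has published:

```python
# Toy core-switching heuristic: sample a task's recent idle fraction
# each scheduling interval and migrate it with a bit of hysteresis,
# so it doesn't ping-pong between the two cores.

IDLE_HI = 0.7   # mostly idle on the A15 -> move down to the A7
IDLE_LO = 0.1   # flat out on the A7     -> move up to the A15

def next_core(current_core, idle_fraction):
    """Pick which core the task should run on for the next interval."""
    if current_core == "A15" and idle_fraction > IDLE_HI:
        return "A7"
    if current_core == "A7" and idle_fraction < IDLE_LO:
        return "A15"
    return current_core  # inside the hysteresis band: stay put

core = "A15"
for idle in (0.9, 0.8, 0.4, 0.05, 0.3):
    core = next_core(core, idle)
    print(idle, core)   # drops to the A7, then back up when it gets busy
```

The hysteresis band (0.1-0.7) is what stops a bursty task from flapping between cores every interval.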
It will also need to monitor heat buildup - these new A15s running at 2 or 3GHz cannot dissipate the heat (even at 28nm or lower) for more than a couple of minutes before failing. So when it gets too hot you need to switch to a slower core, or drop the clock rate.
Anand has a much more detailed writeup with slides
The reason for making it low power
It does help with battery life but also importantly it keeps the heat dissipation down. Important when your processor core is being integrated in a large SoC with all sorts of other stuff. Ultimately ARM's customer can use a cheaper package and make more pennies.
0.5mm2 ARM chip?
So it's about the size of a grain of sand?
Color me dubious.
Re: 0.5mm2 ARM chip?
According to ARM, a single core is 0.45mm^2
I'm not amazed they can get the heat out of it...
...I'm amazed they can get the signal connections in and out of it. Realllly tiny solder balls? 20 micron bond wires?
the danger is
that if a process/task isn't locked to a core, then as people add apps that run in the background, the smaller core will be maxed out.
when the phone tries to do its load-balancing thing, this will lead to the larger core being on for more of the time, which defeats the point of having the smaller core in the first place.
a GPGPU type of solution could work better, IMHO
Sounds like a fantastic chip, but the naming strategy is confusing. Apparently it's higher performance than the A8, and possibly the A9, but less than the A15, so surely it should be an A10, A11 or A14?
Or perhaps there would be strong resistance from ARM employees to those names, given that they are probably stuck on roads of the same names trying to get in and out of Cambridge every day!
> At some point you will hold the compute power and memory storage of a Cray Y-MP in your pocket.
I believe that point would be Today.
Newer smartphones have 1GiB RAM. A common SoC implementation, nVidia Tegra 250 T20, has >5 GFLOPS in its GPU and two 1GHz integer cores.
According to wikipedia, the original Y-MP series topped out at 8 processors of 333 MFLOPS each (total 2.7 GFLOPS); and a princely 512MiB of RAM. The minimum configuration had 128MiB RAM and 666 MFLOPS.
So you can certainly have the power of *a* Y-MP, and arguably as much power as the biggest configuration you could order when Y-MP was announced. Not to mention a whole cluster of Cray-1's (4MiB RAM!, 250 MFLOPS if you really push it).
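For what it's worth, the arithmetic checks out (figures as quoted above, and the Tegra number being single-precision GPU GFLOPS):

```python
# Back-of-envelope check of the Y-MP vs. smartphone comparison above.
ymp_top_gflops = 8 * 333 / 1000   # 8 CPUs at 333 MFLOPS each -> ~2.7 GFLOPS
ymp_min_gflops = 2 * 333 / 1000   # minimum config: 666 MFLOPS
tegra250_gflops = 5               # nVidia Tegra 250 GPU (single precision)

print(ymp_top_gflops, ymp_min_gflops)
print(tegra250_gflops > ymp_top_gflops)  # the phone SoC edges out the full Y-MP
```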
Later derivative models (which tended to drift away from the "Y-MP" designation) may eventually have gotten as powerful as a throwaway desktop available today, e.g. a $900 Lenovo Ideacentre 7727-5DU with a 3.4GHz quad-core i7-2600K (3.8GHz turbo, hyperthreading), a 200 GFLOPS video chip (Radeon 6450), 12GiB RAM and a 1.5TiB disk.
Yes, I know GPU GFLOPS figures are single precision, and they're only about 1/4 as fast at double precision. So if your desire for a Y-MP includes double-precision floating-point vector processing, you'll still need to drag around a wagonload of Cray hardware to (slightly) beat your smartphone.
The Cray probably blows the socks off the desktop, not to mention the phone, in I/O bandwidth. Or maybe not. It didn't have a bunch of USB & firewire ports...