Where do we get a 300 millivolt battery?
The IEEE is hosting its annual International Solid State Circuits Conference in San Francisco this week, and the star of the show will no doubt be the unveiling of some of the features in Intel's forthcoming "Ivy Bridge" processors for PCs, which cram a multicore CPU and a GPU onto the same 22 nanometer Tri-Gate silicon die …
Use an ordinary battery and a switch mode power supply.
Use an ordinary battery and cut it in 5.
Zn is -0.76 V and iron is -0.44 V on the electrochemical series, so a Zn-Fe battery (about 0.32 V) ought to about do it.
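A quick sketch of the arithmetic behind that comment, using the standard reduction potentials it quotes (the cell assignment of cathode/anode here is my own reading, not from the thread):

```python
# Standard reduction potentials quoted in the comment above.
E_ZN = -0.76  # V, Zn2+/Zn
E_FE = -0.44  # V, Fe2+/Fe

# Cell EMF = E(cathode) - E(anode); Fe is the cathode because its
# reduction potential is less negative than Zn's.
e_cell = E_FE - E_ZN
print(f"Zn-Fe cell EMF: {e_cell:.2f} V")  # 0.32 V, close to the 300 mV target
```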
This is impressive and I am impressed
Fast chips running off solar cells? Genius. This eliminates THE bottleneck to embedded intelligence / ubiquitous computing and makes shipping adequate compute nodes to villages in the steppes or wherever actually possible.
On the other hand, the NSA must be fapping hard at the prospect of putting sensors everywhere and increasing the compute density of their data centers without a higher electricity bill.
How will this increase the compute density? My reading was that you could power modest processors off millivolts provided you run them at low clock speeds. About the only really interesting thing here is that these are Intel x86 chips, and hence theoretically able to run the same binary code as full fat chips. But you do not want to see one of these puppies trying to run full fat software. The article is talking about an original Pentium CPU running at 3 MHz.
Great, my solar powered laptop can boot stock Windows ... but the sun will set before it finishes booting.
Re: Compute density
Actually, the sun won't set: the boot bottleneck is IO speed, not CPU. So, if an SSD-equipped 3 GHz machine boots in under 10 seconds, this baby will boot in well under 3 hours.
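The joke's back-of-the-envelope, spelled out (assuming boot time scales linearly with clock speed, which is deliberately crude since boots are largely IO-bound):

```python
# Crude scaling assumption: boot time grows linearly as the clock shrinks.
fast_clock_hz = 3e9   # 3 GHz desktop-class machine
slow_clock_hz = 3e6   # 3 MHz NTV Pentium from the article
fast_boot_s = 10.0    # seconds on the fast machine

slowdown = fast_clock_hz / slow_clock_hz   # factor of 1000
slow_boot_s = fast_boot_s * slowdown       # 10,000 seconds
print(f"estimated boot: {slow_boot_s / 3600:.1f} hours")  # about 2.8 hours
```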
Oh, sorry, the days in winter are short :3
Re: Compute density
It won't specifically increase compute density if you only consider the computing equipment itself. However, quite a chunk of physical space in data centres is taken up by cooling units, so lower power = less cooling = more space for more computing power.
"variable precision floating point unit"
I thought Intel did that once before?
Now for a reasonable core design to complement it all. Too bad Intel welded itself to x86. Wonder if they'll license the parts. Then get an ARM core or four to run the bunch, and see what that'll do.
"Full fat" core is exactly right. Most open source software, though, doesn't need that; recompile it for something lean and it'll work just fine, and use less resources to boot. We have the compilers, we know how to do this. There's nothing inherently advantageous to x86 unless you happen to only have binary x86 code and no source. It's an outdated design that has accumulated quite a lot of cruft over time. We're ready to move on, but intel isn't. They bothched their last attempt so bad they won't dare try again for at least a decade. But an avoidable botch it was. It could've worked if not for, ta-daa, intel. Way to innovate, guys. These table scraps are nice, but you haven't served a new dish in ages. At least have the grace to not be anti-competetive, will you.
Note that NTV, as they like to call it, is not that new.
It's one of the keys to how ECL chips were clocking at 200 MHz when TTL was clocking at about 10 MHz. But the hardware had *huge* power requirements, with many more transistors per gate.
Note the reasons *why* people designed them that way.
Digital transistors with *big* switching margins are *much* easier to design on chips (compared to analog circuits) and their yield is better. At these thresholds you're looking at the voltage variances produced by being at room temperature (kT, anyone?). It has taken a *huge* increase in understanding the physics and architectural implications of doing this to make it viable.
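For scale, the "kT" point above boils down to the thermal voltage kT/q at room temperature, which is the noise floor that millivolt-level logic has to fight:

```python
# Thermal voltage kT/q at room temperature.
K_B = 1.380649e-23   # Boltzmann constant, J/K
Q_E = 1.602177e-19   # elementary charge, C
T = 300.0            # room temperature, K

v_thermal = K_B * T / Q_E
print(f"kT/q at {T:.0f} K: {v_thermal * 1000:.1f} mV")  # ~25.9 mV
```

So a few hundred millivolts of supply is only an order of magnitude above the thermal scale, which is why margins get so tight.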
Getting the operating frequency range is impressive, as is RF with logic on the *same* chip (I'll take a guess that the clock frequencies of the logic are carefully *misaligned* to reduce interference, but the article indicates they had to go much further). But doesn't SiGe already give this?
But honestly, if you *really* want low power you need to go asynchronous, so half your transistor budget doesn't go on clock drivers.
Bottom line: do you want to watch Vista boot at 3 MHz? I'd prefer a chip with 0 Hz as its sleep frequency.
On the upside, they won't call them "bugs" anymore. They will be called "surveillance processors".
Glass duly raised.
Hm, this isn't completely related
But it gave me an idea.
Since Windows 8 has an ARM build, maybe they could add parts of the ARM build into the AMD64 mainline build and allow the AMD64 CPU to go into complete shutdown in idle periods. Obviously a machine is never entirely idle, and that's when you use the WOA code running on an ARM chip to keep the GUI updated, and so on, until x86 code or anything computationally heavy is required.
I'm sure somebody at Microsoft is thinking about this.
Use an ordinary lead-acid battery for that. But you have to maintain it on a weekly basis.
*Now* I remember why this all looked so familiar.
It's Carver Mead's 1989 book "Analog VLSI and Neural Systems", where he and his team described ways to deliver low-power (but brain-cell-density-level) computer systems with the dynamic range of biological systems (which is *formidable*).
Of course I'm sure that Intel's way of doing this stuff is *completely* different to how the Mead team did that.
You know what's in this jacket pocket.