So, Google borged a mystery chip designer that was working on "some kind of server," and the web is convinced the Chocolate Factory is merely interested in using this all-star startup to build a GPad. How quickly the web forgets that Google is the world's fourth-largest server maker. According to a New York Times source: "…"
But an earlier Times story indicated that Agnilux was brewing "some kind of server."
Whenever some kind of server is an active navigator, does search become an evolution and morph into an Operating System which positions Placement of Information Operating System for Future Shaping, with Beta Sublime Command and Control of Human Perception too, which Allows for Multiple Creations in Virtual Realities which are of Advanced IntelAIgent Design ..... [and when/if Artificially Alien, would them make IT and them, the Real McCoy, and extraordinarily render Human Beings, Virtually Programmed Machinery?]
ARM chips for Google? Do they have hardware floating point?
MapReduce is just a strategy for exploiting massive parallelism. You still need floating point horsepower to do the core operations that power the indexing algorithms. Does an ARM chip have floating point capability?
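MapReduce is indeed just a parallelism strategy, and the canonical example is integer and string work rather than floating point. A toy single-machine sketch (plain Python, purely illustrative; real MapReduce shards both phases across machines and adds fault tolerance):

```python
from collections import defaultdict

def mapreduce(records, mapper, reducer):
    """Toy single-machine MapReduce: map each record to (key, value)
    pairs, group the pairs by key, then reduce each group."""
    groups = defaultdict(list)
    for record in records:          # the "map" phase
        for key, value in mapper(record):
            groups[key].append(value)
    # the "reduce" phase, one call per distinct key
    return {key: reducer(key, values) for key, values in groups.items()}

docs = ["the cat flushed the toilet", "the cat again"]
counts = mapreduce(
    docs,
    mapper=lambda doc: [(word, 1) for word in doc.split()],
    reducer=lambda word, ones: sum(ones),
)
print(counts["the"])  # 3
```

Note there isn't a single float in sight: word counting, link counting and most index-building are integer jobs, which is part of why the FP question above matters less than it first appears.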
These guys have worked on Power variants and ARM variants. Why couldn't you just extend the ARM CPU and tack on an FPU? For that matter, isn't SPARC an open-source CPU design? Can't they just modify SPARC to meet Google's very specific needs?
So many possibilities...
ARM Floating Point
Er.... yes... and support for it has been there for donkey's years. It's just that most ARM jobs don't need it, so the cheaper ARM chips either don't do it or do it much more slowly. ARM was doing floating point back when the 3DO and Acorn Archimedes were still knocking about. Any modern ARM chip can do floating point, if you buy the right version and/or tolerate it being a little slower than you're used to.
Yes, Sun released their multithreaded (CMT) processors as open-source. http://www.opensparc.net/
For what google does, their own variant of a CMT chip may be a good choice for them.
Very flexible architecture including FPU, MMU, other co-processors.
...to show the world videos of cats flushing toilets. (Oh, and the search engine to help you find them.)
At least the ads are "relevant."
Google likes to be in control.
It might be Intel's largest customer, but that still makes them a *customer*. As a previous Reg article mentioned, an old Russian proverb runs roughly "Middlemen are s^&t". As long as anything they need is commoditised, with lots of suppliers they can play off against one another, they don't seem to mind. Processors and servers are closer to core needs.
Owning the IP and the company to implement it makes them an *owner*.
They like this.
Please don't use that Intac graphic. Of all the meaningless graphics out there on the Internet, this one is right up there as one of the most pointless.
IT'S NOT TO SCALE !!!!!!!!!!
Look at Verizon... Light blue, small square, 25,000 servers. How many of that size box will fit in Intel's box supposedly representing 100,000 servers? It's a lot more than four. Compare Facebook to OVH...
It's either been done as a complete joke or by someone who has absolutely no clue what they're doing. And El Reg is doing itself no favours by using it.
ARM? PowerPC? Uhm.
It'd be good if the world got rid of x86, Google first or not. And with their own Linux distribution, they easily could. They could even stick with x86 compatibility via suitable Transmeta-style technology, which would work with a different instruction set too, of course.
Still and all, with them caring more for parallelism than raw speed, I'd not look at ARM or PowerPC. I'd take a hard look at OpenSPARC and see if a bunch of really smart people couldn't improve its power efficiency a bit. Or a lot. It already does the parallel thing pretty well and has crypto-acceleration support, something useful for things like Gmail.
Yes, ARM does have hardware FP. There have been various FP coprocessors for ARM since around 1990. The latest sport vector FP operations as well as scalar operations. The misconception that ARM does/did not have hardware FP arose because, until recently, mobile devices didn't need it, so few chips had FP coprocessors built in.
I agree with the article that it makes sense for Google to develop an ARM-based server chip. For a tablet, they would most likely just adopt one of the existing MID processors such as Snapdragon, Tegra or Armada. But these are not necessarily well suited for server use, so it makes sense to develop a server chip around the Cortex A9 multicore CPU. That is just following the ARM tradition: ARM provides the cores and various coprocessors, and chip producers add their own stuff around this to make complete SoCs.
What data centre would Jesus build?
Agni is also the genitive of the Latin agnus, meaning lamb. As in Agnus Dei, the Lamb of God. Could the name mean "The Light of the Lamb" (of God)?
Maybe they're Christians.
Do they have hardware floating point?
Yes, they do.
Now, "You still need floating point horsepower to do the core operations that power the indexing algorithms.", do you?
The newer ARMs also have a NEON SIMD co-processor. That's possibly quite handy for crunching indexes.
The bigger issue with ARMs is the address space limit. Big search likely wants multi-GBytes of RAM.
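The ceiling in question is simple arithmetic: a flat 32-bit address space tops out at 2^32 bytes, i.e. 4 GiB per process, and in practice less once the kernel/user split takes its share:

```python
# A plain 32-bit ARM core can directly address at most 2**32 bytes.
max_bytes = 2 ** 32
max_gib = max_bytes // 2 ** 30
print(max_gib, "GiB")  # 4 GiB, before the kernel/user split eats into it
```

So a search node that wants tens of gigabytes of index in RAM per box is pushing against the architecture, not just against chip pricing.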
Why not servers and pads?
Google don't really need to build an X-pad. They can wait for HP/Dell/whoever to build theirs, and the world of FOSS will probably port Android onto it fairly quickly. You can already run Android on "PC" based hardware, so it shouldn't be too much of a struggle IF the X-pad manufacturer releases their driver code. (Where X is "i", "we", "g" or various other speculations.)
That leaves Google to ignore the X-pad market to live or die as it chooses and concentrate their efforts on their data center plan.
Google will win either way (except on pad revenue, but they can claw some of that back through the "Market" for apps)
I bet Google know
an awful lot about how well their algorithms run on off-the-shelf processors that are (inevitably) chock full of architectural trade-offs. Trade-offs that try to run most application code well most of the time. You can bet G profile all their code like mad and so may well have some novel architecture ideas that suit their code better; that they should implement these ideas to maintain competitive advantage isn't such a surprise.
It's a more likely explanation, IMO, than a custom chip for a gPad - they don't need one because HTC will happily do the hardware (see Nexus One) if the whole issue of legal costs for patent battles with the big A can be sorted out. ARM cores and SoC integration are not exactly commodities but aren't a make-or-break niche either.
Compute in Cloud
I think the next logical step will be to limit & contain the processing done in data centres and 'outsource' some of the churn to third parties. How better to achieve this than selling Google Hardware to consumers and businesses with built-in 'Google Processing'. They don't have to worry about power and cooling as this will be covered by the end user, in return for enhanced access to the G-Cloud services. All they will need to worry about is storing the data in enough locations so that information is still accessible even during multiple service outages. Think massively parallel P2P data storage, processing and serving - a hybrid PirateBay/Seti@Home.
#IFOWO Google Processing Overlords.
Bing Sings to Google?
"How better to achieve this than selling Google Hardware to consumers and businesses with built-in 'Google Processing'. They don't have to worry about power and cooling as this will be covered by the end user, in return for enhanced access to the G-Cloud services." ...... ElNumbre Posted Friday 23rd April 2010 12:43 GMT
How better, ElNumbre? With Virtualware, which is not anything at all like vapourware, for enhanced output from G-Cloud services. Then would Google have no physical worries about anything, although when Access to Everything is Provided, does IT Present its Own Enigmatic Dilemmas and Opportunities for Great Work, Rest and Play Gamers, some of whom may have the odd problem, or three, or four, or more and some who would have none.
"Agni is the Sanskrit word for fire"
... but isn't it also the Latin for sheep (plural)?
Agni is the Sanskrit word for fire
Yes, agni is also the nominative plural, but I don't see why that would go with the singular noun "lux".
If they wanted the company name to mean "The Light of The Sheep" I reckon it should be "Lux Agnorum".
Who knows what Google is running?
From the "Unladen Swallow" Python fork that Google has been working on, there is some information about their servers: mainly x86 with no 64-bit specialisation. Ports to ARM will require some more work on the LLVM compiler, but Google has been tweaking this.
However, why on earth would Google want to replace existing servers? If it already has a million of them, presumably running on x86, how many more does it need to justify changing to a new architecture? Sure, shrinking to ARM would make sense for anything new on that scale, but I'm not so sure about migrating.
I guess that Google will replace its server park every 3, 4 or (max) 5 years. So do the math: with a million servers, that means (just replacement) 200,000 to 333,000 systems _each_ year!
...by their wheelie-bin for a year or so. No hardware of interest dropped in. All cheap 'white box' stuff.
However, a bloke from the Government's been there as long. Claims he's been looking for a DVD of UK citizens he lost on a train awhile back...
Custom hardware, custom software, efficiency?
To go a bit further with this idea, if we say that hypothetically, all Google servers are iron - no VMs. Unlikely, but bear with me.
Linux on x86, even if you pare the OS down to the bare bones, will still waste power. For example, if you have a farm that runs at 100% all the time and uses no VM sets, then the power-management and VT instruction sets are wasted silicon, and probably wasting power.
Cut those out, you might get a 2% power saving.
2% isn't much, but 2% of a massive power bill is still a large number.
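To put a number on that, here's a back-of-envelope sketch; every figure in it (fleet size, per-server wattage, electricity price) is an illustrative assumption, not a known Google number:

```python
# Back-of-envelope: even 2% of a massive power bill is real money.
# All figures below are illustrative assumptions.
fleet_watts = 1_000_000 * 250          # 1M servers at ~250 W each (assumed)
hours_per_year = 24 * 365
kwh_per_year = fleet_watts / 1000 * hours_per_year
price_per_kwh = 0.07                   # assumed industrial rate, USD/kWh
annual_bill = kwh_per_year * price_per_kwh
saving = annual_bill * 0.02            # the hypothetical 2% from leaner silicon
print(f"Annual bill: ${annual_bill:,.0f}, 2% saving: ${saving:,.0f}")
```

Under those assumptions the bill lands around $150m a year, so the 2% is millions of dollars annually, before you even count the matching saving in cooling.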
And if they rewrite the OS the servers are running for that specific task, using every last drop of the pared-down silicon, they get near-maximum efficiency from it. Performance increases too, as they can put transistors for floating-point operations where the power-saving/SpeedStep sets used to be, meaning more FLOPs per mm2.
Or am I reading too much into this, and showing just how little I know about instruction-set level hardware ops?
Still, it's an interesting idea - and Google certainly have the money and the people to do something of that ilk - and the scale of operations to make it a viable option.
PS: No need to flame me if I am genuinely talking rubbish - it's just a thought...
The Next Vertical Integrated Corporation ?
This does not really make a lot of sense to me. Even Google's management of Hyper-Geeks can't manage so many R&D projects.
What's next ? An alleged Google Germanium Foundry ? (For some Googly Avalanche Diode ??)
If Google used the chips themselves, would they have to pay ARM zero or reduced licence fee?
let's hope it's a server chip...
...xPads are so passe.
I think an arm and a leg would do.
Parky for spring, eh?
Writable Instruction Set Computers
One problem with using any general-purpose CPU is that you are limited to the operations the CPU provides. Many algorithms would work better with custom instructions, and search likely falls into that category.
Enter Writable Instruction Set Computing (WISC). WISC allows you to take a core and tack on special purpose instructions to build a CPU that does what you want.
While I expect Google already uses custom FPGA logic etc, WISC takes the integration one step further.
Perhaps that's what they're up to.
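As a toy illustration of the WISC idea (hypothetical Python, nothing like real microcode at the hardware level): treat the instruction set as a writable dispatch table, so installing a custom instruction is just adding an entry:

```python
class WiscMachine:
    """Toy writable-instruction-set machine: opcodes live in a
    writable table, so new 'instructions' can be installed at runtime."""

    def __init__(self):
        # the baseline instruction set
        self.ops = {
            "add": lambda a, b: a + b,
            "mul": lambda a, b: a * b,
        }

    def install(self, name, fn):
        """'Rewrite the microcode': add or replace an instruction."""
        self.ops[name] = fn

    def execute(self, name, *args):
        return self.ops[name](*args)

m = WiscMachine()
# Install a fused multiply-add as a single custom instruction,
# the kind of thing a profiler might show is worth fusing.
m.install("fma", lambda a, b, c: a * b + c)
print(m.execute("fma", 2, 3, 4))  # 10
```

In real WISC hardware the "table" is writable control store rather than a Python dict, but the appeal is the same: profile your workload, then fuse its hot operation sequences into single instructions.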
WISC is simply a CPU with a user-defined microcode. Which could be a good idea for some applications, especially if one can use compiler/profile statistics to optimize the user-defined instruction set.
Whether there are *real* performance gains to be reaped over RISC remains doubtful, though. The original CISC idea was to offer the largest possible number of operations so that programs could execute efficiently. It turned out this was not a good idea.