That Google ARM love-in: They want it for their own s*** and they don't want Bing having it

Google, Facebook, and likely other major tech firms are investigating ARM-compatible chips to drive low-power servers, which could ultimately shakeup the types of processors that sit inside data centres. We were moved to consider the pros and cons of moving away from trusty x86 into the problem-filled world of ARM after a …

COMMENTS

This topic is closed for new posts.
  1. John Sturdy
    Boffin

    If they're going to make their own CPUs, perhaps they could add search-related operations in hardware (or with specialized hardware assist) --- Boyer-Moore search might be amenable to this, for example (just the "search" stage of it, not the table preparation stage).
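
    Purely as an illustration of the split being suggested here, below is a minimal software sketch of the search stage (using the Horspool bad-character simplification of Boyer-Moore); the table preparation is included only to make it runnable, and it is the inner compare-and-skip loop that a hardware assist would take over. All names are made up for the example.

    ```python
    # Minimal sketch: Boyer-Moore(-Horspool) with the *search* stage separated
    # from the table-preparation stage, per the comment above.

    def build_shift_table(pattern: bytes) -> list[int]:
        """Preparation stage: bad-character shift for every possible byte value."""
        m = len(pattern)
        shift = [m] * 256
        for i, byte in enumerate(pattern[:-1]):   # last byte keeps the default shift
            shift[byte] = m - 1 - i
        return shift

    def bm_search(text: bytes, pattern: bytes, shift: list[int]) -> int:
        """Search stage only: slide a window over `text`, compare right-to-left,
        and on a mismatch jump ahead by the precomputed shift. Returns the index
        of the first match, or -1."""
        m, n = len(pattern), len(text)
        i = 0
        while i + m <= n:
            j = m - 1
            while j >= 0 and text[i + j] == pattern[j]:
                j -= 1
            if j < 0:
                return i                          # full match found
            i += shift[text[i + m - 1]]           # skip ahead using the table
        return -1

    table = build_shift_table(b"needle")                                    # prep stage stays in software
    print(bm_search(b"a haystack with a needle in it", b"needle", table))   # 18
    ```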

    1. Anonymous Coward
      Anonymous Coward

      That would kind of defeat the point of the RISCeyness of the ARM processor.

      1. Yet Another Anonymous coward Silver badge

        One of the big advantages of ARM is that the chip is so small you can stick it in the corner of the GPU/ASIC/custom lol-cat search combobulator to handle all the ancillary computer stuff while the special silicon gets on with the hard bits

        1. John Sturdy

          One of the big advantages of ARM is that the chip is so small you can stick it in the corner of the GPU/ASIC/custom lol-cat search combobulator to handle all the ancillary computer stuff while the special silicon gets on with the hard bits

          Yes, that's what I had in mind as the main possibility.

          Another possibility would be something like ICL's CAFS (Content Addressable File Store) which implemented search functions in the disk controller, matching the data as it passed the disk heads (without having to read it into RAM first). But I expect they're more interested in pulling popular blocks of data into RAM for faster repeated searching.

          1. Tristram Shandy

            CAFS

            Have an upvote for mentioning CAFS. I haven't heard that acronym for nigh on 30 years.

      2. BillG
        Megaphone

        That would kind of defeat the point of the RISCeyness of the ARM processor

        Nowadays, the RISC vs CISC debate really doesn't have any meaning anymore. Most modern processors are so heavily pipelined that they can run complex instructions in a single cycle. Modern ARM processors support multiple CISC-like addressing modes.

        Really, RISC should stand for Really Invented by Seymour Cray.

    2. EPurpl3

      “If they're going to make their own CPUs, perhaps they could add search-related operations in hardware (or with specialized hardware assist) --- Boyer-Moore search might be amenable to this, for example (just the "search" stage of it, not the table preparation stage).”

      ...or ads.

      1. Anonymous Coward
        Anonymous Coward

        @EPurpl3 - “If they're going to make their own CPUs, perhaps they could add search-related operations in hardware (or with specialized hardware assist) --- Boyer-Moore search might be amenable to this, for example (just the "search" stage of it, not the table preparation stage).”

        "...or ads."

        ...or better PRISM access for their NSA and GCHQ friends...

  2. Nigel 11

    What Google really wants?

    What Google really wants is an ARM solution fabbed by Intel at 14nm!

    Maybe Google is a large enough customer that Intel might consider fabbing custom chips that aren't sold to anyone except Google. Google could buy any necessary ARM license and have it fabbed by any company willing to take their money.

    Also, maybe a hybrid chip with both x86-64 and ARM cores is possible and useful to Google.

    1. Steven Jones

      Re: What Google really wants?

      If the following is true, the possibility of Intel fabbing ARM cores at 14nm is there, although it would require a complete redesign to fit Intel's fabrication rules and techniques. That's pretty well universal anyway, if usually not so drastic - ARM cores aren't simple die patterns which are reused by the fabricators. They are all customised to each foundry's design and production principles.

      http://www.extremetech.com/computing/169853-hell-freezes-over-intel-announces-plan-to-fab-arm-processors

    2. Ian Michael Gumby

      @Nigel Re: What Google really wants?

      I would tend to disagree.

      Google isn't just looking at the data center.

      Think Chrome. Think netbooks and wireless TV adapters. They want to serve you video and everything else so that they know more about you and can datamine your work. (Think about the fact that they attempt to sell the netbook at cost, which is below competitors, for obvious reasons.)

      So they want to own the consumer, period.

    3. Anonymous Coward
      Anonymous Coward

      Re: What Google really wants?

      Intel will be fabbing a 64-bit quad-core ARM SoC on 14nm - it just happens to have a stonking great FPGA attached:

      http://www.altera.co.uk/devices/fpga/stratix-fpgas/stratix10/stx10-index.jsp

    4. nerdbert

      Re: What Google really wants?

      If Google wants ARM in an Intel process then Google has to do it to get it done right. Of all the transistor shops around there are few that approach the level of NIH that's inside Intel. They already had StrongARM, the best ARM implementation at the time, and screwed it up badly before selling it off to Marvell, for example. Intel's got a great processor design team, and great fabs, but it's abysmal at taking other folks' learning to heart.

      And, FYI, Intel chips aren't really CISC when you dig into them. Much of the reason that x64 never lost to RISC is that Intel has a massive unit that breaks apart CISC instructions into component RISC-like parts before dispatching them to the execution units. That's a great way to keep the speed up and compatibility flawless, but there's a big power and area penalty paid for that. It's a great solution for desktops, and an acceptable solution for laptops, but in a cell phone it doesn't fly due to the extra battery draw.

  3. Robert Sneddon

    Google

    Google's the search engine and customer data harvesting business, right? Why in the thirty-two Hells of Carmack would they want to get into building their own hardware especially at the silicon level? They're planning to spend billions to save millions, as far as I can see.

    There are a bunch of startups building low-powered ARM servers for data centres, highly-compressed blocks of processing power eminently suitable (so they claim) for web service work where heavy number crunching and data IOPS aren't the priority. If Google want to diversify away from Intel, buying up one or more of those wannabees or simply placing an order with a lot of zeros after the first digit for their product would be less expensive and take less time to get the new machines spinning up and going online.

    It may be this is just a sign Google's got too much money and no idea what to do with it, a bit like Apple building their $5 billion mothership in Cupertino.

    1. Destroy All Monsters Silver badge
      Black Helicopters

      Re: Google

      Or they are going for the True AI route under cover of contextual ad serving.

    2. Nigel 11

      Re: Google

      Why in the thirty-two Hells of Carmack would they want to get into building their own hardware especially at the silicon level?

      Because they can see a way to reduce their energy consumption by doing something differently in silicon? Because of their massive scale, things may look different to Google compared to lesser companies.

      1. Robert Sneddon

        Re: Google

        Building their own silicon means building their own servers around that silicon and then restructuring their code to run on those servers. That's going to cost billions and take years. They've then got to provide power for these lower-power servers, and the savings from that reduced demand will be in the millions, maybe tens of millions -- the servers still need power, after all, hopefully less than before while still meeting Google's overall mission of delivering services and harvesting data. I don't see the payback being worth the upfront cost.

        If they're that worried about energy costs then why don't they spend money on building combined-cycle gas-turbine generating capacity and sell the surplus to the local grids where they operate? Yes, I know they're building solar plants in a few places, but they are intended to offset their grid consumption; they don't provide 24/7/365 operating power.

        1. James Hughes 1

          Re: Google @Robert Sneddon

          No, not billions (although they do have the money to spare).

          A custom ARM device with some dedicated HW blocks for Google-specific purposes would be no more costly (in fact less) than a mobile phone SoC. Let's say $100M or so, and that's probably over the top. Apple manage to design their own devices that are similar, and they do it for less than that IIRC.

          And Linux will simply run on it with relatively little porting required, as it does on current SoCs.

          This seems like a good idea for Google. Custom chip, custom board, custom acceleration that does exactly what they need. And low power. What's not to like?

    3. Jason Bloomberg Silver badge

      Re: Google

      If Google want to diversify away from Intel, buying up one or more of those wannabees or simply placing an order with a lot of zeros after the first digit for their product would be less expensive and take less time to get the new machines spinning up and going online.

      That's always been the age-old choice; pay someone to do it for you, buy up a business to become part of your own, or create a department to do it from scratch.

      Each has advantages and disadvantages, risks, costs and opportunities. It seems unlikely Google would not have considered and weighed up all the options they have. Perhaps cost and time are not the primary concerns?

    4. Gaius

      Re: Google

      I think you may not appreciate just how much power & cooling costs in a serious data centre. Every watt you put in, you pay for, then you pay double to take it out again. Savings here add up VERY quickly.
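
      To put rough numbers on that point: the figures below are illustrative assumptions (an industrial rate of about $0.07/kWh and a facility overhead factor of 2, i.e. one extra watt of cooling and distribution per watt of IT load), not anything from the article.

      ```python
      # Rough cost of one watt of server load for a year, under assumed numbers.
      it_load_w = 1.0            # one watt of IT load
      overhead_factor = 2.0      # assumed: pay again to cool/distribute it ("pay double")
      price_per_kwh = 0.07       # USD, assumed industrial rate
      hours_per_year = 24 * 365

      annual_cost = it_load_w * overhead_factor * hours_per_year * price_per_kwh / 1000
      print(f"${annual_cost:.2f} per watt-year")   # ~$1.23

      # Multiply by hundreds of thousands of servers at 100-200 W apiece and even a
      # few watts shaved per box adds up quickly, which is the poster's point.
      ```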

  4. Turtle

    Regarding The Picture Of Facebook's 'Cold Storage' Arrays

    It depresses me to see such advanced technology being used for such trivial purposes.

    1. frank ly
      Happy

      Re: Regarding The Picture Of Facebook's 'Cold Storage' Arrays

      I feel the same way about digital radios being used for listening to that trashy modern pop music.

  5. Anonymous Coward
    Anonymous Coward

    "the problem-filled world of ARM" (in the article)

    Excuse me?

    ARM is everywhere.

    x86 is on desktops and in servers, where Windows compatibility is a requirement. It isn't anywhere where Windows compatibility isn't required (unless Intel can sweeten the deal with $$$). Even Intel's favourite boxshifter had to be bribed to stay faithful to Intel, back in the day when AMD were a credible competitor.

    ARM sales and ARM installed base massively outnumber x86. They've just not been visible until relatively recently. Now, all of a sudden, they're not just visible, they're interesting outside geekworld.

    ARM has a future. Intel has a past.

    1. Anonymous Coward
      Anonymous Coward

      Re: "the problem-filled world of ARM" (in the article)

      You seem to be forgetting about the large number of x86 servers and desktops running Linux or BSD.

      There are probably far more x86 servers running Linux than Windows.

      1. Anonymous Coward
        Anonymous Coward

        Re: "the problem-filled world of ARM" (in the article)

        I am reasonably aware of the numbers of x86 servers running *nix. And in many cases they are x86 not because x86 is the best fit but because x86 is the most readily available, and it's most readily available because... they can also run Windows.

        In the vast majority of the volume market (e.g. anywhere the IT department don't make the design or purchasing decisions, because Windows is not relevant) x86 is near invisible. Just look around you, at things that have computers in them. Most of them aren't Windows boxes, most of them aren't x86 boxes.

        How many ARM chips do you think are in a half-decent x86 server anyway? They may not yet add up to as many $$$ of revenue or profit as a single Xeon does, and they probably never will.

        "There are probably far more x86 servers running Linux than Windows."

        I'd love that to be true but I've never seen any overall market share statistics coming anywhere near that. Have you?

        1. David Webb

          Re: "the problem-filled world of ARM" (in the article)

          I'd love that to be true but I've never seen any overall market share statistics coming anywhere near that. Have you?

          Dunno, how many servers run Apache compared to IIS? How many webservers are there? Seems like a lot. Sure, in the server room of a large company it'll probably be Windows, but for the web the servers tend to be Linux-based.

        2. c:\boot.ini

          @AC 16:08 Re: "the problem-filled world of ARM" (in the article)

          Ever heard of netcraft.com ?

          Check this out:

          http://uptime.netcraft.com/perf/reports/performance/Hosters?orderby=epercent&tn=november_2013

          You can get the full picture here, but you have to pay, from what I understand ...

          http://www.netcraft.com/internet-data-mining/hosting-provider-server-count/

          Since Microsoft is a sponsor, I am sure they do not wish to publish the info directly ... my guess is 80/20 *nix/Win ... just how often do you come across ASP pages these days? Very, very rarely ... so it might even be less than that.

          1. Anonymous Coward
            Anonymous Coward

            Re: @AC 16:08 "the problem-filled world of ARM" (in the article)

            "Ever heard of netcraft.com ?"

            Yes thank you. I even know to search for "netcraft web server survey" which leads to (for example)

            http://news.netcraft.com/archives/2013/11/01/november-2013-web-server-survey.html

            So MS reportedly have, depending on date, somewhere between 20% and 40% of the **internet facing web server** market.

            But the server market in general (and even the server market inside Google, as per article topic) actually includes more servers than internet facing web servers.

            I was hoping someone might be able to shed some light on those numbers outside Google, probably from IDC, Gartner, y'know. Maybe for inside Google too (though isn't the answer there that they're basically all Linux?). We'll see.

  6. frobnicate

    ARM is being pushed in the HPC space too: http://www.montblanc-project.eu/ And mainly for exactly the same reason: energy consumption. Megawatt per petaflop is too high.

  7. Tim Parker

    "..the chance of Google being able to actually develop a better general-purpose chip than Intel is slim."

    Although perhaps not as slim as the chance that Google actually want a general purpose chip.

    "it's likely the technology will be sub-par compared to Intel or AMD, in terms of raw performance, but the power bill may be low enough to motivate a move."

    Sub-par? Maybe, maybe not - you seem to be missing the point, which your 'contact' touched on, which is that using bought-in IP like ARM gives them great flexibility to create custom parts that work well for processing a particular type of data, e.g. scaling out the IO (as mentioned), memory hierarchy tweaks (also mentioned), attaching to dedicated hardware (e.g. custom ASICs for pattern matching, high-speed FPGAs) or whatever they want. With that degree of customization available it's entirely possible to match or surpass general purpose CPUs within a particular processing niche (I seem to remember a while back, the fastest 'computer' for one type of highly parallel algorithm was basically a bucket of dye). Throughput is not the key however - a combination of throughput, flexibility, specificity and power usage is probably a better metric, and the ability to have all this in-house is certainly not to be sniffed at.

    1. diodesign (Written by Reg staff) Silver badge

      "gives them a great flexibility to create customs parts that work well for processing a particular type of data"

      I absolutely agree - I've made that clearer in the article.

      C.

  8. Anonymous Coward
    Anonymous Coward

    The most obvious addition could be the use of a GPU for database style transactions. There are several articles already on the matter. Google can also put other features on a custom chip, like VP encoding to replace H.264. Better encryption and RNG are other possibilities. With the recent reports about the RNG in Intel chips, Google may not trust it.

    1. BlueGreen

      "...could be the use of a GPU for database style transactions"

      I must take that to be the most wonderfully subtle troll and upvote you, as the alternative is just too sad.

      1. James Hughes 1

        Re: "...could be the use of a GPU for database style transactions"

        Why bother with a GPU? No need in the data centre. You have complete control of what's in the chip - forget having to arse about with GPGPU code - just add custom HW aimed exactly at what you need to do. Lots of silicon area vacated by the GPU - that's lots of dedicated HW blocks for whatever DB transaction system you might want.

        You'd probably leave in the H264 encoder/decoder, and add VP8 so you have fast transcoding of video formats.

  9. John 98

    Spurring Intel on

    I imagine Intel will now work a lot harder at cutting power requirements - Google probably think a million or two spent on that is well worth it. If, as a bonus, their research suggests they can have custom chips which give them a hardware headstart over rivals, then that puts them in an even more desirable place. A place which, of course, they don't want anybody else reaching first.

    The cost of blue sky thinking for a year or two probably barely registers on their financial radar at present; the interesting (and expensive) decisions will come - but later.

    1. Charlie Clark Silver badge

      Re: Spurring Intel on

      Intel already has done lots of work to get power consumption down. It's undone by wanting to provide x86 compatibility.

      ARM's big advantage is that you can have only the silicon you need, whether that's general processing (similar to x86), encryption, I/O, or whatever. Negaflops* mean Negawatts of power needed.

      * I think I just made this up but I'll defer to anyone.

      1. John 62

        Re: Spurring Intel on

        x86 compatibility, while not negligible, is becoming more and more irrelevant in a CPU's power and performance characteristics. Smaller, more efficient transistors mean that the x86 translation layer will take up proportionately less space and use proportionately less power as CPUs and overall systems become more capable.

        For the sake of argument, let's say that the x86 translation layer takes 100,000 (10^5) transistors. In a processor die with 10,000,000 (10^7) transistors, that's significant. In a processor die of the same size with 1,000,000,000 (10^9) transistors, the cost is much less significant.
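
        Writing that arithmetic out (the 10^5 figure is the poster's own for-the-sake-of-argument number, not a measured one):

        ```python
        # Share of the die taken by a fixed-size x86 decode/translation block as the
        # total transistor budget grows (numbers from the comment above).
        translation_layer = 10**5

        for die_budget in (10**7, 10**9):
            share = translation_layer / die_budget
            print(f"{die_budget:.0e}-transistor die: translation overhead = {share:.2%}")

        # 1e+07-transistor die: translation overhead = 1.00%
        # 1e+09-transistor die: translation overhead = 0.01%
        ```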

        The same goes for cache: Intel can generally afford a larger on-die cache because its transistors are smaller and use less power.

        What will be more significant is your software toolchain. Is it optimised for x86 or ARM? Are your 3rd-party libraries available for x86 or ARM? Are your peripheral drivers available for x86 or ARM?

        Plus, there's the question of your overall system. Does using Intel or ARM limit your choice of GPU? Does it limit your choice of WiFi chip? etc...

        Then there's the real performance gain Intel has over most ARM implementations on the market: Out of Order Execution. The thing is OoOE takes up large amounts of silicon (and hence uses lots of power). The reason why Intel's Atom didn't initially have OoOE was that it would use too much power. ARM has recently started to add OoOE. Intel has a few design wins in mobile devices and is pushing hard there, but it doesn't want ARM gaining a foothold in the server space, where they had been facing stiff competition from AMD for a while.

        1. Anonymous Coward
          Anonymous Coward

          Re: Spurring Intel on

          "Intel has a few design wins in mobile devices "

          Shouldn't that read "Intel has funded builders to bring a few reference designs to market" ?

          "Intel doesn't want ARM gaining a foothold in the server space"

          Indeed not, but why would server purchasers care what Intel think, if someone else's product is an equal or better fit? Surely Intel won't risk being caught again offering server vendors financial incentives to stay loyal to the Intel camp? So what matters is, rightly, what the purchasers want. Sometimes prospective purchasers just want a bit of sensible competition, so the incumbents can't get away with too many ripoffs (hello Xeon).

          "Are your libraries/peripheral drivers available for x86 or ARM?"

          "Does using Intel or ARM limit your choice of GPU? Does it limit your choice of WiFi chip? etc."

          Well I don't know about GPUs in servers, but for almost anything else, why would choice of libraries/peripherals influence the choice of x86 vs ARM, if there are open source libraries/drivers?

          x86. The chip in your Windows box.

  10. Anonymous Coward
    Anonymous Coward

    They want their own chip and they don't want Bing (and by extension, most likely everyone else) to have access to it. Now I think that's a rather odd attitude to take when you're supposedly the greatest company for the use of FOSS, a company that makes a vast amount of its money out of Android, a (depending upon who you talk to) version of Linux.

    If this were to happen, I can't see any other Linux developers condoning it. Were they to start rolling out Chromebooks with the same processor, how would the Linux community react? They would be in the position that they're in with a few other hardware developers, but with a company that supposedly relies upon Linux.

    Custom hardware seems to me a retrograde step, if it's used anywhere outside of their private datacentres, and even then it's not particularly something I relish.

  11. Charlie Clark Silver badge

    Who'll be first?

    Now that AMD swings both ways, it might well be one of the first to be able to offer ARM-64 at 14nm in volume (via Global Foundries). Apparently, a lot of companies are looking to go straight to 14nm because of energy loss problems at 20nm.

    Intel's server business is safe for a while as all these custom chips only benefit customers with very specific needs: thousands or even millions of HTTP servers and associated caches, where the margins are going to stay wafer thin. Only once ARM-64 gets up to the general-purpose grunt levels of x86-64 will there be any chance of the mass market turning away from Intel, and then only really on price. Of course, the more early adopters of ARM-64 there are, the faster any particular software stack is likely to be available for it. Will Microsoft jump in and offer turnkey Exchange servers based on ARM?

    1. Uncle Ron

      Re: Who'll be first?

      My observation is, anything that runs Exchange will be a winner. Period.

  12. All names Taken
    Paris Hilton

    Stage managed problem ...

    with stage managed solution?

    Maybe it is my personal weakness but this looks too much like a 'Google buys ARM' solution followed a year or two later by 'Intel beats ARM' in a new business rollout for Google.

    Assuming the above is mistaken, one has to wonder about MS & Intel vs Apple & XXX vs ARM & Google ...and ponder?

    (Sell Apple/MS/Intel shares and buy ARM and Google?)

  13. DocJD

    Apple

    Isn't Apple's newest chip already a 64-bit low-power ARM?

    What if Google came out with a new version of Android for phones and tablets that ran only on the Google 64-bit ARM? It would let Android compete with iOS at 64-bit, and guarantee their software would work properly. It would also earn Google some income in parallel with giving Android away free.

  14. Lars Silver badge
    Happy

    Upgrading or downgrading

    It seems to me that Intel is in the process of downgrading x86 to a slimmer processor while ARM is trying to upgrade to a more efficient processor without getting too fat. Who knows, perhaps it's easier to build up than to tear down. I think Google has made a strong decision to have highly qualified, in-house processor design skill. With Intel and AMD rather "closed box" and ARM more "open", their interest in ARM does not surprise me. Come to think of it, is the so-called RISC Intel still variable-length instruction?

  15. seansaysthis

    I'm surprised Google and FB haven't looked at this earlier. They have the scale to investigate this whereas many others don't. However, there's a long way from "can we" to "we are". Interesting times, but Google and FB have strong partnerships with Intel so don't be surprised if this gets dropped.

  16. Mikel

    OK look

    You start with 8-core 64-bit ARM silicon with a nice GPGPU, and layer into the SoC 32GB or 64GB of LPDDR4. Lead length for the RAM is 0.2mm so go ahead and use a wide low-latency memory bus and reduce the cache. Use the BGA for power and FDR InfiniBand only. Put 4 of these and an InfiniBand mesh ASIC on each side of a high-profile DIMM-sized fin. Six fins and more network ASICs on each side of a daughter card, 20 daughter cards in a 2U chassis. Mix in some magic clustering software, and it's all sorted. 7,680 cores per RU, 307,200 per rack, same power and cooling.
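
    A quick back-of-the-envelope check of those figures, taking the post's numbers at face value and assuming a 40U-usable rack (the rack height is an assumption, not the poster's):

    ```python
    # Core-count arithmetic for the layout described above.
    cores_per_soc  = 8
    socs_per_fin   = 4 * 2     # 4 SoCs per side, 2 sides per fin
    fins_per_card  = 6 * 2     # 6 fins per side, 2 sides per daughter card
    cards_per_2u   = 20
    usable_u       = 40        # assumed usable rack units

    cores_per_2u   = cores_per_soc * socs_per_fin * fins_per_card * cards_per_2u
    cores_per_ru   = cores_per_2u // 2
    cores_per_rack = cores_per_ru * usable_u

    print(cores_per_ru, cores_per_rack)   # 7680 307200 -- matches the post
    ```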

  17. blondie101

    pure powerplay

    According to Cringely (http://www.cringely.com/2013/11/25/intel-wants-everyones-chip-maker/) maybe Intel will be making Google's chips.... at 14nm.... And not because Intel likes Google but because they have so much 14nm capacity and (maybe) not enough buyers. What will they do? The next few years will be very interesting.

    Oh, and Google wants this because they'd rather buy a design than have to buy the whole package from one vendor. Machiavelli's "divide and conquer"... in all respects.

  18. J.G.Harston Silver badge

    "shakeup" is a noun. You need a verb there, such as "shake up".

  19. pprotus

    Ya don't know, what ya don't know...

    One of the benefits of designing your own full-custom processor from scratch is that you end up having a good idea of what's inside. Intel (and AMD, in the past) have both admitted to the inclusion of circuitry that, with a little external hardware, permits the packaged processor to act as its own microprocessor emulator...with complete access to everything within a system. The design of said external hardware (and the software/firmware to make it all function) is a closely guarded secret. But given that both companies have occasionally hired 3rd-party assembly houses to fab the hardware, and contractors to write the software, it would be amazing if the NSA, among others, does not already possess copies of all variations of the HW, SW, and manuals that exist. The unanswered question is... Do Intel and/or AMD include enough circuitry and flash memory within a processor package to allow a processor with network access to communicate in-band and to download and execute tasks without using any external (to the package) system resources?

    Icon: If this were not an anonymous posting, I'd use Big Brother...since the idea of building snoop capability into every processor on earth would seem obvious to him.

    1. A Non e-mouse Silver badge

      Re: Ya don't know, what ya don't know...

      Intel (and AMD, in the past) have both admitted to the inclusion of circuitry that, with a little external hardware, permits the packaged processor to act as its own microprocessor emulator.

      Isn't this similar to JTAG?

    2. Bronek Kozicki
      Facepalm

      Re: Ya don't know, what ya don't know...

      oops, you forgot to click anonymous checkbox ...

  20. The last doughnut

    This is business

    That's why I come to the comments section - endless uninformed comment.

    The limiting factor with microprocessors is energy. It costs money to house, power and cool a server centre. Google have worked out they can build a better server centre more cost-effectively if they move to an ARM-based server processor architecture. Yes, they have the cash and organizational ability to do it. No, they probably won't add any special sauce to the design - they are better off with a general-purpose machine that they can put their clever software on.

    1. Robert Sneddon

      Re: This is business

      Electricity is cheap if you buy it in industrial quantities on long-term contracts. Spending a billion dollars on developing and building custom processing engines and racking them in data centres to save fifty million dollars a year on power and cooling isn't very cost-effective.

      Remember that a server's power drain isn't just the CPU, and a 45W TDP Intel CPU with big caches, fast I/O and a single set of RAM, support chips etc. will be a lot more capable than a few 5W ARM devices, each of which will need its own support chips, RAM, drivers etc. I'm not sure the actual power savings are there to be had given the computational load the server array needs to meet. It's fun to piss on Intel for being a dinosaur staying focussed on the x86 architecture but they've spent a lot of time and effort getting power consumption down over the past few years while maintaining the processing capabilities. ARM's approach has been to try and improve their capabilities without letting their power consumption grow too fast but tablets are evidence this is a problem for them -- the latest iPads have three times the battery capacity of the original iPad 1 to provide a similar runtime.

      1. The last doughnut

        Re: This is business

        Okay it may cost perhaps 50 million to develop the processor and a similar amount for the remaining hardware. But the lower power design means you can pack more of them in the same space (chip, board, box, rack, centre) so it becomes cheaper to operate. With the economies of scale they have it just works out better value for them.

        Why else would they do it?

      2. Anonymous Coward
        Anonymous Coward

        Re: This is business

        "a server's power drain isn't just the CPU and a 45W TDP Intel CPU with big caches and fast I/O and a single set of RAM, support chips etc. will be a lot more capable than few 5W ARM devices while they will need their own support chips, RAM drivers etc. "

        A very fair point. However, just suppose for a moment that (as others have already suggested) Google may have enough pull in this market to make a serious SoC with e.g. serious quantities of RAM on the same die, enough to process a good selection of Google workloads *without* always needing space or electricity for external RAM. Is that even possible? If it is possible for a sensible price, it could be quite an interesting prospect for Google. And/or AWS or similar cloudy stuff.

        " the latest iPads have three times the battery capacity of the original iPad 1 to provide a similar runtime."

        Not being an Apple expert, how much of that is down to SoC power drain and how much is down to stuff like retina displays etc?

  21. Anonymous Coward
    Anonymous Coward

    Robot brain

    Could this be related to the recent spate of robotics companies being purchased...

    What sort of computing would a Boston Dynamics quadruped require?

  22. CheesyTheClown

    Legacy 16-bit support?

    Uhhh... Since when did Intel put 16-bit support into the core again?

    Also, ARM has its fair share of legacy crap as well.

    Also, to be fair, ARM will work just fine in the server. Whether there is any power to be saved is a different question. Intel hasn't been sitting around and just letting ARM catch up. While ARM has been getting faster, Intel has been lowering power consumption... Hell, my Surface Pro 2 gets 7 hours on a charge and it's fast enough to emulate 200 Cisco routers.

    I think you'll find the greatest advantage of ARM is the ability to also run big-endian. There are billions of cycles to be saved by using big-endian. Most compilers don't optimize endian translation. Most internet apps perform a massive number of operations in big-endian. AVX is a bit of a mess in endian-related tasks too.
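
    For illustration, here's the kind of translation being referred to, sketched in Python (the field values are invented):

    ```python
    import struct

    # Two 32-bit fields as they arrive off the wire, in network (big-endian) order.
    packet = struct.pack("!II", 0x01020304, 0xDEADBEEF)

    # Before a little-endian x86 core can use them, each field has to be byte-swapped;
    # struct's "!" format hides the swap, but the cycles are still spent on every field.
    a, b = struct.unpack("!II", packet)
    assert (a, b) == (0x01020304, 0xDEADBEEF)

    # A core running big-endian (as ARM can) could load the same words directly,
    # with no swap at all -- which is the saving being claimed here.
    ```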

    Intel is also closer, in reality, to performing 16x16 matrix transpositions than ARM, which matters for large-scale video and image processing (a very, very common task for companies like Facebook and Google). If Intel implemented an AVX instruction set able to function on columns as well as rows, ARM would have a really long way to go to catch up on power vs. performance.

  23. DeLummox

    Which type of server are you talking about!!!

    Gentlemen

    There are two types of servers out there: data and computing. Google needs very little processing power to look up a search query (DATA); it grabs the data and sends it to your computer. ARM is ideal for this: low computing power, low power consumption. Google is also talking about having Chromebooks that have their computing power offloaded to the web. For this you need computing power, which requires x86, either AMD or INTEL. INTEL if you need single-threaded power, AMD if you have multi-threaded applications. YES I said AMD. It's Intel for gaming and AMD for high-end multi-threaded video editing.

  24. 2cent

    How a bot this?

    Google does care about what chips it mixes and where.

    But all these robotic firm buy-outs tell me more energy is being lost in design and maintenance of buildings and installations requiring humans.
