Google Research: Three things that MUST BE DONE to save the data center of the future

One prominent member of Google Research is more concerned with the challenges of speedily answering queries from vast stores of data than he is with finding business intelligence hidden inside the complexities of that omnipresent buzzword, "big data". In fact, Google Fellow Luiz André Barroso, speaking at ISSCC on Sunday …

COMMENTS

This topic is closed for new posts.
  1. roselan

    little data, big time, macroseconds and the evil 99%...

    That's more what I work with. Google infra is my Pirelli calendar.

  2. Ken Hagan Gold badge

    "much more complex parsing"

    "He quickly pointed out that he wasn't talking only about Google's head-mounted device, but of voice queries in general, as well as those based upon what cameras see or sensors detect. All will require much more complex parsing than is now needed by mere typed commands and queries, and as more and more users join the online world, the problems of scaling up such services will grow by leaps and bounds."

    Nit-picking, perhaps, but *those* problems can be solved in an embarrassingly parallel fashion before the query reaches the core of the data centre.
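
    To make the nit concrete: each incoming utterance or image parses independently of every other, so the work fans out with no coordination between workers. A minimal Python sketch, with a hypothetical parse_utterance standing in for whatever speech/vision front end actually runs:

        # Each raw query is independent, so pre-parsing fans out with no
        # coordination between workers -- embarrassingly parallel.
        from concurrent.futures import ProcessPoolExecutor

        def parse_utterance(raw: str) -> str:
            # Hypothetical stand-in for real speech/vision parsing;
            # here it is just trivial text normalisation.
            return raw.strip().lower()

        def preparse_batch(raw_queries: list[str]) -> list[str]:
            # No shared state between queries: add workers and
            # throughput scales, before anything hits the core.
            with ProcessPoolExecutor() as pool:
                return list(pool.map(parse_utterance, raw_queries))

        if __name__ == "__main__":
            print(preparse_batch(["  OK Glass, FIND COFFEE ", "weather tomorrow?"]))

    The hard, latency-sensitive part only starts once the normalised query reaches the shared data behind it.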

  3. Anonymous Coward

    Just off hand

    I'd say that googlasses will need to do some serious caching with time/space constraints, as well as logic like modern CPUs use to parallelise complex instructions. Mixing discrete symbol systems with formal physical models is non-trivial.
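
    Something like a cache bounded in both size and age is the sort of thing I mean; a rough Python sketch (illustrative only, not any real Glass component):

        import time
        from collections import OrderedDict

        class TTLCache:
            # Hypothetical cache with a space bound (max entries, LRU
            # eviction) and a time bound (per-entry TTL).
            def __init__(self, max_entries=128, ttl_seconds=30.0):
                self.max_entries = max_entries
                self.ttl = ttl_seconds
                self._store = OrderedDict()  # key -> (timestamp, value)

            def get(self, key):
                entry = self._store.get(key)
                if entry is None:
                    return None
                stamp, value = entry
                if time.monotonic() - stamp > self.ttl:  # time constraint
                    del self._store[key]
                    return None
                self._store.move_to_end(key)  # mark as recently used
                return value

            def put(self, key, value):
                self._store[key] = (time.monotonic(), value)
                self._store.move_to_end(key)
                while len(self._store) > self.max_entries:  # space constraint
                    self._store.popitem(last=False)  # evict least recently used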

    Fubar the Hack

    1. Ken Hagan Gold badge

      Re: Just off hand

      You seem to be channeling amanfrommars.

      "Fubar the Hack"

      You write that as though you were signing the comment, but you posted as Anonymous Coward. Are you messing with my head?

  4. Anonymous Coward

    Sucking at microseconds

    I'm not sure who the "we" are in the "we suck at microseconds" quote, but there are parts of the IT industry that excel at dealing with microsecond latencies, particularly in the financial technology space [insert standard anti-Wall St statement below].

    I'm not even talking about the ultra-low-latency/high-frequency trading crowd, by the way; these days that's all in the nanosecond range. The bread and butter of the capital markets for many years has been microsecond latency reduction.
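
    For a sense of the scales being traded in, a throwaway Python sketch that times a no-op call at nanosecond resolution (illustrative only, nothing like a real trading stack):

        # Time a no-op call and report the mean cost in ns and us --
        # a single Python function call already costs tens of nanoseconds.
        import time

        def mean_call_cost_ns(fn, repeats=1_000_000):
            start = time.perf_counter_ns()
            for _ in range(repeats):
                fn()
            return (time.perf_counter_ns() - start) / repeats

        ns = mean_call_cost_ns(lambda: None)
        print(f"{ns:.0f} ns ({ns / 1000:.3f} us) per call")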

  5. supersurfer

    Root cause: commodity sprawl and the Linux false panacea

    If we take 100 steps back and look at where the "commodity" industry has pushed our data centers, we have to recognize that in moving from a few supercomputers to the new trend of scaling out across several hundred blades or "junk" servers, the amount of inter-system communication grows by leaps and bounds, and the latencies become catastrophic.

    As you move processing away from the initiating host CPU, you add roughly an order of magnitude of latency at each hop: from on-CPU cache, to system RAM, out to the network, and on to storage. Multiply that by all the handshaking and protocol hops involved in going from within one system out to a dozen or a hundred, and it becomes a tragic exercise.
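
    Those hop costs line up with the rough latency figures popularised by Google's own Jeff Dean ("numbers every programmer should know"). A small sketch printing the hierarchy; the values are approximate and vary by hardware generation, the relative scale is the point:

        # Order-of-magnitude latency hierarchy (approximate figures
        # from Jeff Dean's well-known list).
        LATENCY_NS = [
            ("L1 cache reference",                    0.5),
            ("Main memory reference",               100.0),
            ("Round trip within same datacenter", 500_000.0),
            ("Disk seek",                      10_000_000.0),
            ("Packet CA->Netherlands->CA",    150_000_000.0),
        ]

        for step, ns in LATENCY_NS:
            print(f"{step:36s} {ns:>15,.1f} ns")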

    Hence the current "commodity" approach of custom-building an ugly sprawl of hundreds of systems from dozens of separate third-party vendor products quickly becomes a nightmare of management and capacity planning, as the Linux world is waking up to. You end up custom-integrating a dozen separate products as a one-off, with endless testing and reconfiguration/tuning cycles that no single vendor can or will support. Then each of those products and underlying subsystems needs its own maintenance updates, patches and firmware, multiplied by 200 or 500 or 1000 systems, all using separate config and monitoring tools. Your eyes are quickly opened, and your hair falls out faster.

    Couple that with the fallacy that commodity is cheaper (ha!). A year in, once you have renewed your dozen support contracts, dozen separate software licenses, dozen per-seat/per-user agreements and dozen per-CPU-socket license agreements (your apps, your OS, your virtualization, your storage, your network, and so on), not to mention trained your staff on the dozen management/monitoring/maintenance tools used across your 200 or 500 or 1000 systems, your maintenance effort and your exposure to outage risk have also grown by an order of magnitude.

    My organization just went through this realization that Linux/x86 commodity is a false panacea, ending up with thousands of blades, many mission-critical production outages, and costs two to three times what a single-vendor solution provides. We moved one application environment off 400 HP x86 systems running Red Hat and VMware onto Oracle SuperClusters, cut our costs over three years (saving $3M!) and increased our performance 10x-100x: one stack of management/monitoring tools, one phone number for support, everything factory pre-integrated and certified. They own the CPU, the Solaris OS (hands down 100% better than Linux), Java and the Oracle database, and their virtualization and monitoring/management tools are all free with the support contract, versus VMware gouging us for almost a quarter of our software license costs!

