We can't go on like this for much longer, boffins cry to data centre designers

The basic equations are easy: as data centres grow they need more computing power, better networking, more electricity and more cooling – a combination which University of California San Diego researchers predict will need whole racks to be shrunk to chip size. Already, there are outstanding examples of what happens when a …

COMMENTS

This topic is closed for new posts.
  1. Kevin McMurtrie Silver badge

    Software

    It would be good to attack this from the software side too. Many analysis tasks have become too complex to implement in hand-crafted assembly, or hand-crafted anything. What happens instead is that many large, complicated frameworks are tied together with a relatively small amount of custom code. Each framework has a formal representation of its data inputs and outputs, each padded with protection against accidental misuse that would otherwise cause obvious data corruption. All of that formality and safety can add up to an enormous processing overhead. "Enterprise Edition" software is the classic example of nearly infinite inefficiency, but seemingly low-level tasks suffer too. What would be useful is a radical new generation of JIT compiler that can make extreme optimizations across an entire system: analyzing enormous codebases and emitting the minimal hardware instructions needed to produce the correct result. Given that an entire data center is available to perform the analysis, it could be feasible.
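
    As a toy illustration of the overhead being described (hypothetical classes, not taken from any real framework): each layer below re-validates and re-copies the same payload, and only an optimizer that can see across all three framework boundaries could prove the later checks and copies redundant and strip them out.

```java
// Hypothetical sketch: three "framework" layers, each defensively
// re-validating and re-copying the same payload. Individually each
// check is cheap and sensible; stacked per record across a data
// centre they add up. A JIT that could see the whole call chain
// could prove the later checks dead and elide the extra copies.
import java.util.Arrays;

public class LayeredOverhead {

    // Layer 1: transport framework - checks length, copies buffer.
    static byte[] transportReceive(byte[] wire) {
        if (wire == null || wire.length == 0)
            throw new IllegalArgumentException("empty frame");
        return Arrays.copyOf(wire, wire.length);   // defensive copy #1
    }

    // Layer 2: serialisation framework - same check, another copy.
    static byte[] deserialise(byte[] frame) {
        if (frame == null || frame.length == 0)
            throw new IllegalArgumentException("empty frame");
        return Arrays.copyOf(frame, frame.length); // defensive copy #2
    }

    // Layer 3: "business" layer - validates yet again before use.
    static int process(byte[] record) {
        if (record == null || record.length == 0)
            throw new IllegalArgumentException("empty record");
        int sum = 0;
        for (byte b : record) sum += b;            // the actual work
        return sum;
    }

    public static void main(String[] args) {
        byte[] wire = {1, 2, 3, 4};
        // Three validations and two full copies to do one small sum.
        System.out.println(process(deserialise(transportReceive(wire))));
    }
}
```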

    1. Anonymous Coward
      Anonymous Coward

      Re: Software

      Get used to it. Developers today can't be trusted with low-level hand-crafted code, and they surely can't be trusted with high-level hand-crafted code without extensive testing and training wheels left in place at run-time. We should be at the stage where code blocks are stock parts that we integrate to get a finished piece, instead of each developer having to reinvent the square wheel. That's the bill of goods that has been repeatedly sold since the '80s with OOP, and we're still not there yet. Sooo... to overcome the bloat, we have to shrink the distance and tweak up the compute-per-watt with SDN and all the other bells and whistles.

  2. Lusty
    Trollface

    Careful..

    The Fandroids will be along shortly to moan about recycling and why it would be madness to integrate CPU and switch on a chip since they won't be replaceable...

  3. Anonymous Coward
    Anonymous Coward

    Just send for ........

    Professor Frink and Professor Farnsworth. With those two eggheads on the case, Deep Thought will be up and running within the week.

  4. Pallas Athena

    Haven't we seen that?

    A large, rack-size computer with lots of processing power, lots of memory and lots of networking capacity all built in - kind of like ... a mainframe?

    1. Roland6 Silver badge

      Re: Haven't we seen that?

      One of the things a mainframe does is do away with all the cases and power supplies for each individual blade - things that consume power, create/retain heat and take up space.

      Hence, as a near-term step, I would expect to see a further reduction in the 'packaging' of server blades, with the intent of reducing each one to a single bare board that slots into a (standard?) backplane bus. I suspect that many of the major vendors will want to sell preconfigured 'racks'/mainframes.

      As for a rack on a chip, well, for the moment that is in the realms of science fiction. But I note the sales pitch for more funding: "Fainman and Porter call for expanded research into nanophotonics to greatly increase the scale of optical comms that can be integrated on-chip" - which could deliver something useful in many devices.

  5. Interceptor

    But you can have these things - tiny data centers, I mean. With speed increases and wear-leveling improvements, you can create petabyte- or exabyte-sized NAS devices, multiples of which you can cram into a shoebox. As for the CPUs driving them, how many cores do we have on various ARM SoCs now? 16? 32? More than enough computing power. Depending on my needs, I could fill a standard rack with enough "stuff" to supplant some of the basketball-court-sized (and larger) data centers I worked in during the 90s, and have room left over to virtualize the whole thing again, and have an entire backup/fail-over segment - and have the PDU tucked down in the bottom (see the rough sketch at the end of this post).

    I reviewed a DC-powered server for Rackable back in the dim days of the early 2000s that ran much, much cooler than an equivalent AC-powered unit (of course, if you're wired for AC already, converting is an expense right there...)

    But the point still stands: big iron needn't be big, and it needn't be hot and power-hungry.
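
    A rough back-of-envelope sketch of the claim above, using purely assumed figures (a 42U rack of 1U nodes, each with dual 32-core ARM SoCs and 100 TB of flash, a few U reserved for switches and the PDU); none of these numbers come from the article or the post, they are only illustrative:

```java
// Back-of-envelope: what fits in one 42U rack under assumed figures.
// All numbers below are illustrative assumptions, not measurements.
public class RackNapkin {
    public static void main(String[] args) {
        int rackUnits     = 42;   // standard rack height
        int reservedUnits = 4;    // assumed: switches + PDU + patch panel
        int nodeUnits     = rackUnits - reservedUnits;   // 1U nodes

        int coresPerNode  = 2 * 32;    // assumed dual 32-core ARM SoCs
        double tbPerNode  = 100.0;     // assumed flash per 1U node

        int totalCores    = nodeUnits * coresPerNode;
        double totalPB    = nodeUnits * tbPerNode / 1000.0;

        System.out.printf("Nodes: %d, cores: %d, storage: %.1f PB%n",
                nodeUnits, totalCores, totalPB);
        // ~38 nodes, ~2,432 cores, ~3.8 PB in one rack - comfortably
        // more compute and storage than a 1990s machine room held.
    }
}
```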

  6. Anonymous Coward
    Anonymous Coward

    Some like it hot

    I'm probably missing something obvious here, but why do companies build huge data centres in hot places? The company I work for has its main servers in Arizona - wouldn't Alaska be better? They'd save huge amounts of money on cooling.

    1. Roo

      Re: Some like it hot

      "I'm probably missing something obvious here,"

      Yeah, lots of power at low cost with multiple sources.

      1. Interceptor

        Re: Some like it hot

        Also: weather stability. Data centers where there are lots of earthquakes, snowstorms or hurricanes are suboptimal. I realize there are data centers in the northeast, west, and southeast, but if I had to pick a destination, one of the "desert states" wouldn't be far off the mark.

      2. Roland6 Silver badge

        Re: Some like it hot

        >Yeah, lots of power at low cost with multiple sources.

        That will be Iceland then...

