Intel demos real-time code compression for die shrinkage, power saving

Intel researchers have developed a way to make the increasingly tiny processors needed to power the impending "Internet of Things" even tinier: compress the code running on them. "We compress the code, make it smaller, and save area and power of integrated on-die memory," Intel Labs senior researcher Sergey Kochuguev from ZAO …

COMMENTS

This topic is closed for new posts.
  1. mtp

    Compiler's job

    Surely that is a job for the compiler, not the CPU.

    1. smudge

      Re: Compiler's job

      "Surely that is a job for the compiler, not the CPU."

      Completely understand where you're coming from.

      But then:

      - all compilers for this processor would have to compress the code

      - or the processor would have to be able to distinguish between compressed and uncompressed code.

      If you were going to produce a compiler which output compressed code, then you'd either want to make that option switchable, or have it produce both compressed and uncompressed code so that you could use it for other processors.

      So - start with the proof of concept demonstrator, as they have done. Then do the other stuff later, if there is sufficient interest.

  2. Mad Chaz
    Linux

    It's not being handled by the CPU, it's handled by the dedicated memory controller. The reason it's better to have it done in hardware instead of by the compiler is that it makes for less work for the CPU, not more. The idea is that the CPU having to do less work means longer battery life.

    1. frank ly

      Executable code is produced by a compiler _before_ it is executed and often on a different machine to the one that uses the code. This is about something that improves code efficiency _while_ it is being executed on a target device.

      (Having a compiler optimise for a target architecture would be a separate consideration.)

    2. Gordan

      "The reason it's better to have it done in hardware instead of by the compiler is it makes for less work for the CPU,"

      How do you figure that? If both compression and decompression are done in hardware and the initial code is uncompressed, then the CPU has to burn power to compress the code in the first place, then decompress it just-in-time to execute it.

      If the compression is done by the compiler, you load the compressed code directly at run-time and only have to decompress it just-in-time to execute. By having the compiler do the compression once (and it can spend a lot more time optimizing the compression, since it doesn't have to be done in real time) you save at least half of the run-time work, probably more, since compressing is typically slower than decompressing.
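
      To make the asymmetry concrete, here is a minimal C sketch of that split, with a toy run-length codec standing in for whatever scheme Intel actually uses (the real codec isn't public here): the expensive compression pass runs once on the build machine, and the target only ever pays for the cheap, linear-time decompression.

        /* Toy illustration only: RLE stands in for the real codec. */
        #include <stdio.h>
        #include <stdint.h>
        #include <string.h>

        /* Offline step (build machine): pack runs into (count, byte) pairs.
         * Slow and thorough is fine here -- it runs once, at build time. */
        static size_t rle_compress(const uint8_t *in, size_t n, uint8_t *out)
        {
            size_t o = 0;
            for (size_t i = 0; i < n; ) {
                uint8_t b = in[i];
                size_t run = 1;
                while (i + run < n && in[i + run] == b && run < 255)
                    run++;
                out[o++] = (uint8_t)run;
                out[o++] = b;
                i += run;
            }
            return o;
        }

        /* Runtime step (target): the only work the device ever does.
         * One linear scan -- far cheaper than the compression side. */
        static size_t rle_decompress(const uint8_t *in, size_t n, uint8_t *out)
        {
            size_t o = 0;
            for (size_t i = 0; i + 1 < n; i += 2)
                for (uint8_t r = 0; r < in[i]; r++)
                    out[o++] = in[i + 1];
            return o;
        }

        int main(void)
        {
            uint8_t code[64];                    /* stand-in for a code image */
            memset(code, 0x90, sizeof code);     /* repetitive bytes pack well */
            uint8_t packed[130], unpacked[64];

            size_t c = rle_compress(code, sizeof code, packed);  /* offline */
            size_t d = rle_decompress(packed, c, unpacked);      /* on target */
            printf("image %zu bytes, stored %zu, restored %zu\n",
                   sizeof code, c, d);
            return 0;
        }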

  3. Matt Bucknall

    Thumb

    Sounds a bit like what ARM set out to achieve with their Thumb/Thumb2 instruction sets, but taking things one step further with proper data compression rather than just shrinking the instruction format.

    The article doesn't mention what architecture this is targeting, although as it's Intel I'm guessing x86, because I can't imagine them getting back into smaller embedded stuff, having dumped MCS-51, i960 and XScale a long time ago. If that's the case, it makes sense for the compression/decompression to be on-chip so that code compatibility can be maintained. If compression had to be done at compile time, compatibility would go out of the window.

    As for 'intelligent drapes, coffee machines, toothbrushes, baby monitors, stereos, alarm clocks, supermarket shelves, air-quality sensors, and more', surely that has been ARM's bread and butter since almost forever? It's hardly new for microcontrollers to be embedded in that kind of stuff and you don't need vast amounts of processing power or memory to achieve Internet connectivity.

  4. Hcobb
    FAIL

    It just shows how flabby the micro-ops stored in Intel's caches really are. Thank you x86!

  5. Anonymous Coward
    Anonymous Coward

    Begs the question

    If they are going to the trouble of decompressing code at runtime using hardware, why not go the extra step and use hardware to translate high-level byte codes into machine language on the fly? Theoretically that should enable a much greater level of compression.
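
    For a feel of why that packs tighter, here's a minimal C sketch of a made-up stack machine -- the opcodes are invented for illustration, nothing to do with any real bytecode -- where a single byte like OP_ADD stands in for the several bytes of native code needed to pop, add and push.

      #include <stdio.h>
      #include <stdint.h>

      /* Invented one-byte opcodes for a toy stack machine. */
      enum { OP_PUSH, OP_ADD, OP_MUL, OP_PRINT, OP_HALT };

      static void run(const uint8_t *code)
      {
          int32_t stack[16];
          int sp = 0;
          for (size_t pc = 0; ; ) {
              switch (code[pc++]) {
              case OP_PUSH:  stack[sp++] = (int8_t)code[pc++]; break;
              case OP_ADD:   sp--; stack[sp - 1] += stack[sp]; break;
              case OP_MUL:   sp--; stack[sp - 1] *= stack[sp]; break;
              case OP_PRINT: printf("%d\n", stack[sp - 1]);    break;
              case OP_HALT:  return;
              }
          }
      }

      int main(void)
      {
          /* (2 + 3) * 4 in ten bytes of bytecode; compiled natively the
           * same expression would typically take several times that. */
          const uint8_t prog[] = {
              OP_PUSH, 2, OP_PUSH, 3, OP_ADD,
              OP_PUSH, 4, OP_MUL, OP_PRINT, OP_HALT
          };
          run(prog);   /* prints 20 */
          return 0;
      }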

    1. Gordan

      Re: Begs the question

      A compiler in hardware? Wasn't that one of the many too-smart-by-half ideas that Java was supposed to bring?

      1. Valhrafn

        Re: Begs the question

        It is my understanding that they do that too. Flabby x86 is expanded into multiple uops that are scheduled independently...

        On another note, 20,000 gates? I've seen entire 32-bit cores in less than that!

  6. Nuno trancoso

    Hmmm

    If this was some form of, say, UPX, then you'd have a small benefit in storage space saved, but your memory "energy footprint" would be the same, as the executable would still take the same space once uncompressed.

    But... if the exec is compressed once as it moves from storage to RAM, and then dynamically uncompressed/compressed as the CPU fetches it (and maybe changes it), you'd have a smaller memory "energy footprint" than the original, assuming the code compresses enough that the lower memory energy usage outweighs the energy the compression/decompression unit uses.
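
    As a sketch of that second mode -- assuming fixed-size code blocks and an xor "codec" invented purely as a placeholder for a real decoder -- RAM holds the compressed image plus one live block, and a block is only expanded when the CPU actually fetches into it:

      #include <stdio.h>
      #include <stdint.h>

      #define BLOCK   16
      #define NBLOCKS 4

      /* "Storage": one record per code block. A real scheme would hold
       * variable-length compressed records plus an index. */
      static uint8_t storage[NBLOCKS][BLOCK];

      static uint8_t live[BLOCK];    /* the one decompressed block in RAM */
      static int     live_idx = -1;  /* which block is currently resident */

      /* Placeholder "decompression": xor with a key. Real hardware would
       * run an actual decoder here -- this is the unit's energy cost. */
      static void decode_block(int idx)
      {
          for (int i = 0; i < BLOCK; i++)
              live[i] = storage[idx][i] ^ 0x5A;
          live_idx = idx;
      }

      /* CPU-side fetch: transparent to the caller, decodes on a miss. */
      static uint8_t fetch(uint32_t addr)
      {
          int idx = addr / BLOCK;
          if (idx != live_idx)
              decode_block(idx);
          return live[addr % BLOCK];
      }

      int main(void)
      {
          /* Fake an image whose decoded bytes are simply 0..63. */
          for (int b = 0; b < NBLOCKS; b++)
              for (int i = 0; i < BLOCK; i++)
                  storage[b][i] = (uint8_t)(b * BLOCK + i) ^ 0x5A;

          printf("byte at 0x21 = %d\n", fetch(0x21));  /* decodes block 2 */
          printf("byte at 0x22 = %d\n", fetch(0x22));  /* hit, no decode */
          return 0;
      }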

    As for blaming compilers, well, there are many parts of code a compiler simply can't take a guess at rewriting/optimizing. For example, code that might never execute but that the compiler can't ignore because it might execute sometimes. A unit such as this actually works in that case because it doesn't need to make assumptions; it just packs/unpacks as needed.

  7. Craig 28

    Let's just hope it's not another incident like the heat factory that was Hyper-Threading on the Pentium 4. I still shudder at the memory of my sister's Pentium D laptop that was basically two P4 cores bolted together into one chip.

  8. James Hughes 1

    Did anyone actually read the article before commenting?

    This seems to be a decompression unit between code storage and the CPU, completely transparent to the CPU: it asks for memory, it gets it, not realising that it was grabbed from a compressed image and decompressed on the fly. This means a smaller memory requirement (and memory is the biggest footprint on these chips) and reduced power to run the memory. Taking into account the power needed to decompress, there is still a net gain in power requirements, but a 5% loss in performance. According to Intel.

    It's nothing to do with the compiler. It's nothing to do with the architecture of the instruction set.

    As for the compiler in HW, look up ARM Jazelle - it might be of relevance.

  9. Rogue Jedi

    50 billion connected devices?

    so that is about 7 devices per person. At present I own 2 (phone and desktop), so I suppose this means they are expecting people in rich countries to own internet-enabled versions of each of the following:

    1 ) desktop

    2 ) laptop

    3 ) tablet

    4 ) phone

    5 ) car

    6 ) tv

    7 ) microwave

    8 ) refrigerator

    9 ) freezer

    10) washing machine

    11) tumble drier

    12) oven

    Am I missing anything?

    1. Linh Pham

      Re: 50 billion connected devices?

      DVRs and game consoles would be big ones, be it an Xbox, Wii or PS3, or something like a Nintendo 3DS, PSP or PS Vita. Smart electrical meters, HVAC controls (residential or commercial), weather-aware sprinkler systems, EV charging stations and home security systems would also add to the count.

  10. BlueGreen

    I don't get it

    "We compress the code [...]" but code is typically far less than data in a conventional app. Ok, they say it is for embedded stuff in which case there is likely to be little code anyway; not your full app stack on a full fat OS; so I can't see the value.

    If it is for Rogue Jedi's tumble drier/microwave/car, it makes no sense as these are heavy energy users anyway. If it's for portable devices such as phones, there will be a lot more data than code. Still can't understand it.

    First question, then: what niche is this technology targeting?

    Might help more if the data was compressed, but that's been done already I believe, in which case this might just be a land-grab for maybe-one-day-useful patent IP. That's the only thing that makes sense to me so far - any wiser suggestions welcome.

    1. BlueGreen

      Re: I don't get it

      And then he reads the last para (sorry for being dumb): "intelligent drapes, coffee machines, toothbrushes, baby monitors, stereos, alarm clocks, supermarket shelves, air-quality sensors"

      Well...

      intelligent drapes, intelligent toothbrushes? Just die.

      Coffee machines - heavy energy user anyway.

      Baby monitors - a quick browse suggests some are mains-powered, and the rest have wifi & speakers, which are likely to use more power than the CPU.

      Alarm clocks? Oh please, WTF do I need to put intelligence in a clock for?

      supermarket shelves, air-quality sensors - dunno.

    2. Anonymous Coward
      Anonymous Coward

      Re: I don't get it

      "what niche is this technology targeting?"

      It's targeted at spamming the gullible media, trying to convince them and their readers that Intel still have some relevance outside the increasingly irrelevant world of x86.

