MIT boffins devise faster Fast Fourier transform

Researchers at MIT have published a paper detailing a new approach to Fast Fourier transforms (FFT) that could increase the speed of image and sound processing tenfold. FFT has been in use since the 1960s to process audio and image files by isolating key frequencies from an irregular original signal and then assembling them in …

COMMENTS

  1. Michael H.F. Wilkinson Silver badge
    Boffin

    So it is not an FFT, it is an approximation

    The Fourier transform allows exact reconstruction of the image, not an approximation. What they describe is a variation on the theme of lossy image compression exemplified by JPEG (which uses the discrete cosine transform) or wavelet compression. The idea is the same: decompose the image into some basis functions and discard the unimportant ones.

    I do not doubt the MIT guys may be doing this in a clever way, but FFT it is not.
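
    In code, the decompose-and-discard idea looks roughly like this (a minimal sketch, assuming numpy and scipy are to hand; the 8x8 block and the 0.5 threshold are arbitrary choices for illustration):

        import numpy as np
        from scipy.fft import dctn, idctn

        rng = np.random.default_rng(0)
        block = rng.standard_normal((8, 8))          # stand-in for an 8x8 image block

        coeffs = dctn(block, norm="ortho")           # decompose into cosine basis functions
        mask = np.abs(coeffs) >= 0.5                 # keep only the "important" (large) ones
        approx = idctn(coeffs * mask, norm="ortho")  # reconstruct from the survivors

        print("kept", int(mask.sum()), "of 64 coefficients;",
              "max reconstruction error:", float(np.abs(block - approx).max()))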

    1. Anonymous Coward
      Anonymous Coward

      But...

      "The idea is the same: decompose the image into some basis function and discard unimportant ones."

      And which are the unimportant ones? The ones with a low signal energy, which are precisely the ones this algorithm will ignore.

      So yes, this IS important, for exactly the reason that this will speed up the most common usage of FFTs, which is identifying in the frequency domain what bits of a signal matter and what bits don't.

      And given that the next-generation modulation schemes after OFDM (which, as N. E. Fule knows, is the heart of LTE among others) will be using sparse FFTs to minimize the crest factor of the signal, this could make those next-generation systems take less computational power (and thus less actual power) as well as making them a whole lot friendlier to the RF system, since the reduced crest factor needs less transmitter overhead.
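
      For reference, the crest factor (PAPR) in question is just peak power over mean power. A toy illustration for an OFDM-like signal (a sketch assuming numpy; the QPSK constellation and 1024 subcarriers are arbitrary choices):

          import numpy as np

          rng = np.random.default_rng(1)
          symbols = rng.choice([1+1j, 1-1j, -1+1j, -1-1j], size=1024)  # QPSK on 1024 subcarriers
          signal = np.fft.ifft(symbols)                                # OFDM time-domain waveform

          papr = np.max(np.abs(signal)**2) / np.mean(np.abs(signal)**2)
          print("crest factor:", 10 * np.log10(papr), "dB")            # typically around 10 dB here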

      1. Michael H.F. Wilkinson Silver badge
        Boffin

        Again, there may be something important in there; I have yet to study the paper in detail. However, there is a ton of literature on removing low-energy components (we have a few papers on other compression techniques, so I have been keeping up with much of it), so that in itself is not new. I also question whether the global sine/cosine basis used by Fourier is ideal for this work. More localized basis functions (local DCT/wavelets/curvelets/insert your favourite over-complete dictionary of functions here) are often used. This is why the MIT paper uses a local transform rather than a global one, as I understand it from the Reg report.

        So again, it is not really an FFT, but it may be another kind of lossy compression technique. If they compress or analyse better than others, kudos to them!

        1. Paul Shirley

          @Michael

          You really need to read the paper, the Reg's description has no useful information about how it works or in fact what it does.

          To the extent I understand the paper, it appears to filter its frequency bins in the time domain. By assuming each bin is dominated by (or, in the special case, contains only) a single frequency, it's trivially easy to sample the remaining sine wave to deduce its exact frequency and phase. The filter design seems to be about splitting the bins cleanly, so that it's safe to process the signal in each bin on its own rather than the entire source sample.
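
          The sample-and-deduce step for an isolated tone is simple enough to sketch (this is not the paper's filter bank, just the core trick, assuming a clean complex tone and numpy):

              import numpy as np

              A, freq, phase = 2.0, 0.1234, 0.7          # ground truth: radians/sample, radians
              n = np.arange(4)
              x = A * np.exp(1j * (freq * n + phase))    # the lone sinusoid left in a bin

              est_freq = np.angle(x[1] * np.conj(x[0]))  # phase advance between adjacent samples
              est_phase = np.angle(x[0])
              print(est_freq, est_phase)                 # recovers freq and phase exactly (no noise)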

          One worry is that the algorithm is probabilistic: the quoted times are *expected runtimes*, and they don't make clear what the worst case is or how small a sample would provoke it. For audio encoding I can see that an adaptive search seeded by the previous frame would fix this, and it might be useful in real life, with masking effects allowing quite ruthless frequency decimation.

          For video encoding they misrepresent the nature of DCT block-based coding. We don't pick a number of Fourier addends up front and throw away all the others (the 57-of-64 argument); we compute them all *then* let the arithmetic decimate them dynamically and automatically. With no search being made, an inherently messier signal, and a lower ability to simply discard components compared to audio, it looks like a bad fit. I'd say that would scale to non-block-based encodings as well.
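
          The "compute them all, *then* decimate" point in miniature, JPEG-style (numpy/scipy assumed; the flat quantisation step of 16 stands in for a real quantisation matrix):

              import numpy as np
              from scipy.fft import dctn

              rng = np.random.default_rng(2)
              block = np.round(rng.standard_normal((8, 8)) * 20)  # stand-in 8x8 image block

              coeffs = dctn(block, norm="ortho")  # all 64 coefficients get computed...
              quantised = np.round(coeffs / 16)   # ...then quantisation zeroes the small ones
              print("survivors:", np.count_nonzero(quantised), "of 64")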

      2. Anonymous Coward
        Anonymous Coward

        @David D. Hagood

        Michael's point is that this is being spun as an improved FFT.

        It isn't an FFT. It may be far more useful for many things, but it is being mis-labeled.

        Tell you what, I'll sell you a motorcar. It's got four seats and luggage space, and doesn't need any fuel to go - very green - but you have to peddle it.

        Useful, and it may be chosen in a hurry based on the name and a couple of quick spec points, but it's not necessarily what was wanted.

        1. Anonymous Coward
          Anonymous Coward

          @AC 09.38

          I agree. If I had to pedal it I would peddle it too!

  2. Voland's right hand Silver badge
    Devil

    Interesting...

    The most interesting bit here is that this is a naturally parallel algorithm.

    1. Anonymous Coward
      Anonymous Coward

      FPGA or GPU DSP anyone?

      Indeed… one ideally suited to implementation on FPGAs or GPUs.

  3. HMB

    Time to dig those CDs out again so I can rerip them in Even More Advanced Audio Codec.

    1. Field Marshal Von Krakenfart
      Joke

      "decompose the image into some basis function and discard unimportant ones."

      I just tried that on my X-Factor greatest hits, I managed to compress the whole CD into a file of 0 bytes.

      1. BorkedAgain
        Thumb Up

        @Krankenfart

        Genial!

        Made me laugh, almost literally out loud... :)

  4. Andus McCoatover
    Windows

    @So it is not an FFT, it is an approximation. FFT is an approximation.

    Luv-a-duck, what goes around, comes around!

    I remember struggling to understand the Cooley-Tukey 'twiddle factor' many years ago. Then, the FFT became a (close) approximation to the DFT. But the FFT _was_ an approximation. It was a fraction of a dB out, but so what if the speed to calculate was not N^2 but 2N. Vast difference.

    (Corrections gratefully received, been a long time since I worked on FFT Rectum Tantalysers - oops...)

    1. Anonymous Coward
      Anonymous Coward

      FFT is an approximation; this is a _lossily-compressed_ approximation.

      They've basically found a way to combine the transform with the compression into a single streaming operation, IIUC. That'll be a pretty neat way to reduce the degree of Amdahl/Gustafson overhead.

      1. Anonymous Coward
        Anonymous Coward

        FFT an approximation?

        FFT is _not_ an approximation but a non-naive algorithm for computing the DFT. This sounds like an approximation of the DFT. Two completely different problem domains, really.
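
        It's easy to check numerically that the two give the same answer to machine precision (a quick sketch assuming numpy; N=256 is arbitrary):

            import numpy as np

            N = 256
            x = np.random.default_rng(3).standard_normal(N)
            n = np.arange(N)
            naive_dft = np.exp(-2j * np.pi * np.outer(n, n) / N) @ x  # O(N^2), straight from the definition
            fft = np.fft.fft(x)                                       # O(N log N) algorithm

            print(np.max(np.abs(naive_dft - fft)))  # ~1e-12: same result, different arithmetic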

    2. Eddie Edwards
      Headmaster

      FFT is not an approximation

      Corrections, as requested :)

      FFT is not an approximation. In fixed-point, or in an integer field, or using real numbers, it is precisely correct. If you're using floating-point, of course, a different rearrangement of equivalent operations is going to give you different rounding errors and slightly different numerical output.

      Oh, and it's O(N log N), not O(N).
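
      For the curious, a whole radix-2 Cooley-Tukey FFT, twiddle factors and all, fits in a few lines and is exactly the DFT rearranged (a toy sketch for power-of-two lengths, assuming numpy; real libraries are far more elaborate):

          import numpy as np

          def fft_rec(x):
              # Recursive radix-2 Cooley-Tukey: split into even/odd samples and recombine.
              N = len(x)
              if N == 1:
                  return x
              even, odd = fft_rec(x[::2]), fft_rec(x[1::2])
              tw = np.exp(-2j * np.pi * np.arange(N // 2) / N)  # the 'twiddle factors'
              return np.concatenate([even + tw * odd, even - tw * odd])

          x = np.random.default_rng(4).standard_normal(64).astype(complex)
          print(np.allclose(fft_rec(x), np.fft.fft(x)))  # True: identical up to rounding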

  5. Destroy All Monsters Silver badge
    Boffin

    Fuzzy Fourier Transform?

    El Reg, do NOT be afraid of using Big Words like "time domain" and "frequency domain".

    We can take it!

  6. Anonymous Coward
    Anonymous Coward

    Am I hearing this right?

    Or is that the point?

  7. Someone Else Silver badge
    Coat

    To paraphrase at least one Industry Luminary...

    We don't need a faster Fourier transform... the "fast" Fourier transform is all you'll ever need.

  8. Yeah Right 2
    Coat

    Upgrade Treadmill

    For the last ten years or so I have preferred to wait a couple of generations between upgrades. I'm going to skip the fast fast fourier transform and wait until the fast fast fast fast fourier transform comes out.

    1. LaeMing

      Prepare to engage

      Ludicrous-speed fourier transform!

  9. Robert Carnegie Silver badge

    If I understood this, I'd try to patent it. Maybe. I know about ethics. It's a county in England where they make that awful television show.

    1. Toastan Buttar

      Isn't the village of Wainscotting in Ethics?

  10. Paul Hovnanian Silver badge
    Joke

    So the old way was a half-fast Fourier Transform?

    1. LaeMing
      Go

      we had none of these fast fourier transform nonsenses.

      We transformed our fouriers uphill through the snow both ways!

  11. Anonymous Coward
    Anonymous Coward

    Eggzelent!

    This should reduce bandwidth required for drone video transmission!

  12. roger stillick
    Happy

    b4 FFT

    Harmonic analyzers were barely digital and only good for slow stuff like tidal analysis... the FT was good only for things like microwave spectrum analyzers, which pretty much only give the predominant sum of a single power-vs-frequency scan... note that this is as analog as watching a VOM needle move around... put the output on an O-scope and it's still a wiggly analog picture... digitize the analog picture and rescan a bunch of times, put the data into a spreadsheet program, and you've got what put HP and Tek out of the instrument business - SOFTWARE... about this time CDMA was declassified... FFT is just software that manipulates data to do something useful... this may be useful if GSM is a QAM system with frequency components, like Direct TV uses... CDMA is coherent noise - all different... EOF

  13. Robin Bradshaw

    Lookout for patent landmines

    "then uses technology from 4G networks"

    Oh dear, this means that between Apple, Motorola, Samsung, Nokia etc. somebody somewhere will have patents that cover parts of this, and they will be sued into oblivion if they ever try to commercialise it.

  14. bazza Silver badge

    Real world performance...

    This is not the first time that a new discrete Fourier transform has come along.

    About ten years ago some academic came up with a fresh approach to the algorithm that claimed to produce the correct arithmetic result but with a reduction in the number of floating-point operations. Everyone got very excited, but as far as I know it never saw the light of day in any commercial application. Whilst the exact algorithm wasn't published (they were aiming for a patent), from the vague description the authors gave I concluded that it wasn't going to be very cache friendly. And if an algorithm isn't cache friendly then it isn't going to be terribly fast on a CPU, especially if the amount of data being processed is larger than the L1 cache; you'd have to build dedicated silicon to implement it, and that is *very* expensive to do.

    The bigger CPUs (x86, SPARC, PPC; not ARM) are generally astonishingly fast if data fits in L1 cache (ever timed the Intel IPP library's FFT on smallish data sizes?), and a complicated algorithm like this new one may actually be of some real-world benefit. If it can break down larger FFTs into lumps of data that stay in L1 cache for longer, then the real-world performance could be significantly better than existing algorithms. So there may be some software applications.
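
    An unscientific way to see that cache effect: time a library FFT at sizes either side of a typical L1 data cache and compare the cost per N log N "unit" (a rough sketch using numpy's FFT; the sizes and absolute numbers are machine-dependent):

        import numpy as np
        import timeit

        for n in (1 << 10, 1 << 14, 1 << 18, 1 << 22):
            x = np.random.default_rng(5).standard_normal(n)
            t = timeit.timeit(lambda: np.fft.fft(x), number=20) / 20
            # Normalised cost rises once the working set falls out of cache.
            print(f"N={n:>8}: {t / (n * np.log2(n)) * 1e9:.3f} ns per N log N unit")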

    But as for hardware applications (signal/video/image processing in mobile phones), it might not see the light of day for a long time; if it can be squeezed into existing chips then great; if not, then it'll have to wait until the next design iterations, which could be a long way off. And it will have to be worth it; if it takes twice as many transistors then the cost/benefit analysis that the hardware manufacturers will do might not stack up in its favour.

    1. Pseu Donyme

      >cost/benefit analysis

      By the pointy-haired boss based on a chat with the marketing guy in the loo.

  15. Thought About IT
    Devil

    Can't trust Fourier Transforms ...

    ... because they were invented by that man who discovered the greenhouse effect. Surely a persona non grata here on El Reg?

  16. JDX Gold badge

    Not FFT?

    I'm not sure the definition of FFT requires it to be lossless? We are still applying Fourier Analysis, and still using a transform, just choosing not to send all the stuff.

  17. Werner McGoole
    WTF?

    Hmm

    So it's faster than an FFT because it doesn't do all of the transform. Why didn't I think of that!

    Seriously, wouldn't a better comparison be with other lossy compression algorithms? In which case the advantage of using sin & cos as the basis functions isn't altogether obvious.

  18. Ken Hagan Gold badge

    MIT boffins devise modest tweak to FFT. Film at 11.

    The point of the FFT is that it turns an O(n^2) algorithm into an O(n.log n) algorithm. For interesting problems, that's a factor of hundreds, thousands or millions. Problems that were far too large to even consider running on the largest supercomputer money would ever be able to buy during your lifetime suddenly fitted onto a machine with the computational grunt of Bjarne Stroustrup's rice cooker.

    What is described here sounds like premultiplying the big-O by a slightly smaller number, at some loss in accuracy. It may prove to be useful, but it's not in the same league as the original discovery.
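
    The back-of-envelope version of that factor (plain Python; the sizes are picked for illustration):

        import math

        for n in (1_000, 1_000_000, 1_000_000_000):
            speedup = (n * n) / (n * math.log2(n))    # O(n^2) work over O(n log n) work
            print(f"n = {n:>13,}: ~{speedup:,.0f}x")  # ~100x, ~50,000x, ~33,000,000x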

    1. Paul Shirley

      repeat after me: it's NOT AN FFT

      It's NOT an FFT. It's not even an FT.

      It's better described as a Direct Fourier APPROXIMATION, completely skipping any domain transform and guesstimating (with high probability) the dominant Fourier components. The point is that by not even attempting to compute the full result it can beat O(n log n). It's only interesting because FFTs are followed by decimation in so many real-life uses.

  19. IHateWearingATie
    Paris Hilton

    Strange how...

    ... for some articles the discussion descends into nonsense, and for others it gets far more techy than the original article and goes over my head (with my A-level physics understanding).

    Anyway - more Pris Hilton articles (or something)!

    1. Arthur the cat Silver badge
      Paris Hilton

      Pris Hilton?

      That would be the replicant heiress with killer thighs then?

  20. malcolmus_rex

    hmmmm.... pondering this.

    Could the same method work for spherical harmonics? That would be useful for all sorts of GFD-style applications, especially if it's a truly parallel/scalable process.

  21. Mostly_Harmless Silver badge
    Coat

    I can see it now...

    I've got a great idea for a movie, about two rival teams of researchers racing to get their work in this area published first. I'll call it "The Fast And The Fouriers".

  22. Anonymous Coward
    Anonymous Coward

    Faster FFT my arse

    The output is not the same as a DFT, therefore it's not a Faster FFT. This sounds more like yet another lossy compression scheme. Who wrote this article?

    Now, let me have a look at the actual paper...

  23. Dave Bell
    Coat

    The whole point of a more efficient algorithm is that it allows clock cycles to be used for the Windows GUI.

  24. Stuart Halliday
    FAIL

    missed

    So no one else noticed that the sine-wave graph is hopelessly inaccurate?
