Wednesday, January 18, 2012

A New Faster Fourier Transform Can Speed One of IT's Fundamental Algorithms

[Image: The Fast Fourier Transform. From many irregular signals, one. Christine Daniloff via MIT News]
Even faster than the fast Fourier transform

An algorithm called the fast Fourier transform underpins some of the most important aspects of your digital life, and you never have to think about it. It's a core concept in information technology, making possible the signal processing, image and audio compression, and other complex mathematics necessary for you to cram every episode of Breaking Bad onto your mobile device alongside every track Jay-Z ever made, and then play it all back without a hitch.


Basically, the Fourier transform turns irregular signals into pure frequencies, so the fluctuating voltage signal traveling through a wire from your MP3 player to a set of speakers can be translated on the fly into the sounds you want to hear. The algorithm does this so quickly it earned the name "fast" (as in "fast Fourier transform," or FFT). And it's about to get faster.
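
To make that concrete, here's a minimal sketch using NumPy's stock FFT. The tone frequencies and sample rate are invented for illustration: two pure tones are mixed into one "irregular" waveform, and the transform recovers them as the dominant frequency weights.

```python
import numpy as np

rate = 8000                        # samples per second (illustrative choice)
t = np.arange(rate) / rate         # one second of time samples
# An "irregular" signal: two pure tones mixed together
signal = np.sin(2 * np.pi * 440 * t) + 0.5 * np.sin(2 * np.pi * 880 * t)

spectrum = np.fft.rfft(signal)     # FFT: time samples -> frequency weights
freqs = np.fft.rfftfreq(len(signal), d=1 / rate)

# The two constituent tones stand out as the heaviest weights
top = freqs[np.argsort(np.abs(spectrum))[-2:]]
print(np.sort(top))                # [440. 880.]
```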


A team of MIT researchers is this week presenting a new algorithm that boosts the performance of the FFT in a large number of cases. That's good news for you, as it spells a future where, for instance, mobile devices might transfer large video files without burning all their bandwidth and battery in the process.


Without getting too deep into the nuts and bolts (of which there are many, available for your perusal over at MIT News), the FFT takes the samples of a signal and expresses the signal as a weighted sum of pure frequencies. Some of those frequencies are more important than others, and hence are weighted more heavily. In fact, many of them are weighted so weakly that they aren't necessary at all: they can be discarded with negligible loss of quality.
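
As a rough illustration of that discarding step (this is plain thresholding in NumPy, not the MIT algorithm; the signal and the 10 percent cutoff are invented for the example), zeroing every weakly weighted coefficient leaves just a couple of numbers to store, yet the signal reconstructs almost perfectly:

```python
import numpy as np

rng = np.random.default_rng(0)
rate = 8000
t = np.arange(rate) / rate
# A sparse signal: two strong tones buried in low-level noise
signal = (np.sin(2 * np.pi * 440 * t)
          + 0.5 * np.sin(2 * np.pi * 1250 * t)
          + 0.01 * rng.standard_normal(rate))

spectrum = np.fft.rfft(signal)
threshold = 0.1 * np.abs(spectrum).max()      # keep only the heavy weights
compressed = np.where(np.abs(spectrum) > threshold, spectrum, 0)

restored = np.fft.irfft(compressed, n=len(signal))
print(f"kept {np.count_nonzero(compressed)} of {len(spectrum)} coefficients, "
      f"max error {np.abs(signal - restored).max():.4f}")
```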


The new algorithm outpaces the old FFT by slicing a signal's bandwidth into narrow slices, with each slice containing just one heavily weighted frequency. From there, via some clever filtering and further division of each slice into still-smaller slices, it quickly discards the weakly weighted frequencies and isolates the heavily weighted ones. In "sparse" signals--those with a relatively small number of heavily weighted frequencies--the new algorithm basically cuts to the chase more quickly than the old, extracting what you need and filtering out what you don't.
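
The MIT team's actual filters and permutations are more sophisticated, but here's a toy sketch of the underlying bucketing idea in NumPy. Subsampling a signal in time aliases its spectrum, folding every frequency into one of B buckets; if at most one heavy frequency lands in each bucket, a much smaller, cheaper FFT is enough to locate them all. The signal length, bucket count, and frequencies below are invented for illustration; this is not the published algorithm itself.

```python
import numpy as np

N, B = 4096, 64                      # signal length, number of buckets
heavy = [300, 1155]                  # the few frequencies that matter
n = np.arange(N)
signal = sum(np.exp(2j * np.pi * f * n / N) for f in heavy)

# Subsample in time by N // B, then take a cheap B-point FFT.
# Aliasing folds every frequency f into bucket f % B, so each heavy
# frequency shows up as one loud bucket (assuming no two collide).
buckets = np.fft.fft(signal[:: N // B])

hot = np.flatnonzero(np.abs(buckets) > B / 2)
print(hot)                           # [ 3 44]: 1155 % 64 == 3, 300 % 64 == 44
```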


For things like file compression, that's key. And given that most natural signals are indeed sparse--they contain a few heavily weighted frequencies that we need and a lot of noise that we don't--the new FFT could drastically increase the speed at which signals are processed in many cases, sometimes providing a tenfold boost. Given that the current method was already quite fast, that's a serious improvement.

