Context

Joseph Fourier, a French mathematician of the early 19th century, proposed that an arbitrary continuous signal could be completely described in the frequency domain as a sum of sine and cosine functions. This led to the formulation of the Fourier series and the Fourier transform, later extended to the discrete-time case as the discrete-time Fourier series and the discrete Fourier transform. In digital systems, a continuous signal is represented by a set of discrete samples taken at a given rate; with a sufficiently fast sampling rate, the continuous signal can be exactly reconstructed from these equally spaced samples. If the sampling frequency is lower than needed, aliasing can occur, leading to a reconstructed signal that differs from the original.
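
For reference, the classical Fourier series of a periodic signal x(t) with period T writes it as a sum of sines and cosines (these are the standard definitions, added here for context rather than taken from the original text):

```latex
x(t) = \frac{a_0}{2}
     + \sum_{n=1}^{\infty} \left[ a_n \cos\!\left(\frac{2\pi n t}{T}\right)
                                + b_n \sin\!\left(\frac{2\pi n t}{T}\right) \right],
\qquad
a_n = \frac{2}{T} \int_{0}^{T} x(t) \cos\!\left(\frac{2\pi n t}{T}\right) \mathrm{d}t,
\qquad
b_n = \frac{2}{T} \int_{0}^{T} x(t) \sin\!\left(\frac{2\pi n t}{T}\right) \mathrm{d}t .
```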

The minimum sampling rate required to achieve perfect reconstruction was not established until Harry Nyquist’s and Claude Shannon’s work on the sampling theorem. The Nyquist-Shannon sampling theorem states that, in order to avoid aliasing, the sampling rate must be greater than twice the highest frequency component of the target signal; this minimum rate is known as the Nyquist rate, while half of the sampling rate is known as the Nyquist frequency. This result is the foundation of today’s digital systems.
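
As a small illustration of what goes wrong below this rate (a minimal sketch; the 10 Hz sampling rate and 9 Hz tone are arbitrary values chosen for the example, not taken from the original text), a tone above the Nyquist frequency produces exactly the same samples as a lower-frequency tone:

```python
import numpy as np

# Illustrative values: a 9 Hz tone sampled at 10 Hz (Nyquist frequency = 5 Hz).
fs = 10.0                    # sampling rate in Hz
f_high = 9.0                 # tone above the Nyquist frequency: it will alias
f_alias = fs - f_high        # it folds back to 1 Hz

n = np.arange(20)            # sample indices
t = n / fs                   # sampling instants

x_high = np.sin(2 * np.pi * f_high * t)       # samples of the 9 Hz tone
x_low = np.sin(2 * np.pi * (-f_alias) * t)    # samples of the folded 1 Hz tone

# The two sample sequences are identical: from the samples alone the
# 9 Hz tone cannot be distinguished from its 1 Hz alias.
print(np.allclose(x_high, x_low))             # True
```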

While this theoretical result applies to band-limited, noiseless signals, natural signals usually contain noise and unexpected higher-frequency components that have to be accounted for when designing any system. When the sampling frequency is lower than needed because of these unaccounted-for higher-frequency components, aliasing occurs. To compensate for this, most present-day systems are preceded by an anti-aliasing stage, normally a low-pass filter, which ensures that only frequency components below the Nyquist frequency reach the sampler. To allow for a feasible filter, the effective sampling rate is often several times, or even orders of magnitude, higher than the Nyquist rate, putting even more pressure on the design of the sampling system.
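
The sketch below shows what such an anti-aliasing stage can look like in software when reducing the rate of an already-digitised signal; the 48 kHz input rate, decimation factor and 8th-order Butterworth filter are illustrative assumptions, not values from the original text:

```python
import numpy as np
from scipy import signal

# Illustrative parameters.
fs_in = 48_000                         # incoming sampling rate in Hz
decim = 4                              # decimation factor -> effective rate of 12 kHz
new_nyquist = (fs_in / decim) / 2      # 6 kHz
cutoff = 0.8 * new_nyquist             # leave a transition band so the filter is feasible

# 8th-order Butterworth low-pass filter as second-order sections (numerically stable).
sos = signal.butter(8, cutoff, btype="low", fs=fs_in, output="sos")

def decimate_with_antialiasing(x):
    """Low-pass filter x so that only components below the new Nyquist
    frequency remain, then keep every `decim`-th sample."""
    filtered = signal.sosfiltfilt(sos, x)   # zero-phase filtering of the buffered signal
    return filtered[::decim]

# Example: a 1 kHz tone plus a 20 kHz component that would otherwise fold back.
t = np.arange(fs_in) / fs_in
x = np.sin(2 * np.pi * 1_000 * t) + 0.5 * np.sin(2 * np.pi * 20_000 * t)
y = decimate_with_antialiasing(x)       # the 20 kHz component is strongly attenuated
```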

Even with these considerations, the Nyquist-Shannon sampling theorem has been used successfully throughout the history of digital signal processing. As the digital revolution progresses, however, the demand for higher signal resolution, higher processing rates and new data structures is stressing the capabilities of traditional systems. As operating frequency requirements rise, the development of analog-to-digital converter (ADC) components becomes more complex, more expensive and, in some cases, impossible. With sampling rates sometimes reaching the gigahertz range, a torrent of data is generated and sent to digital signal processing (DSP) systems that must cope with it. Furthermore, requirements such as high signal-to-noise ratio (SNR), low energy consumption, high efficiency and low cost increase the difficulty of developing these systems.

As the challenges facing traditional signal acquisition and processing systems become harder, and in some cases impossible, to overcome, new methods need to be developed and implemented to replace the old ones. Compressive sensing (CS) is one such alternative. Building upon the theory of sparse representations, this approach aims to offer new solutions to several of the problems that traditional systems struggle with.

The works of Donoho and of Candes, Romberg, and Tao demonstrated that a sparse or compressible signal can be reconstructed from an incomplete set of samples, as long as certain constraints are met.
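
As a toy illustration of this result (a minimal sketch, not the specific setup of those works: the Gaussian sensing matrix, the dimensions n, m and k, and the use of scipy's linprog are all illustrative choices), a sparse vector can be recovered from far fewer measurements than its length by l1 minimisation (basis pursuit), here cast as a linear program:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)

# Toy problem: recover a k-sparse length-n vector from m < n linear measurements.
n, m, k = 100, 40, 5
x_true = np.zeros(n)
x_true[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)

A = rng.standard_normal((m, n)) / np.sqrt(m)    # random Gaussian sensing matrix
y = A @ x_true                                  # incomplete set of measurements

# Basis pursuit: minimise ||x||_1 subject to A x = y, written as a linear
# program over (x, t) with constraints -t <= x <= t and t >= 0.
c = np.concatenate([np.zeros(n), np.ones(n)])
A_eq = np.hstack([A, np.zeros((m, n))])
I = np.eye(n)
A_ub = np.vstack([np.hstack([I, -I]), np.hstack([-I, -I])])
b_ub = np.zeros(2 * n)
bounds = [(None, None)] * n + [(0, None)] * n

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=y, bounds=bounds)
x_hat = res.x[:n]

# With high probability the sparse vector is recovered almost exactly.
print(np.max(np.abs(x_hat - x_true)))
```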

The CS framework, presented below, is based on various concepts, the most important of which are:
