
Motivation

 

Sampling is a fundamental procedure in every digital system that needs to operate on physical signals and processes, serving as the bridge between the analog and digital domains. The process approximates a continuous signal by taking discrete measurements (samples) over space, time, or another domain. According to the Nyquist-Shannon sampling theorem, a band-limited signal can be perfectly reconstructed from a discrete set of samples as long as the sampling frequency is at least twice the maximum signal frequency. While this approach works for a wide range of applications, in some cases the required sampling frequency is too high, the data volume produced by traditional sampling is too large to process, or the acquisition hardware is too expensive, motivating the search for alternative sampling methods.
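In formula terms (the notation here is added for illustration and not taken from the original text), the Nyquist criterion for a signal whose highest frequency component is $f_{\max}$ reads

$$ f_s \geq 2\, f_{\max}, $$

so, for example, a signal with content up to 20 kHz must be sampled at a rate of at least 40 kHz.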

 

Compressive sensing has emerged as a promising technique to sense, process, and store a broad range of images, videos, and biomedical signals by leveraging their sparsity, allowing sub-Nyquist sampling frequencies and lower volumes of data. While its results may seem very attractive, applying the technique requires solving an underdetermined linear system whose solution is known to be sparse. Obtaining such a solution is computationally expensive, resulting in slow reconstruction. One way to accelerate this process is to take advantage of the parallelization features of field-programmable gate arrays (FPGAs). Over the years, several compressive sensing algorithms have been proposed: some focus on the accuracy of the results with little regard for computational effort and time, while others try to achieve a good approximation in the least time possible and with the fewest resources, accepting a small amount of error.
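As a minimal sketch of the reconstruction problem mentioned above, in standard compressive sensing notation (the symbols are assumptions of this sketch, not taken from the dissertation): a sparse signal $x \in \mathbb{R}^{N}$ is observed through a sensing matrix $\Phi \in \mathbb{R}^{M \times N}$ with $M \ll N$, giving measurements $y = \Phi x$, and reconstruction amounts to

$$ \hat{x} = \arg\min_{x} \; \lVert x \rVert_{0} \quad \text{subject to} \quad y = \Phi x . $$

This combinatorial problem is intractable in general; greedy and iterative methods such as OMP and IHT approximate its solution, and this approximation is where most of the computation time is spent.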

 

This dissertation presents and studies the performance trade-offs of two compressive sensing reconstruction algorithms implemented on FPGA: orthogonal matching pursuit (OMP) and iterative hard thresholding (IHT).
