Thursday, April 08, 2010

CS: CS hardware?, Entropy optimization solver at GE, Improved Sparse Recovery Thresholds with Two-Step Reweighted l_1 Minimization

Hamed Hamid Muhammed just sent me the following:
... I and a colleague of mine actually developed a Compressed Sensing Hyperspectral Camera that we called CamSpect. You can find a newly published article about it here:

http://link.aip.org/link/?SII/3/79/1

The idea is to instantaneously acquire highly compressed images which can be saved; afterwards, these images can be mathematically converted into hyperspectral images.

The camera doesn't exist yet. Just a simplified prototype was made by utilising a webcam.
This paper presents a sensitivity analysis of using instantaneous multichannel two-dimensional (2D) imaging to achieve instantaneous 2D imaging spectroscopy. A simulated multiple-filter mosaic was introduced and used to acquire multichannel data which were transformed into spectra. The feasibility of two different transformation approaches (the concrete pseudoinverse approach and a statistical approach) was investigated through extensive experimental tasks. A promising statistical method was identified to be used for accurate estimation of spectra from multichannel data. Comparison between estimated and measured spectra shows that higher estimation accuracy can be achieved when using a larger number of usable multiple-filter combinations in the mosaic.

Also of interest is the following manuscript: Affordable Simultaneous Hyperspectral Imaging. The CamSpect site is located here. The company developing this technology is Uppdaga. From what I have briefly read, CamSpect does indeed seem to perform some sort of compressed sensing, in that the sensor acquires the spectral content at the same time as the data in physical space. However, when one uses the term compressed sensing nowadays, it generally implies that:
  • the underlying signal being compressed is sparse,
  • some type of subsampling is guaranteed and observed,
  • new types of solvers are needed to perform the decoding between the compressed measurements and the full signal.
The approach highlighted in CamSpect does not seem to use the sparsity of the signal explicitly, nor does it advertise any sort of subsampling. Also, the decoding is performed using the pseudo-inverse of the measurement matrix. All in all, the approach of using different kinds of filters to multiplex different wavelengths and recording these measurements with a webcam is similar to the CASSI instrument at Duke (some videos about related work can be found on the compressive sensing hardware page).
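To make the distinction concrete, here is a minimal sketch in Python of what pseudo-inverse decoding of a filter-mosaic measurement looks like. The names, sizes, and filter responses below are made up for illustration and are not taken from the CamSpect paper; note that no sparsity prior or subsampling argument enters the reconstruction.

import numpy as np

rng = np.random.default_rng(0)

n_bands = 61      # hypothetical number of spectral bands to estimate
n_filters = 16    # hypothetical number of filter combinations in the mosaic

# F[i, :] = spectral response of filter combination i (assumed known from calibration)
F = rng.uniform(0.0, 1.0, size=(n_filters, n_bands))

# Toy smooth spectrum and the corresponding multichannel measurement
true_spectrum = np.exp(-0.5 * ((np.arange(n_bands) - 30) / 6.0) ** 2)
y = F @ true_spectrum

# Pseudo-inverse decoding: since the system is underdetermined, this returns
# the minimum-norm least-squares estimate, with no sparsity prior involved
spectrum_est = np.linalg.pinv(F) @ y

rel_err = np.linalg.norm(spectrum_est - true_spectrum) / np.linalg.norm(true_spectrum)
print(f"relative estimation error: {rel_err:.3f}")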

After the corporate interest shown by Nokia, here is a blog entry from GE Global Research that summarizes some of the work performed there on Compressed Sensing. The entry is entitled The beauty of compressive sensing. One can read:
In the first image (to the left, in the upper left), the pixels are roughly half white and half dark. The Fourier coefficients (upper right) are not sparse either, containing many high frequency components. Therefore, the L1-based reconstruction methods have difficulty in recovering the original image from few samples. In contrast, the entropy optimization method can perfectly reconstruct the signal with 2500 measurements (bottom right), about 61% of the number of pixels.
The wording "entropy optimization" reminds me of this paper by Bhaskar Rao and Kenneth Kreutz-Delgado entitled An affine scaling methodology for best basis selection where one can read in the abstract:
The Gaussian entropy minimization algorithm is shown to be equivalent to a well-behaved p = 0 norm-like optimization algorithm.
The GE work seems to point to the use of a solver comparable to an l_0 solver. I look forward to the publications on that.
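To give a feel for how such an l_0-like iteration behaves, here is a minimal FOCUSS-style sketch of iteratively reweighted least squares, in the spirit of the Rao and Kreutz-Delgado affine scaling methodology. The dimensions, iteration count, and regularization are toy choices of mine; this is not GE's solver.

import numpy as np

def focuss(A, y, n_iter=50, eps=1e-8):
    """Return a sparse-ish x with A x = y via reweighted minimum-norm steps."""
    x = np.linalg.pinv(A) @ y          # start from the minimum l_2-norm solution
    for _ in range(n_iter):
        W = np.diag(np.abs(x) + eps)   # weights concentrate energy on large entries
        x = W @ np.linalg.pinv(A @ W) @ y
    return x

rng = np.random.default_rng(1)
n, m, k = 200, 60, 5                   # signal length, measurements, sparsity (toy values)
A = rng.standard_normal((m, n)) / np.sqrt(m)
x0 = np.zeros(n)
x0[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
y = A @ x0

x_hat = focuss(A, y)
print("relative reconstruction error:",
      np.linalg.norm(x_hat - x0) / np.linalg.norm(x0))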

Finally, here is Improved Sparse Recovery Thresholds with Two-Step Reweighted l_1 Minimization. The abstract reads:
It is well known that $\ell_1$ minimization can be used to recover sufficiently sparse unknown signals from compressed linear measurements. In fact, exact thresholds on the sparsity, as a function of the ratio between the system dimensions, so that with high probability almost all sparse signals can be recovered from iid Gaussian measurements, have been computed and are referred to as "weak thresholds" \cite{D}. In this paper, we introduce a reweighted $\ell_1$ recovery algorithm composed of two steps: a standard $\ell_1$ minimization step to identify a set of entries where the signal is likely to reside, and a weighted $\ell_1$ minimization step where entries outside this set are penalized. For signals where the non-sparse component has iid Gaussian entries, we prove a "strict" improvement in the weak recovery threshold. Simulations suggest that the improvement can be quite impressive: over 20% in the example we consider.
The reweighted l1 code can be found here and will be featured soon in the reconstruction section of the Big Picture in Compressive Sensing.
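In the meantime, here is a hedged sketch of the two-step idea using a generic LP-based weighted l_1 solver, not the authors' released code. The dimensions, the support-detection threshold, and the weight value are illustrative choices of mine.

import numpy as np
from scipy.optimize import linprog

def weighted_l1(A, y, w):
    """min sum_i w_i |x_i|  s.t.  A x = y, solved as an LP in (x, t)."""
    m, n = A.shape
    c = np.concatenate([np.zeros(n), w])                 # objective acts only on t
    # |x_i| <= t_i  <=>  x_i - t_i <= 0  and  -x_i - t_i <= 0
    A_ub = np.block([[np.eye(n), -np.eye(n)],
                     [-np.eye(n), -np.eye(n)]])
    b_ub = np.zeros(2 * n)
    A_eq = np.hstack([A, np.zeros((m, n))])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=y,
                  bounds=[(None, None)] * n + [(0, None)] * n, method="highs")
    return res.x[:n]

rng = np.random.default_rng(2)
n, m, k = 100, 40, 10                                    # toy dimensions
A = rng.standard_normal((m, n)) / np.sqrt(m)
x0 = np.zeros(n)
x0[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
y = A @ x0

# Step 1: plain l_1 minimization, then estimate the likely support
x1 = weighted_l1(A, y, np.ones(n))
support = np.abs(x1) > 0.1 * np.abs(x1).max()            # crude support estimate

# Step 2: penalize entries outside the estimated support more heavily
w = np.where(support, 1.0, 5.0)                          # weight value is illustrative
x2 = weighted_l1(A, y, w)

for name, x in [("one-step l1", x1), ("two-step reweighted l1", x2)]:
    print(name, "error:", np.linalg.norm(x - x0) / np.linalg.norm(x0))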
