Saturday, February 23, 2008

This coming week will see few updates. Time to look at things with some depth.

Compressed Sensing: Faster is Good, but now Greedier is Better.


Another paper from the Rice repository that was not mentioned here before. It was written by Anna Gilbert and Joel Tropp and touches on an ongoing concern: greedy reconstruction algorithms need more measurements than traditional l1-based or basis pursuit methods to obtain the same results. In Signal Recovery From Random Measurements Via Orthogonal Matching Pursuit, it seems the problem has been solved.
This paper demonstrates theoretically and empirically that a greedy algorithm called Orthogonal Matching Pursuit (OMP) can reliably recover a signal with m nonzero entries in dimension d given O(m ln d) random linear measurements of that signal. This is a massive improvement over previous results, which require O(m^2) measurements. The new results for OMP are comparable with recent results for another approach called Basis Pursuit (BP). In some settings, the OMP algorithm is faster and easier to implement, so it is an attractive alternative to BP for signal recovery problems.
In other words, we now seem to have a fast greedy algorithm that requires about as many measurements as BP to obtain the same result. This is good news and seems to improve on the already excellent results of Deanna Needell and Roman Vershynin in Signal recovery from incomplete and inaccurate measurements via Regularized Orthogonal Matching Pursuit.
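
Since OMP is so central to this discussion, a minimal sketch may help newcomers. This is my own toy Python/NumPy rendering, not the authors' code, and the constant in front of m ln d below is arbitrary:

import numpy as np

def omp(A, y, m):
    # Orthogonal Matching Pursuit: recover an m-sparse x from y = A x.
    residual, support, coeffs = y.copy(), [], None
    for _ in range(m):
        # select the column most correlated with the current residual
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        # least-squares refit on the enlarged support (the "orthogonal" step)
        coeffs, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coeffs
    x = np.zeros(A.shape[1])
    x[support] = coeffs
    return x

# toy check: d = 1000, m = 10, with n on the order of m ln d measurements
rng = np.random.default_rng(0)
d, m = 1000, 10
n = int(4 * m * np.log(d))
A = rng.standard_normal((n, d)) / np.sqrt(n)
x_true = np.zeros(d)
x_true[rng.choice(d, m, replace=False)] = rng.standard_normal(m)
print(np.allclose(omp(A, A @ x_true, m), x_true, atol=1e-6))

Note that after the least-squares refit the residual is orthogonal to the selected columns, so an atom is never selected twice.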

Credit photo: NASA/JPL/Space Science Institute, Dione, a moon of Saturn photographed the day before yesterday by the Cassini probe.

Friday, February 22, 2008

Compressed Sensing: Deterministic Subsampling, A New Era ?

As mentioned earlier, Yves Meyer and Basarab Matei are extending the results of Emmanuel Candes, Justin Romberg and Terry Tao in Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information, where the initial main question was: "Is it possible to reconstruct a function f from the partial knowledge of its Fourier coefficients on a small set of frequencies?" Yves Meyer and Basarab Matei improve on these results by proving that a positive function supported inside a given set (and zero outside) can be recovered exactly if one knows its Fourier coefficients with indices on a lattice that is the Fourier spectrum of a quasicrystal (the parameters defining the quasicrystal can be changed).


More details can be found in [1]. It doesn't get more straightforward than that. They also seem to have some results on noisy experiments. Since this is a problem with complex variables, it looks as though one would have to resort to a second order cone program for the reconstruction (see [2]):

To be complete, we would like to mention that for complex valued signals, the minimum l1 problem (1.5) and, therefore, the minimum TV problem (1.1) can be recast as special convex programs known as second order cone programs (SOCP’s).

I have not implemented it yet, but other questions with regard to greedy algorithms or l_p techniques seem to be totally open. In the end, it doesn't get any more deterministic than that. What does this result mean for the rest of us? Either we have direct access to the Fourier transform of the signal of interest, as in MRI or interferometry, or we need specifically designed instrumentation that deals directly in the Fourier world. While this analysis seems focused on images, there is no requirement to stay in dimension 2 either. We may even want to check that dictionaries of elemental bases are themselves positive (using techniques like Non-Negative Matrix Factorization (NMF)) so that each element can be sampled with few Fourier coefficients. It looks like Meyer and Matei have opened an entire field of low-hanging fruit in both theoretical and applied math, as well as in hardware implementations.
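
For readers who want to experiment, here is a toy sketch of the complex-valued l1 reconstruction with CVXPY, which recasts the problem as an SOCP under the hood. The random choice of observed frequencies below is merely a stand-in for whatever deterministic index set one eventually uses:

import numpy as np
import cvxpy as cp

rng = np.random.default_rng(1)
n, k, m = 128, 5, 40
x_true = np.zeros(n, dtype=complex)
x_true[rng.choice(n, k, replace=False)] = (
    rng.standard_normal(k) + 1j * rng.standard_normal(k))

F = np.fft.fft(np.eye(n)) / np.sqrt(n)    # DFT matrix
rows = rng.choice(n, m, replace=False)    # indices of observed Fourier coefficients
b = F[rows] @ x_true

# minimum-l1 recovery from partial Fourier data, solved as an SOCP
x = cp.Variable(n, complex=True)
cp.Problem(cp.Minimize(cp.norm(x, 1)), [F[rows] @ x == b]).solve()
print(np.allclose(x.value, x_true, atol=1e-5))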


[1] A variant on the compressed sensing of Emmanuel Candes, Yves Meyer, Basarab Matei. For a copy, please contact the authors directly.
[2] Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information, Emmanuel Candes, Justin Romberg and Terry Tao.

Thanks to http://servalx02.blogspot.com/ for the page on how to use LaTeX on Blogger.

Thursday, February 21, 2008

Compressed Sensing: Fast Bayesian Matching Pursuit

Here is another Bayesian approach to Compressed Sensing. In Fast Bayesian Matching Pursuit, Philip Schniter, Lee C. Potter, and Justin Ziniel tell us how to reconstruct signals using a greedy algorithm and Bayesian techniques. The abstract reads:

A low-complexity recursive procedure is presented for minimum mean squared error (MMSE) estimation in linear regression models. A Gaussian mixture is chosen as the prior on the unknown parameter vector. The algorithm returns both an approximate MMSE estimate of the parameter vector and a set of high posterior probability mixing parameters. Emphasis is given to the case of a sparse parameter vector. Numerical simulations demonstrate estimation performance and illustrate the distinctions between MMSE estimation and MAP model selection. The set of high probability mixing parameters not only provides MAP basis selection, but also yields relative probabilities that reveal potential ambiguity in the sparse model.


Of note is this text in the conclusion:

Numerical experiments suggest that the estimates returned by FBMP outperform (in normalized MSE) those of other popular algorithms (e.g., SparseBayes, OMP, StOMP, GPSRBasic, BCS) by several dB in typical situations.


No code yet available.

Compressed Sensing: Thesis Defense / Bounds and practical constructions of optimal compressed sensing matrices

Added to the Calendar: Shriram Sarvotham will defend his thesis entitled "Bounds and practical constructions of optimal compressed sensing matrices" on Thursday, February 28, 2008, from 4:00 PM to 5:30 PM. It will take place in room 1049 of Duncan Hall, Rice University, Houston, TX. The abstract of his thesis is:

Compressed Sensing (CS) is an emerging paradigm in signal acquisition that enables reconstruction of sparse signals from a small number of linear projections. The theme of this thesis is to derive fundamental bounds on the performance of CS. We derive two key bounds on the quality of CS matrices as measured by the Restricted Isometry Property (RIP). The ideas shed light on the intimate relationships between the RIP and a variety of areas such as the algebra of Singular Value Decompositions (SVDs) of submatrices, coding on Euclidean spheres and the Generalized Pythagorean Theorem. We infer the dimensions of an optimal CS matrix based on the two key bounds.


Good Luck Shriram!

Wednesday, February 20, 2008

An attempt at the Big Picture in Compressed Sensing and a bet on NROL-21


I just put together a more static page on the Compressed Sensing Framework where I try to paint the big picture of the subject as far as I understand it. The field is getting bigger, and there is a need for newcomers to see what the current lines of thought are without first being drowned in details. Let me know if I am doing a good job or not. It is at:

It also includes the calendar and the search engine.


Thanks to a NOTAM, Cosmos4u and Nasawatch report that NROL-21 will probably be shot down during the Moon eclipse, so that the missile has an overwhelming probability of not mistaking the Moon for something else. Phil Plait has more on the subject. My bet? There is no way we will see a clean kill at Mach 17, but that's just me.

Credit & Copyright: Noel Munford (Palmerston North Astronomical Society, New Zealand)

Compressed Sensing: More on Deterministic Subsampling.


The current crop of deterministic schemes devised to perform compressed sensing measurements or to subsample below the Nyquist rate is listed here. To that we now have to add the upcoming paper of Yves Meyer and Basarab Matei. Improving on a previous algorithm, Mark Iwen and Craig Spencer have just released a new preprint entitled Improved Bounds for a Deterministic Sublinear-Time Sparse Fourier Algorithm. The abstract reads:
This paper improves on the best-known runtime and measurement bounds for a recently proposed Deterministic sublinear-time Sparse Fourier Transform algorithm (hereafter called DSFT). In [1], [2], it is shown that DSFT can exactly reconstruct the Fourier transform (FT) of an N-bandwidth signal f, consisting of B ≪ N non-zero frequencies, using O(B^2 · polylog(N)) time and O(B^2 · polylog(N)) f-samples. DSFT works by taking advantage of natural aliasing phenomena to hash a frequency-sparse signal's FT information modulo O(B · polylog(N)) pairwise coprime numbers via O(B · polylog(N)) small Discrete Fourier Transforms. Number-theoretic arguments then guarantee the original DFT frequencies/coefficients can be recovered via the Chinese Remainder Theorem. DSFT's usage of primes makes its runtime and signal sample requirements highly dependent on the sizes of sums and products of small primes. Our new bounds utilize analytic number theoretic techniques to generate improved (asymptotic) bounds for DSFT. As a result, we provide better bounds for the sampling complexity/number of low-rate analog-to-digital converters (ADCs) required to deterministically recover frequency-sparse wideband signals via DSFT in signal processing applications [3], [4].

This is an improvement over the previous algorithm presented here by Mark Iwen in A Deterministic Sub-linear Time Sparse Fourier Algorithm via Non-adaptive Compressed Sensing Methods.
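
To see why aliasing plus the Chinese Remainder Theorem works at all, here is a toy single-frequency illustration. It is my own and is far simpler than the actual DSFT algorithm, which must also handle collisions between several active frequencies:

import numpy as np

p1, p2 = 16, 27            # pairwise coprime subsampling lengths
N = p1 * p2                # signal bandwidth
f_true = 215               # the single active frequency
x = np.exp(2j * np.pi * f_true * np.arange(N) / N)

# subsampling every N/p samples aliases frequency f to f mod p
r1 = int(np.argmax(np.abs(np.fft.fft(x[::N // p1]))))   # = f_true mod p1
r2 = int(np.argmax(np.abs(np.fft.fft(x[::N // p2]))))   # = f_true mod p2

# Chinese Remainder Theorem: recover f from its residues (brute force here)
f_hat = next(f for f in range(N) if f % p1 == r1 and f % p2 == r2)
print(f_hat == f_true)     # True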

Also in line with probabilistic schemes to help your average LAPACK library (see Random Projection Lapack ?), Mark Iwen and Craig Spencer are making available A Note on Compressed Sensing and Rapid Matrix Multiplication, for the case when you know the result of a matrix multiplication will yield a sparse matrix, which is not often the case but is a step in the right direction. The abstract reads:
This paper discusses how compressed sensing methods can be used to (approximately) multiply two square matrices quickly if the product is known to be sparse. Let A and B denote square N × N matrices, ε > 0, and c > 0. If A · B is compressible in each column, we can obtain a near-optimal best cN^0.29462 element-per-column approximation to A · B using O(N^(2+ε)) operations. More specifically, if each column of the product A · B has at most cN^0.29462 non-zero elements, then we can calculate the product A · B exactly by using O(N^(2+ε)) operations.
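
A crude sketch of the mechanism, reusing the omp function from the February 23 entry above; the names and constants are mine, not the authors':

import numpy as np

def sparse_product(A, B, s, n_meas):
    # Approximate A @ B when each column of the product is s-sparse:
    # compute Phi @ (A @ B) = (Phi @ A) @ B in O(n_meas * N^2) operations,
    # then recover each column of the product by sparse recovery.
    N = A.shape[0]
    rng = np.random.default_rng(2)
    Phi = rng.standard_normal((n_meas, N)) / np.sqrt(n_meas)
    C = (Phi @ A) @ B
    out = np.zeros((N, N))
    for j in range(N):
        out[:, j] = omp(Phi, C[:, j], s)   # omp() as sketched in the Feb 23 entry
    return out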



Credit photo: NASA/JPL-Caltech/Cornell University, View from Spirit's Overwintering Position, photo taken Jan. 29, 2008.

Tuesday, February 19, 2008

Compressed Sensing: Toward Deterministic Subsampling

In a presentation today (see Calendar), Yves Meyer pointed out quite rightly that while the initial experiment performed by Emmanuel Candes and Justin Romberg [1] is very nice and impressive, it is not yet explained by the current crop of results in the compressed sensing field. In other words, as shown in this figure



one can notice that the sampling in Fourier space is pretty regular, i.e. along radial lines, whereas the random approaches seen so far would lead one to expect the sampling to be done randomly all over the Fourier plane, i.e. more like this:





Today he presented some of his and Basarab Matei's results on finding a more deterministic subsampling scheme that could explain the initial result of [1]. As far as I could tell, their current results allow positive functions sampled in Fourier space at points lying on the vertices of a quasicrystal to be reconstructed exactly. This is really impressive. And yes, this is an uncanny result that again relies on geometry, something

figure from reference [2]

that we have seen earlier. Then again, could a quasicrystal lattice fit into the star-like sampling of [1]?
If you wondered, the quasicrystals made of Meyer sets are named after their discoverer: Yves Meyer. More can be found here.
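
To make the contrast between the two sampling patterns concrete, here is a toy mask generator (illustration only; the radial geometry is only approximate, and the parameters are mine):

import numpy as np

n, n_lines = 256, 22
rng = np.random.default_rng(3)

# star-shaped mask: samples along radial lines in the Fourier plane, as in [1]
star = np.zeros((n, n), dtype=bool)
r = np.arange(-n // 2, n // 2)
for t in np.linspace(0, np.pi, n_lines, endpoint=False):
    i = (n // 2 + r * np.sin(t)).astype(int) % n
    j = (n // 2 + r * np.cos(t)).astype(int) % n
    star[i, j] = True

# uniformly random mask with the same number of Fourier samples
rand = np.zeros(n * n, dtype=bool)
rand[rng.choice(n * n, star.sum(), replace=False)] = True
rand = rand.reshape(n, n)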



Reference: [1] E. J. Candès, J. Romberg and T. Tao. Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information. IEEE Trans. Inform. Theory, 52, 489-509. (pdf)
[2] Third figure from http://www.jcrystal.com/steffenweber/qc.html

Compressed Sensing: On the Interplay Between Coding Theory and Compressed Sensing

Added to the Calendar: Olgica Milenkovic will give a presentation entitled "On the Interplay Between Coding Theory and Compressed Sensing" tomorrow at the University of Illinois at Urbana-Champaign.

Abstract:
Compressed sensing (CS) is a signal processing technique that allows for accurate recovery of sparse data-vectors based on a small number of linear measurements. In its most basic form, robust CS can be viewed as a specialized error-control coding scheme in which the data alphabet does not necessarily have the structure of a finite field and where the notion of a parity-check is replaced by a more general functionality. We describe how to combine and extend the classical CS and Euclidean superimposed coding paradigms in terms of introducing new minimum distance and codeword-composition constraints. In this setting, we derive fundamental lower and upper bounds on the achievable compression ratio for this type of constrained compressed sensing (CCS) schemes using classical coding-theoretic techniques. We also show that Euclidean superimposed codes, a new class of weighted Euclidean superimposed codes, as well as spherical codes, can be used for deterministic CCS designs. Furthermore, we demonstrate that sparse reconstruction in the presence of noise can be performed via low-complexity correlation-maximization algorithms. Our problem analysis is motivated by a myriad of applications ranging from compressed sensing microarray designs, reliability-reordering decoding of linear block-codes, identification in multi-user communication systems, and fault tolerant computing.


This is a joint work with Wei Dai from the ECE Department at UIUC.
Date: Feb 20, 2008
Time: 4:00 pm
Location: 2269 Beckman Institute
Sponsor: Department of Electrical and Computer Engineering

Monday, February 18, 2008

Compressed Sensing: Visual Geometric Arguments


I do not attend Math department seminars, so I am not yet bored with the pointy l1 ball as others are :-). But while we are at it, showing the l_p ball doing better is one thing; what about a similar argument using Jenn to explain neighborly polytopes and compressed sensing more visually? If that is possible, that is.




(via Ned Batchelder's blog)

Sunday, February 17, 2008

Compressed Sensing: Updates from Rice, a course and a blog

Petar Maymounkov has a blog where he discusses subjects related to Compressed Sensing; here are the latest entry and an earlier one (LP decoding and WP vs. RIP). On the blog, I found the webpage of the course given by Piotr Indyk last fall at MIT. It is entitled Sketching, Streaming and Sub-linear Space Algorithms. The course introduction reads:

Over the last few years, a new approach to computing succinct approximate “sketches” of data has been discovered. These sketches are much shorter (often exponentially!) than the original data, but nevertheless they retain important and useful information, such as the number of distinct elements in the data set, the "similarity" between the data elements, etc. The methods have been successfully applied to web data compression, approximate query processing in databases, network measurement and signal processing/acquisition.

There are many different "manifestations" of the sketching approach: data stream algorithms, randomized dimensionality reduction and compressive sensing. In this course we will cover results from those areas in a unified and systematic manner. On the way, we will introduce the necessary tools from communication complexity, statistics, geometric functional analysis, combinatorial group testing, and other areas.
The Rice Compressed Sensing site has new entries (and not-so-new ones that are now more easily available). I list them with their abstracts (a toy sketch of the recurring FOCUSS iteration appears after the list):

* Przemysław Wojtaszczyk, Stability and instance optimality for Gaussian measurements in compressed sensing
In compressed sensing we seek to gain information about a vector x belonging to R^N from d much less than N nonadaptive linear measurements. Candes, Donoho, Tao et al. (see e.g. [2, 4, 8]) proposed to seek a good approximation to x via l1 minimisation. In this paper we show that in the case of Gaussian measurements it recovers the signal well from inaccurate measurements, thus improving the result from [4]. We also show that with big probability it gives information comparable with the best k-term approximation in the Euclidean norm, k ~ d/ln N. This provides the first numerically friendly algorithm to do so, see [7].

* Irina Gorodnitsky and Bhaskar Rao, Sparse signal reconstruction from limited data using FOCUSS: A re-weighted norm minimization algorithm
We present a nonparametric algorithm for finding localized energy solutions from limited data. The problem we address is underdetermined, and no prior knowledge of the shape of the region on which the solution is nonzero is assumed. Termed the FOcal Underdetermined System Solver (FOCUSS), the algorithm has two integral parts: a low-resolution initial estimate of the real signal and the iteration process that refines the initial estimate to the final localized energy solution. The iterations are based on weighted norm minimization of the dependent variable with the weights being a function of the preceding iterative solutions. The algorithm is presented as a general estimation tool usable across different applications. A detailed analysis laying the theoretical foundation for the algorithm is given and includes proofs of global and local convergence and a derivation of the rate of convergence. A view of the algorithm as a novel optimization method which combines desirable characteristics of both classical optimization and learning-based algorithms is provided. Mathematical results on conditions for uniqueness of sparse solutions are also given. Applications of the algorithm are illustrated on problems in direction-of-arrival (DOA) estimation and neuromagnetic imaging.
* Bhaskar Rao and Kenneth Kreutz-Delgado, An affine scaling methodology for best basis selection
A methodology is developed to derive algorithms for optimal basis selection by minimizing diversity measures proposed by Wickerhauser and Donoho. These measures include the p-norm-like (p less than 1) diversity measures and the Gaussian and Shannon entropies. The algorithm development methodology uses a factored representation for the gradient and involves successive relaxation of the Lagrangian necessary condition. This yields algorithms that are intimately related to the Affine Scaling Transformation (AST) based methods commonly employed by the interior point approach to nonlinear optimization. The algorithms minimizing the (p less than 1) diversity measures are equivalent to a recently developed class of algorithms called FOCal Underdetermined System Solver (FOCUSS). The general nature of the methodology provides a systematic approach for deriving this class of algorithms and a natural mechanism for extending them. It also facilitates a better understanding of the convergence behavior and a strengthening of the convergence results. The Gaussian entropy minimization algorithm is shown to be equivalent to a well-behaved p = 0 norm-like optimization algorithm. Computer experiments demonstrate that the p-norm-like and the Gaussian entropy algorithms perform well, converging to sparse solutions. The Shannon entropy algorithm produces solutions that are concentrated but are shown to not converge to a fully sparse solution.

* Shane Cotter, J. Adler, Bhaskar Rao and Kenneth Kreutz-Delgado, Forward sequential algorithms for best basis selection

* Bhaskar Rao, K. Engan, S.F. Cotter, Jason Palmer, Kenneth Kreutz-Delgado, Subset selection in noise based on diversity measure minimization
In this paper, we develop robust methods for subset selection based on the minimization of diversity measures. A Bayesian framework is used to account for noise in the data and a maximum a posteriori (MAP) estimation procedure leads to an iterative procedure which is a regularized version of the FOCal Underdetermined System Solver (FOCUSS) algorithm. The convergence of the regularized FOCUSS algorithm is established and it is shown that the stable fixed points of the algorithm are sparse. We investigate three different criteria for choosing the regularization parameter: quality of fit, sparsity criterion, and L-curve. The L-curve method, as applied to the problem of subset selection, is found not to be robust, and we propose a novel modified L-curve procedure that solves this problem. Each of the regularized FOCUSS algorithms is evaluated through simulation of a detection problem, and the results are compared with those obtained using a sequential forward selection algorithm termed orthogonal matching pursuit (OMP). In each case, the regularized FOCUSS algorithm is shown to be superior to the OMP in noisy environments.

* Shane Cotter, Bhaskar Rao, K. Engan, and Kenneth Kreutz-Delgado, Sparse solutions to linear inverse problems with multiple measurement vectors
We address the problem of finding sparse solutions to an underdetermined system of equations when there are multiple measurement vectors having the same, but unknown, sparsity structure. The single measurement sparse solution problem has been extensively studied in the past. Although known to be NP-hard, many single–measurement suboptimal algorithms have been formulated that have found utility in many different applications. Here, we consider in depth the extension of two classes of algorithms–Matching Pursuit (MP) and FOCal Underdetermined System Solver (FOCUSS)–to the multiple measurement case so that they may be used in applications such as neuromagnetic imaging, where multiple measurement vectors are available, and solutions with a common sparsity structure must be computed. Cost functions appropriate to the multiple measurement problem are developed, and algorithms are derived based on their minimization. A simulation study is conducted on a test-case dictionary to show how the utilization of more than one measurement vector improves the performance of the MP and FOCUSS classes of algorithm, and their performances are compared.

* David Wipf and Bhaskar Rao, Sparse bayesian learning for basis selection
Sparse Bayesian learning (SBL) and specifically relevance vector machines have received much attention in the machine learning literature as a means of achieving parsimonious representations in the context of regression and classification. The methodology relies on a parameterized prior that encourages models with few nonzero weights. In this paper, we adapt SBL to the signal processing problem of basis selection from overcomplete dictionaries, proving several results about the SBL cost function that elucidate its general behavior and provide solid theoretical justification for this application. Specifically, we have shown that SBL retains a desirable property of the 0-norm diversity measure (i.e., the global minimum is achieved at the maximally sparse solution) while often possessing a more limited constellation of local minima. We have also demonstrated that the local minima that do exist are achieved at sparse solutions. Later, we provide a novel interpretation of SBL that gives us valuable insight into why it is successful in producing sparse representations. Finally, we include simulation studies comparing sparse Bayesian learning with Basis Pursuit and the more recent FOCal Underdetermined System Solver (FOCUSS) class of basis selection algorithms. These results indicate that our theoretical insights translate directly into improved performance.

* David Wipf , Jason Palmer and Bhaskar Rao, Perspectives on Sparse Bayesian Learning.
Recently, relevance vector machines (RVM) have been fashioned from a sparse Bayesian learning (SBL) framework to perform supervised learning using a weight prior that encourages sparsity of representation. The methodology incorporates an additional set of hyperparameters governing the prior, one for each weight, and then adopts a specific approximation to the full marginalization over all weights and hyperparameters. Despite its empirical success however, no rigorous motivation for this particular approximation is currently available. To address this issue, we demonstrate that SBL can be recast as the application of a rigorous variational approximation to the full model by expressing the prior in a dual form. This formulation obviates the necessity of assuming any hyperpriors and leads to natural, intuitive explanations of why sparsity is achieved in practice.
* David Wipf and Bhaskar Rao, An empirical bayesian strategy for solving the simultaneous sparse approximation problem.
Given a large overcomplete dictionary of basis vectors, the goal is to simultaneously represent L signal vectors using coefficient expansions marked by a common sparsity profile. This generalizes the standard sparse representation problem to the case where multiple responses exist that were putatively generated by the same small subset of features. Ideally, the associated sparse generating weights should be recovered, which can have physical significance in many applications (e.g., source localization). The generic solution to this problem is intractable and, therefore, approximate procedures are sought. Based on the concept of automatic relevance determination, this paper uses an empirical Bayesian prior to estimate a convenient posterior distribution over candidate basis vectors. This particular approximation enforces a common sparsity profile and consistently places its prominent posterior mass on the appropriate region of weight-space necessary for simultaneous sparse recovery. The resultant algorithm is then compared with multiple response extensions of matching pursuit, basis pursuit, FOCUSS, and Jeffreys prior-based Bayesian methods, finding that it often outperforms the others. Additional motivation for this particular choice of cost function is also provided, including the analysis of global and local minima and a variational derivation that highlights the similarities and differences between the proposed algorithm and previous approaches.
* Michael Lustig, Juan Manuel Santos, David Donoho, and John Pauly, k-t SPARSE: High frame rate dynamic MRI exploiting spatio-temporal sparsity.
Recently rapid imaging methods that exploit the spatial sparsity of images using under-sampled randomly perturbed spirals and non-linear reconstruction have been proposed [1,2]. These methods were inspired by theoretical results in sparse signal recovery [1-5] showing that sparse or compressible signals can be recovered from randomly under-sampled frequency data. We propose a method for high frame-rate dynamic imaging based on similar ideas, now exploiting both spatial and temporal sparsity of dynamic MRI image sequences (dynamic scene). We randomly under-sample k-t space by random ordering of the phase encodes in time (Fig. 1). We reconstruct by minimizing the L1 norm of a transformed dynamic scene subject to data fidelity constraints. Unlike previously suggested linear methods [7, 8], our method does not require a known spatio-temporal structure nor a training set, only that the dynamic scene has a sparse representation. We demonstrate a 7-fold frame-rate acceleration both in simulated data and in vivo non-gated Cartesian balanced-SSFP cardiac MRI.

* Hong Jung, Jong Chul Ye, and Eung Yeop Kim, Improved k-t BLAST and k-t SENSE using FOCUSS.
The dynamic MR imaging of time-varying objects, such as beating hearts or brain hemodynamics, requires a significant reduction of the data acquisition time without sacrificing spatial resolution. The classical approaches for this goal include parallel imaging, temporal filtering, and their combinations. Recently, model-based reconstruction methods called k-t BLAST and k-t SENSE have been proposed which largely overcome the drawbacks of the conventional dynamic imaging methods without a priori knowledge of the spectral support. Another recent approach called k-t SPARSE also does not require exact knowledge of the spectral support. However, unlike the k-t BLAST/SENSE, k-t SPARSE employs the so-called compressed sensing theory rather than using training. The main contribution of this paper is a new theory and algorithm that unifies the above mentioned approaches while overcoming their drawbacks. Specifically, we show that the celebrated k-t BLAST/SENSE are the special cases of our algorithm, which is asymptotically optimal from the compressed sensing theory perspective. Experimental results show that the new algorithm can successfully reconstruct a high resolution cardiac sequence and functional MRI data even from severely limited k-t samples, without incurring aliasing artifacts often observed in conventional methods.

* Jong Chul Ye, Compressed sensing shape estimation of star-shaped objects in Fourier imaging.
Recent theory of compressed sensing informs us that near-exact recovery of an unknown sparse signal is possible from a very limited number of Fourier samples by solving a convex L1 optimization problem. The main contribution of the present paper is a novel non-parametric shape estimation framework and a computational algorithm for binary star shape objects, whose radius functions belong to the space of bounded-variation functions. Specifically, in contrast with standard compressed sensing, the present approach involves directly reconstructing the shape boundary under sparsity constraint. This is done by converting the standard pixel based reconstruction approach into estimation of a non-parametric shape boundary on a wavelet basis. This results in an L1 minimization under a nonlinear constraint, which makes the optimization problem nonconvex. We solve the problem by successive linearization and application of one-dimensional compressed sensing, which significantly reduces the number of sampling requirements as well as the computational burden. Fourier imaging simulation results demonstrate that high quality reconstruction can be quickly obtained from a very limited number of samples. Furthermore, the algorithm outperforms the standard compressed sensing reconstruction approach using the total variation norm.

* Joshua Trzasko, Armando Manduca, and Eric Borisch, Highly undersampled magnetic resonance image reconstruction via homotopic ell-0-minimization. The abstract reads:

In clinical Magnetic Resonance Imaging (MRI), any reduction in scan time offers a number of potential benefits ranging from high-temporal-rate observation of physiological processes to improvements in patient comfort. Following recent developments in Compressive Sensing (CS) theory, several authors have demonstrated that certain classes of MR images which possess sparse representations in some transform domain can be accurately reconstructed from very highly undersampled K-space data by solving a convex L1-minimization problem. Although L1-based techniques are extremely powerful, they inherently require a degree of over-sampling above the theoretical minimum sampling rate to guarantee that exact reconstruction can be achieved. In this paper, we propose a generalization of the Compressive Sensing paradigm based on homotopic approximation of the L0 semi-norm and show how MR image reconstruction can be pushed even further below the Nyquist limit and significantly closer to the theoretical bound. Following a brief review of standard Compressive Sensing methods and the developed theoretical extensions, several example MRI reconstructions from highly undersampled K-space data are presented.



* Irina Gorodnitsky, J. George and Bhaskar Rao, Neuromagnetic source imaging with FOCUSS: A recursive weighted minimum norm algorithm. The abstract reads:

The paper describes a new algorithm for tomographic source reconstruction in neural electromagnetic inverse problems. Termed FOCUSS (FOCal Underdetermined System Solution), this algorithm combines the desired features of the two major approaches to electromagnetic inverse procedures. Like multiple current dipole modeling methods, FOCUSS produces high resolution solutions appropriate for the highly localized sources often encountered in electromagnetic imaging. Like linear estimation methods, FOCUSS allows current sources to assume arbitrary shapes and it preserves the generality and ease of application characteristic of this group of methods. It stands apart from standard signal processing techniques because, as an initialization-dependent algorithm, it accommodates the non-unique set of feasible solutions that arise from the neuroelectric source constraints. FOCUSS is based on recursive, weighted norm minimization. The consequence of the repeated weighting procedure is, in effect, to concentrate the solution in the minimal active regions that are essential for accurately reproducing the measurements. The FOCUSS algorithm is introduced and its properties are illustrated in the context of a number of simulations, first using exact measurements in 2- and 3-D problems, and then in the presence of noise and modeling errors. The results suggest that FOCUSS is a powerful algorithm with considerable utility for tomographic current estimation.

* Shane Cotter and Bhaskar Rao, Sparse channel estimation via matching pursuit, the abstract reads:
Channels with a sparse impulse response arise in a number of communication applications. Exploiting the sparsity of the channel, we show how an estimate of the channel may be obtained using a matching pursuit (MP) algorithm. This estimate is compared to thresholded variants of the least squares (LS) channel estimate. Among these sparse channel estimates, the MP estimate is computationally much simpler to implement and a shorter training sequence is required to form an accurate channel estimate leading to greater information throughput.
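
Since FOCUSS recurs throughout the list above, here is the promised toy sketch of its basic iteration. This is my own minimal Python rendering; the published versions add regularization, a p parameter and various safeguards:

import numpy as np

def focuss(A, y, iters=30, eps=1e-8):
    # basic FOCUSS: iteratively re-weighted minimum-norm solution of A x = y
    x = np.linalg.pinv(A) @ y                # low-resolution initial estimate
    for _ in range(iters):
        W = np.diag(np.abs(x) + eps)         # weights from the previous iterate
        x = W @ np.linalg.pinv(A @ W) @ y    # weighted minimum-norm solution
    return x

The repeated re-weighting concentrates the solution on a few active entries, which is exactly the "focal" behavior the abstracts describe.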

Saturday, February 16, 2008

Compressed Sensing: Recovery Algorithms for Vector Valued Data, Wavelet-tailored Algorithm and a Network API for Sensor Networks


Here is a new preprint from Massimo Fornasier and Holger Rauhut on Recovery Algorithms for Vector Valued Data with Joint Sparsity Constraints. The abstract reads:

Vector valued data appearing in concrete applications often possess sparse expansions with respect to a preassigned frame for each vector component individually. Additionally, different components may also exhibit common sparsity patterns. Recently, there were introduced sparsity measures that take into account such joint sparsity patterns, promoting coupling of non-vanishing components. These measures are typically constructed as weighted l1 norms of componentwise lq norms of frame coefficients. We show how to compute solutions of linear inverse problems with such joint sparsity regularization constraints by fast thresholded Landweber algorithms. Next we discuss the adaptive choice of suitable weights appearing in the definition of sparsity measures. The weights are interpreted as indicators of the sparsity pattern and are iteratively up-dated after each new application of the thresholded Landweber algorithm. The resulting two-step algorithm is interpreted as a double-minimization scheme for a suitable target functional. We show its l2-norm convergence. An implementable version of the algorithm is also formulated, and its norm convergence is proven. Numerical experiments in color image restoration are presented.
They point out the areas that can use the algorithm developed in the paper:

"...Neuroimaging (functional Magnetic Resonance Imaging, Magnetoencephalography), distributed compressed sensing [3] and several other problems with coupled vector valued solutions are fields where we expect that our scheme can be fruitfully used. The numerical solution of differential and integral operator equations can also be addressed within this framework.."


There is also a new batch of preprints and presentation slides from Rice:

The poster for Sparse Signal Reconstruction from Noisy Compressive Measurements Using Cross Validation by Petros Boufounos, Marco F. Duarte and Richard Baraniuk is here, while the slides for Multiscale Random Projections for Compressive Classification are here (Marco F. Duarte, Mark Davenport, Michael Wakin, Jason Laska, Dharmpal Takhar, Kevin Kelly and Richard Baraniuk).



Marco F. Duarte, Michael Wakin and Richard Baraniuk, Wavelet-Domain Compressive Signal Reconstruction Using a Hidden Markov Tree Model. The abstract reads:

Compressive sensing aims to recover a sparse or compressible signal from a small set of projections onto random vectors; conventional solutions involve linear programming or greedy algorithms that can be computationally expensive. Moreover, these recovery techniques are generic and assume no particular structure in the signal aside from sparsity. In this paper, we propose a new algorithm that enables fast recovery of piecewise smooth signals, a large and useful class of signals whose sparse wavelet expansions feature a distinct “connected tree” structure. Our algorithm fuses recent results on iterative reweighted ℓ1-norm minimization with the wavelet Hidden Markov Tree model. The resulting optimization-based solver outperforms the standard compressive recovery algorithms as well as previously proposed wavelet-based recovery algorithms. As a bonus, the algorithm reduces the number of measurements necessary to achieve low-distortion reconstruction.
As pointed out in the conclusion: "This paper introduced a wavelet-tailored weighting scheme for the iterative reweighted ℓ1-norm minimization algorithm for compressive sensing recovery." I am sure their scheme can also be used for reweighted l_p minimization. This seems to be the second algorithm tailored to wavelets (the first being Sublinear recovery of sparse wavelet signals by Ray Maleh and Anna Gilbert).
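
The generic reweighted-l1 loop is easy to sketch. The whole point of the paper is to replace the naive weight update below with weights derived from the wavelet Hidden Markov Tree model; this toy version uses the usual 1/(|x|+eps) update instead:

import numpy as np
import cvxpy as cp

def reweighted_l1(A, y, iters=5, eps=1e-3):
    n = A.shape[1]
    w = np.ones(n)
    for _ in range(iters):
        # weighted l1 minimization under the data constraint
        x = cp.Variable(n)
        cp.Problem(cp.Minimize(cp.norm(cp.multiply(w, x), 1)),
                   [A @ x == y]).solve()
        w = 1.0 / (np.abs(x.value) + eps)   # small coefficients get large weights
    return x.value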

And the final intriguing paper is A Network Application Programming Interface for Data Processing in Sensor Networks by Raymond Wagner, J. Ryan Stinnett, Marco F. Duarte, Richard Baraniuk, David B. Johnson and T. S. Eugene Ng. The abstract reads:
Since the inception of sensor networks, a wide variety of algorithms for in-network data processing have emerged. To enable practical implementation of a broad class of these proposed algorithms, sufficient application programming support for network communications is critical. While standard (but relatively low-level) network application programming interfaces (APIs) such as those implemented in TinyOS can provide a basis for flexible application-specific customization, experience in the past several years has shown that data processing algorithms in fact share similar higher level communication needs that can be better supported. In this paper, we first identify common communication patterns through an extensive survey of data processing algorithms proposed over the past four years in the proceedings of the Information Processing in Sensor Networks (IPSN) conference. We then present the design of a higher level network API for sensor networks that is powerful, convenient to use, and compact. Many familiar issues in traditional networking such as addressing and reliability have vastly different solutions in the sensor network environment, and we carefully develop the rationales behind the design of the resulting network API, providing an in-depth discussion on the fundamental design decisions. We believe that the proposed network API serves as a starting point for the implementation of a more comprehensive sensor network communication middleware than what is found in currently available systems, enabling algorithm developers to quickly and easily implement their designs without having to handle low-level networking details themselves.
Image credit: NASA/JPL-Caltech/Cornell University, Opportunity used its panoramic camera (Pancam) to capture this image with low-sun angle at a local solar time of 3:21 p.m. during the rover's 1,433rd Martian day, of sol (Feb. 4, 2008).

Friday, February 15, 2008

Predicting NROL-21 's shape: decreasing the odds.

So it looks like the decision has been made: given that the tanks of NROL-21 are likely to survive reentry like the tanks of Columbia did, shooting it down might be a good solution. Good call; hydrazine is nasty, and so is 300 pounds at terminal velocity...

Wednesday, February 13, 2008

Compressed Sensing: Reduce and Boost: Recovering Arbitrary Sets of Jointly Sparse Vectors

Moshe Mishali and Yonina C. Eldar just made available the following preprint on arxiv.org:

Reduce and Boost: Recovering Arbitrary Sets of Jointly Sparse Vectors. The abstract reads:
The rapidly developing area of compressed sensing suggests that a sparse vector lying in an arbitrary high dimensional space can be accurately recovered from only a small set of non-adaptive linear measurements. Under appropriate conditions on the measurement matrix, the entire information about the original sparse vector is captured in the measurements, and can be recovered using efficient polynomial methods. The vector model has been extended to a finite set of sparse vectors sharing a common non-zero location set. In this paper, we treat a broader framework in which the goal is to recover a possibly infinite set of jointly sparse vectors. Extending existing recovery methods to this model is difficult due to the infinite structure of the sparse vector set. Instead, we prove that the entire infinite set of sparse vectors can be recovered by solving a single, reduced-size finite-dimensional problem, corresponding to recovery of a finite set of sparse vectors. We then show that the problem can be further reduced to the basic recovery of a single sparse vector by randomly combining the measurement vectors. Our approach results in exact recovery of both countable and uncountable sets as it does not rely on discretization or heuristic techniques. To efficiently recover the single sparse vector produced by the last reduction step, we suggest an empirical boosting strategy that improves the recovery ability of any given sub-optimal method for recovering a sparse vector. Numerical experiments on random data demonstrate that when applied to infinite sets our strategy outperforms discretization techniques in terms of both run time and empirical recovery rate. In the finite model, our boosting algorithm is characterized by fast run time and a superior recovery rate compared to known popular methods.



This is indeed new and worthy, and it brings some support to the generic problem of M/EEG signals. The use of randomness in the reconstruction stage is interesting.
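
As I read it, the reduction step can be sketched as follows. This is toy code, with omp from the February 23 entry above standing in for "any given sub-optimal method", and the boosting loop that retries with fresh random combinations is omitted:

import numpy as np

def reduce_mmv(A, Y, k):
    # Y = A X with the rows of X jointly sparse. Randomly combine the
    # measurement columns: y = Y a = A (X a) shares X's row support w.h.p.
    rng = np.random.default_rng(4)
    a = rng.standard_normal(Y.shape[1])
    support = np.nonzero(omp(A, Y @ a, k))[0]
    # with the support known, every column follows by least squares
    coeffs, *_ = np.linalg.lstsq(A[:, support], Y, rcond=None)
    X = np.zeros((A.shape[1], Y.shape[1]))
    X[support] = coeffs
    return X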

Their previous work included Blind Multi-Band Signal Reconstruction: Compressed Sensing for Analog Signals. The abstract of which is:


We address the problem of reconstructing a multi-band signal from its sub-Nyquist point-wise samples. To date, all reconstruction methods proposed for this class of signals assumed knowledge of the band locations. In this paper, we develop a non-linear blind perfect reconstruction scheme for multi-band signals which does not require the band locations. Our approach assumes an existing blind multi-coset sampling method. The sparse structure of multi-band signals in the continuous frequency domain is used to replace the continuous reconstruction with a single finite dimensional problem without the need for discretization. The resulting problem can be formulated within the framework of compressed sensing, and thus can be solved efficiently using known tractable algorithms from this emerging area. We also develop a theoretical lower bound on the average sampling rate required for blind signal reconstruction, which is twice the minimal rate of known-spectrum recovery. Our method ensures perfect reconstruction for a wide class of signals sampled at the minimal rate. Numerical experiments are presented demonstrating blind sampling and reconstruction with minimal sampling rate.
Credit photo: NASA, view of Earth taken from the space shuttle STS-122 the day before yesterday.

Tuesday, February 12, 2008

Compressed Sensing: Using sparse representations for missing data imputation in noise robust speech recognition

Jort Gemmeke and Bert Cranen just made available a preprint entitled Using sparse representations for missing data imputation in noise robust speech recognition. The abstract reads:
Noise robustness of automatic speech recognition benefits from using missing data imputation: Prior to recognition, the parts of the spectrogram dominated by noise are replaced by clean speech estimates. Especially at low SNRs, each frame contains at best only a few uncorrupted coefficients, so frame-by-frame restoration of corrupted feature vectors, and thus recognition accuracy, is sub-optimal. In this paper we present a novel imputation technique working on entire words. A word is sparsely represented in an overcomplete basis of exemplar (clean) speech signals using only the uncorrupted time-frequency elements of the word. The corrupted elements are replaced by estimates obtained by projecting the sparse representation onto the basis. We achieve recognition accuracies of 90% at SNR -5 dB using oracle masks on AURORA-2, as compared to 60% using a conventional frame-based approach. The performance obtained with estimated masks can be directly related to the proportion of correctly identified uncorrupted coefficients.

While using the l1 reconstruction technique, Jort also uses random projections to decrease the dimensionality of the very large data he has to handle. In all, this is a clever use of sparse decomposition and random projection techniques on a notoriously difficult problem. The conclusion speaks for itself:

We showed the potential of the method by achieving recognition accuracies on AURORA-2 of 91% at SNR -5 dB using an oracle mask, an increase of 30% percent absolute over a state-of-the art missing data speech recognizer.
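
The imputation idea is simple enough to sketch. Below is my own toy rendering, not Jort's code: D is a hypothetical dictionary of clean speech exemplars, s a spectrogram segment, and mask flags the reliable (uncorrupted) entries:

import numpy as np
import cvxpy as cp

def impute(D, s, mask, lam=0.1):
    # sparse-code the segment using only its reliable entries
    x = cp.Variable(D.shape[1])
    cp.Problem(cp.Minimize(
        cp.sum_squares(D[mask] @ x - s[mask]) + lam * cp.norm(x, 1))).solve()
    # re-synthesize the noise-dominated entries from the sparse code
    s_hat = s.copy()
    s_hat[~mask] = D[~mask] @ x.value
    return s_hat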

The moral of the story: contributing to the Monday Morning Algorithm series will get you to substantially improve your field of endeavor :-)

Photo credit: NASA/JPL/Space Science Institute, Prometheus moon. Photo taken yesterday.

Saturday, February 09, 2008

The 1003rd post: Compressed Sensing and a Tile.



As some of you pointed out, Terry Tao made an entry on his blog about a presentation he gave on Compressed Sensing in Australia. His post had me thinking about writing this entry on the CS framework.

I have added a new page that features Matlab codes I have implemented from different papers on Compressed Sensing. The Matlab files being small is a testament to the elegance of these algorithms. As stated there, better codes implemented by the authors of these papers can be requested directly from the authors.

Damaris tells us STS-122 is missing some tiles, but she is checking on it.

I nearly missed it but this is the 1003rd post of this blog, of which about 115 of them are about Compressed Sensing.

Friday, February 08, 2008

Compressed Sensing: Greedier and Faster for Linear and Nonlinear Sparse Inverse Problems, A new version of Sparsify.

Thomas Blumensath and Mike Davies just released three papers on greedy reconstruction algorithms (some of which are implemented in the Sparsify package) where the emphasis is on solving large problems. The second paper also touches on an interesting development where the system to be solved is nonlinear while the solution is known to be sparse. This is new and interesting in light of the manifold approach to compressed sensing. Obviously, this also means that Sparsify has just been updated and now includes the greedy gradient algorithm for non-linear sparse inverse problems.

The first article is Fast greedy algorithms for large sparse inverse problems and the abstract reads:
In this paper we introduce and study algorithms to solve large linear equations, in which the solution is assumed to be sparse. We concentrate on greedy strategies and in particular study extensions to the recently introduced Gradient Pursuit framework. Different approaches are compared that select several elements per iteration, thereby significantly decreasing the computational complexity of the method. We argue in particular for the use of a so called stagewise weak selection strategy. Experiments with a range of sparse inverse problems show that a Gradient Pursuit algorithm with such a selection strategy is competitive to and in many cases outperforms other fast algorithms, both in terms of estimation accuracy and in terms of computation speed.

The second article is Gradient Pursuit for Non-Linear Sparse Signal Modelling, the abstract reads:
In this paper the linear sparse signal model is extended to allow more general, non-linear relationships and more general measures of approximation error. A greedy gradient based strategy is presented to estimate the sparse coefficients. This algorithm can be understood as a generalisation of the recently introduced Gradient Pursuit framework. Using the presented approach with the traditional linear model but with a different cost function is shown to outperform OMP in terms of recovery of the original sparse coefficients. A second set of experiments then shows that for the nonlinear model studied and for highly sparse signals, recovery is still possible in at least a percentage of cases.
Finally the third paper is entitled Faster & Greedier: algorithms for sparse reconstruction of large datasets and the abstract reads:

We consider the problem of performing sparse reconstruction of large-scale data sets, such as the image sequences acquired in dynamic MRI. Here, both conventional L1 minimization through interior point methods and Orthogonal Matching Pursuit (OMP) are not practical. Instead we present an algorithm that combines fast directional updates based around conjugate gradients with an iterative thresholding step similar to that in StOMP but based upon a weak greedy selection criterion. The algorithm can achieve OMP-like performance and the rapid convergence of StOMP but with MP-like complexity per iteration. We also discuss recovery conditions applicable to this algorithm.
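
A rough sketch of the shared recipe across these three papers, as I understand it from the abstracts: stagewise weak selection plus a directional update with an exact line search. The details and safeguards of the actual Sparsify implementations are omitted, and the parameter values are mine:

import numpy as np

def weak_gradient_pursuit(A, y, alpha=0.7, iters=50):
    x = np.zeros(A.shape[1])
    support = np.zeros(A.shape[1], dtype=bool)
    r = y.copy()
    for _ in range(iters):
        if np.linalg.norm(r) < 1e-10:
            break
        c = A.T @ r
        # stagewise weak selection: keep all atoms within alpha of the best
        support |= np.abs(c) >= alpha * np.abs(c).max()
        g = np.where(support, c, 0.0)          # gradient restricted to the support
        Ag = A @ g
        x += ((r @ Ag) / (Ag @ Ag)) * g        # exact line search along g
        r = y - A @ x
    return x

Selecting several atoms per iteration and replacing the full least-squares solve with a single directional update is what buys the speed on large problems.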

Compressed Sensing: The Framework

[Update: This entry is now superseded by this more complete web page trying to build the Big Picture on Compressed Sensing (also here)]

Sometimes I lose track of this, but Compressed Sensing is really about acquiring a sparse signal with the help of an incoherent basis. While much of the emphasis is rightfully on reconstruction work, we are beginning to see the emergence of other tools that complete this framework. Two other stages are emerging:
  • the search for bases or dictionaries in which a set of signals is sparse, and
  • specific incoherent measurement tools.
Algorithms that find sparse dictionaries have already been implemented and presented in papers mentioned here before.
Knowledge of a specific signal domain enables one to build these hopefully small dictionaries. The next step will almost certainly bring about techniques that find elements within a manifold as opposed to a full set of functions, some sort of Data Driven Dictionary.
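
For the dictionary-finding stage, a Method-of-Optimal-Directions-style loop gives the flavor. This is toy code of my own (omp is the sketch from the February 23 entry above); real dictionary learning codes are considerably more careful:

import numpy as np

def learn_dictionary(Y, n_atoms, s, iters=20):
    # Y: training signals as columns; learn D so each column is s-sparse in D
    rng = np.random.default_rng(5)
    D = rng.standard_normal((Y.shape[0], n_atoms))
    D /= np.linalg.norm(D, axis=0)
    for _ in range(iters):
        X = np.column_stack([omp(D, y, s) for y in Y.T])  # sparse coding step
        D = Y @ np.linalg.pinv(X)                         # dictionary update step
        D /= np.linalg.norm(D, axis=0) + 1e-12            # renormalize atoms
    return D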

Non Adaptive Measurement codes/matrices (the first three are pasted from Terry Tao's site; minimal constructions are sketched after the list):

  • Random Fourier Ensemble: The signal is a discrete function f on Z/NZ, and the measurements are the Fourier coefficients at a randomly selected set Omega of frequencies of size M (A is an M x N matrix).

  • Gaussian ensemble: A is an M x N matrix whose entries are i.i.d. Gaussian variables.

  • Bernoulli ensemble: A is an M x N matrix whose entries are i.i.d. Bernoulli (±1) variables.

  • Other recent implementations achieving specific goals (sparsity, ...)

Obviously, with the advent of Data Driven Dictionaries, we will have domain-specific measurement matrix methods. Also, I am sure I am missing some implementations from the sketching world, and I guess I should also list the adaptive schemes.
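
For the record, minimal constructions of the first three ensembles, in my own notation (m measurements of an n-dimensional signal):

import numpy as np

n, m = 512, 128
rng = np.random.default_rng(6)

A_gaussian = rng.standard_normal((m, n)) / np.sqrt(m)           # Gaussian ensemble
A_bernoulli = rng.choice([-1.0, 1.0], size=(m, n)) / np.sqrt(m) # Bernoulli ensemble
F = np.fft.fft(np.eye(n)) / np.sqrt(n)
A_fourier = F[rng.choice(n, m, replace=False)]                  # random Fourier rows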

Reconstruction codes - Matching Pursuit/Greedy, Basis Pursuit/Linear Programming, Bayesian -
Eventually, the end point is the birth of specific hardware implementations of these schemes ([L1], [L2], [L3], [L4], [L5], [L6], [7], [8]). The most salient feature of the random or noiselet measurements is that one can foresee a hardware implementation without having to know the decomposing sparse basis, since there is a high probability that the two will be incoherent with each other.

[Update: some of the homemade codes are here]



Credit: I have cut and pasted sections of this entry directly from the Rice repository and Terry Tao's site. Photo: NASA Mars rover Opportunity on Sol 1432.

Compressed Sensing: Optimal non-linear models for sparsity and sampling

From the Rice repository:

Akram Aldroubi, Carlos Cabrelli, and Ursula Molter have made a new version of their preprint entitled Optimal non-linear models for sparsity and sampling. The abstract reads:

Given a set of vectors (the data) in a Hilbert space H, we prove the existence of an optimal collection of subspaces minimizing the sum of the square of the distances between each vector and its closest subspace in the collection. This collection of subspaces gives the best sparse representation for the given data, in a sense defined in the paper, and provides an optimal model for sampling in union of subspaces. The results are proved in a general setting and then applied to the case of low dimensional subspaces of R^N and to infinite dimensional shift-invariant spaces in L^2(R^d). We also present an iterative search algorithm for finding the solution subspaces. These results are tightly connected to the new emergent theories of compressed sensing and dictionary design, signal models for signals with finite rate of innovation, and the subspace segmentation problem.
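
In the finite-dimensional case, the iterative search the abstract alludes to can be sketched as a k-subspaces style alternation. This is toy code under my own simplifying assumptions; the paper proves existence of the optimum in far greater generality:

import numpy as np

def k_subspaces(X, n_sub, dim, iters=30):
    # alternate: assign each column of X to its closest subspace,
    # then refit each subspace as the best rank-dim fit (via SVD)
    rng = np.random.default_rng(7)
    labels = rng.integers(n_sub, size=X.shape[1])
    for _ in range(iters):
        bases = []
        for j in range(n_sub):
            pts = X[:, labels == j]
            if pts.size == 0:
                bases.append(np.zeros((X.shape[0], dim)))
                continue
            U = np.linalg.svd(pts, full_matrices=False)[0]
            bases.append(U[:, :dim])
        # distance to a subspace = norm of the residual after projection
        dists = np.stack([np.linalg.norm(X - B @ (B.T @ X), axis=0)
                          for B in bases])
        labels = np.argmin(dists, axis=0)
    return bases, labels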

Thursday, February 07, 2008

Make that 5


Compressive Sensing: Hardware Implementation update and some newer developments

The tutorial on Compressed Sensing presented at ITA '08 is now on the Rice repository. It is entitled Tutorial on compressive sensing and was presented by Richard Baraniuk, Justin Romberg, and Michael Wakin.

It is a great tutorial. For the new items, one can start at slide 101, where we have a description of new developments, in particular manifold lifting. I wish I had been in that room because it sounds fascinating; I am waiting for the preprint on this.

I could not track down the Georgia Tech digital imager, but we now have photos of it. I am also waiting for the preprint on this.

I am not sure I have seen anything on compressive sensing camera arrays yet. Also of interest was the description of the random demodulator, a component of a very high rate A2I (analog-to-information) or A/D hardware.




NIPS 2007 Tutorial videos: Visual Recognition in Primates and Machines and Sensory Coding and Hierarchical Representations

Some of the NIPS tutorials are out. Of interest are that of Tomaso Poggio, who with Thomas Serre has devised a feedforward model of the visual cortex, and that of Michael Lewicki on Sensory Coding and Hierarchical Representations, when viewed in relation to this entry. In the first presentation, I note an interesting paper that begins the work of defining a norm based on the hierarchical structure underlying the visual cortex model of Poggio and Serre. It is entitled Derived Distance: towards a mathematical theory of visual cortex by Steve Smale, Tomaso Poggio, Andrea Caponnetto and Jake Bouvrie. And as we have begun to learn, never underestimate it when a Fields medalist writes about things that we little people can understand. I need to come back to this later.

Visual Recognition in Primates and Machines by Tomaso Poggio. The slides are here. The abstract of the presentation reads:
Understanding the processing of information in our cortex is a significant part of understanding how the brain works and of understanding intelligence itself, arguably one of the greatest problems in science today. In particular, our visual abilities are computationally amazing and we are still far from imitating them with computers. Thus, visual cortex may well be a good proxy for the rest of the cortex and indeed for intelligence itself. But despite enormous progress in the physiology and anatomy of the visual cortex, our understanding of the underlying computations remains fragmentary. I will briefly review the anatomy and the physiology of primate visual cortex and then describe a class of quantitative models of the ventral stream for object recognition, which, heavily constrained by physiology and biophysics, have been developed during the last two decades and which have been recently shown to be quite successful in explaining several physiological data across different visual areas. I will discuss their performance and architecture from the point of view of state-of-the-art computer vision systems. Surprisingly, such models also mimic the level of human performance in difficult rapid image categorization tasks in which human vision is forced to operate in a feedforward mode. I will then focus on the key limitations of such hierarchical feedforward models for object recognition, discuss why they are incomplete models of vision and suggest possible alternatives focusing on the computational role of attention and its likely substrate – cortical backprojections. Finally, I will outline a program of research to attack the broad challenge of understanding in terms of brain circuits the process of image inference and in particular recognition tasks beyond simple scene classification.

  • Flash Movie Session A

  • Flash Movie Session B



    Sensory Coding and Hierarchical Representations by Michael Lewicki. The slides are here. The abstract reads:

    The sensory and perceptual capabilities of biological organisms are still well beyond what we have been able to emulate with machines, and the brain devotes far more neural resources to the problems of sensory coding and early perception than we give credit in our algorithms. What is it all doing? Although a great deal has been learned about anatomical structure and physiological properties, insights into the underlying information processing algorithms have been difficult to obtain. Recent work, however, has begun to elucidate some of the underlying computational principles and processes that biology uses to transform the raw sensory signal into a hierarchy of representations that subserve higher-level perceptual tasks. A central hypothesis in this work is that biological representations are optimal from the viewpoint of statistical information processing, and adapt to the statistics of the natural sensory environment. In this tutorial, I will review work on learning sensory codes that are optimal for the statistics of the natural sensory environment and show how these results provide theoretical explanations for a variety of physiological data in both the auditory and visual systems. This will include work that has extended these results to provide functional explanations for many non-linear aspects of early auditory and visual processing. I will focus on work on the auditory and visual systems but also emphasize the generality of these approaches and how they can be applied to any sensory domain. I will also discuss work that generalizes the basic theory and shows how neural representations optimally compensate for sensory distortion and noise in neural populations. Finally, I will review work that goes beyond sensory coding and investigates the computational problems involved in computing more abstract sensory properties and invariant features that can subserve higher-level tasks such as perceptual organization and analysis of complex, natural scenes.

  • Flash Movie Session A
  • Flash Movie Session B

Wednesday, February 06, 2008

    Compressed Sensing: Coded Aperture and CS in Astronomy, the Pareto frontier, Sampling bounds, Multiscale sparse image representation

    The Rice repository has new publications that I have not covered yet. Jerome Bobin, Jean-Luc Starck, and Roland Ottensamer introduce us to the subject of Compressed sensing in astronomy, where one would expect the same "boring" decomposition of sky images, but no, it is a different story they tell us. For the first time, Compressed Sensing is likely an enabling technology, woohoo! The abstract reads:

    Recent advances in signal processing have focused on the use of sparse representations in various applications. A new field of interest based on sparsity has recently emerged: compressed sensing. This theory is a new sampling framework that provides an alternative to the well-known Shannon sampling theory. In this paper we investigate how compressed sensing (CS) can provide new insights into astronomical data compression and more generally how it paves the way for new conceptions in astronomical remote sensing. We first give a brief overview of the compressed sensing theory which provides a very simple coding process with low computational cost, thus favoring its use for real-time applications often found on board space missions. We introduce a practical and effective recovery algorithm for decoding compressed data. In astronomy, physical prior information is often crucial for devising effective signal processing methods. We particularly point out that a CS-based compression scheme is flexible enough to account for such information. In this context, compressed sensing is a new framework in which data acquisition and data processing are merged. We show also that CS provides a fantastic new way to handle multiple observations of the same field of view, allowing us to recover information at very low signal-to-noise ratio, which is impossible with standard compression methods. This CS data fusion concept could lead to an elegant and effective way to solve the problem ESA is faced with for the transmission to Earth of the data collected by PACS, one of the instruments on board the Herschel spacecraft, which will be launched in 2008.

    I think this is one of the first implementations that will have a direct bearing on actual full-sized hardware (and space hardware at that). As the authors point out:

    The Herschel satellite, which will be launched in 2008, is faced with a similar problem. Indeed the photometer data need to be compressed by a factor of 16 to be transferred. The lossless compression scheme implemented so far (based on entropy coding) yields a compression rate of 2.5. ESA is in need of a compression ratio of 6. As the CPU load has to be extremely small, conventional compression methods cannot be used.

    But more interestingly, they connect their approach to coded aperture as mentioned before here, here and here:

    Interestingly, such a measurement paradigm is far from being science fiction. Indeed, in the field of γ-ray imaging, the so-called coded masks (see [39] and references therein) have been used since the sixties and are currently operating in the ESA/Integral space mission. In gamma-ray (high energy) imaging, coded masks are used as aperture masks scattering the incoming gamma photons. More formally, the couple (coded aperture mask and detector field) is equivalent to selecting some projections in the Fourier space. In coded aperture imaging, the way the mask is designed is likely to simulate incoherent projections. Furthermore, gamma-ray data are often made of point sources that are almost sparse in the pixel domain. Fourier measurements then provide near optimal incoherent projections. The first application of compressed sensing then dates back to the sixties! In the compressed sensing community, the coded mask concept has inspired the design of the celebrated “compressed sensing camera” [40] that provides effective image compression with a single pixel.
    In coded aperture imaging, the decoding step is often performed by iterative techniques based on maximum entropy [41]. Applying a sparsity-based recovery technique as advocated by the compressed sensing theory would probably provide enhancements.

    This is great, but I wish they would explain this sentence a little better:
    "More formally, the couple (coded aperture mask and detector field) is equivalent to selecting some projections in the Fourier space."

    And then they use the noiselet transform. It should be in the public domain, or I am going to have to implement it as a Monday Morning Algorithm one of these days. Any takers? The interesting part of the paper is the development of a new iterative thresholding reconstruction algorithm called ProxIT. The algorithm seems to be able to take on large images.
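    ProxIT itself is not available as code as far as I know, so here is a sketch of plain iterative soft thresholding, the family of algorithms it belongs to (all names and parameters below are mine, not the authors'):

import numpy as np

def ista(A, y, lam, n_iter=200):
    """Iterative soft thresholding for min ||y - A x||_2^2 + lam ||x||_1."""
    L = np.linalg.norm(A, 2) ** 2              # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        z = x - A.T @ (A @ x - y) / L          # gradient step on the quadratic term
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return x

# tiny example: recover a 3-sparse vector from 40 random measurements
rng = np.random.default_rng(2)
A = rng.standard_normal((40, 100)) / np.sqrt(40)
x0 = np.zeros(100)
x0[[5, 17, 60]] = [1.0, -0.8, 0.5]
x_hat = ista(A, A @ x0, lam=0.01)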

    Probing the Pareto frontier for basis pursuit solutions by Ewout van den Berg and Michael P. Friedlander. The abstract reads:

    The basis pursuit problem seeks a minimum one-norm solution of an underdetermined least-squares problem. Basis pursuit denoise (BPDN) fits the least-squares problem only approximately, and a single parameter determines a curve that traces the optimal trade-off between the least-squares fit and the one-norm of the solution. We prove that this curve is convex and continuously differentiable over all points of interest, and show that it gives an explicit relationship to two other optimization problems closely related to BPDN. We describe a root-finding algorithm for finding arbitrary points on this curve; the algorithm is suitable for problems that are large scale and for those that are in the complex domain. At each iteration, a spectral gradient-projection method approximately minimizes a least-squares problem with an explicit one-norm constraint. Only matrix-vector operations are required. The primal-dual solution of this problem gives function and derivative information needed for the root-finding method. Numerical experiments on a comprehensive set of test problems demonstrate that the method scales well to large problems.
    A presentation on the subject was mentioned before.
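    If I read the paper correctly, the main computational idea is Newton root finding on the Pareto curve phi(tau) = ||A x_tau - y||_2, whose slope is available in closed form from duality. A minimal sketch under that reading, with a crude projected-gradient inner solver standing in for the authors' spectral gradient-projection method (all names below are mine):

import numpy as np

def project_l1(v, tau):
    """Euclidean projection of v onto the l1 ball of radius tau."""
    if tau <= 0:
        return np.zeros_like(v)
    if np.abs(v).sum() <= tau:
        return v
    u = np.sort(np.abs(v))[::-1]              # magnitudes, sorted descending
    css = np.cumsum(u)
    k = np.nonzero(u * np.arange(1, len(u) + 1) > css - tau)[0][-1]
    theta = (css[k] - tau) / (k + 1)
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)

def lasso(A, y, tau, n_iter=500):
    """Projected gradient for min ||A x - y||_2 subject to ||x||_1 <= tau."""
    L = np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = project_l1(x - A.T @ (A @ x - y) / L, tau)
    return x

def pareto_root_find(A, y, sigma, tau=0.0, n_newton=10):
    """Newton iteration on phi(tau) = sigma along the Pareto curve."""
    for _ in range(n_newton):
        x = lasso(A, y, tau)
        r = y - A @ x
        phi = np.linalg.norm(r)               # value of the curve at tau
        if phi <= sigma:
            break
        dphi = -np.linalg.norm(A.T @ r, np.inf) / phi  # slope of the curve, from duality
        tau += (sigma - phi) / dphi           # Newton step
    return x, tau

# tiny usage example
rng = np.random.default_rng(4)
A = rng.standard_normal((50, 128)) / np.sqrt(50)
x0 = np.zeros(128)
x0[[3, 40, 99]] = [1.2, -0.7, 0.9]
y = A @ x0 + 0.01 * rng.standard_normal(50)
x_hat, tau = pareto_root_find(A, y, sigma=0.1)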

    Galen Reeves and Michael Gastpar just released Sampling bounds for sparse support recovery in the presence of noise. The abstract reads:

    It is well known that the support of a sparse signal can be recovered from a small number of random projections. However, in the presence of noise all known sufficient conditions require that the per-sample signal-to-noise ratio (SNR) grows without bound with the dimension of the signal. If the noise is due to quantization of the samples, this means that an unbounded rate per sample is needed. In this paper, it is shown that an unbounded SNR is also a necessary condition for perfect recovery, but any fraction (less than one) of the support can be recovered with bounded SNR. This means that a finite rate per sample is sufficient for partial support recovery. Necessary and sufficient conditions are given for both stochastic and non-stochastic signal models. This problem arises in settings such as compressive sensing, model selection, and signal denoising.
    Galen Reeves' master's thesis is entitled Sparse signal sampling using noisy linear projections.


    Julien Mairal, Guillermo Sapiro, and Michael Elad, Multiscale sparse image representation with learned dictionaries. The abstract reads:

    This paper introduces a new framework for learning multiscale sparse representations of natural images with overcomplete dictionaries. Our work extends the K-SVD algorithm [1], which learns sparse single-scale dictionaries for natural images. Recent work has shown that the K-SVD can lead to state-of-the-art image restoration results [2, 3]. We show that these are further improved with a multiscale approach, based on a Quadtree decomposition. Our framework provides an alternative to multiscale pre-defined dictionaries such as wavelets, curvelets, and contourlets, with dictionaries optimized for the data and application instead of pre-modelled ones.

    So it now looks like we have a competitor to the Efficient sparse coding algorithms of Honglak Lee, Alexis Battle, Rajat Raina, and Andrew Ng mentioned before. To get a feel for the Quadtree decomposition the abstract mentions, see the sketch below.
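    Here is a toy sketch (my own, not the authors' code) of gathering patches at several dyadic scales via a quadtree, the kind of structure over which one could then learn one dictionary per scale:

import numpy as np

def quadtree_patches(img, min_size=4):
    """Recursively split img into quadrants, collecting the blocks at every scale."""
    patches = {}
    def split(block, depth):
        patches.setdefault(depth, []).append(block)
        h, w = block.shape
        if min(h, w) // 2 >= min_size:
            h2, w2 = h // 2, w // 2
            for sub in (block[:h2, :w2], block[:h2, w2:],
                        block[h2:, :w2], block[h2:, w2:]):
                split(sub, depth + 1)
    split(img, 0)
    return patches                            # depth -> list of blocks; one "scale" per depth

img = np.random.default_rng(3).random((32, 32))
scales = quadtree_patches(img)
# a K-SVD-style dictionary would then be trained on the vectorized patches of
# each scale separately (the dictionary learning step is not shown here)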

    Credit photo: ESA, Mirror of the Herschel Spacecraft.
