Tuesday, March 10, 2009

CS: The Long Post.

It's a long post, and maybe that shows the field is maturing. In yesterday's post on the Duke Compressive Sensing Videos, I forgot to mention Larry Carin's introductory talk; here it is.

First, for those of you in the U.S. who have congressional representation, please take a moment to read this small blog entry on the Conyers bill currently being considered in Congress. PubMed is a jewel for the average citizen when it comes to learning about developments in the treatment of disease. The world at large will thank you for keeping this resource open and free for the good of mankind.

Now that Kepler has launched, the next launch of importance will be that of Herschel, which will carry an implementation of a compressive sensing algorithm for one of its cameras. The launch is scheduled for April 16th and is on the CS calendar. When was the last time you had a spacecraft launch on your calendar? ... I thought so.

In other news, it looks like some professionals are thinking that 12 MP is enough for digital cameras. I note in this entry that Fuji has just come out with the FinePix200EXR, which uses a technology we mentioned before that is aimed at high dynamic range.


What about putting something in front of that camera :-) ? (even though it looks like it is a pain to change a point-and-shoot lens).

On a totally unrelated note, one of my counters passed its 100,000th visit to the blog and attendant sites this weekend. The counter started around March 2007. Another counter, by Google Analytics, shows the following trend.


In particular, let us note that readership crossed 10,000 visits in February (the shortest month of the year)! This does not take into account readership through feed readers and e-mail (I realize there is a potentially large overlap). Those stats are in the right-side menu of the blog.

I was about to leave a comment on this blog, but the software would not let me, oh well. My comment was:

Instead of thinking of Battleship, could we think of a magic trick where the spectators would give out CS measurements ?


Let us get back to compressive sensing papers, some of which have appeared recently on the Rice Compressive Sensing site, some on arXiv, and others directly on the authors' pages. Avishy Carmi kindly let me know of an extension to his and his colleagues' previous work using Kalman filtering to reconstruct signals, in Extended Compressed Sensing: Filtering Inspired Methods for Sparse Signal Recovery and Their Nonlinear Variants (also here) by himself, Pini Gurfil, Dimitri Kanevsky and Bhuvana Ramabhadran. The abstract reads:

New methods are presented for sparse signal recovery from a sequence of noisy observations. The sparse recovery problem, which is NP-hard in general, is addressed by resorting to convex and non-convex relaxations. The body of algorithms in this work extends and consolidates the recently introduced Kalman filtering (KF)-based compressed sensing methods. These simple methods, which are briefly reviewed here, rely on a pseudo-measurement trick for incorporating the norm relaxations following from CS theory. The extension of the methods to the nonlinear case is discussed and the notion of local CS is introduced. The essential idea is that CS can be applied for recovering sufficiently small and sparse state perturbations, thereby improving nonlinear estimation in cases where the sensing function maps the state onto a lower-dimensional space. Two other methods are considered in this work. The extended Baum-Welch (EBW), a popular algorithm for discriminative training of speech models, is amended here for recovery of normalized sparse signals. This method naturally handles nonlinearities and therefore has a prominent advantage over the nonlinear extensions of the KF-based algorithms, which rely on the validity of linearization. The last method derived in this work is based on a Markov chain Monte Carlo (MCMC) mechanism. This method roughly divides the sparse recovery problem into two parts: the MCMC optimizes the support of the signal while an auxiliary estimation algorithm yields the values of its elements. An extensive numerical study is provided in which the methods are compared and analyzed. As part of this, the KF-based algorithm is applied to lossy speech compression.
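
Since the pseudo-measurement trick is the pivot of the KF-based methods, here is a minimal numpy sketch of my reading of it: the constraint ||x||_1 <= eps is injected as a fictitious measurement 0 = sign(x)^T x, processed by repeated scalar Kalman updates. All parameter values are illustrative assumptions, not the authors' settings.

```python
import numpy as np

def pseudo_measurement_kf(y, A, meas_var=1e-2, pm_var=1e-2, pm_iters=200):
    """Static sparse recovery: one Kalman measurement update with the real
    data, then iterated pseudo-measurements that shrink the l1 norm.
    A sketch of the CS-embedded KF idea, not the authors' reference code."""
    m, n = A.shape
    x, P = np.zeros(n), np.eye(n)
    # ordinary KF update with the real measurements y = A x + v
    S = A @ P @ A.T + meas_var * np.eye(m)
    K = P @ A.T @ np.linalg.inv(S)
    x = x + K @ (y - A @ x)
    P = (np.eye(n) - K @ A) @ P
    # pseudo-measurement updates: treat 0 = sign(x)^T x as a noisy observation
    for _ in range(pm_iters):
        H = np.sign(x)[None, :]                 # linearization of ||x||_1
        s = (H @ P @ H.T).item() + pm_var
        K = (P @ H.T).ravel() / s
        x = x - K * (H @ x).item()
        P = P - np.outer(K, (H @ P).ravel())
    return x
```

Driving the pseudo-measurement toward zero shrinks the l1 norm, while the covariance P keeps track of which coordinates are still uncertain.
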
Avishy also tells me that examples from these papers using Kalman Filtering will soon be on his site. Great.

I wonder how I missed this one, but it looks like it is not on the blog: Combinatorial Sublinear-Time Fourier Algorithms by Mark Iwen. The abstract reads:
We study the problem of estimating the best k-term Fourier representation for a given frequency-sparse signal (i.e., vector) A of length N >> k. More explicitly, we investigate how to deterministically identify k of the largest magnitude frequencies of \hat{A}, and estimate their coefficients, in polynomial(k, log N) time. Randomized sublinear time algorithms which have a small (controllable) probability of failure for each processed signal exist for solving this problem [21, 22]. In this paper we develop the first known deterministic sublinear time sparse Fourier Transform algorithm which is guaranteed to produce accurate results. As an added bonus, a simple relaxation of our deterministic Fourier result leads to a new Monte Carlo Fourier algorithm with similar runtime/sampling bounds to the current best randomized Fourier method [22]. Finally, the Fourier algorithm we develop here implies a simpler optimized version of the deterministic compressed sensing method previously developed in [27].
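
For orientation only, here is a brute-force numpy reference for the quantity being estimated. The full FFT costs O(N log N) and touches every sample, which is precisely what the paper's sublinear algorithm avoids:

```python
import numpy as np

def best_k_term_fourier(a, k):
    """Keep the k largest-magnitude frequencies of a -- the benchmark that
    Iwen's deterministic sublinear algorithm approximates without ever
    reading all N samples."""
    a_hat = np.fft.fft(a) / len(a)
    idx = np.argpartition(np.abs(a_hat), -k)[-k:]   # k dominant frequencies
    return idx, a_hat[idx]
```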

Streaming Measurements in Compressive Sensing: l_1 Filtering by M. Salman Asif and Justin Romberg. The abstract reads:
The central framework for signal recovery in compressive sensing is l_1 norm minimization. In recent years, tremendous progress has been made on algorithms, typically based on some kind of gradient descent or Newton iterations, for performing l_1 norm minimization. These algorithms, however, are for the most part "static": they focus on finding the solution for a fixed set of measurements. In this paper, we present a method for quickly updating the solution to some l_1 norm minimization problems as new measurements are added. The result is an "l_1 filter" that can be implemented using standard techniques from numerical linear algebra. Our proposed scheme is homotopy based: as new measurements are added to the system, instead of solving the updated problem directly, we solve a series of simple (easy to solve) intermediate problems that lead to the desired solution.

Dantzig selector homotopy with dynamic measurements by M. Salman Asif and Justin Romberg. The abstract reads:
The Dantzig selector is a near ideal estimator for recovery of sparse signals from linear measurements in the presence of noise. It is a convex optimization problem which can be recast into a linear program (LP) for real data, and solved using some LP solver. In this paper we present an alternative approach to solve the Dantzig selector which we call “Primal Dual pursuit” or “PD pursuit”. It is a homotopy continuation based algorithm, which iteratively computes the solution of Dantzig selector for a series of relaxed problems. At each step the previous solution is updated using the optimality conditions defined by the Dantzig selector. We will also discuss an extension of PD pursuit which can quickly update the solution for Dantzig selector when new measurements are added to the system. We will present the derivation and working details of these algorithms.

Dynamic Updating for Sparse Time Varying Signals
by M. Salman Asif and Justin Romberg. The abstract reads:
Many signal processing applications revolve around finding a sparse solution to an (often underdetermined) system of linear equations. Recent results in compressive sensing (CS) have shown that when the signal we are trying to acquire is sparse and the measurements are incoherent, the signal can be reconstructed reliably from an incomplete set of measurements. However, the signal recovery is an involved process, usually requiring the solution of an ℓ1 minimization program. In this paper we discuss the problem of estimating a time-varying sparse signal from a series of linear measurements. We propose an efficient way to dynamically update the solution to two types of ℓ1 problems when the underlying signal changes. The proposed dynamic update scheme is based on homotopy continuation, which systematically breaks down the solution update into a small number of linear steps. The computational cost for each step is just a few matrix-vector multiplications.
Dynamic updating for L1 minimization by M. Salman Asif and Justin Romberg. The abstract reads:
The theory of compressive sensing (CS) suggests that under certain conditions, a sparse signal can be recovered from a small number of linear incoherent measurements. An effective class of reconstruction algorithms involve solving a convex optimization program that balances the L1 norm of the solution against a data fidelity term. Tremendous progress has been made in recent years on algorithms for solving these L1 minimization programs. These algorithms, however, are for the most part static: they focus on finding the solution for a fixed set of measurements.
In this paper, we will discuss "dynamic algorithms" for solving L1 minimization programs for streaming sets of measurements. We consider cases where the underlying signal changes slightly between measurements, and where new measurements of a fixed signal are sequentially added to the system. We develop algorithms to quickly update the solution of several different types of L1 optimization problems whenever these changes occur, thus avoiding having to solve a new optimization problem from scratch. Our proposed schemes are based on homotopy continuation, which breaks down the solution update in a systematic and efficient way into a small number of linear steps. Each step consists of a low-rank update and a small number of matrix-vector multiplications -- very much like recursive least squares. Our investigation also includes dynamic updating schemes for L1 decoding problems, where an arbitrary signal is to be recovered from redundant coded measurements which have been corrupted by sparse errors.
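
These four abstracts share one message: when measurements arrive in a stream, re-solving the ℓ1 program from scratch is wasteful. The homotopy machinery itself is beyond a blog snippet, but the payoff can be imitated with the warm_start option of scikit-learn's coordinate-descent Lasso. Treat this as a stand-in for, not an implementation of, the authors' l_1 filter; alpha and the problem sizes are my own choices:

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, k = 256, 8
x0 = np.zeros(n)
x0[rng.choice(n, k, replace=False)] = rng.standard_normal(k)

A = rng.standard_normal((80, n)) / np.sqrt(80)
y = A @ x0
solver = Lasso(alpha=0.01, warm_start=True, max_iter=10_000)
solver.fit(A, y)                      # cold start on the first batch
for _ in range(5):                    # new measurements stream in
    a_new = rng.standard_normal((1, n)) / np.sqrt(80)
    A = np.vstack([A, a_new])
    y = np.append(y, a_new @ x0)
    solver.fit(A, y)                  # warm start: only a few extra sweeps
print(np.linalg.norm(solver.coef_ - x0))
```

The coordinate-descent solver restarts from the previous solution, which is close to the new one; the homotopy algorithms above make that intuition exact by following the solution path in a small number of linear-algebra steps.
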
Signal Recovery from Inaccurate and Incomplete Measurements via ROMP by Deanna Needell and Roman Vershynin. The abstract reads:
We demonstrate a simple greedy algorithm that can reliably recover a vector v in R^d from incomplete and inaccurate measurements x = \phi v + e. Here \phi is an N × d measurement matrix with N much smaller than d, and e is an error vector. Our algorithm, Regularized Orthogonal Matching Pursuit (ROMP), seeks to close the gap between two major approaches to sparse recovery. It combines the speed and ease of implementation of the greedy methods with the strong guarantees of the convex programming methods. For any measurement matrix \phi that satisfies a Uniform Uncertainty Principle, ROMP recovers a signal v with O(n) nonzeros from its inaccurate measurements x in at most n iterations, where each iteration amounts to solving a least squares problem. The noise level of the recovery is proportional to sqrt(log n) ||e||_2. In particular, if the error term e vanishes the reconstruction is exact. This stability result extends naturally to the very accurate recovery of approximately sparse signals.
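
ROMP is compact enough to sketch. Below is my reading of the selection rule in numpy: take the k largest correlations, keep the comparable subgroup (entries within a factor of two of each other) with the most energy, and re-solve least squares on the grown support. A sketch, not the authors' code:

```python
import numpy as np

def romp(y, Phi, k, tol=1e-6):
    """Regularized Orthogonal Matching Pursuit, as I read the paper."""
    m, d = Phi.shape
    support = np.array([], dtype=int)
    w = np.zeros(0)
    r = y.copy()
    while len(support) < 2 * k and np.linalg.norm(r) > tol:
        u = Phi.T @ r
        u[support] = 0.0                             # never re-pick an atom
        J = np.argsort(np.abs(u))[::-1][:k]          # k biggest correlations
        mags = np.abs(u[J])
        best, best_energy = J[:1], -1.0
        for i in range(len(J)):                      # comparable subgroup
            j = i
            while j + 1 < len(J) and mags[j + 1] >= mags[i] / 2:
                j += 1
            energy = float(np.sum(mags[i:j + 1] ** 2))
            if energy > best_energy:
                best, best_energy = J[i:j + 1], energy
        support = np.union1d(support, best)
        w, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        r = y - Phi[:, support] @ w
    x = np.zeros(d)
    x[support] = w
    return x
```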

On-Off Random Access Channels: A Compressed Sensing Framework by Alyson Fletcher, Sundeep Rangan, and Vivek Goyal. The abstract reads:
This paper considers a simple on-off random multiple access channel, where n users communicate simultaneously to a single receiver over m degrees of freedom. Each user transmits with probability lambda, where typically lambda n < m << n, and the receiver must detect which users transmitted. We show that when the codebook has i.i.d. Gaussian entries, detecting which users transmitted is mathematically equivalent to a certain sparsity detection problem considered in compressed sensing. Using recent sparsity results, we derive upper and lower bounds on the capacities of these channels. We show that common sparsity detection algorithms, such as lasso and orthogonal matching pursuit (OMP), can be used as tractable multiuser detection schemes and have significantly better performance than single-user detection. These methods do achieve some near-far resistance but--at high signal-to-noise ratios (SNRs)--may achieve capacities far below optimal maximum likelihood detection. We then present a new algorithm, called sequential OMP, that illustrates that iterative detection combined with power ordering or power shaping can significantly improve the high SNR performance. Sequential OMP is analogous to successive interference cancellation in the classic multiple access channel. Our results thereby provide insight into the roles of power control and multiuser detection on random-access signalling.
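
A fifteen-line experiment conveys the equivalence the paper exploits. The sizes, noise level, and matched-filter threshold below are my own choices:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, lam = 200, 80, 0.1                      # users, degrees of freedom, activity
S = rng.standard_normal((m, n)) / np.sqrt(m)  # i.i.d. Gaussian codebook
active = rng.random(n) < lam
x = active.astype(float)                      # on-off transmission
y = S @ x + 0.05 * rng.standard_normal(m)     # noisy superposition at receiver

# single-user (matched filter) detection: threshold each correlation
det_mf = np.abs(S.T @ y) > 0.5

# multiuser detection via OMP, run for the expected number of active users
support, r = [], y.copy()
for _ in range(int(round(lam * n))):
    support.append(int(np.argmax(np.abs(S.T @ r))))
    w, *_ = np.linalg.lstsq(S[:, support], y, rcond=None)
    r = y - S[:, support] @ w
det_omp = np.zeros(n, dtype=bool)
det_omp[support] = True

print("matched-filter errors:", int(np.sum(det_mf != active)))
print("OMP errors:           ", int(np.sum(det_omp != active)))
```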

The Power of Convex Relaxation: Near-Optimal Matrix Completion by Emmanuel Candes and Terence Tao. The abstract reads:
This paper is concerned with the problem of recovering an unknown matrix from a small fraction of its entries. This is known as the matrix completion problem, and comes up in a great number of applications, including the famous Netflix Prize and other similar questions in collaborative filtering. In general, accurate recovery of a matrix from a small number of entries is impossible; but the knowledge that the unknown matrix has low rank radically changes this premise, making the search for solutions meaningful. This paper presents optimality results quantifying the minimum number of entries needed to recover a matrix of rank r exactly by any method whatsoever (the information theoretic limit). More importantly, the paper shows that, under certain incoherence assumptions on the singular vectors of the matrix, recovery is possible by solving a convenient convex program as soon as the number of entries is on the order of the information theoretic limit (up to logarithmic factors). This convex program simply finds, among all matrices consistent with the observed entries, that with minimum nuclear norm. As an example, we show that on the order of nr log(n) samples are needed to recover a random n x n matrix of rank r by any method, and to be sure, nuclear norm minimization succeeds as soon as the number of entries is of the form nr polylog(n).
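
The convex program itself needs an SDP or first-order solver; as a hands-on proxy, here is the singular value thresholding iteration of Cai, Candès and Shen, which targets a regularized form of the same nuclear-norm objective. The step size and tau heuristic are illustrative, not from this paper:

```python
import numpy as np

def svt_complete(M, mask, tau=None, step=1.2, iters=300):
    """Singular value thresholding for low-rank matrix completion.
    M: matrix with observed entries (zeros elsewhere); mask: boolean map."""
    n1, n2 = M.shape
    if tau is None:
        tau = 5.0 * np.sqrt(n1 * n2)              # common heuristic choice
    Y = np.zeros_like(M)
    X = np.zeros_like(M)
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(Y, full_matrices=False)
        X = (U * np.maximum(s - tau, 0.0)) @ Vt   # shrink singular values
        Y += step * mask * (M - X)                # ascend on observed residual
    return X

# toy check: a rank-2 100x100 matrix from 30% of its entries
rng = np.random.default_rng(0)
M_full = rng.standard_normal((100, 2)) @ rng.standard_normal((2, 100))
mask = rng.random((100, 100)) < 0.3
X = svt_complete(mask * M_full, mask)
print(np.linalg.norm(X - M_full) / np.linalg.norm(M_full))
```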

Stagewise Weak Gradient Pursuits
by Thomas Blumensath and Mike Davies. The abstract reads:
Finding sparse solutions to underdetermined inverse problems is a fundamental challenge encountered in a wide range of signal processing applications, from signal acquisition to source separation. This paper looks at greedy algorithms that are applicable to very large problems. The main contribution is the development of a new selection strategy (called stagewise weak selection) that effectively selects several elements in each iteration. The new selection strategy is based on the realisation that many classical proofs for recovery of sparse signals can be trivially extended to the new setting. What is more, simulation studies show the computational benefits and good performance of the approach. This strategy can be used in several greedy algorithms and we argue for the use within the gradient pursuit framework in which selected coefficients are updated using a conjugate update direction. For this update, we present a fast implementation and novel convergence result.
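
The selection rule itself fits in a few lines. The sketch below substitutes a full least-squares re-fit for the paper's conjugate-gradient directional update; the weakness parameter value is an assumption:

```python
import numpy as np

def stagewise_weak_pursuit(y, A, weakness=0.7, iters=10, tol=1e-6):
    """Per iteration, select every atom whose correlation is within
    `weakness` of the best, then re-fit on the grown support (plain least
    squares here, rather than the paper's conjugate gradient update)."""
    m, n = A.shape
    support = np.array([], dtype=int)
    x = np.zeros(n)
    r = y.copy()
    for _ in range(iters):
        c = A.T @ r
        c[support] = 0.0                         # ignore already-selected atoms
        thr = weakness * np.max(np.abs(c))
        if thr <= tol:
            break
        new = np.flatnonzero(np.abs(c) >= thr)   # several atoms per iteration
        support = np.union1d(support, new)
        w, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        x[:] = 0.0
        x[support] = w
        r = y - A @ x
        if np.linalg.norm(r) < tol:
            break
    return x
```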

Quantized Compressive Sensing
by Wei Dai, Hoa Vinh Pham, and Olgica Milenkovic. The abstract reads:
We study the average distortion introduced by scalar, vector, and entropy coded quantization of compressive sensing (CS) measurements. The asymptotic behavior of the underlying quantization schemes is either quantified exactly or characterized via bounds. We adapt two benchmark CS reconstruction algorithms to accommodate quantization errors, and empirically demonstrate that these methods significantly reduce the reconstruction distortion when compared to standard CS techniques.
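
To play with the effect yourself, here is a toy uniform scalar quantizer for the measurement vector, the simplest of the schemes the paper analyzes (the bit depth is arbitrary). Reconstructing from q instead of y exposes the distortion floor being studied:

```python
import numpy as np

def uniform_quantize(y, n_bits=4):
    """Quantize CS measurements to 2**n_bits uniform levels (midpoint
    reconstruction)."""
    lo, hi = float(y.min()), float(y.max())
    step = (hi - lo) / 2 ** n_bits
    if step == 0.0:                              # constant-input edge case
        return y.copy()
    q = np.clip(np.floor((y - lo) / step), 0, 2 ** n_bits - 1)
    return lo + (q + 0.5) * step
```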

Dictionary Learning for Sparse Approximations with the Majorization Method by Mehrdad Yaghoobi, Thomas Blumensath, and Mike Davies. The abstract reads:
In order to find sparse approximations of signals, an appropriate generative model for the signal class has to be known. If the model is unknown, it can be adapted using a set of training samples. This paper presents a novel method for dictionary learning and extends the learning problem by introducing different constraints on the dictionary. The convergence of the proposed method to a fixed point is guaranteed, unless the accumulation points form a continuum. This holds for different sparsity measures. The majorization method is an optimization method that substitutes the original objective function with a surrogate function that is updated in each optimization step. This method has been used successfully in sparse approximation and statistical estimation (e.g. Expectation Maximization (EM)) problems. This paper shows that the majorization method can be used for the dictionary learning problem too. The proposed method is compared with other methods on both synthetic and real data and different constraints on the dictionary are compared. Simulations show the advantages of the proposed method over other currently available dictionary learning methods not only in terms of average performance but also in terms of computation time.
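
The majorization machinery is substantial; for a concrete frame of reference, here is a generic alternating scheme (ISTA sparse coding, MOD-style least-squares dictionary update with column normalization), i.e. the kind of baseline such methods are compared against, not the paper's own algorithm:

```python
import numpy as np

def soft(z, t):
    """Soft thresholding, the proximal map of the l1 norm."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def dict_learn(Y, n_atoms, outer=20, inner=30, lam=0.1, seed=0):
    """Alternate between ISTA sparse coding of Y (d x n_samples) and a
    MOD-style dictionary update; columns are renormalized each round."""
    rng = np.random.default_rng(seed)
    d, n = Y.shape
    D = rng.standard_normal((d, n_atoms))
    D /= np.linalg.norm(D, axis=0)
    X = np.zeros((n_atoms, n))
    for _ in range(outer):
        L = np.linalg.norm(D, 2) ** 2            # Lipschitz constant of D^T D
        for _ in range(inner):                   # ISTA sparse coding step
            X = soft(X - (D.T @ (D @ X - Y)) / L, lam / L)
        D = Y @ np.linalg.pinv(X)                # least-squares dictionary fit
        D /= np.maximum(np.linalg.norm(D, axis=0), 1e-12)
    return D, X
```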

Parametric Dictionary Design for Sparse Coding
by Mehrdad Yaghoobi, Laurent Daudet, and Mike Davies. The abstract reads:
This paper introduces a new dictionary design method for sparse coding of a class of signals. It has been shown that one can sparsely approximate some natural signals using an overcomplete set of parametric functions, e.g. [1], [2]. A problem in using these parametric dictionaries is how to choose the parameters. In practice these parameters have been chosen by an expert or through a set of experiments. In the sparse approximation context, it has been shown that an incoherent dictionary is appropriate for the sparse approximation methods. In this paper we first characterize the dictionary design problem, subject to a minimum coherence constraint. Then we briefly explain that equiangular tight frames have minimum coherence. The parametric dictionary design is then to find an admissible dictionary close to being tight frame. The complexity of the problem does not allow it to be solved exactly. We introduce a practical method to approximately solve it. Some experiments show the advantages one gets by using these dictionaries.
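
Mutual coherence and the bound that equiangular tight frames attain are easy to compute, so here is a small helper to make the design target concrete (assuming unit-norm columns):

```python
import numpy as np

def coherence_and_welch_bound(D):
    """Mutual coherence of a unit-norm dictionary D (d x n) and the Welch
    lower bound sqrt((n-d)/(d(n-1))) that equiangular tight frames meet."""
    d, n = D.shape
    G = np.abs(D.T @ D)
    np.fill_diagonal(G, 0.0)
    return G.max(), np.sqrt((n - d) / (d * (n - 1)))
```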

Irregular and Multi-channel Sampling of Operators by Yoon Mi Hong and Götz Pfander. The abstract reads:
The classical sampling theorem for bandlimited functions has recently been generalized to apply to so-called bandlimited operators, that is, to operators with band-limited Kohn-Nirenberg symbols. Here, we discuss operator sampling versions of two of the most central extensions to the classical sampling theorem. In irregular operator sampling, the sampling set is not periodic with uniform distance. In multi-channel operator sampling, we obtain complete information on an operator by multiple operator sampling outputs.
Fast Bayesian Matching Pursuit: Model Uncertainty and Parameter Estimation for Sparse Linear Models by Phil Schniter, Lee Potter and Justin Ziniel (this is a revised version). The abstract reads:
A low-complexity recursive procedure is presented for model selection and minimum mean squared error (MMSE) estimation in linear regression. Emphasis is given to the case of a sparse parameter vector and fewer observations than unknown parameters. A Gaussian mixture is chosen as the prior on the unknown parameter vector. The algorithm returns both a set of high posterior probability models and an approximate MMSE estimate of the parameter vector. Exact ratios of posterior probabilities serve to reveal ambiguity among multiple candidate solutions that arises from observation noise or correlation among columns in the regressor matrix. Algorithm complexity is O(MNK), with M observations, N coefficients, and K nonzero coefficients. For the case where hyperparameters are unknown, an approximate maximum likelihood estimator is proposed based on the generalized expectation-maximization algorithm. Numerical simulations demonstrate estimation performance and illustrate the distinctions between MMSE estimation and maximum a posteriori probability model selection.
There is also a presentation: Sparse Reconstruction via Bayesian Variable Selection and Bayesian Model Averaging by Phil Schniter, Lee Potter, and Subhojit Som

Next, we have Some Variations on Total Variation Based Image Smoothing by Antonin Chambolle, Stacey Levine, and Bradley Lucier. The abstract reads:

In this paper we study finite-difference approximations to the variational problem using the BV smoothness penalty that was introduced in an image smoothing context by Rudin, Osher, and Fatemi. We give a dual formulation for an "upwind" finite-difference approximation for the BV seminorm; this formulation is in the same spirit as one popularized by Chambolle for a simpler, more anisotropic, finite-difference approximation to the BV seminorm. We introduce a multiscale method for speeding the approximation of both Chambolle's original method and of the new formulation of the upwind scheme. We demonstrate numerically that the multiscale method is effective, and we provide numerical examples that illustrate both the qualitative and quantitative behavior of the solutions of the numerical formulations.
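
Chambolle's original dual projection algorithm, which the paper builds on, is short enough to write out. This is the textbook version with forward-difference gradient, its adjoint divergence, and the safe step size tau = 1/8:

```python
import numpy as np

def _grad(u):
    gx, gy = np.zeros_like(u), np.zeros_like(u)
    gx[:-1, :] = u[1:, :] - u[:-1, :]          # forward differences,
    gy[:, :-1] = u[:, 1:] - u[:, :-1]          # Neumann boundary
    return gx, gy

def _div(px, py):                              # negative adjoint of _grad
    d = np.zeros_like(px)
    d[0, :] += px[0, :]
    d[1:, :] += px[1:, :] - px[:-1, :]
    d[:, 0] += py[:, 0]
    d[:, 1:] += py[:, 1:] - py[:, :-1]
    return d

def tv_denoise(f, lam=0.1, tau=0.125, iters=200):
    """Chambolle's (2004) projection algorithm for the Rudin-Osher-Fatemi
    model; returns the TV-smoothed image u = f - lam * div(p)."""
    px, py = np.zeros_like(f), np.zeros_like(f)
    for _ in range(iters):
        gx, gy = _grad(_div(px, py) - f / lam)
        nrm = 1.0 + tau * np.sqrt(gx ** 2 + gy ** 2)
        px, py = (px + tau * gx) / nrm, (py + tau * gy) / nrm
    return f - lam * _div(px, py)
```
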
Insights into the stable recovery of sparse solutions in overcomplete representations using network information theory by Yuzhe Jin and Bhaskar Rao. The abstract reads:

In this paper, we examine the problem of overcomplete representations and provide new insights into the problem of stable recovery of sparse solutions in noisy environments. We establish an important connection between the inverse problem that arises in overcomplete representations and wireless communication models in network information theory. We show that the stable recovery of a sparse solution with a single measurement vector (SMV) can be viewed as decoding competing users simultaneously transmitting messages through a Multiple Access Channel (MAC) at the same rate. With multiple measurement vectors (MMV), we relate the inverse problem to the wireless communication scenario with a Multiple-Input Multiple-Output (MIMO) channel. In each case, based on the connection established between the two domains, we leverage channel capacity results with outage analysis to shed light on the fundamental limits of any algorithm to stably recover sparse solutions in the presence of noise. Our results explicitly indicate the conditions on the key model parameters, e.g. degree of overcompleteness, degree of sparsity, and the signal-to-noise ratio, to guarantee the existence of asymptotically stable reconstruction of the sparse source.
Performance limits of matching pursuit algorithms by Yuzhe Jin and Bhaskar Rao. The abstract reads:
In this paper, we examine the performance limits of the Orthogonal Matching Pursuit (OMP) algorithm, which has proven effective in solving for sparse solutions to inverse problems arising in overcomplete representations. To identify these limits, we exploit the connection between the sparse solution problem and the multiple access channel (MAC) in the wireless communication domain. The forward selective nature of OMP allows it to be recognized as a successive interference cancellation (SIC) scheme that decodes non-zero entries one at a time in a specific order. We leverage this SIC decoding order and utilize the criterion for successful decoding to develop the information-theoretic performance limits of OMP, which involve factors such as dictionary dimension, signal-to-noise ratio, and, importantly, the relative behavior of the non-zero entries. Supported by computer simulations, our proposed criterion is demonstrated to be asymptotically effective in explaining the behavior of OMP.

Compressed learning: Universal sparse dimensionality reduction and learning in the measurement domain by Robert Calderbank, Sina Jafarpour, and Robert Schapire. The abstract reads:
In this paper, we provide theoretical results to show that compressed learning, learning directly in the compressed domain, is possible. In particular, we provide tight bounds demonstrating that the linear kernel SVM classifier in the measurement domain, with high probability, has true accuracy close to the accuracy of the best linear threshold classifier in the data domain. We show that this is beneficial both from the compressed sensing and the machine learning points of view. Furthermore, we indicate that for a family of well-known compressed sensing matrices, compressed learning is universal, in the sense that learning and classification in the measurement domain works provided that the data are sparse in some, even unknown, basis. Moreover, we show that our results are also applicable to a family of smooth manifold-learning tasks. Finally, we support our claims with experimental results.
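
The claim is directly checkable with a toy experiment: train a linear SVM on sparse data and on a random projection of it, and compare held-out accuracies. The data model, dimensions, and the use of scikit-learn's LinearSVC are my own choices:

```python
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
d, m, n = 1000, 100, 600                       # ambient dim, measurements, samples
w = np.zeros(d)
w[:20] = 1.0                                   # true separating direction
X = np.zeros((n, d))
for i in range(n):                             # 20-sparse data points
    X[i, rng.choice(d, 20, replace=False)] = rng.standard_normal(20)
labels = np.where(X @ w >= 0, 1, -1)
A = rng.standard_normal((d, m)) / np.sqrt(m)   # random sensing matrix

half = n // 2
for name, Z in [("data domain       ", X), ("measurement domain", X @ A)]:
    clf = LinearSVC(max_iter=20000).fit(Z[:half], labels[:half])
    print(name, clf.score(Z[half:], labels[half:]))
```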

Optimized compressed sensing for curvelet-based seismic data reconstruction by Wen Tang, Jianwei Ma, and Felix Herrmann. The abstract reads:
Compressed sensing (CS) or compressive sampling provides a new sampling theory to reduce data acquisition, which says that compressible signals can be exactly reconstructed from highly incomplete sets of measurements. Very recently, CS has been applied to seismic exploration and has started to impact traditional data acquisition. In this paper, we present an optimized sampling strategy for CS data acquisition, which leads to better performance of the curvelet sparsity-promoting inversion in comparison with random sampling and jittered sampling schemes. One motivation is to reduce the mutual coherence between the measurement sampling scheme and the curvelet sparse transform in the CS framework. The basic idea of our optimization is to directly redistribute the energy in the frequency domain, making the original spectrum easy to discriminate from the random noise induced by random undersampling, while offering control of the maximum gap size. Numerical experiments on synthetic and real seismic data show the good performance of the proposed optimized CS for seismic data reconstruction.
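
For readers unfamiliar with the jittered scheme that serves as the paper's point of comparison, it takes only a few lines to generate: one sample per cell at a random offset, which caps the maximum gap that purely random undersampling cannot control:

```python
import numpy as np

def jittered_mask(n, factor=4, seed=0):
    """Indices of a jittered undersampling of 0..n-1 at rate 1/factor:
    each length-`factor` cell contributes one sample at a random offset."""
    rng = np.random.default_rng(seed)
    starts = np.arange(0, n - factor + 1, factor)
    return starts + rng.integers(0, factor, size=len(starts))
```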

Single-pixel remote sensing by Jianwei Ma. The abstract reads:
In this letter, we apply a new sampling theory named compressed sensing (CS) to aerospace remote sensing to reduce data acquisition and imaging cost. We can directly record single or multiple pixels without the need for an additional compression step, easing problems of power consumption, data storage, and transmission, without degrading spatial resolution or picture quality. CS remote sensing includes two steps: encoding imaging and decoding recovery. A noiselet transform-based single-pixel imaging and a random Fourier sampling-based multipixel imaging are alternatively used for encoding, and an iterative curvelet thresholding method is used for decoding. The new sensing mechanism shifts onboard imaging cost to offline decoding recovery. It would lead to new instruments with less storage space, less power consumption, and smaller size than currently used charge-coupled device cameras, which would match effective needs particularly for probes sent very far away. Numerical experiments on potential applications for the Chinese Chang'e-1 lunar probe are presented.
RLS-Weighted Lasso for Adaptive Estimation of Sparse Signals by Daniele Angelosante and Georgios B. Giannakis. The abstract reads:
The batch least-absolute shrinkage and selection operator (Lasso) has well-documented merits for estimating sparse signals of interest emerging in various applications, where observations adhere to parsimonious linear regression models. To cope with linearly growing complexity and memory requirements that batch Lasso estimators face when processing observations sequentially, the present paper develops a recursive Lasso algorithm that can also track slowly varying sparse signals of interest. Performance analysis reveals that recursive Lasso can either estimate consistently the sparse signal's support or its nonzero entries, but not both. This motivates the development of a weighted version of the recursive Lasso scheme with weights obtained from the recursive least-squares (RLS) algorithm. The resultant RLS-weighted Lasso algorithm provably estimates sparse signals consistently. Simulated tests compare competing alternatives and corroborate the performance of the novel algorithms in estimating time-invariant and tracking slow-varying signals under sparsity constraints.

Distributed compressive video sensing by Li-Wei Kang and Chun-Shien Lu. The abstract reads:
Low-complexity video encoding is applicable to several emerging applications. Recently, distributed video coding (DVC) has been proposed to reduce encoding complexity to the order of that for still image encoding. In addition, compressive sensing (CS) can directly capture compressed image data efficiently. In this paper, by integrating the respective characteristics of DVC and CS, a distributed compressive video sensing (DCVS) framework is proposed to simultaneously capture and compress video data, where almost all of the computational burden can be shifted to the decoder, resulting in a very low-complexity encoder. At the decoder, compressed video can be efficiently reconstructed using the modified GPSR (gradient projection for sparse reconstruction) algorithm. With the assistance of the proposed initialization and stopping criteria for GPSR, derived from statistical dependencies among successive video frames, our modified GPSR algorithm terminates faster and reconstructs better video quality. The performance of our DCVS method is demonstrated via simulations to outperform three known CS reconstruction algorithms.
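
I do not have the authors' modified GPSR, but their initialization idea, starting each frame's reconstruction from the previous frame's estimate, is easy to imitate with a plain ISTA solver. Everything below is a sketch under that substitution:

```python
import numpy as np

def ista(y, A, lam=0.05, x0=None, iters=300):
    """l1-regularized recovery by iterative soft thresholding; passing x0
    mimics the paper's frame-to-frame warm initialization for video."""
    x = np.zeros(A.shape[1]) if x0 is None else x0.copy()
    L = np.linalg.norm(A, 2) ** 2                # Lipschitz constant of A^T A
    for _ in range(iters):
        z = x - A.T @ (A @ x - y) / L            # gradient step on the data fit
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)
    return x

# successive frames change slowly, so each solve starts from the last one:
# x_prev = ista(y_frame0, A)
# x_curr = ista(y_frame1, A, x0=x_prev, iters=50)   # far fewer iterations
```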


Credit: NASA/JPL/University of Arizona,
Ancient Volcano Defrosting (ESP_011605_1170), as taken by the HiRISE camera.
