Wednesday, April 30, 2008

CS: A code, linearized Bregman iteration, Norms of random submatrices, Fast Fourier Sampling, Performance of l0 approximation, Radar Reconstruction


I mentioned earlier the article by G. Hosein Mohimani, Massoud Babaie-Zadeh, and Christian Jutten entitled Fast Sparse Representation based on Smoothed l0 norm. G. Hosein Mohimani forwarded me the code implementing the algorithm; it is also available here.
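For readers who want a feel for the approach before opening the code, here is a minimal Python sketch of the smoothed-l0 (SL0) idea as described in the paper: replace the l0 count with a sum of Gaussians, take a few steepest-ascent steps on that smooth surrogate while projecting back onto the solution set, and gradually shrink the Gaussian width sigma. The parameter names and default values below are illustrative assumptions on my part, not the authors' interface.

```python
import numpy as np

def sl0(A, x, sigma_min=1e-3, sigma_decrease=0.5, mu_0=2.0, L=3):
    """Sketch of smoothed-l0 minimization: find a sparse s with A s = x by
    maximizing sum(exp(-s_i^2 / (2 sigma^2))) over the affine solution set
    for a decreasing sequence of sigma. Defaults are assumptions, not the
    authors' settings."""
    A_pinv = np.linalg.pinv(A)
    s = A_pinv @ x                            # minimum-l2-norm feasible start
    sigma = 2.0 * np.max(np.abs(s))
    while sigma > sigma_min:
        for _ in range(L):                    # a few steepest-ascent steps
            delta = s * np.exp(-s**2 / (2 * sigma**2))
            s = s - mu_0 * delta              # push small entries toward zero
            s = s - A_pinv @ (A @ s - x)      # project back onto {s : A s = x}
        sigma *= sigma_decrease               # sharpen the l0 surrogate
    return s

# Toy usage: recover a 4-sparse vector from 40 random measurements.
rng = np.random.default_rng(0)
n, m, k = 100, 40, 4
A = rng.standard_normal((m, n))
s_true = np.zeros(n)
s_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
x = A @ s_true
print(np.linalg.norm(sl0(A, x) - s_true))     # should be small
```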

Another reconstruction code can be found in a new paper entitled Linearized Bregman Iterations for Compressed Sensing by Jian-Feng Cai, Stanley Osher, and Zuowei Shen. The abstract reads:
Finding a solution of a linear equation Au = f with various minimization properties arises from many applications. One such application is compressed sensing, where an efficient and robust-to-noise algorithm to find a minimal l1 norm solution is needed. This means that the algorithm should be tailored for large-scale and completely dense matrices A, while Au and Aᵀu can be computed by fast transforms and the solution to be found is sparse. Recently, a simple and fast algorithm based on linearized Bregman iteration was proposed in [26, 30] for this purpose. This paper analyzes the convergence of linearized Bregman iterations and the minimization properties of their limit. Based on our analysis, we also derive a new algorithm that is proven to be convergent with a rate. Furthermore, the new algorithm is as simple and fast as the algorithm given in [26, 30] in approximating a minimal l1 norm solution of Au = f, as shown by numerical simulations. Hence, it can be used as another choice of an efficient tool in compressed sensing.
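The iteration analyzed here is remarkably compact: a gradient-type update on the residual followed by soft-thresholding. Below is a minimal Python sketch of a linearized Bregman iteration of this general form; the step size, the value of mu, and the iteration count are conservative assumptions of mine, not the paper's prescriptions.

```python
import numpy as np

def shrink(v, mu):
    """Soft-thresholding, the proximal map of the l1 norm."""
    return np.sign(v) * np.maximum(np.abs(v) - mu, 0.0)

def linearized_bregman(A, f, mu=10.0, n_iter=5000):
    """Sketch of a linearized Bregman iteration for approximating a minimal
    l1 norm solution of A u = f. Parameters are illustrative assumptions."""
    delta = 1.0 / np.linalg.norm(A, 2) ** 2   # step size chosen for stability
    u = np.zeros(A.shape[1])
    v = np.zeros(A.shape[1])
    for _ in range(n_iter):
        v = v + A.T @ (f - A @ u)             # accumulate the residual
        u = delta * shrink(v, mu)             # shrinkage keeps u sparse
    return u

# Toy usage with orthonormal rows, so A Aᵀ = I and the step size is benign.
rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.standard_normal((100, 40)))
A = Q.T                                       # 40 x 100 sensing matrix
u_true = np.zeros(100)
u_true[[3, 17, 42, 71]] = [1.5, -2.0, 1.0, -0.5]
u_hat = linearized_bregman(A, A @ u_true)
print(np.linalg.norm(u_hat - u_true))         # small if recovery succeeds
```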
Of related interest to the random matrix results is Norms of random submatrices and sparse approximation by Joel Tropp. The abstract reads:

Many problems in the theory of sparse approximation require bounds on operator norms of a random submatrix drawn from a fixed matrix. The purpose of this note is to collect estimates for several different norms that are most important in the analysis of l1 minimization algorithms. Several of these bounds have not appeared in detail.
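A quick Monte Carlo experiment conveys what is being bounded. The sketch below fixes a matrix, draws random column submatrices, and records their spectral norms; all sizes are arbitrary choices of mine, for illustration only.

```python
import numpy as np

# Empirical look at the spectral norm of a random column submatrix
# drawn from a fixed matrix (sizes are arbitrary illustrative choices).
rng = np.random.default_rng(1)
n, k, trials = 200, 20, 200
Phi = rng.standard_normal((n, n)) / np.sqrt(n)    # a fixed matrix
norms = [np.linalg.norm(Phi[:, rng.choice(n, k, replace=False)], 2)
         for _ in range(trials)]
print(f"norm of the full matrix: {np.linalg.norm(Phi, 2):.3f}")
print(f"random {k}-column submatrices: "
      f"mean {np.mean(norms):.3f}, max {np.max(norms):.3f}")
```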

And On the Conditioning of Random Subdictionaries by Joel Tropp. The abstract reads:
An important problem in the theory of sparse approximation is to identify well-conditioned subsets of vectors from a general dictionary. In most cases, current results do not apply unless the number of vectors is smaller than the square root of the ambient dimension, so these bounds are too weak for many applications. This paper shatters the square-root bottleneck by focusing on random subdictionaries instead of arbitrary subdictionaries. It provides explicit bounds on the extreme singular values of random subdictionaries that hold with overwhelming probability. The results are phrased in terms of the coherence and spectral norm of the dictionary, which capture information about its global geometry. The proofs rely on standard tools from the area of Banach space probability. As an application, the paper shows that the conditioning of a subdictionary is the major obstacle to the uniqueness of sparse representations and the success of l1 minimization techniques for signal recovery. Indeed, if a fixed subdictionary is well conditioned and its cardinality is slightly smaller than the ambient dimension, then a random signal formed from this subdictionary almost surely has no other representation that is equally sparse. Moreover, with overwhelming probability, the maximally sparse representation can be identified via l1 minimization. Note that the results in this paper are not directly comparable with recent work on subdictionaries of random dictionaries.
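The following short script gives a numerical taste of the phenomenon: it builds an overcomplete dictionary with unit-norm atoms, computes its coherence, and then looks at the extreme singular values of a random subdictionary whose cardinality is far beyond the square root of the ambient dimension. All sizes are my own arbitrary choices.

```python
import numpy as np

# Extreme singular values of a random subdictionary of a unit-norm dictionary.
rng = np.random.default_rng(2)
d, n, k = 256, 512, 64                 # note k = 64, well above sqrt(d) = 16
D = rng.standard_normal((d, n))
D /= np.linalg.norm(D, axis=0)         # normalize the atoms
coherence = (np.abs(D.T @ D) - np.eye(n)).max()
sub = D[:, rng.choice(n, k, replace=False)]
svals = np.linalg.svd(sub, compute_uv=False)
print(f"coherence of the dictionary: {coherence:.3f}")
print(f"singular values of a random {k}-atom subdictionary: "
      f"{svals.min():.3f} .. {svals.max():.3f}")
```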

On a different note, one can exploit the knowledge that the result of a transform is sparse to produce a faster algorithm for that transform. Anna Gilbert, Martin Strauss, and Joel Tropp present A Tutorial on Fast Fourier Sampling. The abstract reads:

Having obtained all N Fourier coefficients, it is straightforward to locate the m nonzero frequencies and their coefficients. Although you can compute the DFT efficiently by means of the fast Fourier transform (FFT), the fact remains that you must compute a very large number of zero coefficients when the signal involves few frequencies. This approach seems rather inefficient. The discrete uncertainty principle [8] suggests that it might be possible to use fewer samples from the signal. Indeed, if the spectrum of a length-N discrete-time signal contains only m nonzero frequencies, then the time domain has at least N/m nonzero positions. As a result, even if we sample the signal at relatively few points in time, the samples should carry significant information about the spectrum of the signal. This article describes a computational method, called the Fourier sampling algorithm, that exploits this insight [10]. The algorithm takes a small number of (correlated) random samples from a signal and processes them efficiently to produce an approximation of the DFT of the signal. The algorithm offers provable guarantees on the number of samples, the running time, and the amount of storage. As we will see, these requirements are exponentially better than the FFT for some cases of interest. This article describes in detail how to implement a version of Fourier sampling, it presents some evidence of its empirical performance, and it explains the theoretical ideas that underlie the analysis. Our hope is that this tutorial will allow engineers to apply Fourier sampling to their own problems. We also hope that it will stimulate further research on practical implementations and extensions of the algorithm.
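The discrete uncertainty principle invoked in the abstract is easy to check numerically. The toy script below plants m nonzero frequencies in a length-N spectrum and counts the nonzero time-domain positions, which must number at least N/m; the sizes are arbitrary choices of mine.

```python
import numpy as np

# Check: a length-N signal with only m nonzero frequencies must have
# at least N/m nonzero time-domain positions.
rng = np.random.default_rng(3)
N, m = 1024, 8
spectrum = np.zeros(N, dtype=complex)
support = rng.choice(N, m, replace=False)
spectrum[support] = rng.standard_normal(m) + 1j * rng.standard_normal(m)
signal = np.fft.ifft(spectrum)                 # sparse spectrum -> spread signal
time_support = np.count_nonzero(np.abs(signal) > 1e-12)
print(f"nonzero time positions: {time_support} (lower bound N/m = {N // m})")
```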

This tutorial makes use of the results in A Deterministic Sub-linear Time Sparse Fourier Algorithm via Non-adaptive Compressed Sensing Methods by Mark Iwen. The abstract reads:

We study the problem of estimating the best B-term Fourier representation for a given frequency-sparse signal (i.e., vector) A of length N ≫ B. More precisely, we investigate how to deterministically identify B of the largest magnitude frequencies of Â, and estimate their coefficients, in polynomial(B, log N) time. Randomized sub-linear time algorithms, which have a small (controllable) probability of failure for each processed signal, exist for solving this problem. However, for failure intolerant applications such as those involving mission-critical hardware designed to process many signals over a long lifetime, deterministic algorithms with no probability of failure are highly desirable. In this paper we build on the deterministic Compressed Sensing results of Cormode and Muthukrishnan (CM) [26, 6, 7] in order to develop the first known deterministic sub-linear time sparse Fourier Transform algorithm suitable for failure intolerant applications. Furthermore, in the process of developing our new Fourier algorithm, we present a simplified deterministic Compressed Sensing algorithm which improves on CM's algebraic compressibility results while simultaneously maintaining their results concerning exponential decay.

and of the Ann Arbor Fast Fourier Transform (AAFFT) code found on the Fast Approximation Algorithms page.


Finally, there is Performance of the l0 approximation in a general dictionary by Francois Malgouyres and Mila Nikolova. The abstract reads:
We consider the minimization of the number of non-zero coefficients (the ℓ0 “norm”) of the representation of a data set in terms of an arbitrary dictionary under any fidelity constraint. This (nonconvex) optimization problem naturally leads to the sparsest representations, compared with other functionals. Our goal is to measure the sets of data yielding an exactly K-sparse solution (involving K nonzero components) from data uniformly distributed on a domain defined by any norm. We describe these sets of data and derive bounds on their Lebesgue measure. They lead to bounds on the probability of getting an exactly K-sparse solution.
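Because the ℓ0 problem is combinatorial, the only exact solver for an arbitrary dictionary is enumeration over supports. The toy sketch below does exactly that for the problem class studied in the paper, i.e., the fewest atoms whose best least-squares fit meets a fidelity constraint; it is a hypothetical illustration of the objective, exponential in the dictionary size, and not a practical method.

```python
import numpy as np
from itertools import combinations

def l0_min(A, d, tau):
    """Brute-force l0 minimization: return an s with the fewest nonzero
    coefficients such that ||A s - d||_2 <= tau. Exponential cost; toy only."""
    m, n = A.shape
    for k in range(n + 1):                         # supports of growing size
        for support in combinations(range(n), k):
            s = np.zeros(n)
            if k > 0:
                cols = A[:, list(support)]
                coef, *_ = np.linalg.lstsq(cols, d, rcond=None)
                s[list(support)] = coef            # best fit on this support
            if np.linalg.norm(A @ s - d) <= tau:
                return s                           # first hit is sparsest
    return None

# Tiny usage: the 2-sparse generating support should be recovered exactly.
rng = np.random.default_rng(4)
A = rng.standard_normal((8, 12))
s_true = np.zeros(12)
s_true[[2, 9]] = [1.0, -1.0]
print(np.nonzero(l0_min(A, A @ s_true, tau=1e-8))[0])   # expect [2 9]
```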

One final note: Lee Potter, Phil Schniter, and Justin Ziniel just released Sparse Reconstruction for Radar. It is an interesting paper because it gives reasons why they are not using compressed sensing (see paragraph 1.2)!

Credit photo: NASA, HiRISE view of Hellas Planitia on Mars.
