Thursday, June 19, 2008

CS: SIAM IS, EUSIPCO, Papers found on the Internets, New Rice entries and a Video.

In a previous entry, I forgot to mention the following abstract from the SIAM IS08 meeting, which is relevant to Compressed Sensing:

MS9
Semismooth Newton and Active Set Methods for Sparse Reconstruction

The problem of sparse reconstruction by Basis Pursuit is equivalent to regularization of inverse problems with sparsity constraints. We will shortly describe regularizing properties and then employ a semismooth version of Newton’s method to solve the related non-smooth minimization problem. This algorithm can be formulated as an active set method and we prove its local superlinear convergence. Besides this local result, the algorithm is—in some cases—robust to the initial value. Moreover, we discuss globalization strategies of the algorithms.

Dirk Lorenz
Center for Industrial Mathematics
University of Bremen

Roland Griesse
Technical University Chemnitz
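
For readers who want a feel for what such an active set formulation can look like in practice, here is a small numpy sketch of a primal-dual active set iteration for the l_1-regularized least squares problem min_x 0.5*||Ax - b||^2 + alpha*||x||_1. This is only a toy illustration of the general idea, under my own choices of variable names, stopping rule and test problem; it is not the authors' algorithm and has no globalization, so it is only locally reliable, as the abstract itself points out.

import numpy as np

def active_set_lasso(A, b, alpha, max_iter=50, tol=1e-10):
    """Toy primal-dual active set (semismooth Newton) iteration for
    min_x 0.5*||A x - b||^2 + alpha*||x||_1. Illustrative only: no globalization."""
    m, n = A.shape
    x = np.zeros(n)
    for _ in range(max_iter):
        z = x - A.T @ (A @ x - b)          # point at which the soft threshold is linearized
        active = np.abs(z) > alpha         # predicted support
        x_new = np.zeros(n)
        if active.any():
            Aa = A[:, active]
            s = np.sign(z[active])
            # solve the reduced normal equations on the active set only
            x_new[active] = np.linalg.solve(Aa.T @ Aa + 1e-12 * np.eye(int(active.sum())),
                                            Aa.T @ b - alpha * s)
        if np.linalg.norm(x_new - x) < tol:
            x = x_new
            break
        x = x_new
    return x

# tiny usage example on a random sparse recovery instance
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100)) / np.sqrt(40)
x_true = np.zeros(100); x_true[[3, 17, 42]] = [1.0, -2.0, 0.5]
b = A @ x_true
x_hat = active_set_lasso(A, b, alpha=1e-3)
print(np.flatnonzero(np.abs(x_hat) > 1e-3))

The active set is re-estimated at every step from the thresholded gradient point, and a small linear system is solved on that set only, which is what makes this class of methods fast when the solution is very sparse.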

The EUSIPCO meeting is on August 25-29th. Thanks to Laurent Duval, I found out the program is out, and here are some of the presentations that seem relevant to Compressed Sensing. Some of these papers have already been featured here:

The LNLA meeting adjacent to the EUSIPCO meeting also has a program out, but I could not see anything related to CS.

Out of the ones I have not covered yet but that are available from the authors' sites, we have:

Dictionary identifiability from few training samples by Remi Gribonval and Karin Schnass. The abstract reads:

This article treats the problem of learning a dictionary providing sparse representations for a given signal class, via l_1 minimisation. The problem is to identify a dictionary Φ from a set of training samples Y knowing that Y = ΦX for some coefficient matrix X. Using a characterisation of coefficient matrices X that allow the recovery of any orthonormal basis (ONB) as a local minimum of an l_1 minimisation problem, it is shown that certain types of sparse random coefficient matrices will ensure local identifiability of the ONB with high probability, for a number of training samples which essentially grows linearly with the signal dimension.
Under-determined source separation via mixed-norm regularized minimization by M. Kowalski, E. Vincent and Remi Gribonval. The abstract reads:

We consider the problem of extracting the source signals from an under-determined convolutive mixture assuming that the mixing filters are known. We wish to exploit the sparsity and approximate disjointness of the time-frequency representations of the sources. However, classical time-frequency masking techniques cannot be directly applied due to the convolutive nature of the mixture. To address this problem, we first formulate it as the minimization of a functional combining a classical l_2 discrepancy term between the observed mixture and the mixture reconstructed from the estimated sources and a sparse regularization term defined in terms of mixed l_2/l_1 norms of source coefficients in a time-frequency domain. The minimum of the functional is then obtained by a thresholded Landweber iteration algorithm. Preliminary results are discussed for two synthetic audio mixtures.
As you may recall from this recent entry, the paper by Adam Zelinski, Lawrence L. Wald, Kawin Setsompop, Vivek Goyal and Elfar Adalsteinsson entitled Sparsity-Enforced Slice-Selective MRI RF Excitation Pulse Design had a similar l_1/l_2 mixed norm in its regularization.
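
To make the mixed-norm idea a bit more concrete, here is a minimal numpy sketch of a thresholded Landweber (proximal gradient) iteration in which the thresholding step is the group soft-thresholding operator associated with an l_2/l_1 penalty over groups of coefficients. The grouping, problem sizes and parameters below are purely illustrative assumptions and do not reproduce the paper's time-frequency, convolutive-mixture setting.

import numpy as np

def group_soft_threshold(x, groups, tau):
    """Proximity operator of tau * sum_g ||x_g||_2 (the mixed l2/l1 penalty)."""
    out = np.zeros_like(x)
    for g in groups:
        norm_g = np.linalg.norm(x[g])
        if norm_g > tau:
            out[g] = (1 - tau / norm_g) * x[g]
    return out

def thresholded_landweber(A, y, groups, tau, n_iter=200):
    """Minimize 0.5*||A x - y||^2 + tau * sum_g ||x_g||_2 by proximal gradient."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # Landweber step size, 1/||A||^2
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = group_soft_threshold(x - step * A.T @ (A @ x - y), groups, step * tau)
    return x

# usage sketch: 20 coefficients in 5 groups of 4, only one group active
rng = np.random.default_rng(1)
A = rng.standard_normal((12, 20))
groups = [np.arange(4 * k, 4 * k + 4) for k in range(5)]
x_true = np.zeros(20); x_true[groups[2]] = rng.standard_normal(4)
y = A @ x_true
x_hat = thresholded_landweber(A, y, groups, tau=0.05)
print(np.round(x_hat, 2))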

There is also the potentially important Shift-invariant dictionary learning for sparse representations: extending K-SVD by Boris Mailhé, Sylvain Lesage, Rémi Gribonval, Frédéric Bimbot and Pierre Vandergheynst. The abstract reads:
Shift-invariant dictionaries are generated by taking all the possible shifts of a few short patterns. They are helpful to represent long signals where the same pattern can appear several times at different positions. We present an algorithm that learns shift invariant dictionaries from long training signals. This algorithm is an extension of K-SVD. It alternates a sparse decomposition step and a dictionary update step. The update is more difficult in the shift-invariant case because of occurrences of the same pattern that overlap. We propose and evaluate an unbiased extension of the method used in K-SVD, i.e. a method able to exactly retrieve the original dictionary in a noiseless case.
Rémi Gribonval told me offline that this new technique will be made available at some point in the future.
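
For readers unfamiliar with K-SVD itself, here is a bare-bones numpy sketch of the plain (non-shift-invariant) algorithm that the paper extends: alternate a greedy sparse coding step with an SVD-based update of each atom. The fixed sparsity level, initialization and toy data below are my own simplifications, not the authors' implementation.

import numpy as np

def omp(D, y, k):
    """Greedy orthogonal matching pursuit: pick up to k atoms of D to approximate y."""
    residual, support = y.copy(), []
    coeffs = np.zeros(D.shape[1])
    for _ in range(k):
        j = int(np.argmax(np.abs(D.T @ residual)))
        if j not in support:
            support.append(j)
        sol, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ sol
    coeffs[support] = sol
    return coeffs

def ksvd(Y, n_atoms, k, n_iter=5, seed=0):
    """Alternate sparse coding (OMP) with an atom-by-atom rank-one SVD update."""
    rng = np.random.default_rng(seed)
    D = rng.standard_normal((Y.shape[0], n_atoms))
    D /= np.linalg.norm(D, axis=0)
    for _ in range(n_iter):
        X = np.column_stack([omp(D, Y[:, i], k) for i in range(Y.shape[1])])
        for j in range(n_atoms):
            used = np.flatnonzero(X[j, :])
            if used.size == 0:
                continue
            # residual without atom j, restricted to the signals that use it
            E = Y[:, used] - D @ X[:, used] + np.outer(D[:, j], X[j, used])
            U, s, Vt = np.linalg.svd(E, full_matrices=False)
            D[:, j] = U[:, 0]
            X[j, used] = s[0] * Vt[0, :]
    return D, X

# usage: learn a 2x-overcomplete dictionary from sparse combinations of a hidden one
rng = np.random.default_rng(1)
D_true = rng.standard_normal((16, 32)); D_true /= np.linalg.norm(D_true, axis=0)
X_true = np.where(rng.random((32, 200)) < 0.1, rng.standard_normal((32, 200)), 0.0)
D_learned, _ = ksvd(D_true @ X_true, n_atoms=32, k=3)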

Finally, two papers from Crete:

Compressed Sensing of Audio Signals Using Multiple Sensors by Anthony Griffin and Panagiotis Tsakalides. The abstract reads:
Compressed sensing is an attractive compression scheme due to its universality and lack of complexity on the sensor side. In this paper we present a study on compressed sensing of real, non-sparse, audio signals. We investigate the performance of different bases and reconstruction algorithms. We then explore the performance of multi-sensor compressed sensing of audio signals and present a novel scheme to provide improved performance over standard reconstruction algorithms. We then present simulations and measured results of a new algorithm to perform efficient detection and estimation in a sensor network that is used to track the location of a subject wearing a tracking device, which periodically transmits a very sparse audio signal. We show that our algorithm can dramatically reduce the number of transmissions in such a sensor network.
Poster Abstract: Compressed Sensing of Audio Signals in a Wireless Sensor Network by Anthony Griffin and Panagiotis Tsakalides. The abstract reads:
Compressed sensing is an attractive compression scheme due to its universality and lack of complexity on the sensor side. In this work we demonstrate how it could be used in a wireless sensor network. We consider a sensor network that tracks the location of a subject wearing a device that periodically transmits an audio signal. Through simulations and measurements of a simple system, we illustrate that dramatic compression can be achieved.
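
As a concrete illustration of the basic pipeline these two papers build on, here is a small numpy/scipy sketch: take random Gaussian measurements of a signal that is sparse in a DCT basis and reconstruct it greedily with orthogonal matching pursuit. The signal model, sizes, choice of basis and use of OMP are assumptions made for illustration; they are not the authors' experimental setup.

import numpy as np
from scipy.fft import idct

n, m, k = 256, 80, 5                               # signal length, measurements, sparsity
rng = np.random.default_rng(0)

# synthesize a signal that is k-sparse in the DCT domain (a crude "audio" stand-in)
coeffs = np.zeros(n)
coeffs[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
Psi = idct(np.eye(n), norm='ortho', axis=0)        # inverse DCT as the synthesis basis
x = Psi @ coeffs

Phi = rng.standard_normal((m, n)) / np.sqrt(m)     # random measurement matrix
y = Phi @ x                                        # compressed measurements
A = Phi @ Psi                                      # effective sensing matrix in the sparse domain

# orthogonal matching pursuit on the effective matrix
residual, support = y.copy(), []
for _ in range(k):
    j = int(np.argmax(np.abs(A.T @ residual)))
    if j not in support:
        support.append(j)
    sol, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
    residual = y - A[:, support] @ sol
c_hat = np.zeros(n); c_hat[support] = sol
x_hat = Psi @ c_hat
print("relative error:", np.linalg.norm(x_hat - x) / np.linalg.norm(x))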
Also found on the Internets:
Weighted Superimposed Codes and Constrained Integer Compressed Sensing by Wei Dai and Olgica Milenkovic. The abstract reads:
We introduce a new family of codes, termed weighted superimposed codes (WSCs). This family generalizes the class of Euclidean superimposed codes (ESCs), used in multiuser identification systems. WSCs allow for discriminating all bounded, integer-valued linear combinations of real-valued codewords that satisfy prescribed norm and non-negativity constraints. By design, WSCs are inherently noise tolerant. Therefore, these codes can be seen as special instances of robust compressed sensing schemes. The main results of the paper are lower and upper bounds on the largest achievable code rates of several classes of WSCs. These bounds suggest that with the codeword and weighting vector constraints at hand, one can improve the code rates achievable by standard compressive sensing.
and Compressive Sampling of Binary Images by Vladimir Stankovic, Lina Stankovic, and Samuel Cheng. The abstract reads:
Compressive sampling is a novel framework that exploits sparsity of a signal in a transform domain to perform sampling below the Nyquist rate. In this paper, we apply compressive sampling to reduce the sampling rate of binary images. A system is proposed whereby the image is split into non-overlapping blocks of equal size and compressive sampling is performed on selected blocks only using the orthogonal matching pursuit technique. The remaining blocks are sampled fully. This way, the complexity and the required sampling time are reduced since the orthogonal matching pursuit operates on a smaller number of samples, and at the same time local sparsity within an image is exploited. Our simulation results show more than 20% saving in acquisition for several binary images.
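
Here is a short numpy sketch of the block splitting idea described in that abstract: cut a binary image into non-overlapping blocks, measure the blocks with few foreground pixels compressively and keep the dense blocks fully sampled. The block size, sparsity test and measurement rate are illustrative guesses, and the per-block reconstruction (which the paper does with orthogonal matching pursuit) is omitted; it would follow the same greedy recipe as the audio sketch above.

import numpy as np

def block_compressive_sample(img, block=8, rate=0.4, sparsity_thresh=0.15, seed=0):
    """Split a binary image into non-overlapping blocks; compressively sample
    the sparse ones (few foreground pixels), keep the rest fully sampled."""
    rng = np.random.default_rng(seed)
    h, w = img.shape
    n = block * block
    m = int(rate * n)                                  # measurements per compressed block
    Phi = rng.standard_normal((m, n)) / np.sqrt(m)     # shared measurement matrix
    samples, total = {}, 0
    for i in range(0, h - block + 1, block):
        for j in range(0, w - block + 1, block):
            patch = img[i:i + block, j:j + block].astype(float).ravel()
            if patch.mean() < sparsity_thresh:         # "sparse enough" -> compress
                samples[(i, j)] = ('cs', Phi @ patch)
                total += m
            else:                                      # dense block -> sample fully
                samples[(i, j)] = ('raw', patch)
                total += n
    saving = 1 - total / (h * w)
    return samples, saving

# usage: a toy 64x64 binary image with a small rectangle of foreground pixels
img = np.zeros((64, 64), dtype=int)
img[10:20, 30:45] = 1
_, saving = block_compressive_sample(img)
print(f"fraction of samples saved: {saving:.0%}")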
Finally, there are some new papers featured at the Rice Repository Page:

Shamgar Gurevich, Ronny Hadani, and Nir Sochen, On some deterministic dictionaries supporting sparsity. The abstract reads:
We describe a new construction of an incoherent dictionary, referred to as the oscillator dictionary, which is based on considerations in the representation theory of finite groups. The oscillator dictionary consists of approximately p^5 unit vectors in a Hilbert space of dimension p, whose pairwise inner products have magnitude of at most 4/sqrt(p). An explicit algorithm to construct a large portion of the oscillator dictionary is presented.
Patrick Combettes and Valérie Wajs, Signal recovery by proximal forward-backward splitting. The abstract reads:
We show that various inverse problems in signal recovery can be formulated as the generic problem of minimizing the sum of two convex functions with certain regularity properties. This formulation makes it possible to derive existence, uniqueness, characterization, and stability results in a unified and standardized fashion for a large class of apparently disparate problems. Recent results on monotone operator splitting methods are applied to establish the convergence of a forward-backward algorithm to solve the generic problem. In turn, we recover, extend, and provide a simplified analysis for a variety of existing iterative methods. Applications to geometry/texture image decomposition schemes are also discussed. A novelty of our framework is to use extensively the notion of a proximity operator, which was introduced by Moreau in the 1960s.
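
The forward-backward iteration analyzed in that paper is easy to state in code for the special case where the smooth term is a least-squares data fit and the non-smooth term is an l_1 penalty, in which case the proximity operator reduces to soft thresholding. The following numpy sketch shows that particular instance (often called ISTA); it is just one example of the generic problem the paper covers, with sizes and parameters chosen for illustration.

import numpy as np

def soft_threshold(v, tau):
    """Proximity operator of tau*||.||_1 (Moreau's prox for the l1 norm)."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def forward_backward(A, y, lam, n_iter=300):
    """Forward-backward splitting for min_x 0.5*||Ax - y||^2 + lam*||x||_1:
    a forward (gradient) step on the smooth term, then a backward (prox) step."""
    gamma = 1.0 / np.linalg.norm(A, 2) ** 2   # step size within the convergence range
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = soft_threshold(x - gamma * A.T @ (A @ x - y), gamma * lam)
    return x

# usage on a small random sparse recovery problem with mild noise
rng = np.random.default_rng(2)
A = rng.standard_normal((50, 120)) / np.sqrt(50)
x_true = np.zeros(120); x_true[[5, 60, 100]] = [2.0, -1.5, 1.0]
y = A @ x_true + 0.01 * rng.standard_normal(50)
x_hat = forward_backward(A, y, lam=0.02)
print(np.flatnonzero(np.abs(x_hat) > 0.1))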
A variational formulation for frame-based inverse problems by Caroline Chaux, Patrick Combettes, Jean-Christophe Pesquet, and Valérie Wajs. The abstract reads:
A convex variational framework is proposed for solving inverse problems in Hilbert spaces with a priori information on the representation of the target solution in a frame. The objective function to be minimized consists of a separable term penalizing each frame coefficient individually and of a smooth term modeling the data formation model as well as other constraints. Sparsity-constrained and Bayesian formulations are examined as special cases. A splitting algorithm is presented to solve this problem and its convergence is established in infinite-dimensional spaces under mild conditions on the penalization functions, which need not be differentiable. Numerical simulations demonstrate applications to frame-based image restoration.

and finally, Proximal thresholding algorithm for minimization over orthonormal bases by Patrick Combettes and Jean-Christophe Pesquet. The abstract reads:
The notion of soft thresholding plays a central role in problems from various areas of applied mathematics, in which the ideal solution is known to possess a sparse decomposition in some orthonormal basis. Using convex-analytical tools, we extend this notion to that of proximal thresholding and investigate its properties, providing in particular several characterizations of such thresholders. We then propose a versatile convex variational formulation for optimization over orthonormal bases that covers a wide range of problems, and establish the strong convergence of a proximal thresholding algorithm to solve it. Numerical applications to signal recovery are demonstrated.
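
In the simplest instance, when the penalty is an l_1 norm of the coefficients in an orthonormal basis, the proximal thresholding step has a closed form: analyze, soft threshold, synthesize back. Here is a tiny numpy/scipy sketch of that special case using an orthonormal DCT as the (assumed) basis; a single such step already performs transform-domain denoising.

import numpy as np
from scipy.fft import dct, idct

def prox_l1_in_onb(x, tau):
    """Prox of tau*||W x||_1 when W is the orthonormal DCT analysis operator:
    analyze, soft threshold the coefficients, then synthesize back."""
    c = dct(x, norm='ortho')
    c = np.sign(c) * np.maximum(np.abs(c) - tau, 0.0)
    return idct(c, norm='ortho')

# denoising a noisy cosine with one proximal thresholding step
t = np.linspace(0, 1, 256, endpoint=False)
signal = np.cos(2 * np.pi * 5 * t)
noisy = signal + 0.3 * np.random.default_rng(3).standard_normal(256)
clean = prox_l1_in_onb(noisy, tau=0.5)
print(np.round(np.linalg.norm(noisy - signal), 2), np.round(np.linalg.norm(clean - signal), 2))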

There is also a video from the Integration of Sensing and Processing workshop (December 05-09, 2005) featuring Robert Nowak on Active Learning vs. Compressed Sensing.

All these links are also featured on The CS List. Because of its date, the video is at the end of the CS video page.

Credit photo: NASA/JPL-Caltech/University of Arizona/Texas A&M. Sol 22 images from Phoenix: the Snow White trench at Wonderland.
