Wednesday, November 30, 2011

Compressive Sensing This Week

It's the long entry of the week, featuring everything from sensors and template matching to theoretical phase-transition-like results. But first, here is the Classifying Heart Sounds Challenge by Peter Bentley, Glenn Nordehn, Miguel Coimbra, Shie Mannor, and Rita Getz.


Also, here are Jason Laska's thesis presentation slides, entitled Regime Change: Sampling Rate vs. Bit-Depth in Compressive Sensing.

Document clustering involves repetitive scanning of a document set; therefore, as the size of the set increases, the time required for the clustering task increases and may even become impossible due to computational constraints. Compressive sampling is a feature sampling technique that allows us to perfectly reconstruct a vector from a small number of samples, provided that the vector is sparse in some known domain. In this article, we apply the theory behind compressive sampling to the document clustering problem using k-means clustering. We provide a method of computing high accuracy clusters in a fraction of the time it would have taken by directly clustering the documents. This is performed by using the Discrete Fourier Transform and the Discrete Cosine Transform. We provide empirical results showing that compressive sampling provides a 14-fold increase in speed with little reduction in accuracy on 7,095 documents, and we also provide a very accurate clustering of a 231,219 document set, providing a 20-fold increase in speed when compared to performing k-means clustering on the document set. This shows that compressive clustering is a very useful tool that can be used to quickly compute approximate clusters.

From 50 hours down to 2, it's not bad, not bad at all.
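
For intuition, here is a minimal sketch (not the authors' code) of the idea: take a few DCT-domain samples of each document's term vector and run k-means on those samples instead of the full vectors. The dimensions, the number of samples m, and the use of MATLAB's dct and kmeans (Signal Processing and Statistics toolboxes) are all assumptions on my part.

% Minimal sketch: compressive k-means clustering of a term-document matrix.
% D is an (nTerms x nDocs) term-frequency matrix; every column is one document.
% We keep m DCT coefficients per document and cluster those instead of the
% full vectors. All parameter choices below are purely illustrative.
nTerms = 5000; nDocs = 2000; k = 10; m = 64;
D = sprand(nTerms, nDocs, 0.01);             % stand-in for a real document set
C = dct(full(D));                            % DCT of each document (column)
p = randperm(nTerms); idx = p(1:m);          % a small random subset of coefficients
features = C(idx, :)';                       % nDocs x m compressed feature matrix
labels = kmeans(features, k, 'Replicates', 5);  % cluster in the compressed domain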


Recent breakthrough results in compressed sensing (CS) have established that many high dimensional objects can be accurately recovered from a relatively small number of non-adaptive linear projection observations, provided that the objects possess a sparse representation in some basis. Subsequent efforts have shown that the performance of CS can be improved by exploiting the structure in the location of the non-zero signal coefficients (structured sparsity) or using some form of online measurement focusing (adaptivity) in the sensing process. In this paper we examine a powerful hybrid of these two techniques. First, we describe a simple adaptive sensing procedure and show that it is a provably effective method for acquiring sparse signals that exhibit structured sparsity characterized by tree-based coefficient dependencies. Next, employing techniques from sparse hierarchical dictionary learning, we show that representations exhibiting the appropriate form of structured sparsity can be learned from collections of training data. The combination of these techniques results in an effective and efficient adaptive compressive acquisition procedure.


             


Optimal Phase Transitions in Compressed Sensing by Yihong Wu, Sergio Verdu. The abstract reads:
Compressed sensing deals with efficient recovery of analog signals from linear measurements. This paper presents a statistical study of compressed sensing by modeling the input signal as random processes. Three classes of encoders are considered, namely optimal nonlinear, optimal linear and random linear encoders. Focusing on optimal decoders, we investigate the fundamental tradeoff between measurement rate and reconstruction fidelity gauged by error probability and noise sensitivity in the absence and presence of measurement noise, respectively. Optimal phase transition thresholds are determined as a functional of the input distribution and compared to suboptimal thresholds achieved by various popular reconstruction algorithms. In particular, we show that Gaussian sensing matrices incur no penalty on the phase transition threshold with respect to optimal nonlinear encoding. Our results also provide a rigorous justification of previous results based on replica heuristics in the weak-noise regime.

Recent developments in sampling in abstract Hilbert spaces have led to a new theory of compressed sensing for infinite-dimensional signals. In this paper, we continue with this theme by introducing a new type of subsampling for infinite-dimensional sparse recovery problems, known as semi-random sampling. As we demonstrate, this allows for subsampling in problems which previously had not been amenable to more conventional compressed sensing tools. More specifically, semi-random sampling allows one to overcome the so-called incoherence barrier, which limits the potential for subsampling via standard random sampling techniques. The key to this improvement is a property known as asymptotic incoherence. In the final part of this paper we provide specific estimates for this property in several important cases, and illustrate via numerical example the benefit of semi-random sampling.


Generalized Sampling: Extension to Frames and Inverse and Ill-Posed Problems by Ben Adcock, Anders C. Hansen, Evelyn Herrholz, and Gerd Teschke. The abstract reads:
Generalized sampling is a new framework for sampling and reconstruction in infinite-dimensional Hilbert spaces. Given measurements (inner products) of an element with respect to one basis, it allows one to reconstruct in another, arbitrary basis, in a way that is both convergent and numerically stable. However, generalized sampling is thus far only valid for sampling and reconstruction in systems that comprise bases. Thus, in the first part of this paper we extend this framework from bases to frames, and provide fundamental sampling theorems for this more general case. The second part of the paper is concerned with extending the idea of generalized sampling to the solution of inverse and ill-posed problems. In particular, we introduce two generalized sampling frameworks for such problems, based on regularized and non-regularized approaches. We furnish evidence of the usefulness of the proposed theories by providing a number of numerical experiments.



At ASU, in APM/EEE 598 Reverse Engineering of Complex Dynamical Networks, taught by Rodrigo Platte:

  • 11/07 - Robert Thompson - Reverse Engineering Dynamical Systems From Time-Series Data Using l1-Minimization (PDF) (Platte)
  • 11/21 - Lei Ying - "Predicting communication networks from time series by compressive sensing" (PDF) (Lai)
  • 11/21 - Riqi Su - "Predict network collective behavior via compressive sensing" (PDF) (Lai)




This paper considers the application of compressed sensing to spherical acoustics in order to improve spatial sound field reconstruction. More specifically, we apply compressed sensing techniques to a set of Ambisonic sound signals to obtain a super-resolution plane-wave decomposition of the original sound field. That is to say, we investigate using the plane-wave decomposition to increase the spherical harmonic order of the Ambisonic sound scene. We refer to this as upscaling the Ambisonic sound scene. A focus of the paper is using sub-band analysis to make the plane-wave decomposition more robust. Results show that the sub-band analysis does indeed improve the robustness of the plane-wave decomposition when dominant overlapping sources are present or in noisy or diffuse sound conditions. Upscaling Ambisonic sound scenes allows more loudspeakers to be used for spatial sound field reconstruction, resulting in a larger sweet spot and improved sound quality.

David let me know of the advance program for the 2012 IEEE International Solid-State Circuits Conference:
Page 42
22.4 A 256×256 CMOS Image Sensor With ΔΣ-Based Single-Shot Compressed Sensing Y. Oike, A. El Gamal, Stanford University, Stanford, CA;  Sony, Atsugi, Japan
Page 53
2:50 Is Compressed Sensing Relevant to Image Sensors? Abbas El Gamal, Stanford University, Stanford, CA
1:45 Image and Depth from a Conventional Camera with a Coded Aperture, Bill Freeman, Massachusetts Institute of Technology, Cambridge, MA
I learned from Petros' Twitter stream about two papers of his:

Finding optimal sparse solutions to estimation problems, particularly in underdetermined regimes, has recently gained much attention. Most of the existing literature studies linear models in which the squared error is used as the measure of discrepancy to be minimized. However, in many applications discrepancy is measured in more general forms such as log-likelihood. Regularization by the $\ell_1$-norm has been shown to induce sparse solutions, but their sparsity level can be merely suboptimal. In this paper we present a greedy algorithm, dubbed Gradient Support Pursuit (GraSP), for sparsity-constrained optimization. Quantifiable guarantees are provided for GraSP when cost functions have the “Stable Hessian Property”.
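
To give a flavor of the greedy structure, here is a rough sketch of GraSP-style iterations for the special case of a squared-error cost, where the restricted minimization reduces to a least-squares solve; the sparsity level s, the function name, and every other detail below are my own assumptions, not the authors' implementation.

% Sketch of GraSP-style iterations for the quadratic cost f(x) = 0.5*||y - A*x||^2.
% Each step: take the 2s largest-magnitude gradient entries, merge them with the
% current support, minimize the cost over the merged support, then prune to s.
function x = grasp_ls_sketch(A, y, s, nIter)
  n = size(A, 2);
  x = zeros(n, 1);
  for it = 1:nIter
    g = A' * (y - A * x);                    % (negative) gradient of the cost
    [~, ig] = sort(abs(g), 'descend');
    T = union(find(x), ig(1:2*s));           % merged support, at most 3s indices
    b = zeros(n, 1);
    b(T) = A(:, T) \ y;                      % least-squares fit over the support T
    [~, ib] = sort(abs(b), 'descend');
    x = zeros(n, 1);
    x(ib(1:s)) = b(ib(1:s));                 % keep only the s largest entries
  end
end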
and


We present a novel method to securely determine whether two signals are similar to each other, and apply it to approximate nearest neighbor clustering. The proposed method relies on a locality sensitive hashing scheme based on a secure binary embedding, computed using quantized random projections. Hashes extracted from the signals preserve information about the distance between the signals, provided this distance is small enough. If the distance between the signals is larger than a threshold, then no information about the distance is revealed. Theoretical and experimental justification is provided for this property. Further, when the randomized embedding parameters are unknown, then the mutual information between the hashes of any two signals decays to zero exponentially fast as a function of the $\ell_2$ distance between the signals. Taking advantage of this property, we suggest that these binary hashes can be used to perform privacy-preserving nearest neighbor search with significantly lower complexity compared to protocols which use the actual signals.
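
Here is a minimal sketch of a hash built from quantized, dithered random projections, in the spirit of the scheme described above; the specific construction, Delta, and the dimensions are my assumptions, not the paper's exact embedding.

% Binary hash from quantized, dithered random projections (illustrative only).
% Signals that are close relative to Delta tend to produce hashes with small
% Hamming distance; for distant signals the hashes look independent.
n = 512; m = 256; Delta = 0.5;
A = randn(m, n);                         % random projection matrix (kept secret)
w = Delta * rand(m, 1);                  % random dither (kept secret)
hash = @(x) mod(floor((A * x + w) / Delta), 2);   % m-bit binary hash
x1 = randn(n, 1);
x2 = x1 + 0.01 * randn(n, 1);            % a nearby signal
hammingDist = sum(hash(x1) ~= hash(x2)); % small for nearby signals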


We will show that tight frames satisfying the restricted isometry property give rise to nearly tight fusion frames which are nearly orthogonal and hence are nearly equi-isoclinic. We will also show how to replace parts of the RIP frame with orthonormal sets while maintaining the RIP property.


This paper studies multicasting compressively sampled signals from a source to many receivers, over lossy wireless channels. Our focus is on the network outage from the perspective of signal distortion across all receivers, for both cases where the transmitter may or may not be capable of reconstructing the compressively sampled signals. Capitalizing on extreme value theory, we characterize the network outage in terms of key system parameters, including the erasure probability, the number of receivers and the sparse structure of the signal. We show that when the transmitter can reconstruct the compressively sensed signal, the strategy of using network coding to multicast the reconstructed signal coefficients can reduce the network outage significantly. We observe, however, that the traditional network coding could result in suboptimal performance with power-law decay signals. Thus motivated, we devise a new method, namely subblock network coding, which involves fragmenting the data into subblocks, and allocating time slots to different subblocks, based on their priority. We formulate the corresponding optimal allocation as an integer programming problem. Since integer programming is often intractable, we develop a heuristic algorithm that prioritizes the time slot allocation by exploiting the inherent priority structure of power-law decay signals. Numerical results show that the proposed schemes outperform the traditional methods with significant margins.

Recent research has shown that performance in signal processing tasks can often be significantly improved by using signal models based on sparse representations, where a signal is approximated using a small number of elements from a fixed dictionary. Unfortunately, inference in this model involves solving non-smooth optimization problems that are computationally expensive. While significant efforts have focused on developing digital algorithms specifically for this problem, these algorithms are inappropriate for many applications because of the time and power requirements necessary to solve large optimization problems. Based on recent work in computational neuroscience, we explore the potential advantages of continuous time dynamical systems for solving sparse approximation problems if they were implemented in analog VLSI. Specifically, in the simulated task of recovering synthetic and MRI data acquired via compressive sensing techniques, we show that these systems can potentially perform recovery at time scales of 10-20 $\mu$s, supporting data rates of 50-100 kHz (orders of magnitude faster than digital algorithms). Furthermore, we show analytically that a wide range of sparse approximation problems can be solved in the same basic architecture, including approximate $\ell^p$ norms, modified $\ell^1$ norms, re-weighted $\ell^1$ and $\ell^2$, the block $\ell^1$ norm and classic Tikhonov regularization.
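
For readers unfamiliar with these systems, here is a crude forward-Euler simulation of a locally competitive algorithm (LCA) of the kind referenced above, solving a small $\ell_1$ problem; the threshold, time constant, and step size are arbitrary choices of mine, and a real analog implementation would of course not iterate in discrete time.

% Euler simulation of an LCA solving min_a 0.5*||y - Phi*a||^2 + lambda*||a||_1.
% Internal states u evolve continuously; outputs a are soft-thresholded states.
% All constants below are illustrative.
n = 256; m = 64; s = 8;
Phi = randn(m, n) / sqrt(m);
p = randperm(n); a_true = zeros(n, 1); a_true(p(1:s)) = randn(s, 1);
y = Phi * a_true;
lambda = 0.05; tau = 1e-2; dt = 1e-3; nSteps = 2000;
soft = @(u) sign(u) .* max(abs(u) - lambda, 0);    % soft-thresholding activation
b = Phi' * y; G = Phi' * Phi - eye(n);             % drive and inhibition terms
u = zeros(n, 1);
for t = 1:nSteps
  a = soft(u);
  u = u + (dt / tau) * (b - u - G * a);            % LCA state dynamics
end
a_hat = soft(u);                                   % sparse estimate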

Compressed sensing allowing sparse representations with respect to frames is seen to have a much greater range of practical applications. In such settings, one approach to recover the signal is known as $\ell_1$-analysis. In this article we expand the performance analysis of this approach by providing a weaker recovery condition. We also base our analysis on general frames and alternative dual frames (as analysis operators). As one application of such a general-dual-based approach and performance analysis, we also outline an optimal-dual-based technique to demonstrate the effectiveness of using alternative dual frames as analysis operators.
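
A bare-bones sketch of the $\ell_1$-analysis program itself, written for CVX (assumed installed); the frame D, the noise level, and the dimensions are placeholders, and this is not the optimal-dual-based technique the authors develop.

% l1-analysis recovery with a redundant frame D as the analysis operator:
%   minimize ||D'*x||_1  subject to  ||A*x - y||_2 <= eps.
% Requires CVX (cvxr.com); all sizes and the noise level are illustrative.
n = 128; m = 64; p = 256; eps_noise = 1e-3;
A = randn(m, n) / sqrt(m);               % measurement matrix
D = randn(n, p); D = D ./ repmat(sqrt(sum(D.^2)), n, 1);   % normalized frame
x0 = D(:, 1:5) * randn(5, 1);            % a signal built from a few frame atoms
y = A * x0;
cvx_begin
  variable x(n)
  minimize( norm(D' * x, 1) )
  subject to
    norm(A * x - y, 2) <= eps_noise
cvx_end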


Sparse Group Selection Through Co-Adaptive Penalties by Zhou Fang. The abstract reads:
Recent work has focused on the problem of conducting linear regression when the number of covariates is very large, potentially greater than the sample size. To facilitate this, one useful tool is to assume that the model can be well approximated by a fit involving only a small number of covariates -- a so-called sparsity assumption, which leads to the Lasso and other methods. In many situations, however, the covariates can be considered to be structured, in that the selection of some variables favours the selection of others -- with variables organised into groups entering or leaving the model simultaneously as a special case. This structure creates a different form of sparsity. In this paper, we suggest the Co-adaptive Lasso to fit models accommodating this form of `group sparsity'. The Co-adaptive Lasso is fast and simple to calculate, and we show that it holds theoretical advantages over the Lasso, performs well under a broad set of conditions, and is very competitive in empirical simulations in comparison with previously suggested algorithms like the Group Lasso and the Adaptive Lasso.


In a group testing scheme a set of tests is designed to identify a small number t of defective items among a large set (of size N) of items. In the non-adaptive scenario the set of tests has to be designed in one shot. In this setting designing a testing scheme is equivalent to the construction of a disjunct matrix, an M x N matrix where the union of supports of any t columns does not contain the support of any other column. In principle, one wants to have such a matrix with the minimum possible number M of rows (tests).
In this paper we relax the definition of disjunct matrix. The new definition allows one to come up with group testing schemes where almost all (as opposed to all) possible sets of defective items are identifiable. Our main contribution is to show that it is possible to explicitly construct disjunct matrices with the relaxed definition with a much smaller number of rows than is possible with the original definition. As a consequence of our result, for any absolute constant \epsilon >0 and t proportional to any positive power of N, it is possible to explicitly construct a group testing scheme that identifies a (1-\epsilon) proportion of all possible defective sets of size t using only O(t^{3/2}\sqrt{log(N/\epsilon)}) tests. Without our relaxation, the best known scheme requires O(t^2 log N) tests.
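
To make the group testing setting concrete, here is a toy sketch with a random (rather than explicitly constructed) test matrix and the simplest possible decoder, which clears every item that appears in a negative test; the sizes are arbitrary and this is emphatically not the construction of the paper.

% Toy non-adaptive group testing: random pooling design plus the naive decoder
% that marks any item appearing in a negative test as non-defective.
N = 10000; t = 10; M = 800;
p = randperm(N); defective = false(N, 1); defective(p(1:t)) = true;
T = double(rand(M, N) < 1/t);               % M x N random pooling matrix
outcome = (T * double(defective)) > 0;      % a test is positive iff it hits a defective
candidate = true(N, 1);
candidate(any(T(~outcome, :) > 0, 1)) = false;   % clear items seen in negative tests
% "candidate" now contains every defective item plus, hopefully, few false positives.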

In this work, a Bayesian approximate message passing algorithm is proposed for solving the multiple measurement vector (MMV) problem in compressive sensing, in which a collection of sparse signal vectors that share a common support are recovered from undersampled noisy measurements. The algorithm, AMP-MMV, is capable of exploiting temporal correlations in the amplitudes of non-zero coefficients, and provides soft estimates of the signal vectors as well as the underlying support. Central to the proposed approach is an extension of recently developed approximate message passing techniques to the MMV setting. Aided by these techniques, AMP-MMV offers a computational complexity that is linear in all problem dimensions. In order to allow for automatic parameter tuning, an expectation-maximization algorithm that complements AMP-MMV is described. Finally, a detailed numerical study demonstrates the power of the proposed approach and its particular suitability for application to high-dimensional problems.

The Graphical Lasso: New Insights and Alternatives by Rahul Mazumder, Trevor Hastie. The abstract reads:
The graphical lasso [Banerjee et al., 2008, Friedman et al., 2007b] is a popular approach for learning the structure in an undirected Gaussian graphical model, using $\ell_1$ regularization to control the number of zeros in the precision matrix $\Theta=\Sigma^{-1}$. The R package glasso is popular, fast, and allows one to efficiently build a path of models for different values of the tuning parameter. Convergence of GLASSO can be tricky; the converged precision matrix might not be the inverse of the estimated covariance, and occasionally it fails to converge with warm starts. In this paper we explain this behavior, and propose new algorithms that appear to outperform GLASSO. We show that in fact glasso is solving the dual of the graphical lasso penalized likelihood, by block coordinate descent. In this dual, the target of estimation is $\Sigma$, the covariance matrix, rather than the precision matrix $\Theta$. We propose similar primal algorithms P-GLASSO and DP-GLASSO, that also operate by block-coordinate descent, where $\Theta$ is the optimization target. We study all of these algorithms, and in particular different approaches to solving their coordinate subproblems. We conclude that DP-GLASSO is superior from several points of view.


Building on the mathematical breakthroughs of compressive sensing (CS), we developed a 2D spectrometer system that incorporates a spatial light modulator and a single detector. For some wavelengths outside the visible spectrum, where it is too expensive to produce the large detector arrays, this scheme gives us a better solution by using only one pixel. Combining this system with the “smashed filter” technique, we hope to create an efficient IR gas sensor. We performed Matlab simulations to evaluate the effectiveness of the smashed filter for gas tracing.
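
As a rough illustration of the smashed filter idea mentioned above (not the authors' simulation), one can correlate the compressive measurements directly against compressively projected templates; every quantity below is a placeholder rather than a real gas signature.

% Smashed-filter style classification of a compressively sampled spectrum:
% match the measurements against compressed versions of the template library.
n = 1024; m = 64; nGas = 5;
templates = abs(randn(n, nGas));            % stand-in library of reference spectra
Phi = randn(m, n) / sqrt(m);                % measurement (modulation) patterns
x = templates(:, 3) + 0.01 * randn(n, 1);   % unknown scene: "gas 3" plus noise
y = Phi * x;                                % single-detector compressive measurements
proj = Phi * templates;                     % "smashed" (compressed) templates
scores = (proj' * y) ./ sqrt(sum(proj.^2))';% normalized correlations
[~, guess] = max(scores);                   % index of the best-matching template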

Detecting and identifying targets or objects that are present in hyperspectral ground images are of great interest. Applications include land and environmental monitoring, mining, military, civil search-and-rescue operations, and so on. We propose and analyze an extremely simple and efficient idea for template matching based on l1 minimization. The designed algorithm can be applied in hyperspectral classification and target detection. Synthetic image data and real hyperspectral image (HSI) data are used to assess the performance, with comparisons to other approaches, e.g. spectral angle map (SAM), adaptive coherence estimator (ACE), generalized-likelihood ratio test (GLRT) and matched filter. We demonstrate that this algorithm achieves excellent results with both high speed and accuracy by using Bregman iteration.

Diffusion tensor imaging (DTI) is widely used to characterize tissue micro-architecture and brain connectivity. In regions of crossing fibers, however, the tensor model fails because it cannot represent multiple, independent intra-voxel orientations. Most of the methods that have been proposed to resolve this problem require diffusion  magnetic resonance imaging (MRI)  data that comprise large numbers of angles and high b-values, making them problematic for routine clinical imaging and many scientific studies.  We present a technique based on compressed sensing that can resolve crossing fibers using diffusion MRI data  that can be rapidly and routinely acquired in the clinic (30 directions, b-value equal to 700 s/mm2).  The method assumes that the observed data can be well fit using a sparse linear combination of tensors taken from a fixed collection of possible tensors each having a different orientation.   A fast algorithm for computing the best orientations based on a hierarchical compressed sensing algorithm and a novel metric for comparing estimated  orientations are also proposed.  The performance of this approach is  demonstrated  using both  simulations and  in vivo images. The method is observed to resolve crossing fibers using conventional data as well as a standard q-ball approach using much richer data that requires considerably more image acquisition time.

Hyperspectral data processing typically demands enormous computational resources in terms of storage, computation and I/O throughputs, especially when real-time processing is desired. In this paper, we investigate a low-complexity scheme for hyperspectral data compression and reconstruction. In this scheme, compressed hyperspectral data are acquired directly by a device similar to the single-pixel camera [5] based on the principle of compressive sensing. To decode the compressed data, we propose a numerical procedure to directly compute the unmixed abundance fractions of given endmembers, completely bypassing high-complexity tasks involving the hyperspectral data cube itself. The reconstruction model is to minimize the total variation of the abundance fractions subject to a preprocessed fidelity equation with a significantly reduced size, and other side constraints. An augmented Lagrangian type algorithm is developed to solve this model. We conduct extensive numerical experiments to demonstrate the feasibility and efficiency of the proposed approach, using both synthetic data and hardware-measured data. Experimental and computational evidences obtained from this study indicate that the proposed scheme has a high potential in real-world applications.

A Compressed Sensing Approach to 3D Weak Lensing by Adrienne Leonard, François-Xavier Dupé, Jean-Luc Starck. The abstract reads:
(Abridged) Weak gravitational lensing is an ideal probe of the dark universe. In recent years, several linear methods have been developed to reconstruct the density distribution in the Universe in three dimensions, making use of photometric redshift information to determine the radial distribution of lensed sources. In this paper, we aim to address three key issues seen in these methods; namely, the bias in the redshifts of detected objects, the line of sight smearing seen in reconstructions, and the damping of the amplitude of the reconstruction relative to the underlying density. We consider the problem under the framework of compressed sensing (CS). Under the assumption that the data are sparse in an appropriate dictionary, we construct a robust estimator and employ state-of-the-art convex optimisation methods to reconstruct the density contrast. For simplicity in implementation, and as a proof of concept of our method, we reduce the problem to one-dimension, considering the reconstruction along each line of sight independently. Despite the loss of information this implies, we demonstrate that our method is able to accurately reproduce cluster haloes up to a redshift of z=1, deeper than state-of-the-art linear methods. We directly compare our method with these linear methods, and demonstrate minimal radial smearing and redshift bias in our reconstructions, as well as a reduced damping of the reconstruction amplitude as compared to the linear methods. In addition, the CS framework allows us to consider an underdetermined inverse problem, thereby allowing us to reconstruct the density contrast at finer resolution than the input data.

Compressive Phase Retrieval From Squared Output Measurements Via Semidefinite Programming by Henrik Ohlsson, Allen Y. Yang, Roy Dong, S. Shankar Sastry. The abstract reads:
Given a linear system in a real or complex domain, linear regression aims to recover the model parameters from a set of observations. Recent studies in compressive sensing have successfully shown that under certain conditions, a linear program, namely, l1-minimization, guarantees recovery of sparse parameter signals even when the system is underdetermined. In this paper, we consider a more challenging problem: when the phase of the output measurements from a linear system is omitted. Using a lifting technique, we show that even though the phase information is missing, the sparse signal can be recovered exactly by solving a simple semidefinite program when the sampling rate is sufficiently high. This is an interesting finding since the exact solutions to both sparse signal recovery and phase retrieval are combinatorial. Besides, this also extends the type of applications to which compressive sensing can be applied, namely those where only output magnitudes can be observed. We demonstrate the accuracy of the algorithms through extensive simulation and a practical experiment.
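
For concreteness, here is a small sketch of a lifted semidefinite program of the type described in the abstract, written for CVX; the trade-off parameter lambda and the rank-1 extraction are my own simplifications, not the paper's exact formulation.

% Lifted SDP sketch for sparse phase retrieval from squared measurements
% b_i = |a_i'*x|^2: lift x to X = x*x' and relax. Requires CVX; illustrative only.
n = 32; m = 128;
A = randn(m, n);
x0 = zeros(n, 1); x0([3 7 20]) = randn(3, 1);        % sparse ground truth
b = (A * x0).^2;                                     % phaseless measurements
lambda = 0.1;                                        % sparsity trade-off (assumed)
cvx_begin sdp
  variable X(n, n) symmetric
  minimize( trace(X) + lambda * norm(X(:), 1) )
  subject to
    diag(A * X * A') == b;
    X >= 0;
cvx_end
[V, E] = eig(X);
[~, i] = max(diag(E));
x_hat = sqrt(max(E(i, i), 0)) * V(:, i);             % rank-1 estimate, up to sign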

The goal of the following paper is not at all clear to me. The author first decomposes an image as a series of wavelets and then uses a sparse measurement matrix to compress the wavelet coefficients (producing compressed samples). He then reconstructs the original image from this smaller set of compressed samples to check whether it looks similar to the original image. This is all fine, but I do not see the purpose: is the point to reduce the cost of comparing images by checking whether the compressed samples alone are comparable? I don't know.

Compressed sensing of astronomical images: orthogonal wavelets domains by Vasil Kolev. The abstract reads:
A simple approach for orthogonal wavelets in compressed sensing (CS) applications is presented. We compare efficient algorithms for different orthogonal wavelet measurement matrices in CS for image processing from scanned photographic plates (SPP). Some important characteristics were obtained for astronomical image processing of SPP. The best orthogonal wavelet choice for measurement matrix construction in CS for image compression of images of SPP is given. The image quality measure for linear and nonlinear image compression methods is defined.

Fast Algorithms for Sparse Recovery with Perturbed Dictionary by Xuebing Han, Hao Zhang. The abstract reads:
In this paper, for sparse recovery of large underdetermined linear systems, we propose a new kind of fast algorithm, based on the totally least square (TLS) method and FOCUSS (FOCal Underdetermined System Solver). The problem of sparse recovery is considered when perturbations appear in both the measurements and the dictionary (sensing matrix) (here we call the system model the TLS model). The objective function to be optimized is deduced through a maximum a posteriori (MAP) estimation. Then a new FOCUSS algorithm, named TLS-FOCUSS, is derived from the main idea of TLS, to reduce the impact of the perturbations of the dictionary and measurements on the performance of sparse recovery. Compared with other recovery algorithms on the TLS model, the TLS-FOCUSS algorithm is not only near-optimum but also fast, and thus fit for large scale computation. In order to reduce the complexity of the algorithm further, another suboptimal algorithm named SD-FOCUSS is proposed. Another contribution of the paper is that SD-FOCUSS can be applied to the MMV (multiple measurement vectors) TLS model, a setting that has not been studied before. The convergence of the TLS-FOCUSS and SD-FOCUSS algorithms is established with mathematical proofs. The new algorithms based on the TLS model are shown to be efficient and high-performance. The simulation results illustrate the advantage of TLS-FOCUSS and SD-FOCUSS in accuracy and stability compared with the other algorithms.

Quantum state tomography is a fundamental tool in quantum information processing. It allows us to estimate the state of a quantum system by measuring different observables on many identically prepared copies of the system. This is, in general, a very time-consuming task that requires a large number of measurements. There are, however, systems in which the data acquisition can be done more efficiently. In fact, an ensemble of quantum systems can be prepared and manipulated by external fields while being continuously and collectively probed, producing enough information to estimate its state. This provides a basis for continuous measurement quantum tomography. In this protocol, an ensemble of identically prepared systems is collectively probed and controlled in a time-dependent manner to create an informationally complete continuous measurement record. The measurement history is then inverted to determine the state at the initial time. We use two different estimation methods: maximum likelihood and compressed sensing. The general formalism is applied to the case of reconstruction of the quantum state encoded in the magnetic sub-levels of a large-spin alkali atom, ${}^{133}$Cs. We apply this protocol to the case of reconstruction of states in the full 16-dimensional electronic-ground subspace ($F=3 \oplus F=4$), controlled by microwaves and radio-frequency magnetic fields. We present an experimental demonstration of continuous measurement quantum tomography in an ensemble of cold cesium atoms with full control of its 16-dimensional Hilbert space. We show the exquisite level of control achieved in the lab and the excellent agreement between the theory discussed in this dissertation and the experimental results. This allows us to achieve fidelities >95% for low complexity quantum states, and >92% for arbitrary random states, which is a formidable accomplishment for a space of this size.

Distributed Representation of Geometrically Correlated Images with Compressed Linear Measurements by Vijayaraghavan Thirumalai, Pascal Frossard. The abstract reads:
This paper addresses the problem of distributed coding of images whose correlation is driven by the motion of objects or positioning of the vision sensors. It concentrates on the problem where images are encoded with compressed linear measurements. We propose a geometry-based correlation model in order to describe the common information in pairs of images. We assume that the constitutive components of natural images can be captured by visual features that undergo local transformations (e.g., translation) in different images. We first identify prominent visual features by computing a sparse approximation of a reference image with a dictionary of geometric basis functions. We then pose a regularized optimization problem to estimate the corresponding features in correlated images given by quantized linear measurements. The estimated features have to comply with the compressed information and to represent consistent transformation between images. The correlation model is given by the relative geometric transformations between corresponding features. We then propose an efficient joint decoding algorithm that estimates the compressed images such that they stay consistent with both the quantized measurements and the correlation model. Experimental results show that the proposed algorithm effectively estimates the correlation between images in multi-view datasets. In addition, the proposed algorithm provides effective decoding performance that compares advantageously to independent coding solutions as well as state-of-the-art distributed coding schemes based on disparity learning.


Compressive Sensing Framework for Speech Signal Synthesis Using a Hybrid Dictionary by Yue Wang, Zhixing Xu, Gang Li, Liping Chang and Chuanrong Hong. The abstract reads:
Compressive sensing (CS) is a promising focus in signal processing field, which offers a novel view of simultaneous compression and sampling. In this framework a sparse approximated signal is obtained with samples much less than that required by the Nyquist sampling theorem if the signal is sparse on one basis. Encouraged by its exciting potential application in signal compression, we use the CS framework for speech synthesis problems. The linear prediction coding (LPC) is an efficient tool for speech compression, as the speech is considered to be an AR process. It is also known that a speech signal is quasi-periodic in its voiced parts, hence a discrete Fourier transform (DFT) basis will provide a better approximation. Thus we propose a hybrid dictionary combining the LPC model and the DFT model as the basis of the speech signal. The orthogonal matching pursuit (OMP) is employed in our simulations to compute the sparse representation in the hybrid dictionary domain. The results indicate good performance with our proposed scheme, offering a satisfactory perceptual quality.
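
Since the reconstruction engine here is plain orthogonal matching pursuit, a generic OMP sketch may help; the dictionary below is just a DCT/identity concatenation standing in for the LPC + DFT dictionary of the paper, K is an assumed sparsity level, and dctmtx needs the Signal Processing Toolbox.

% Generic orthogonal matching pursuit over a two-part dictionary.
n = 256; K = 20;
D = [dctmtx(n)', eye(n)];               % placeholder "hybrid" dictionary
x = randn(n, 1);                         % one frame of (synthetic) signal
r = x; support = [];
for k = 1:K
  [~, j] = max(abs(D' * r));             % atom most correlated with the residual
  support = [support, j];                %#ok<AGROW>
  coef = D(:, support) \ x;              % least-squares fit on the chosen atoms
  r = x - D(:, support) * coef;          % update the residual
end
x_hat = D(:, support) * coef;            % K-sparse approximation of the frame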

We discuss new methods for the recovery of signals with block-sparse structure, based on $\ell_1$-minimization. Our emphasis is on verifiable conditions on the problem parameters (sensing matrix and the block structure) for accurate recovery and efficiently computable bounds for the recovery error. These bounds are then optimized with respect to the method parameters to construct the estimators with improved statistical properties. To justify the proposed approach we provide an oracle inequality which links the properties of the recovery algorithms and the best estimation performance. We also propose a new matching pursuit algorithm for block-sparse recovery.
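
Here is a minimal sketch of the standard block-$\ell_1$ (sum of block $\ell_2$ norms) program that this line of work builds on, written for CVX; it illustrates the recovery problem only, not the verifiable conditions or error bounds of the paper.

% Block-sparse recovery by mixed l1/l2 minimization (requires CVX; sizes illustrative).
n = 120; m = 60; blockSize = 4; nBlocks = n / blockSize;
A = randn(m, n) / sqrt(m);
x0 = zeros(n, 1);
for blk = [2 7 19]                                   % a few active blocks
  x0((blk-1)*blockSize + (1:blockSize)) = randn(blockSize, 1);
end
y = A * x0;
cvx_begin
  variable x(n)
  minimize( sum(norms(reshape(x, blockSize, nBlocks), 2, 1)) )   % sum of block norms
  subject to
    A * x == y;
cvx_end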


Many seismic exploration techniques rely on the collection of massive data volumes that are mined for information during processing. This approach has been extremely successful, but current efforts toward higher-resolution images in increasingly complicated regions of the Earth continue to reveal fundamental shortcomings in our typical workflows. The “curse of dimensionality” is the main roadblock, and is exemplified by Nyquist’s sampling criterion, which disproportionately strains current acquisition and processing systems as the size and desired resolution of our survey areas continue to increase. We offer an alternative sampling strategy that leverages recent insights from compressive sensing towards seismic acquisition and processing for data that are traditionally considered to be undersampled. The main outcome of this approach is a new technology where acquisition and processing related costs are no longer determined by overly stringent sampling criteria. Compressive sensing is a novel nonlinear sampling paradigm, effective for acquiring signals that have a sparse representation in some transform domain. We review basic facts about this new sampling paradigm that revolutionized various areas of signal processing, and illustrate how it can be successfully exploited in various problems in seismic exploration to effectively fight the curse of dimensionality.

Generalized sampling and the stable and accurate reconstruction of piecewise analytic functions from their Fourier coefficients by Ben Adcock and Anders C. Hansen. The abstract reads:
Suppose that the first m Fourier coefficients of a piecewise analytic function are given. Direct expansion in a Fourier series suffers from the Gibbs phenomenon and lacks uniform convergence. Nonetheless, in this paper we show that, under very broad conditions, it is always possible to recover an n-term expansion in a different system of functions using only these coefficients. Such an expansion can be made arbitrarily close to the best possible n-term expansion in the given system. Thus, if a piecewise polynomial basis is employed, for example, exponential convergence can be restored. The resulting method is linear, numerically stable and can be implemented efficiently in only $O(nm)$ operations. A key issue is how the parameter m must scale in comparison to n to ensure recovery. We derive analytical estimates for this scaling for large classes of polynomial and piecewise polynomial bases. In particular, we show that in many important cases, including the case of piecewise Chebyshev polynomials, this scaling is quadratic: $m = O(n^2)$. Therefore, using a system of polynomials that the user is essentially free to choose, one can restore exponential accuracy in n and root exponential accuracy in m. This generalizes a result proved recently for piecewise Legendre polynomials. The method developed in this paper is part of a new numerical framework for sampling and reconstruction in abstract Hilbert spaces, known as generalized sampling. This paper extends previous work by the authors by introducing a substantially more flexible methodology which allows for sampling and reconstruction with respect to different inner products. In the final part of this paper we illustrate the application of generalized sampling to a related family of problems.
Here is a postdoc position at the University of New Mexico:

Postdoctoral Fellowship in Diffusion Tensor MR Spectroscopic Imaging in Human Brain
Department of Neurology
University of New Mexico School of Medicine
Applicants are invited to apply for an NIH funded postdoctoral fellow position in the Department of Neurology at the University of New Mexico School of Medicine (http://hsc.unm.edu/som/neuro/lab/). The goal of the fellowship project is to develop novel ultra-fast MR spectroscopic imaging methods to measure the diffusion tensor of metabolites in human brain. The project involves development of Proton-Echo-Planar-Spectroscopic-Imaging (PEPSI) with parallel imaging and compressed sensing. In vivo studies will be conducted in healthy adults and in children with autism in collaboration with the University of Washington and Seattle Children’s Hospital. Facilities include whole body Siemens TIM Trio scanners equipped with state-of-the-art 32 channel array coils.
Successful candidates will hold a PhD in biomedical MR spectroscopy, MR physics, MR engineering, or related fields. This project requires experience in MR spectroscopic imaging, parallel MRI reconstruction and compressed sensing. Strong candidates will have experience in the areas of MR pulse sequence programming (Siemens Syngo platform preferred) and diffusion tensor imaging. A statistical modeling background is desirable, but not required. A demonstrable record of peer reviewed journal publications and strong expertise in C/C++, and/or Matlab are essential. This 2 year position offers an excellent opportunity to be involved in inter-disciplinary research in a thriving functional neuroimaging environment. Strong collaborations exist between clinical and basic science departments at UNM, with the MIND Research Network (http://www.themindinstitute.org/), and with national and international research centers. A career path to research faculty is a possibility for outstanding candidates.
Please send your curriculum vitae, a letter describing your interest, background, and qualifications, and 3 letters of
recommendation to: sposse@unm.edu
Stefan Posse, PhD
Professor of Neurology
Department of Neurology
University of New Mexico School of Medicine
MSC 10 5620
1 University of New Mexico
Albuquerque, NM 87131 

And finally, a talk that took place last week:

Artificial Intelligence Lab Seminar
2011 Nov 25 at 15:30
PAS 2464
Oleg V. Michailovich, Assistant Professor, Department of ECE University of Waterloo
The unique ability of diffusion-weighted MRI (DW-MRI) to generate contrast based on the morphological properties of white matter opens the door to developing qualitatively new methods of early detection and diagnosis of many brain-related disorders. Unfortunately, practical implementation of DW-MRI still poses a number of challenges which hamper its wide-spread integration into standard clinical practice. Chief among these is the problem of prohibitively long scanning times, which necessitates the development of time-efficient methods for acquisition of diffusion data. In many such methods, however, the acceleration entails a trade-off between the time efficiency and the accuracy of signal reconstruction. In such a case, it is imperative for one to be able to understand the effect the above trade-off might have on the accuracy of diagnostic inference. Accordingly, the objective of this talk is twofold. First, using high-angular resolution diffusion imaging (HARDI) as a specific instance of DW-MRI, we will introduce the notion of a directional diffusion structure which, in combination with multidimensional scaling, allows representing HARDI data in a lower dimensional Euclidean space. Subsequently, based on this representation, we will develop an algorithm for detection and classification of first episode schizophrenia. Finally, the above algorithm will be applied to HARDI data acquired by means of compressed sensing and we will demonstrate that the resulting classification error increases insignificantly when the sampling density is reduced to as low as a fourth of its conventional value.


Other

I have updated my profile on LinkedIn to reflect some of my activities on Nuit Blanche. It means you can provide some recommendations based on your experience reading this blog. You can join the conversations on compressive sensing, advanced matrix factorization and calibration issues on LinkedIn. For what it's worth, I am also on Quora and Twitter.

Post Peer Review Reward System

One of the issues with regard to post-peer-review publishing is the "game" system designed to answer the following question: "how do we value reviews?" There is a traditional way to go about it (do people really care if you were a reviewer for a journal?) and then there is the new way, where your contribution will stack up somewhere and be viewable by many. Take for instance these two requests and their attendant expected "rewards":




and this one?




Tuesday, November 29, 2011

Compressed-Sensing Recovery of Images and Video Using Multihypothesis Predictions

Some of you may have noticed my preference for featuring papers with code implementations earlier in the week. Well, the trend continues with Compressed-Sensing Recovery of Images and Video Using Multihypothesis Predictions by Chen Chen, Eric W. Tramel and James E. Fowler. The abstract reads:

Compressed-sensing reconstruction of still images and video sequences driven by multihypothesis predictions is considered. Specifically, for still images, multiple predictions drawn for an image block are made from spatially surrounding blocks within an initial non-predicted reconstruction. For video, multihypothesis predictions of the current frame are generated from one or more previously reconstructed reference frames. In each case, the predictions are used to generate a residual in the domain of the compressed-sensing random projections. This residual, being typically more compressible than the original signal, leads to improved reconstruction quality. To appropriately weight the hypothesis predictions, a Tikhonov regularization to an ill-posed least-squares optimization is proposed. Experimental results demonstrate that the proposed reconstructions outperform alternative strategies not employing multihypothesis predictions.
The extended results page is here and the implementation featured in the paper is here.
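
As a back-of-the-envelope illustration of the Tikhonov-weighted prediction step (my reading of the abstract, not the released code), the hypotheses can be combined so that their projection agrees with the block's measurements, with a penalty discouraging hypotheses that match poorly:

% Tikhonov-weighted combination of K hypothesis predictions for one block.
% Names, lambda, and the Gamma weighting are assumptions made for illustration.
n = 256; m = 64; K = 12; lambda = 0.25;
Phi = randn(m, n) / sqrt(m);                % per-block measurement matrix
x = randn(n, 1);                             % current block (unknown in practice)
y = Phi * x;                                 % its compressive measurements
H = repmat(x, 1, K) + 0.2 * randn(n, K);     % K candidate predictions (synthetic)
B = Phi * H;                                 % hypotheses in the measurement domain
Gamma = diag(sqrt(sum((repmat(y, 1, K) - B).^2)));       % penalize poor hypotheses
w = (B' * B + lambda^2 * (Gamma' * Gamma)) \ (B' * y);   % regularized weights
prediction = H * w;                          % multihypothesis prediction of the block
residual_meas = y - Phi * prediction;        % residual measurements to reconstruct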





Image Credit: NASA/JPL/Space Science Institute, N00178529.jpg was taken on November 27, 2011 and received on Earth November 28, 2011. The camera was pointing toward TITAN at approximately 748,439 kilometers away, and the image was taken using the CL1 and CB3 filters. This image has not been validated or calibrated.

Monday, November 28, 2011

Compressive Sensing Under Matrix Uncertainties and Calibration



In this work, we consider a general form of noisy compressive sensing (CS) when there is uncertainty in the measurement matrix as well as in the measurements. Matrix uncertainty is motivated by practical cases in which there are imperfections or unknown calibration parameters in the signal acquisition hardware. While previous work has focused on analyzing and extending classical CS algorithms like the LASSO and Dantzig selector for this problem setting, we propose a new algorithm whose goal is either minimization of mean-squared error or maximization of posterior probability in the presence of these uncertainties. In particular, we extend the Approximate Message Passing (AMP) approach originally proposed by Donoho, Maleki, and Montanari, and recently generalized by Rangan, to the case of probabilistic uncertainties in the elements of the measurement matrix. Empirically, we show that our approach performs near oracle bounds. We then show that our matrix-uncertain AMP can be applied in an alternating fashion to learn both the unknown measurement matrix and signal vector. We also present a simple analysis showing that, for suitably large systems, it suffices to treat uniform matrix uncertainty as additive white Gaussian noise.
GampMatlab is available here. Jason let me know in particular that:
I have added an example of using MU-GAMP to the SourceForge repository. You can find the example in the downloaded zip file here: trunk/code/examples/sparseEstim/muGampExample.m
Also, since I personally cannot run the code because my version of MATLAB does not support object-oriented scripts, Phil sent me the following:

....I am attaching a much simplified implementation of MU-GAMP that i wrote for Danny Bickson (who then coded it in his GraphLab and discussed it on his blog http://bickson.blogspot.com/2011/11/gamp-generalized-approximate-message.html).
in this simplified implementation, it is assumed that the signal is bernoulli-gaussian with known statistics, and there is no stepsize-control. it does allow vector- or matrix-valued signals, however. hopefully you will find it useful.
thanks,
phil
Here is the code:

% this m-file uses GAMP to estimate the matrix X (of dimension NxT) from the
% noisy matrix observation Y=A*X+W of dimension MxT. here, A has independent
% Gaussian entries with matrix mean and variance (A_mean,A_var), W has iid
% Gaussian entries with scalar mean and variance (0,psi), and X has iid
% Bernoulli-Gaussian entries with scalar activity rate "lam" and with non-zero
% elements having scalar mean and variance (theta,phi).

% declare signal parameters
N = 1000; % # weights [dflt=1000]
M = 400; % # observations [dflt=400]
T = 10; % # dimensions per observation [dflt=10]
lam = 0.01*ones(N,T); % BG: prior probability of a non-zero weight [dflt=0.01]
theta = 0*ones(N,T); % BG: prior mean of non-zero weights [dflt=0]
phi = 1*ones(N,T); % BG: prior variance of non-zero weights [dflt=1]
SNRdB = 15; % SNR in dB [dflt=15]
% declare algorithmic parameters
iterMax = 50; % maximum number of AMP iterations [dflt=20]
% generate true Bernoulli-Gaussian signal (i.e., "weights")
B_true = (rand(N,T)>1-lam); % sparsity pattern
X_true = B_true.*(theta+sqrt(phi).*randn(N,T));% sparse weights
% generate measurement (i.e., "feature") matrix
A_mean = randn(M,N)/sqrt(M); % element-wise mean
A_mean2 = A_mean.^2;
A_var = abs(A_mean).^2; % element-wise variance
A_true = A_mean + sqrt(A_var).*randn(M,N); % realization
% generate noisy observations
psi = norm(A_true*X_true,'fro')^2/(M*T)*10^(-SNRdB/10); % additive noise variance
Y = A_true*X_true + sqrt(psi).*randn(M,T); % AWGN-corrupted observation
% initialize GAMP
X_mean = lam.*theta; % initialize posterior means
X_var = lam.*phi; % initialize posterior variances
S_mean = zeros(M,T); % initialized scaled filtered gradient
NMSEdB_ = NaN*ones(1,iterMax); % for debugging: initialize NMSE-versus-iteration
% iterate GAMP
for iter=1:iterMax,
% computations at observation nodes
Z_var = A_mean2*X_var;
Z_mean = A_mean*X_mean;
P_var = Z_var + A_var*(X_var + X_mean.^2);
P_mean = Z_mean - Z_var.*S_mean;
S_var = 1./(P_var+psi);
S_mean = S_var.*(Y-P_mean);
R_var = 1./(A_mean2'*S_var + eps);
R_mean = X_mean + R_var.*(A_mean'*S_mean);
% computations at variable nodes
gam = (R_var.*theta + phi.*R_mean)./(R_var+phi);
nu = phi.*R_var./(R_var+phi);
pi = 1./(1+(1-lam)./lam .*sqrt((R_var+phi)./R_var) .* exp( ...
-(R_mean.^2)./(2*R_var) + ((R_mean-theta).^2)./(2*(phi+R_var)) ));
X_mean = pi.*gam;
X_var = pi.*(nu+gam.^2) - (pi.*gam).^2;
% for debugging: record normalized mean-squared error
NMSEdB_(iter) = 20*log10(norm(X_true-X_mean,'fro')/norm(X_true,'fro'));
end;% iter
% for debugging: plot posterior means, variances, and NMSE-versus-iteration
subplot(311)
errorbar(X_mean(:,1),sqrt(X_var(:,1)),'.');
hold on; plot(X_true(:,1),'k.'); hold off;
axis([0,size(X_true,1),-1.2*max(abs(X_true(:,1))),1.2*max(abs(X_true(:,1)))])
ylabel('mean')
grid on;
title('GAMP estimates')
subplot(312)
plot(10*log10(X_var(:,1)),'.');
axe=axis; axis([0,N,min(axe(3),-1),0]);
ylabel('variance [dB]');
grid on;
subplot(313)
plot(NMSEdB_,'.-')
ylabel('NMSE [dB]')
grid on;
title(['NMSE=',num2str(NMSEdB_(iter),3),'dB '])
drawnow;

I was also talking to Volkan, who pointed me to their similarly relevant paper at SPARS11 (page 68 of the SPARS proceedings) entitled Approximate Message Passing for Bilinear Models. In it, they tackle a specific calibration problem:

"....Example Application: For concreteness, we describe the application of the bilinear model (1) to the compressive system calibration problem. Based on the theoretical premise of compressive sensing, a great deal of research has revolved around the design of sampling systems, such as Analog-to-Information receivers and Xampling. The sampling matrices in these systems are pre-designed with certain desired theoretical properties to guarantee recovery along with the constraints of hardware implementations. However, when implementing the mathematical “sampling” operation—here defined by the matrix Φ—in real hardware, one often introduces what are effectively perturbations on Φ that create an undesired gap between theoretical and practical system performance. As a means of closing this gap, we are interested in jointly learning the true matrix Φ while simultaneously recovering the signal X...."


The attendant presentation slides for Approximate Message Passing for Bilinear Models by Volkan Cevher, Mitra Fatemi, and Philip Schniter provide additional insight on this calibration work:





Thanks Phil, Jason, and Volkan.


Compressive and Noncompressive Power Spectral Density Estimation Software Package and The Continuous-Time Spectrally-Sparse (CTSS) Sampling Toolbox

Michael Lexa, Mike Davies, and John Thompson just released two implementations this past week:




Compressive and Noncompressive Power Spectral Density Estimation Software Package

Synopsis
The MATLAB m-files available for download on this page implement the power spectral density (PSD) estimation method proposed in:
M.A. Lexa, M.E. Davies, J.S. Thompson, Compressive and Noncompressive Power Spectral Density Estimation from Periodic Nonuniform Samples , 2011, Available on arXiv.
The method estimates the PSD of bandlimited, wide-sense stationary signals from sub-Nyquist sampled data. The technique employs multi-coset sampling and applies to spectrally sparse and nonsparse power spectra alike. For sparse density functions, compressed sensing theory is applied and the resulting compressive estimates exhibit better tradeoffs among the estimator's resolution, system complexity, and average sampling rate compared to their noncompressive counterparts. The estimator does not require signal reconstruction and can be directly obtained from solving either a least squares or a nonnegative least squares problem. The estimates are piecewise constant approximations whose resolutions (width of the piecewise constant segments) are controlled by the periodicity of the multi-coset sampling. The estimates are statistically consistent, and the method is widely applicable.
The software allows one to reproduce the three examples given in the paper. Specific details are provided in the README.pdf file included in the download.
Note that the software is meant for researchers who want to quickly understand and experiment with the proposed method. Consequently, the MATLAB scripts have no error handling capabilities and will "break" if incorrect parameter values are entered or if assumptions are violated.

and


Synopsis
The CTSS Sampling Toolbox is a collection of MATLAB m-files implementing three sub-Nyquist, uniform sampling techniques for acquiring continuous-time spectrally-sparse signals. Two of the techniques, the Random Demodulator (RD) and the Modulated Wideband Converter (MWC), leverage ideas from the theory of compressed sensing. The third, the Multi-coset (MC) Sampler, predates compressed sensing, but shares many commonalities with the MWC.
The RD, the MWC, and the MC Sampler were originally proposed in the following works:
The toolbox is meant for researchers who want to quickly understand, experiment with, and compare these sampling systems. Consequently, the MATLAB scripts have minimal error handling capabilities and will "break" if incorrect parameter values are entered or if assumptions are violated.




Liked this entry ? subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on Linkedin.

Sunday, November 27, 2011

1-bit Compressive Sensing Resource at Rice


Jason Laska let me know of a new page at Rice on 1-bit compressive sensing; it is at:



Thanks Jason.

It looks like they are missing Hamming Compressed Sensing by Tianyi Zhou and Dacheng Tao, but I am sure it is only a question of time before it is added. Tianyi also told me recently that the attendant implementation should be available soon. Stay tuned.

It looks spiffier than my sorry attempt. All Nuit Blanche entries with 1-bit papers are generally tagged: 1bit.


Tim Gowers' Model of Mathematical Publishing

Peter Krautzberger at Mathblogging provides a timeline of the recent flurry of blog entries on the issue of publication. It looks like my proposal of stacking a StackExchange clone on top of Arxiv is exactly the proposition Tim Gowers made ten days earlier. I had not read Tim's entry until yesterday. Looking at the feedback on his blog and, to a lesser extent, here, the proposal has struck a chord, and not just in the mathematics community. The new aspect with the applied math/engineering community is the idea that reproducible research is enforced by default in this type of system. No small feat. Here are the main points of Tim's proposal:


After that discussion, let me collect together what I see as the main features of this hypothetical paper-evaluation site.
  1. You post your papers on the arXiv and create pages for them on the site.
  2. People can respond to your papers. Responses can range from smallish comments to detailed descriptions and evaluations (the latter being quite similar to referees’ reports as they are now).
  3. Responses can be written under your own name or under a pseudonym.
  4. You can accrue reputation as a result of responses of either kind, but your pseudonym will have the reputation disguised enough to maintain your anonymity.
  5. Negative language is strongly discouraged. If a paper is uninteresting, it simply doesn’t attract much interest. If it is incorrect, one says so politely.
  6. There is a reputation premium for evaluating papers that have spent a long time not evaluated. (There would be a way of finding these: for instance, you could list all the unreviewed papers in a certain area or subarea in chronological order.)
  7. If you are not registered for the site, or if you are registered but had very few reputation points, then people know that you are not doing your bit for the mathematical community when it comes to the important task of evaluating the output of others. Conversely, if you have a high reputation, then people know that you are pulling your weight.
I think there is something also to be said about the versioning of papers.


