Monday, November 09, 2009

CS: Compressive Confocal Microscope, Compressive Matched Subspace Detection, Music Classification, Matrix Completion, Metric Learning


Before we get to the meat of today's entry, let me invite y'all to use the customized search engine on the right-hand side of this blog to search for entries relevant to your area of interest. The search feature in the top left corner of this blog is buggy at best and does a poor job of finding relevant entries when you use only one word. Also of related interest, I am seeking a Google Wave invite, as I am interested in seeing whether one could develop some compressive sensing hardware with some folks from the open hardware community. Irrespective of your ability to send me a Google Wave invite, you are welcome to join that experiment. [Hint to those of you working at Google: I just made a request for an invite]



It seems that I overlooked the work being done by the group of Gonzalo Arce at the University of Delaware. I hope to remedy this today. From his group, we have a new piece of compressive sensing hardware, a music classifier, and other relevant studies in between. The hardware first, as presented in: Compressive Confocal Microscopy by Peng Ye, Jose Paredes, Gonzalo Arce, Yichuan Hu, C. Chen, and Dennis Prather. The abstract reads:

In this paper, a new framework for confocal microscopy based on the novel theory of compressive sensing is proposed. Unlike wide field microscopy or conventional parallel beam confocal imaging systems that use charge-coupled devices (CCD) as acquisition devices in addition to complex mechanical scanning system, the proposed compressive confocal microscopy is a kind of parallel beam confocal imaging system which exploits the rich theory of compressive sensing by using a single pixel detector and a digital micromirror device (DMD) to capture linear projections of the in-focus image. With the proposed system, confocal imaging of high optical sectioning ability can be achieved at sub-Nyquist sampling rates. Theoretical analysis, simulations and experimental results are shown to demonstrate the characteristics and potential of the proposed approach.
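The single-pixel-plus-DMD acquisition described in the abstract boils down to recording one inner product of the scene with each binary mirror pattern, then solving a sparse recovery problem. Here is a minimal numerical sketch of that idea (my own illustration, not the authors' code): random 0/1 masks play the role of the DMD patterns, and ISTA, a standard l1 solver, stands in for whatever reconstruction the paper uses.

```python
import numpy as np

rng = np.random.default_rng(0)

# Single-pixel measurement model: each DMD pattern is a random binary
# mask; the photodetector records one inner product per pattern.
n, m, k = 256, 100, 8              # signal size, measurements, sparsity
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.normal(size=k)  # sparse scene

Phi = rng.integers(0, 2, size=(m, n)).astype(float)      # 0/1 DMD masks
Phi = (Phi - 0.5) / np.sqrt(m)                           # center & scale
y = Phi @ x                                              # m << n samples

# Reconstruct with ISTA (iterative soft thresholding) on the l1 problem.
def ista(Phi, y, lam=1e-3, iters=2000):
    L = np.linalg.norm(Phi, 2) ** 2        # Lipschitz constant of the gradient
    z = np.zeros(Phi.shape[1])
    for _ in range(iters):
        g = z - (Phi.T @ (Phi @ z - y)) / L      # gradient step
        z = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # shrinkage
    return z

x_hat = ista(Phi, y)
print(np.linalg.norm(x_hat - x) / np.linalg.norm(x))     # relative error
```

With 100 measurements of a 256-sample scene that has only 8 active coefficients, the reconstruction is essentially exact, which is the sub-Nyquist behavior the abstract claims.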



and the attendant reconstruction paper: Compressive Confocal Microscopy: 3D Reconstruction Algorithms by Peng Ye, Jose Paredes, Yichuan Hu, C. Chen, Gonzalo Arce, and Dennis Prather. The abstract reads:
In this paper, a new approach for Confocal Microscopy (CM) based on the framework of compressive sensing is developed. In the proposed approach, a point illumination and a random set of pinholes are used to eliminate out-of-focus information at the detector. Furthermore, a Digital Micromirror Device (DMD) is used to efficiently scan the 2D or 3D specimen but, unlike the conventional CM that uses CCD detectors, the measured data in the proposed compressive confocal microscopy (CCM) emerge from random sets of pinhole illuminated pixels in the specimen that are linearly combined (projected) and measured by a single photon detector. Compared to conventional CM or programmable array microscopy (PAM), the number of measurements needed for nearly perfect reconstruction in CCM is significantly reduced. Our experimental results are based on a testbed that uses a Texas Instruments DMD (an array of 1024×768, 13.68×13.68 μm² mirrors) for computing the linear projections of illuminated pixels and a single photon detector is used to obtain the compressive sensing measurement. The position of each element in the DMD is defined by the compressed sensing measurement matrices. Three dimensional image reconstruction algorithms are developed that exploit the inter-slice spatial image correlation as well as the correlation between different 2D slices. A comprehensive performance comparison between several binary projection patterns is shown. Experimental and simulation results are provided to illustrate the features of the proposed systems.


We propose a system based on the combination of compressive sensing and non-linear processing that shows excellent robustness against noise. The key idea is the use of nonlinear mappings that act as analog joint source-channel encoders, processing the compressive sensing measurements proceeding from an analog source and producing continuous amplitude samples that are transmitted directly through the noisy channel. As we will show in our simulation results, the proposed framework is readily applicable in practical systems such as imaging, and clearly outperforms systems based on stand-alone compressive sensing.

and the next two papers focus on the adaptive detection of signals of interest in the compressed measurement world: Compressive Matched Subspace Detection by Jose Paredes, Zhongmin Wang, Gonzalo Arce, and Brian M. Sadler. The abstract reads:

In this paper, matched subspace detectors based on the framework of Compressive Sensing (CS) are developed. The proposed approach, called compressive matched subspace detectors, exploits the sparsity model of the signal-of-interest in the design of the random projection operator. By tailoring the CS measurement matrix (projection operator) to the subspace where the signal-of-interest is known to lie, the compressive matched subspace detectors can effectively capture the signal energy while the interference and noise effects are mitigated at sub-Nyquist rate. The proposed detection approach is particularly suitable for detection of wideband signals that emerge in modern communication systems that demand high-speed ADCs. The performance of the subspace compressive detectors is studied by analytically deriving closed-form expressions for the detection probability and through extensive simulations.
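The core of a matched subspace detector in the compressed domain is simple: project the compressive measurements onto the image of the known signal subspace under the measurement matrix, and compare the captured energy against a threshold. A toy sketch of that statistic (my illustration with made-up dimensions, not the paper's tailored projection design):

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, d = 512, 64, 4                    # ambient dim, measurements, subspace dim

H = rng.normal(size=(n, d))             # known signal subspace (hypothetical)
Phi = rng.normal(size=(m, n)) / np.sqrt(m)   # compressive measurement matrix
A = Phi @ H                             # compressed image of the subspace
P = A @ np.linalg.pinv(A)               # orthogonal projector onto span(Phi H)

def statistic(y):
    """Energy of the compressed measurement inside the signal subspace."""
    return np.linalg.norm(P @ y) ** 2

sigma = 0.1
x = H @ rng.normal(size=d)              # signal of interest present
y1 = Phi @ (x + sigma * rng.normal(size=n))   # H1: signal + noise
y0 = Phi @ (sigma * rng.normal(size=n))       # H0: noise only

print(statistic(y1), statistic(y0))     # H1 energy dwarfs H0 energy
```

Even with only 64 measurements of a 512-dimensional signal, the subspace energy statistic separates the two hypotheses by orders of magnitude in this noise regime.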
Compressed sensing (CS) provides an efficient way to acquire and reconstruct natural images from a reduced number of linear projection measurements at sub-Nyquist sampling rates. A key to the success of CS is the design of the measurement ensemble. This paper addresses the design of a novel variable density sampling strategy, where the “a priori” information about the statistical distributions that natural images exhibit in the wavelet domain is exploited. Compared to the current sampling schemes for compressed image sampling, the proposed variable density sampling has the following advantages: 1) The number of necessary measurements for image reconstruction is reduced; 2) The proposed sampling approach can be applied to several transform domains leading to simple implementations. In particular, the proposed method is applied to the compressed sampling in the 2D ordered discrete Hadamard transform (DHT) domain for spatial domain imaging. Furthermore, to evaluate the incoherence of different sampling schemes, a new metric that incorporates the “a priori” information is also introduced. Extensive simulations show the effectiveness of the proposed sampling methods.
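The variable density idea in this abstract is that natural images concentrate energy in low-frequency transform coefficients, so those should be sampled with higher probability. A minimal sketch of such a mask (the power-law density and 25% rate here are my assumptions, not the paper's exact design):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 64                                   # coefficient grid (e.g. ordered DHT)

# Sample low "frequencies" with higher probability, decaying as a power
# law with the distance from the DC coefficient at (0, 0).
u, v = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
r = np.sqrt(u**2 + v**2)
p = 1.0 / (1.0 + r) ** 1.5               # assumed power-law density
p *= (0.25 * n * n) / p.sum()            # target roughly 25% sampling rate
p = np.clip(p, 0.0, 1.0)                 # clipping lowers the rate slightly

mask = rng.random((n, n)) < p            # Bernoulli sampling mask
low = mask[: n // 4, : n // 4].mean()    # sampling rate, low-frequency block
high = mask[3 * n // 4 :, 3 * n // 4 :].mean()  # rate, high-frequency block
print(mask.mean(), low, high)
```

The printed rates confirm the intended bias: the low-frequency corner is sampled far more densely than the high-frequency one at the same overall budget.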

Next, a continuation of the concepts developed for the classification of faces shows that music is not the same as faces: Music Genre Classification via Sparse Representation of Auditory Temporal Modulations by Yannis Panagakis, Constantine Kotropoulos, and Gonzalo Arce. The abstract reads:
A robust music genre classification framework is proposed that combines the rich, psycho-physiologically grounded properties of slow temporal modulations of music recordings and the power of sparse representation-based classifiers. Linear subspace dimensionality reduction techniques are shown to play a crucial role within the framework under study. The proposed method yields a music genre classification accuracy of 91% and 93.56% on the GTZAN and ISMIR2004 Genre dataset, respectively. Both accuracies outperform any reported accuracy ever obtained by state of the art music genre classification algorithms in the aforementioned datasets.
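The sparse representation-based classifier the abstract refers to works by sparse-coding a test sample over a dictionary of training examples and assigning the class whose atoms best reconstruct it. A toy sketch on synthetic features (standing in for the auditory temporal-modulation features; all dimensions and the ISTA solver are my choices):

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic stand-in data: three "genres", each a noisy cloud around a
# class center, stacked column-wise into a training dictionary.
d, per_class, classes = 60, 10, 3
centers = rng.normal(size=(classes, d)) * 3
train = np.vstack([centers[c] + 0.3 * rng.normal(size=(per_class, d))
                   for c in range(classes)]).T        # d x (classes*per_class)
train /= np.linalg.norm(train, axis=0)                # unit-norm atoms
labels = np.repeat(np.arange(classes), per_class)

def src_predict(y, D, labels, lam=0.05, iters=500):
    """Sparse-code y over D with ISTA, pick class of smallest residual."""
    L = np.linalg.norm(D, 2) ** 2
    a = np.zeros(D.shape[1])
    for _ in range(iters):
        g = a - (D.T @ (D @ a - y)) / L
        a = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)
    residuals = [np.linalg.norm(y - D[:, labels == c] @ a[labels == c])
                 for c in range(classes)]
    return int(np.argmin(residuals))

y = centers[1] + 0.3 * rng.normal(size=d)             # sample from class 1
y /= np.linalg.norm(y)
print(src_predict(y, train, labels))
```

The per-class residual rule is what makes this a classifier rather than just a coder: the sparse code concentrates on atoms of the correct class, so that class explains the sample with the smallest leftover energy.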

Finally, in a totally unrelated field, we have: Matrix Completion from Power-Law Distributed Samples by Raghu Meka, Prateek Jain, and Inderjit Dhillon. The abstract reads:
The low-rank matrix completion problem is a fundamental problem with many important applications. Recently, [4], [13] and [5] obtained the first non-trivial theoretical results for the problem assuming that the observed entries are sampled uniformly at random. Unfortunately, most real-world datasets do not satisfy this assumption, but instead exhibit power-law distributed samples. In this paper, we propose a graph theoretic approach to matrix completion that solves the problem for more realistic sampling models. Our method is simpler to analyze than previous methods with the analysis reducing to computing the threshold for complete cascades in random graphs, a problem of independent interest. By analyzing the graph theoretic problem, we show that our method achieves exact recovery when the observed entries are sampled from the Chung-Lu-Vu model, which can generate power-law distributed graphs. We also hypothesize that our algorithm solves the matrix completion problem from an optimal number of entries for the popular preferential attachment model and provide strong empirical evidence for the claim. Furthermore, our method is easy to implement and is substantially faster than existing methods. We demonstrate the effectiveness of our method on random instances where the low-rank matrix is sampled according to the prevalent random graph models for complex networks and present promising preliminary results on the Netflix challenge dataset.
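To see what "power-law distributed samples" means concretely, here is a sketch of a Chung-Lu-Vu style observation mask (an illustration of the sampling model named in the abstract; the exponent and scale are my assumptions): each entry (i, j) is revealed independently with probability proportional to the product of row and column weights.

```python
import numpy as np

rng = np.random.default_rng(4)

# Chung-Lu style mask: entry (i, j) is observed with probability
# w_i * w_j / sum(w), so a few rows/columns are densely observed and
# most are barely sampled -- the opposite of uniform sampling.
n = 500
w = 20.0 * np.arange(1, n + 1) ** -0.6        # power-law weights (assumed)
P = np.clip(np.outer(w, w) / w.sum(), 0.0, 1.0)
mask = rng.random((n, n)) < P                  # observed-entry indicator

row_counts = mask.sum(axis=1)
print(row_counts.max(), np.median(row_counts))  # heavy head, light tail
```

The gap between the maximum and the median row observation count is exactly the skew that breaks the uniform-sampling assumption of earlier matrix completion analyses.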

and from the same group, the important Metric and Kernel Learning using a Linear Transformation by Prateek Jain, Brian Kulis, Jason Davis, and Inderjit Dhillon. The abstract reads:
Metric and kernel learning are important in several machine learning applications. However, most existing metric learning algorithms are limited to learning metrics over low-dimensional data, while existing kernel learning algorithms are often limited to the transductive setting and do not generalize to new data points. In this paper, we study metric learning as a problem of learning a linear transformation of the input data. We show that for high-dimensional data, a particular framework for learning a linear transformation of the data based on the LogDet divergence can be efficiently kernelized to learn a metric (or equivalently, a kernel function) over an arbitrarily high dimensional space. We further demonstrate that a wide class of convex loss functions for learning linear transformations can similarly be kernelized, thereby considerably expanding the potential applications of metric learning. We demonstrate our learning approach by applying it to large-scale real world problems in computer vision and text mining.
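The paper's viewpoint is that a Mahalanobis metric d_A(x, y) = (x-y)ᵀA(x-y) with A = LᵀL is just Euclidean distance after the linear map L. As a minimal stand-in for the LogDet-based learner (this is NOT the paper's algorithm), the sketch below sets A to the inverse within-class covariance, a classic choice that already illustrates what a learned linear transformation buys:

```python
import numpy as np

rng = np.random.default_rng(5)

# Two synthetic classes with anisotropic noise: Euclidean distance is a
# poor separator, a covariance-aware metric is much better.
d, per_class = 5, 100
mean0, mean1 = np.zeros(d), 0.8 * np.ones(d)
C = np.diag(np.linspace(0.05, 4.0, d))        # anisotropic class noise
X0 = rng.multivariate_normal(mean0, C, per_class)
X1 = rng.multivariate_normal(mean1, C, per_class)

Sw = np.cov(np.vstack([X0 - X0.mean(0), X1 - X1.mean(0)]).T)
A = np.linalg.inv(Sw)                         # "learned" metric matrix
L = np.linalg.cholesky(A).T                   # A = L^T L: the linear map

def sqdist(x, y, A):
    diff = x - y
    return float(diff @ A @ diff)

I_d = np.eye(d)
w_eu = np.mean([sqdist(X0[i], X0[j], I_d) for i in range(20) for j in range(i + 1, 20)])
b_eu = np.mean([sqdist(X0[i], X1[j], I_d) for i in range(20) for j in range(20)])
w_ml = np.mean([sqdist(X0[i], X0[j], A) for i in range(20) for j in range(i + 1, 20)])
b_ml = np.mean([sqdist(X0[i], X1[j], A) for i in range(20) for j in range(20)])
print(b_eu / w_eu, b_ml / w_ml)   # between/within ratio, before vs after
```

The between-to-within distance ratio improves markedly under the learned metric, and since A factors as LᵀL, the same effect could be had by mapping every point through L and using plain Euclidean distance, which is the equivalence the paper kernelizes.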

