Monday, January 10, 2011

CS: The long post of the week

I got two e-mails this weekend related to teaching CS, which follows up on some recent entries about putting low-level undergraduate presentation material on the interwebs. Borhan Sanandaji kindly sent me the following:

Hi Igor,
First of all, Happy New Year, and thank you for your nice blog, Nuit Blanche. Since you've posted some of our work on your blog ( http://nuit-blanche.blogspot.com/2010/08/cs-long-post-of-week.html , http://nuit-blanche.blogspot.com/2010/10/cs-long-entry-of-week.html ), I thought my recent presentation would be of interest to you as well. This talk was presented at the Conference on Decision and Control last month in Atlanta, Georgia, and is mainly based on our recent paper, Concentration of Measure Inequalities for Compressive Toeplitz Matrices with Applications to Detection and System Identification, but it also has an introductory flavor so that the audience gets a general view of CS and its applications. You can find the slides of the talk here: http://inside.mines.edu/~bmolazem/CDC2010_Toeplitz.pdf . Please let me know if you have any questions.
Regards,
Borhan


I also got:
Hi Igor,
I'm wondering if you are aware of simple papers about compressed sensing. My students need to learn, and the only readable manuscript I know of is the tutorial by Rich Baraniuk. Maybe you know of more?
Maybe I should have a dedicated page on recent tutorials. Besides the recent presentation by Emmanuel Candes at the University of Washington, the next thing I would advise newcomers to do, aside from reading the tutorials by Rich and Emmanuel, is to check the Rice tutorial section and watch the videos taken at IMA in 2007 and featured in Compressed Sensing Videos: Where Is My Popcorn?. Of related interest are the pages I created featuring courses related to compressive sensing and the online talks page (the IMA videos are at the very bottom). My material (still in an early version) is really aimed at Engineering 102 or something like that.

Today I have the following papers, webpages, and announcements:

Global Convergence of the Locally Competitive Algorithm by Aurele Balavoine, Christopher Rozell, and Justin Romberg. The abstract reads:
The Locally Competitive Algorithm (LCA) is a continuous time dynamical system designed to solve the problem of sparse approximation. This class of approximation problems plays an important role in producing state-of-the-art results in many signal processing and inverse problems, and implementing the LCA in analog VLSI may significantly improve the time and power necessary to solve these optimization programs. The goal of this paper is to analyze the dynamical behavior of the LCA system and guarantee its convergence and stability. We show that fixed points of the system are extrema of the sparse approximation objective function when designed for a certain class of sparsity-inducing cost penalty. We also show that, if the objective has a unique minimum, the LCA converges for any initial point. In addition, we prove that under certain conditions on the solution, the LCA converges in a finite number of switches (i.e., node threshold crossings).
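To get a feel for the LCA dynamics described above, here is a minimal numerical sketch (my own toy illustration, not the authors' code): the internal state u follows the ODE tau * du/dt = Phi^T y - u - (Phi^T Phi - I) a with a soft-threshold activation a = T_lam(u), integrated with a plain Euler step. All parameter values below are arbitrary choices of mine.

```python
import numpy as np

def soft_threshold(u, lam):
    """Soft-threshold activation, the l1 sparsity-inducing case."""
    return np.sign(u) * np.maximum(np.abs(u) - lam, 0.0)

def lca(Phi, y, lam, tau=0.01, dt=0.001, n_steps=5000):
    """Euler integration of the LCA ODE:
       tau * du/dt = Phi^T y - u - (Phi^T Phi - I) a,   a = T_lam(u)."""
    n = Phi.shape[1]
    u = np.zeros(n)
    G = Phi.T @ Phi - np.eye(n)   # lateral inhibition between nodes
    b = Phi.T @ y                 # feed-forward drive
    for _ in range(n_steps):
        a = soft_threshold(u, lam)
        u += (dt / tau) * (b - u - G @ a)
    return soft_threshold(u, lam)

# toy problem: recover a 2-sparse vector from 20 random measurements
rng = np.random.default_rng(0)
Phi = rng.standard_normal((20, 40)) / np.sqrt(20)
x = np.zeros(40)
x[3], x[17] = 1.0, -0.5
y = Phi @ x
a_hat = lca(Phi, y, lam=0.05)
```

At a fixed point of the simulated system, the output a_hat satisfies the optimality conditions of the l1-penalized sparse approximation objective, which is the kind of fixed-point/convergence behavior the paper analyzes.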
In this paper, we collect and discuss some of the recent theoretical results on channel identification using a random probe sequence. These results are part of the body of work known as compressive sampling, a rapidly developing field whose central message is that sparse vectors can be recovered from a set of “random” underdetermined measurements. In the context of channel estimation, if the channel’s impulse response is sparse, then it can be estimated by exciting the channel with a random probing sequence and then taking a relatively small number of samples of the output. We also overview recent results in multiple channel estimation that show the channel responses in a multiple-input multiple-output (MIMO) system can be efficiently estimated by exciting all of the inputs with independent random probing signals simultaneously.

Sparse signal priors help in a variety of modern signal processing tasks. In a typical sparse recovery problem, a sparse signal needs to be recovered from an underdetermined system of equations. Examples include the sparse representation of a signal in an overcomplete dictionary and the reconstruction of a sparse signal from a small number of linear measurements. In recent years, several results have been presented which guarantee reliable reconstruction under certain conditions. In this paper we investigate the problem of recovering sparse signals when elements are added to (or removed from) the overcomplete dictionary. We propose a dynamic update algorithm for basis pursuit denoising (BPDN), when new columns are added to (or removed from) the dictionary. We use this update procedure to iteratively update the working set of basis elements, chosen from a large library of basis elements, to compute the sparse solution. We also discuss the extension of the same ideas for the analysis-based BPDN.
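The authors' update is a dedicated homotopy-style scheme; as a much cruder illustration of the warm-start idea behind it, one can simply re-run a generic BPDN solver (ISTA here) from the previous solution, padded with a zero for the new column, rather than from scratch. Everything below (solver, sizes, regularization value) is my own sketch, not the paper's algorithm.

```python
import numpy as np

def ista(A, y, lam, x0=None, n_iter=3000):
    """Generic BPDN solver: min_x 0.5*||Ax - y||^2 + lam*||x||_1 via ISTA."""
    x = np.zeros(A.shape[1]) if x0 is None else x0.copy()
    L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the gradient
    for _ in range(n_iter):
        z = x - A.T @ (A @ x - y) / L    # gradient step
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # shrinkage step
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((30, 60)) / np.sqrt(30)
x_true = np.zeros(60)
x_true[[5, 22, 40]] = [2.0, -1.5, 1.0]
y = A @ x_true
x_old = ista(A, y, lam=0.05)

# a new atom joins the dictionary: warm-start from the old solution
A2 = np.hstack([A, rng.standard_normal((30, 1)) / np.sqrt(30)])
x_warm = ista(A2, y, lam=0.05, x0=np.append(x_old, 0.0))
```

The point of the paper is that a tailored update can reach the new BPDN solution far faster than such a generic re-solve, precisely because the old solution is already close to the new one.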

The determination of the optical flow is a central problem in image processing, as it allows one to describe how an image changes over time by means of a numerical vector field. The estimation of the optical flow is however a very complex problem, which has been approached using many different mathematical techniques. A large body of work has been recently published about variational methods, following the technique for total variation minimization proposed by Chambolle. Still, their hardware implementations do not offer good performance in terms of frames that can be processed per time unit, mainly because of the complex dependency scheme among the data. In this work, we propose a highly parallel and accelerated FPGA implementation of the Chambolle algorithm, which splits the original image into a set of overlapping sub-frames and efficiently exploits the reuse of intermediate results. We validate our hardware on large frames (up to 1024x768), and the proposed approach significantly improves on state-of-the-art implementations, reaching up to 76x speedups, which enables real-time frame rates even at high resolutions.
This paper describes a novel framework for compressive sampling (CS) of multichannel signals that are highly dependent across the channels. In this work, we assume that a small number of sources generate the multichannel observations based on a linear mixture model. Moreover, sources are assumed to have sparse/compressible representations in some orthonormal basis. The main contribution of this paper lies in 1) rephrasing the CS acquisition of multichannel data as a compressive blind source separation problem, and 2) proposing an optimization problem and a recovery algorithm to estimate both the sources and the mixing matrix (and thus the whole data) from the compressed measurements. A number of experiments on the acquisition of hyperspectral images show that our proposed algorithm achieves a reconstruction error 10 dB to 15 dB lower than other state-of-the-art CS methods.

Shrinkage Rules for Variational Minimization Problems and Applications to Analytical Ultracentrifugation by Martin Ehler. The abstract reads:
Finding a sparse representation of a possibly noisy signal can be modeled as a variational minimization with l_q-sparsity constraints for q less than one. Especially for real-time, on-line, or iterative applications, in which problems of this type have to be solved multiple times, one needs fast algorithms to compute these minimizers. Identifying the exact minimizers is computationally expensive. We consider minimization up to a constant factor to circumvent this limitation. We verify that q-dependent modifications of shrinkage rules provide closed formulas for such minimizers. Therefore, their computation is extremely fast. We also introduce a new shrinkage rule which is adapted to q. To support the theoretical results, the proposed method is applied to Landweber iteration with shrinkage used at each iteration step. This approach is utilized to solve the ill-posed problem of analytic ultracentrifugation, a method to determine the size distribution of macromolecules. For relatively pure solutes, our proposed scheme leads to sparser solutions with sharper peaks, higher resolution, and smaller residuals than standard regularization for this problem.
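The q = 1 and q -> 0 endpoints of these shrinkage rules have well-known closed forms (soft and hard thresholding); the paper's q-dependent modifications live between these extremes. A quick sketch of the two endpoints, with parameter names of my own choosing:

```python
import numpy as np

def soft_shrink(u, t):
    """Closed-form minimizer of 0.5*(x - u)^2 + t*|x|   (the q = 1 case)."""
    return np.sign(u) * np.maximum(np.abs(u) - t, 0.0)

def hard_shrink(u, t):
    """Minimizer of 0.5*(x - u)^2 + t*[x != 0]   (the q -> 0 limit):
    keep u whenever 0.5*u^2 > t, i.e. |u| > sqrt(2t); otherwise kill it."""
    return np.where(np.abs(u) > np.sqrt(2.0 * t), u, 0.0)

u = np.array([-2.0, -0.3, 0.1, 0.8, 3.0])
s = soft_shrink(u, 0.5)   # -> [-1.5, 0.0, 0.0, 0.3, 2.5]
h = hard_shrink(u, 0.5)   # threshold sqrt(1.0) = 1.0 -> [-2.0, 0.0, 0.0, 0.0, 3.0]
```

Because both rules are elementwise closed formulas, applying one at every step of a Landweber iteration costs essentially nothing, which is the speed argument the abstract makes.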

Sparse reconstruction for quantitative bioluminescence tomography based on the incomplete variables truncated conjugate gradient method by Xiaowei He, Jimin Liang, Xiaorui Wang, Jingjing Yu, Xiaochao Qu, Xiaodong Wang, Yanbin Hou, Duofang Chen, Fang Liu, and Jie Tian. The abstract reads:
In this paper, we present an incomplete variables truncated conjugate gradient (IVTCG) method for bioluminescence tomography (BLT). Considering the sparse characteristic of the light source and insufficient surface measurement in the BLT scenarios, we combine a sparseness-inducing (ℓ1 norm) regularization term with a quadratic error term in the IVTCG-based framework for solving the inverse problem. By limiting the number of variables updated at each iteration and using a variable splitting strategy to find the search direction more efficiently, the method obtains fast and stable source reconstruction, even without a priori information on the permissible source region and multispectral measurements. Numerical experiments on a mouse atlas validate the effectiveness of the method. In vivo mouse experimental results further indicate its potential for a practical BLT system.

I also came across this announcement recently:
Identifying Bad Measurements Via Separation in Compressive Sensing

Abstract:
We consider the problem of identifying bad measurements in compressive sensing. These bad measurements can be present due to malicious attacks and system malfunction.

Since the linear system of equations in compressive sensing is underconstrained, errors introduced by these bad measurements can result in large changes in computed solutions. We describe a separation-based method for identifying bad measurements. In this method we separate out the top non-zero variables by ranking, eliminate these variables from the system of equations, and then solve the reduced overconstrained problem to identify bad measurements. Compared to prior methods based on direct or joint l1-minimization, the separation-based method can address challenging cases when there are only a moderate number of samples. In developing the method we introduce the notion of inversions, which govern the separability of the non-zero variables.
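Here is a sketch of one plausible reading of that pipeline (my own illustration, not the speaker's exact algorithm): rank the variables via a generic l1 solve, keep the top-k support, re-solve the reduced, now overconstrained least-squares problem, and flag the measurement with the largest residual as the bad one. All sizes and parameter values are arbitrary.

```python
import numpy as np

def ista(A, y, lam, n_iter=3000):
    """Generic l1 solver (ISTA), used here only for the ranking step."""
    x = np.zeros(A.shape[1])
    L = np.linalg.norm(A, 2) ** 2
    for _ in range(n_iter):
        z = x - A.T @ (A @ x - y) / L
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)
    return x

def flag_bad_measurement(A, y, k):
    """Separation sketch: (1) l1 solve, (2) keep top-k variables,
    (3) least squares on the reduced overconstrained system,
    (4) flag the measurement with the largest residual."""
    x1 = ista(A, y, lam=0.05)
    support = np.argsort(np.abs(x1))[-k:]              # top-k by ranking
    xs, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
    resid = y - A[:, support] @ xs
    return int(np.argmax(np.abs(resid)))

rng = np.random.default_rng(2)
A = rng.standard_normal((30, 60)) / np.sqrt(30)
x = np.zeros(60)
x[5], x[20] = 5.0, -4.0
y = A @ x
y[7] += 3.0                      # one corrupted sample
bad = flag_bad_measurement(A, y, k=2)
```

Once the dominant variables are removed, the remaining system has more equations than unknowns, so a corrupted sample shows up directly as an outlier in the least-squares residual.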

Bio:
Tsung-Han Lin is a Ph.D. candidate in Computer Science at Harvard University. He received his B.S. and M.S. degrees from the Department of Electrical Engineering at National Taiwan University. His research interests include wireless networks, compressive sensing, wireless networking for unmanned aerial vehicles (UAVs), and wireless sensor networks.
Here is an announcement for some talks at SAMPTA 2011:


Topics and their chairs:

- Applications in Sampling Signals with Finite Rate of Innovation: Yue M. LU, Harvard University, USA
- Compressed Sensing: Yonina ELDAR, Technion, Israel Institute of Technology, Israel
- Design of A/D Converters: Boris MURMANN, Stanford University, USA
- Dictionary Learning: Volkan CEVHER and Pierre VANDERGHEYNST, Ecole Polytechnique Federale de Lausanne, Switzerland
- Frame Theory: Bernhard BODMANN, University of Houston, USA
- Geometric Multiscale Analysis: Gitta KUTYNIOK, University of Osnabrueck, Germany
- High-dimensional Geometry: Mauro MAGGIONI, Duke University, USA
- Inverse Problems: Massimo FORNASIER, Austrian Academy of Sciences, Austria
- Quantization: Alex POWELL, Vanderbilt University, USA
- Sampling in Imaging Systems: Rebecca WILLETT, Duke University, USA
- Signal Dependent Sampling and Processing: Rolands SHAVELIS and Modris GREITANS, University of Latvia, Latvia
- Sparse Approximation: Holger RAUHUT, University of Bonn, Germany


A CFP for a conference on  Millimetre Wave and Terahertz Sensors and Technology

A press release got my attention: Gov. Perry Announces Two University Research Superiority Grants and TETF Investments In Four Companies 

"...InView - $1.5 million for the development of high-performance cameras with non-visual infrared (IR), ultraviolet (UV) and terahertz (THz) capabilities utilizing compressive sensing technology developed at Rice University. These cameras will offer increased optical resolution at a lower cost and have security and military applications, as well as value in drug and food inspection..."

A webpage found here:

CMOS Receiver for Compressive Sensing
Team members: Juhwan Yoo, Matthew Loh
[Figure: system block diagram]
Since 1970, there has been a roughly thousand-fold improvement in computational power, made possible by reduction of transistor feature size. As a result, sensing and communications technology has increasingly relied on the use of sophisticated digital signal processing algorithms to improve performance and enhance reliability.

As signals in the physical world are ultimately analog in nature, analog-to-digital converters (ADCs) are a key enabling technology for this trend. However, ADC speed and resolution have not kept pace with the stunning improvements in computing power, thus creating a bottleneck which limits the performance and application space of advanced signal processing techniques in physical systems.

In order to address this performance gap, we propose using the recently developed field of Compressed Sensing to build a system for very high speed wideband spectrum sensing and analysis. Compressed sensing is a particular instance of a broader class of techniques in statistics and estimation theory, known as Dimensionality Reduction techniques. In most situations of interest, the actual information that is sought is 'sparse', that is, it is contained in only a fraction of the acquired data. For example, in a spectral sensing problem, this sparsity is reflected by the fact that the frequency occupancy of the spectrum is small or highly structured. Compressed sensing leverages this sparsity by making assumptions about the type of data being acquired to intelligently undersample while retaining all the information of interest.
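A toy sketch of the undersampling idea just described (my own illustration, not this group's receiver): a frequency-sparse signal is observed through a quarter of its time samples, chosen at random, and the occupied spectrum is recovered greedily with orthogonal matching pursuit. Sizes, the seed, and the tone locations are arbitrary choices.

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal Matching Pursuit: greedily select k atoms."""
    resid, support = y.copy(), []
    for _ in range(k):
        j = int(np.argmax(np.abs(A.conj().T @ resid)))   # best-matching atom
        support.append(j)
        xs, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        resid = y - A[:, support] @ xs
    x = np.zeros(A.shape[1], dtype=complex)
    x[support] = xs
    return x

# frequency-sparse "spectrum": 2 active tones out of 128 bins
n, m = 128, 32
X = np.zeros(n, dtype=complex)
X[10], X[40] = 1.0, 0.7
F = np.fft.ifft(np.eye(n), axis=0) * n   # maps spectrum to time samples
x_time = F @ X

# acquire only 32 random time samples (4x fewer than the full grid)
rng = np.random.default_rng(3)
idx = rng.choice(n, size=m, replace=False)
A, y = F[idx, :], x_time[idx]

X_hat = omp(A, y, k=2)
```

Because the frequency occupancy is small, 32 randomized samples suffice to pin down both tones exactly in this noiseless toy, which is the sparsity leverage the paragraph above describes.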
And finally, behind a paywall

Image Credit: NASA/JPL/Space Science Institute
W00066060.jpg was taken on January 08, 2011 and received on Earth January 09, 2011. The camera was pointing toward SATURN-ERING at approximately 2,756,277 kilometers away, and the image was taken using the CL1 and IR1 filters.
