Tuesday, April 27, 2010

CS: Rank Awareness in Joint Sparse Recovery, CS-MUSIC, Dim reduction of covariance matrices, Blegs from around the blogs, Combinatorial testing.


The last entry, on Random Matrix Theory and Compressive Sensing and related subjects, triggered several reactions. First, Mike Davies just sent me the following:

Dear Igor

I would like to draw your attention to a new paper by myself and Yonina Eldar called:

Rank Awareness in Joint Sparse Recovery

This seems to follow a recent theme in Nuit Blanche regarding sparse Multiple Measurement Vector (MMV) recovery algorithms and the links with array signal processing [Igor's note: see CS: Random Matrix Theory and Compressive Sensing]. I have attached a copy of the paper, which should appear on Arxiv tomorrow and also on our Edinburgh Compressed Sensing group website (http://ecos.maths.ed.ac.uk/).

The bottom line is that rank should be taken into account properly in the joint sparse recovery problem!

Specifically we show:

  1. rank-based necessary and sufficient conditions for the sparse MMV problem, showing that rank helps (sufficiency was previously shown by Chen and Huo)
  2. a rank-based combinatorial search - again showing that rank helps
  3. almost all existing algorithms for joint sparse recovery are rank blind! That is, their worst-case performance approaches that of the Single Measurement Vector (SMV) problem.
  4. we present some Rank Aware (RA) algorithms, most notably an RA version of Order Recursive Matching Pursuit, which seems to provide the best interpolation between the SMV case and guaranteed recovery in the full rank case.

I think this is just the beginning of a new flurry of research in MMV recovery algorithms. For example, it would be interesting to discover whether there is a convex joint sparse recovery algorithm that is fully rank aware - current mixed norm techniques are rank blind. Similarly, are there Rank Aware versions of CoSaMP or IHT?

All the best

Mike
The full article is: Rank Awareness in Joint Sparse Recovery by Mike Davies and Yonina Eldar. The abstract reads:
In this paper we revisit the sparse multiple measurement vector (MMV) problem, where the aim is to recover a set of jointly sparse multichannel vectors from incomplete measurements. This problem has received increasing interest as an extension of single channel sparse recovery, which lies at the heart of the emerging field of compressed sensing. However, MMV approximation has origins in the field of array signal processing, as we discuss in this paper. Inspired by these links, we introduce a new family of MMV algorithms based on the well-known MUSIC method in array processing. We particularly highlight the role of the rank of the unknown signal matrix X in determining the difficulty of the recovery problem. We begin by deriving necessary and sufficient conditions for the uniqueness of the sparse MMV solution, which indicate that the larger the rank of X, the less sparse X needs to be to ensure uniqueness. We also show that as the rank of X increases, the computational effort required to solve the MMV problem through a combinatorial search is reduced.
In the second part of the paper we consider practical suboptimal algorithms for MMV recovery. We examine the rank awareness of popular methods such as Simultaneous Orthogonal Matching Pursuit (SOMP) and mixed norm minimization techniques and show them to be rank blind in terms of worst case analysis. We then consider a family of greedy algorithms that are rank aware. The simplest such method is a discrete version of the MUSIC algorithm popular in array signal processing. This approach is guaranteed to recover the sparse vectors in the full rank MMV setting under mild conditions. We then extend this idea to develop a rank aware pursuit algorithm that naturally reduces to Order Recursive Matching Pursuit (ORMP) in the single measurement case. This approach also provides guaranteed recovery in the full rank setting. Numerical simulations demonstrate that the rank aware techniques are significantly better than existing methods in dealing with multiple measurements.
Thanks Mike.
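To make the rank argument concrete, here is a quick numpy sketch of the discrete MUSIC step that serves as the full rank baseline in this line of work: when rank(Y) equals the sparsity k, a column of A belongs to the joint support exactly when it lies in the column span of the measurements Y. This is my own toy illustration (sizes and names made up), not the authors' RA-ORMP implementation.

```python
import numpy as np

def mmv_music(A, Y, k):
    """Discrete MUSIC for the full-rank MMV problem: a minimal sketch.

    Given Y = A X with X row k-sparse and rank(Y) == k, a column a_j of A
    belongs to the support iff a_j lies in the column span of Y.
    """
    # Orthonormal basis for the signal subspace range(Y).
    U = np.linalg.svd(Y, full_matrices=False)[0][:, :k]
    # Residual of each (normalized) column of A after projection onto range(U).
    An = A / np.linalg.norm(A, axis=0)
    resid = np.linalg.norm(An - U @ (U.T @ An), axis=0)
    # The k columns with the smallest residual form the estimated support.
    return np.sort(np.argsort(resid)[:k])

# Toy usage: m=20 measurements, n=50 atoms, k=5 jointly sparse rows, 5 snapshots.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 50))
support = rng.choice(50, size=5, replace=False)
X = np.zeros((50, 5)); X[support] = rng.standard_normal((5, 5))
print(mmv_music(A, A @ X, 5), np.sort(support))
```

Note that this step only succeeds in the full rank case; the point of the paper is precisely how to interpolate gracefully down to the SMV case when rank(Y) is smaller than k.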

On Twitter, Giuseppe Paleologo mentioned:
@igorcarron, for dim reduction of covariance matrices, you can check out the important recent work of A. Onatski http://bit.ly/aEG4pv
Thanks Giuseppe.

Of interest is the following paper: Honey, I Shrunk the Sample Covariance Matrix by Olivier Ledoit and Michael Wolf (from @prometheia).
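As a side note, Ledoit-Wolf shrinkage is readily available in scikit-learn, so it is easy to try on your own data; a short usage sketch with arbitrary made-up sizes:

```python
import numpy as np
from sklearn.covariance import LedoitWolf

# Ledoit-Wolf shrinkage via scikit-learn; useful when the dimension p is
# close to the number of observations n and the sample covariance is poor.
rng = np.random.default_rng(3)
X = rng.standard_normal((60, 50))   # 60 observations of a 50-dimensional vector
lw = LedoitWolf().fit(X)
print(lw.shrinkage_, lw.covariance_.shape)
```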

Jong Chul Ye just mentioned to me that the paper is out now: Compressive MUSIC: A Missing Link Between Compressive Sensing and Array Signal Processing by Jong Min Kim, Ok Kyun Lee, and Jong Chul Ye. The abstract reads:
The multiple measurement vector (MMV) problem addresses the identification of unknown input vectors that share common sparse support sets, and has many practical applications. Even though MMV problems have traditionally been addressed within the context of sensor array signal processing, the recent research trend is to apply compressive sensing (CS) theory due to its capability to estimate sparse support even with an insufficient number of snapshots, in which case classical array signal processing approaches fail. However, CS approaches only guarantee the accurate recovery of support in a probabilistic manner, and often show inferior performance in the regime where the traditional array signal processing approaches succeed. The main contribution of the present article is, therefore, a unified approach that unveils a missing link between compressive sensing and array signal processing approaches to the multiple measurement vector problem. The new algorithm, which we call compressive MUSIC, identifies part of the support using compressive sensing, after which the remaining support is estimated using a novel generalized MUSIC criterion. Using a large system MMV model, we theoretically show that compressive MUSIC requires a smaller number of sensor elements for accurate support recovery than existing compressive sensing methods, and approaches the optimal $l_0$-bound when a sufficient but finite number of snapshots is available. We also derive an explicit form of the minimum signal-to-noise ratio (SNR) expression that is required for the success of the generalized MUSIC criterion; the expression clearly indicates the dependency on the unknown source distribution. Finally, we show that the unknown sparsity level can be automatically estimated. We conclude the paper with numerical results that demonstrate that compressive MUSIC outperforms the conventional algorithms.
Thanks Jong.
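Here is a toy numpy sketch of the second, generalized MUSIC stage of such a two-stage scheme, assuming the CS stage has already produced k - r correct support indices. Everything below (names, sizes, the pretend first stage) is my own illustration of the idea, not the authors' implementation.

```python
import numpy as np

def cs_music_stage2(A, Y, k, S1):
    """Toy sketch of the generalized MUSIC stage of a two-stage MMV scheme.

    Assumes a CS step (e.g. SOMP) already found the k - r indices in S1,
    where r = rank(Y). Generically, range(Y) together with the atoms indexed
    by S1 spans range(A_S), so a remaining atom belongs to the support
    exactly when it lies in that combined subspace.
    """
    r = np.linalg.matrix_rank(Y)
    U = np.linalg.svd(Y, full_matrices=False)[0][:, :r]   # signal subspace
    Q, _ = np.linalg.qr(np.hstack([U, A[:, S1]]))         # combined subspace
    An = A / np.linalg.norm(A, axis=0)
    resid = np.linalg.norm(An - Q @ (Q.T @ An), axis=0)
    resid[np.asarray(S1)] = np.inf        # already known to be in the support
    rest = np.argsort(resid)[:k - len(S1)]
    return np.sort(np.concatenate([np.asarray(S1), rest]))

# Toy usage: k = 6 jointly sparse rows but only rank r = 4 snapshots,
# pretending the CS stage returned two correct indices.
rng = np.random.default_rng(0)
A = rng.standard_normal((30, 100))
support = np.sort(rng.choice(100, size=6, replace=False))
X = np.zeros((100, 8))
X[support] = rng.standard_normal((6, 4)) @ rng.standard_normal((4, 8))
print(cs_music_stage2(A, A @ X, 6, support[:2]), support)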

Around the blogs, we have three authors blegging for an answer:

Sarah is asking a question in:
Alex Gittens is asking another in:
And Bob Sturm is asking several in:
In Ben Recht's class there was an interesting presentation on Bug hunting and Compressed Sensing by Suhail Shergill. Not much detail there, but it reminded me that maybe a CS-Group Testing type of approach might reduce the need for combinatorial testing in software (see also R. Kuhn, R. Kacker, Y. Lei, J. Hunter, "Combinatorial Software Testing", from here). On a related note, instead of looking for a defective element, maybe a CS-Group Testing type of method could be used efficiently in combinatorial chemistry or biology.
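As a reminder of how few pooled tests such schemes need, here is a minimal non-adaptive group testing sketch using the simplest decoder (clear every item that appears in a negative pool); the pooling design and sizes are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
n, t, d = 100, 40, 2                        # items, pooled tests, defectives
M = (rng.random((t, n)) < 0.3).astype(int)  # random pooling: test i includes item j
defective = np.zeros(n, bool)
defective[rng.choice(n, d, replace=False)] = True
outcome = (M @ defective) > 0               # a pool is positive iff it hits a defective
# Decode: any item appearing in a negative pool cannot be defective; what
# remains is a (usually exact) superset of the true defective set.
cleared = M[~outcome].any(axis=0)
print("true:", np.flatnonzero(defective), "candidates:", np.flatnonzero(~cleared))
```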



Five days ago there was a talk at Berkeley which sounded interesting:

Adaptive Data Analysis via Nonlinear Compressed Sensing

Seminar | April 22 | 2-3 p.m. | 740 Evans Hall

Speaker: Tom Hou, Caltech [who I thought was mostly into Navier-Stokes]

Sponsor: Mathematics, Department of

We introduce an Instantaneous Fourier Analysis method to analyze multiscale nonlinear and non-stationary data. The purpose of this work is to find the sparsest representation of a multiscale signal using a basis that is adapted to the data instead of being prescribed a priori. Using a variational approach based on nonlinear L1 optimization, our method defines trends and the Instantaneous Frequency of a multiscale signal. One advantage of performing such a decomposition is to preserve some intrinsic physical property of the signal without introducing artificial scales or harmonics. For multiscale data that have a nonlinear sparse representation, we prove that our nonlinear optimization method converges to the exact signal with the sparse representation. Moreover, we will show that our method is insensitive to noise and robust enough to apply to realistic physical data. For general data that do not have a sparse representation, our method will give an approximate decomposition whose accuracy is controlled by the scale separation property of the original signal.
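The hard part of Hou's program is that the basis is adapted to the data. As a fixed-basis point of reference, here is the standard L1 program (basis pursuit, min ||c||_1 subject to Dc = y) written as a linear program with scipy; dictionary, sizes, and sparsity are made up, and this is only the convex L1 core such methods build on, not the talk's adaptive decomposition.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(2)
D = rng.standard_normal((30, 80))            # overcomplete dictionary
c_true = np.zeros(80); c_true[[5, 17, 60]] = [1.0, -2.0, 0.5]
y = D @ c_true
# Split c = u - v with u, v >= 0, so that ||c||_1 = sum(u + v);
# basis pursuit then becomes a plain linear program.
res = linprog(np.ones(160), A_eq=np.hstack([D, -D]), b_eq=y,
              bounds=[(0, None)] * 160)
c_hat = res.x[:80] - res.x[80:]
print(np.flatnonzero(np.abs(c_hat) > 1e-6))  # should recover {5, 17, 60}
```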



Image Credit: NASA/JPL/Space Science Institute, Titan.

2 comments:

Anonymous said...

I just wanted to say I appreciate your linking to my blog. I'm learning a lot from Nuit Blanche.

Igor said...

No problem. I am learning from your blog as well.

Igor.
