Wednesday, July 17, 2013

Optimally weighted recovery of a low-rank signal matrix from a high-dimensional signal-plus-noise matrix - implementation -

Raj Rao Nadakuditi just sent me the following:

Hi Igor,

Hope you are well and that your summer is off to a good start.

I recently posted a paper on improving low-rank signal matrix denoising relative to the truncated SVD.



The algorithm described in the paper (see attached .m code) takes as its input the noisy matrix and the rank of the signal matrix, and returns the denoised version. It'll never do worse than the truncated SVD and will do particularly well in low-SNR settings; the algorithm itself is completely data-driven, so there are no tuning parameters and one can use it as a black box wherever improving low-rank signal matrix estimation is important. The paper shows how it can be applied to improve performance (by a lot) relative to singular value thresholding in the setting where there are missing entries.

A nice feature of the algorithm is that it 'correctly' mitigates the effect of rank over-estimation, so one only needs a decent estimate of the rank. It is based on asymptotic random matrix theory but will work even for small matrix sizes. Try it, for example, with n = 6; m = 2000; theta = 1.5*(n/m)^(1/4) for the example listed in the help section of the optlowrank.m code.

Thoughts, comments, feedback and suggestions are welcome and greatly appreciated.
Cheers,

Raj

Thanks Raj. Here is the paper: Optimally weighted recovery of a low-rank signal matrix from a high-dimensional signal-plus-noise matrix by Raj Rao Nadakuditi

The truncated singular value decomposition (SVD) of the measurement matrix is the optimal solution to the _representation_ problem of how to best approximate a noisy measurement matrix using a low-rank matrix. Here, we consider the (unobservable) _denoising_ problem of how to best approximate a low-rank signal matrix buried in noise by optimal (re)weighting of the singular vectors of the measurement matrix. We exploit recent results from random matrix theory to exactly characterize the large matrix limit of the optimal weighting coefficients and show that they can be computed directly from data for a large class of noise models that includes the i.i.d. Gaussian noise case.
Our analysis brings into sharp focus the soft-thresholding form of the optimal weights, the non-convex nature of the associated shrinkage function, and explains why matrix regularization via singular value thresholding with convex penalty functions (such as the nuclear norm) will always be suboptimal. We validate our theoretical predictions with numerical simulations, develop an implementable algorithm that realizes the predicted performance gains and show how our methods can be used to improve estimation in the setting where the measured matrix has missing entries.
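As I understand the paper, the heart of the method is a data-driven estimate of optimal weights w_i = -2*D(s_i)/D'(s_i), where the D-transform D is computed from the singular values of the noisy matrix beyond the assumed rank r. Here is a minimal MATLAB sketch of that weighting under my own reading of the formulas; the function name and the handling of the rectangular case are mine, and Raj's optlowrank.m remains the reference implementation:

    function Xhat = optweights_sketch(Xtilde, r)
    % Sketch of the data-driven optimal weighting (my reading of the paper,
    % not Raj's optlowrank.m). Reweights the top-r singular vectors of the
    % noisy matrix with w_i = -2*D(s_i)/D'(s_i).
    [n, m] = size(Xtilde);
    if n > m, Xhat = optweights_sketch(Xtilde', r)'; return; end  % ensure n <= m
    [U, S, V] = svd(Xtilde, 'econ');
    s  = diag(S);
    sn = s(r+1:end);                 % singular values attributed to the noise
    w  = zeros(r, 1);
    for i = 1:r
        z = s(i);
        % D(z) = phi(z)*phibar(z), both estimated from the noise singular values
        phi    = mean(z ./ (z^2 - sn.^2));
        phibar = (sum(z ./ (z^2 - sn.^2)) + (m - n)/z) / (m - r);
        % D'(z) by the product rule
        dphi    = mean((-z^2 - sn.^2) ./ (z^2 - sn.^2).^2);
        dphibar = (sum((-z^2 - sn.^2) ./ (z^2 - sn.^2).^2) - (m - n)/z^2) / (m - r);
        w(i) = -2 * (phi * phibar) / (dphi * phibar + phi * dphibar);
    end
    Xhat = U(:, 1:r) * diag(w) * V(:, 1:r)';
    end

Because the weights depend only on the 'noise' singular values s_{r+1}, ..., s_n, there is nothing to tune, and an over-estimated rank mostly adds components whose weights are shrunk toward zero, which is consistent with Raj's remark above about rank over-estimation.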
The attendant lightweight code is here.
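
To reproduce Raj's suggested low-SNR experiment, something along these lines should do. This is a hypothetical driver, not taken from the package: the call signature of optlowrank follows Raj's description above, and the rank-1 signal model and 1/sqrt(m) noise normalization are my assumptions for illustration (check the help text of optlowrank.m for the actual example):

    % Hypothetical driver for the n = 6, m = 2000 example described above.
    n = 6; m = 2000; r = 1;
    theta = 1.5*(n/m)^(1/4);           % low SNR: just above the detection threshold
    u = randn(n,1); u = u/norm(u);     % random unit left singular vector
    v = randn(m,1); v = v/norm(v);     % random unit right singular vector
    X = theta*(u*v');                  % rank-1 signal matrix
    Xtilde = X + randn(n,m)/sqrt(m);   % signal plus i.i.d. Gaussian noise
    Xhat = optlowrank(Xtilde, r);      % denoised estimate, no tuning parameters

    % compare against the plain truncated SVD
    [U, S, V] = svd(Xtilde, 'econ');
    Xtsvd = U(:,1:r)*S(1:r,1:r)*V(:,1:r)';
    fprintf('rel. error, optlowrank: %g\n', norm(Xhat  - X, 'fro')/norm(X, 'fro'));
    fprintf('rel. error, trunc. SVD: %g\n', norm(Xtsvd - X, 'fro')/norm(X, 'fro'));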


Join the CompressiveSensing subreddit or the Google+ Community and post there !
Liked this entry ? subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on Linkedin.
