Wednesday, November 06, 2013

BiG-AMP: Bilinear Generalized Approximate Message Passing - implementation -



Two years ago, in If there is revolution in rank minimization and nobody hears about it, is this a revolution ? we wondered whether phase transition diagrams could help us differentiate between different matrix factorization models. Well, it looks like we are there now. The following paper (see also the detailed attendant project page) promises a change in a number of matrix factorization operations (and will be listed shortly in the Advanced Matrix Factorization page): Bilinear Generalized Approximate Message Passing by Jason T. Parker, Philip Schniter, Volkan Cevher
We extend the generalized approximate message passing (G-AMP) approach, originally proposed for high-dimensional generalized-linear regression in the context of compressive sensing, to the generalized-bilinear case, which enables its application to matrix completion, robust PCA, dictionary learning, and related matrix-factorization problems. In the first part of the paper, we derive our Bilinear G-AMP (BiG-AMP) algorithm as an approximation of the sum-product belief propagation algorithm in the high-dimensional limit, where central-limit theorem arguments and Taylor-series approximations apply, and under the assumption of statistically independent matrix entries with known priors. In addition, we propose an adaptive damping mechanism that aids convergence under finite problem sizes, an expectation-maximization (EM)-based method to automatically tune the parameters of the assumed priors, and two rank-selection strategies. In the second part of the paper, we discuss the specializations of EM-BiG-AMP to the problems of matrix completion, robust PCA, and dictionary learning, and present the results of an extensive empirical study comparing EM-BiG-AMP to state-of-the-art algorithms on each problem. Our numerical results, using both synthetic and real-world datasets, demonstrate that EM-BiG-AMP yields excellent reconstruction accuracy (often best in class) while maintaining competitive runtimes and avoiding the need to tune algorithmic parameters.
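To make the generalized-bilinear setup concrete, here is a minimal Python sketch (not the authors' code, and not BiG-AMP itself) of the observation model Z = AX that the abstract refers to, showing how matrix completion, robust PCA, and dictionary learning arise from the same bilinear product through different entrywise observation channels; all names, sizes, and corruption levels are illustrative assumptions.

```python
import numpy as np

# Minimal sketch (not the authors' code) of the generalized-bilinear model
# Z = A @ X, observed entrywise through a noisy/partial channel p(Y | Z).
rng = np.random.default_rng(0)

M, L, N = 100, 120, 5              # Z is M x L with rank N = 5 (illustrative sizes)
A = rng.standard_normal((M, N))    # unknown factor A (i.i.d. prior in the BiG-AMP setup)
X = rng.standard_normal((N, L))    # unknown factor X
Z = A @ X                          # noiseless bilinear product

# Matrix completion: observe a random subset of entries plus Gaussian noise.
mask = rng.random((M, L)) < 0.3    # ~30% of entries observed
Y_mc = np.where(mask, Z + 0.01 * rng.standard_normal((M, L)), np.nan)

# Robust PCA: every entry observed, but a few are grossly corrupted (sparse outliers).
outliers = rng.random((M, L)) < 0.05
Y_rpca = Z + outliers * 10.0 * rng.standard_normal((M, L))

# Dictionary learning: same Z = A @ X, but with sparse coefficient columns in X.
X_sparse = X * (rng.random((N, L)) < 0.4)
Y_dl = A @ X_sparse + 0.01 * rng.standard_normal((M, L))
```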
I note something important here; JasonPhilVolkan let us know about this phenomenon:

Meanwhile, the K-SVD, SPAMS, and EM-BiG-AMP algorithms appear to degrade gracefully in the presence of noise, yielding NMSE ≈ −50 dB at points below the noiseless PTCs.
It is not just the noiseless PTCs that matter, but also how the algorithms degrade with noise. Phase transitions are becoming part of our landscape.
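For readers new to phase-transition curves (PTCs): they are typically estimated empirically by sweeping problem parameters (e.g., rank and sampling ratio), running the algorithm over many random trials, and recording the fraction of trials whose NMSE falls below a threshold such as the -50 dB quoted above. Here is a minimal Python sketch of that bookkeeping, with a placeholder `solver` standing in for whichever matrix-completion algorithm you want to map (EM-BiG-AMP, nuclear-norm minimization, ...); the function and parameter names are assumptions for illustration only.

```python
import numpy as np

def nmse_db(Z_hat, Z):
    """Normalized MSE in dB between an estimate and the true low-rank matrix."""
    return 10 * np.log10(np.sum((Z_hat - Z) ** 2) / np.sum(Z ** 2))

def empirical_ptc(solver, ranks, sampling_rates, trials=20, M=100, L=100,
                  success_db=-50.0, seed=0):
    """Estimate a phase-transition diagram: success fraction over a (rank, rate) grid.

    `solver(Y, mask, rank)` is a placeholder for any completion algorithm
    returning an M x L estimate of Z; it is not part of any specific library.
    """
    rng = np.random.default_rng(seed)
    grid = np.zeros((len(ranks), len(sampling_rates)))
    for i, N in enumerate(ranks):
        for j, rate in enumerate(sampling_rates):
            successes = 0
            for _ in range(trials):
                Z = rng.standard_normal((M, N)) @ rng.standard_normal((N, L))
                mask = rng.random((M, L)) < rate        # observed entries
                Y = np.where(mask, Z, 0.0)              # unobserved entries zeroed
                Z_hat = solver(Y, mask, N)
                if nmse_db(Z_hat, Z) < success_db:      # e.g., below -50 dB
                    successes += 1
            grid[i, j] = successes / trials
    return grid  # the 0.5-level contour is the empirical phase-transition curve
```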

I think the authors don't want us to miss where the implementation of BiG-AMP resides: I noticed the address in three different locations in the paper :) BiG-AMP is at http://sourceforge.net/projects/gampmatlab/ but the detailed project page is here; the overview says:

Overview
Bilinear Generalized Approximate Message Passing (BiG-AMP) extends the Generalized AMP framework, originally proposed for high-dimensional generalized-linear regression in the context of compressive sensing, to the generalized-bilinear case, which enables its application to matrix completion, robust PCA, dictionary learning, and related matrix-factorization problems.


