Wednesday, June 15, 2016

Thesis: Recovering structured signals in high dimensions via non-smooth convex optimization: Precise performance analysis by Christos Thrampoulidis

Congratulations, Dr. Thrampoulidis!


The typical scenario in modern large-scale inference problems is one where the ambient dimension of the unknown signal is very large (e.g., high-resolution images, recommendation systems), yet its desired properties lie in some low-dimensional structure, such as sparsity or low-rankness. Over the past couple of decades, non-smooth convex optimization methods have emerged as a powerful tool for extracting those structures: they are often computationally efficient, and they offer considerable flexibility while remaining amenable to performance analysis. In particular, since the advent of Compressed Sensing (CS) there has been significant progress in this direction. One of the key ideas is that random linear measurements offer an efficient way to acquire structured signals.
When the measurement matrix has entries drawn iid from a wide class of distributions (including the Gaussian), a series of recent papers has established a complete and transparent theory that precisely captures the recovery performance in the noiseless setting.
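As a toy illustration of this recovery paradigm (not code from the thesis): the sketch below recovers a sparse vector from iid Gaussian linear measurements by ℓ1-regularized least squares (the LASSO), solved with plain iterative soft-thresholding (ISTA). All dimensions and the regularization weight are arbitrary demo choices.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 400, 150, 10                         # ambient dim, measurements, sparsity (demo)
x0 = np.zeros(n)
x0[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
A = rng.standard_normal((m, n)) / np.sqrt(m)   # iid Gaussian measurement matrix
y = A @ x0                                     # noiseless linear measurements

lam = 0.01                                     # regularization weight (hand-picked)
L = np.linalg.norm(A, 2) ** 2                  # Lipschitz constant of the LS gradient
x = np.zeros(n)
for _ in range(3000):                          # ISTA: gradient step, then soft-threshold
    z = x + A.T @ (y - A @ x) / L
    x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)

print("relative recovery error:", np.linalg.norm(x - x0) / np.linalg.norm(x0))
```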
In the more practical scenario of noisy measurements, the performance analysis becomes significantly more challenging, and correspondingly precise and unifying results have hitherto remained scarce. The relevant class of optimization methods, often referred to as regularized M-estimators, is now richer: additional factors (e.g., the noise distribution, the loss function, and the regularization parameter) and several different measures of performance (e.g., squared error, probability of support recovery) need to be taken into account.
This thesis develops a novel analytical framework that overcomes these challenges and establishes precise asymptotic performance guarantees for regularized M-estimators under Gaussian measurement matrices. In particular, the framework allows for a unifying analysis across different instances (such as the Generalized LASSO and the LAD, to name a few) and accounts for a wide class of performance measures. Among other results, we characterize the mean squared error of the Generalized LASSO and make insightful connections to the classical theory of ordinary least squares and to noiseless CS. Empirical evidence is presented which suggests that the Gaussian assumption is not necessary. Beyond iid measurement matrices, and motivated by practical considerations, we study certain classes of random matrices with orthogonal rows and establish their superior performance compared to Gaussians.
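To make the object of study concrete, here is a minimal sketch of one such regularized M-estimator: the LAD (least absolute deviations) loss paired with an ℓ1 regularizer, minimized by plain subgradient descent. The noise model, step-size schedule, and regularization weight are illustrative assumptions, not values from the thesis.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 200, 300                                 # dimension, measurements (demo)
x0 = np.zeros(n)
x0[:10] = 1.0                                   # sparse ground truth
A = rng.standard_normal((m, n)) / np.sqrt(m)
y = A @ x0 + 0.1 * rng.laplace(size=m)          # heavy-tailed noise, where LAD shines

lam = 0.02                                      # regularization weight (hand-picked)
x = np.zeros(n)
for t in range(1, 5001):
    # subgradient of the objective ||Ax - y||_1 + lam * ||x||_1
    g = A.T @ np.sign(A @ x - y) + lam * np.sign(x)
    x -= 0.5 / np.sqrt(t) * g                   # diminishing step size

print("LAD estimation error:", np.linalg.norm(x - x0))
```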
A prominent application of this generic theory is the analysis of the bit-error rate (BER) of the popular convex relaxation of the Maximum Likelihood decoder for recovering BPSK signals in a massive Multiple-Input Multiple-Output (MIMO) setting. Our precise BER analysis allows comparison of these schemes to the unattainable matched-filter bound, and further suggests means to provably boost their performance.
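A hedged sketch of the kind of scheme analyzed here, assuming the standard box relaxation of the ML decoder: the constraint x ∈ {±1}^n is relaxed to the box [-1, 1]^n, the resulting convex program is solved by projected gradient descent, and the estimate is rounded to signs at the end. The SNR and problem sizes are arbitrary demo values.

```python
import numpy as np

rng = np.random.default_rng(2)
n, m, snr = 128, 256, 10.0                      # transmit dims, receive antennas, SNR (demo)
x0 = rng.choice([-1.0, 1.0], size=n)            # BPSK symbol vector
A = rng.standard_normal((m, n)) / np.sqrt(m)    # iid Gaussian channel matrix
y = A @ x0 + rng.standard_normal(m) / np.sqrt(snr)

# Box relaxation of the ML decoder: minimize ||y - Ax||^2 over x in [-1, 1]^n
L = np.linalg.norm(A, 2) ** 2
x = np.zeros(n)
for _ in range(500):
    x = np.clip(x + A.T @ (y - A @ x) / L, -1.0, 1.0)   # projected gradient step

print("empirical BER:", np.mean(np.sign(x) != x0))      # round to signs, count errors
```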
The last challenge is to evaluate the performance under non-linear measurements.
For the Generalized LASSO, it is shown that the performance under non-linear measurements is (asymptotically) equivalent to that under noisy linear measurements with appropriately scaled variance. This encompasses state-of-the-art theoretical results on one-bit CS, and it is also used to prove that the optimal quantizer of the measurements, in the sense of minimizing the estimation error of the Generalized LASSO, is the celebrated Lloyd-Max quantizer. The framework is based on Gaussian process methods; in particular, on a new strong and tight version of a classical comparison inequality (due to Gordon, 1988) in the presence of additional convexity assumptions. We call this the Convex Gaussian Min-max Theorem (CGMT).
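As an empirical sanity check of that equivalence (a demo under assumed parameters, not the thesis's code): one-bit measurements y = sign(Ax0) are fed to the same LASSO iteration as if they were linear. Since one-bit measurements lose the norm of x0, only its direction is recoverable, so alignment is measured by cosine similarity.

```python
import numpy as np

rng = np.random.default_rng(3)
n, m, k = 400, 600, 10                          # demo sizes
x0 = np.zeros(n)
x0[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
x0 /= np.linalg.norm(x0)                        # one-bit CS recovers the direction only
A = rng.standard_normal((m, n))
y = np.sign(A @ x0)                             # non-linear (one-bit) measurements

lam = 100.0                                     # regularization weight (hand-picked)
L = np.linalg.norm(A, 2) ** 2
x = np.zeros(n)
for _ in range(2000):                           # LASSO iteration applied as if y were linear
    z = x + A.T @ (y - A @ x) / L
    x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)

cosine = x @ x0 / (np.linalg.norm(x) + 1e-12)   # alignment with the true direction
print("cosine similarity with x0:", cosine)
```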


