Tuesday, May 22, 2012

Noise Aware Analysis Operator Learning for Approximately Cosparse Signals -implementation-

You have heard about cosparsity before, now you can learn the analysis operator. I see only good things out of this in the calibration world, woohoo!

This paper investigates analysis operator learning for the recently introduced cosparse signal model that is a natural analysis complement to the more traditional sparse signal model. Previous work on such analysis operator learning has relied on access to a set of clean training samples. Here we introduce a new learning framework which can use training data which is corrupted by noise and/or is only approximately cosparse. The new model assumes that a p-cosparse signal exists in an epsilon neighborhood of each data point. The operator is assumed to be a uniformly normalized tight frame (UNTF) to exclude some trivial operators. In this setting, a bi-level optimization algorithm is introduced to learn a suitable analysis operator.
The attendant code is here.
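To make the "approximately cosparse" idea concrete, here is a small illustration of my own (not the attendant code): cosparsity counts the zeros of the analysis coefficients, and noise destroys exact zeros, which is why the paper assumes a truly p-cosparse signal only within an epsilon neighborhood of each data point. The first-difference operator below is my choice for the sketch.

```python
import numpy as np

# Hypothetical sketch: cosparsity under a first-difference analysis
# operator, with and without noise. Cosparsity = #zeros in Omega @ x.

n = 64
# Piecewise-constant signal: exactly cosparse under first differences.
x_clean = np.concatenate([np.zeros(32), np.ones(32)])

# First-difference analysis operator Omega, shape (n-1, n):
# row i computes x[i+1] - x[i].
Omega = np.eye(n - 1, n, k=1) - np.eye(n - 1, n)

def cosparsity(Omega, x, tol=1e-10):
    """Count (near-)zero entries of the analysis coefficients Omega @ x."""
    return int(np.sum(np.abs(Omega @ x) < tol))

p_clean = cosparsity(Omega, x_clean)   # 62 zeros out of 63 rows

# Noisy data is only *approximately* cosparse: the clean, truly
# cosparse signal lies in an epsilon-ball around the observation.
rng = np.random.default_rng(0)
x_noisy = x_clean + 0.01 * rng.standard_normal(n)
p_noisy = cosparsity(Omega, x_noisy)   # noise wipes out the exact zeros
```

The gap between `p_clean` and `p_noisy` is the whole motivation for a noise-aware learning framework: real training samples never have exactly zero analysis coefficients.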

But there is also a more constrained version (no code yet): Constrained Overcomplete Analysis Operator Learning for Cosparse Signal Modelling by Mehrdad Yaghoobi, Sangnam Nam, Remi Gribonval, and Mike E. Davies. The abstract reads:
We consider the problem of learning a low-dimensional signal model from a collection of training samples. The mainstream approach would be to learn an overcomplete dictionary to provide good approximations of the training samples using sparse synthesis coefficients. This famous sparse model has a less well known counterpart, in analysis form, called the cosparse analysis model. In this new model, signals are characterised by their parsimony in a transformed domain using an overcomplete (linear) analysis operator. We propose to learn an analysis operator from a training corpus using a constrained optimisation framework based on L1 optimisation. The reason for introducing a constraint in the optimisation framework is to exclude trivial solutions. Although there is no final answer here for which constraint is the most relevant constraint, we investigate some conventional constraints in the model adaptation field and use the uniformly normalised tight frame (UNTF) for this purpose. We then derive a practical learning algorithm, based on projected subgradients and Douglas-Rachford splitting technique, and demonstrate its ability to robustly recover a ground truth analysis operator, when provided with a clean training set, of sufficient size. We also find an analysis operator for images, using some noisy cosparse signals, which is indeed a more realistic experiment. As the derived optimisation problem is not a convex program, we often find a local minimum using such variational methods. Some local optimality conditions are derived for two different settings, providing preliminary theoretical support for the well-posedness of the learning problem under appropriate conditions.
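The UNTF constraint mentioned in both abstracts is easy to check numerically. A sketch of my own, assuming the usual definition: a p x n operator is uniformly normalised (all rows share one norm) and a tight frame (Omega' Omega is a multiple of the identity, here p/n). The example operator, identity stacked on a normalised Hadamard basis, is my construction.

```python
import numpy as np

# Hedged sketch of the UNTF (uniformly normalised tight frame) check.
# Example operator: identity stacked on a normalised 4x4 Hadamard basis.
H4 = np.array([[1,  1,  1,  1],
               [1, -1,  1, -1],
               [1,  1, -1, -1],
               [1, -1, -1,  1]]) / 2.0      # orthonormal, unit-norm rows

Omega = np.vstack([np.eye(4), H4])          # p = 8 rows, n = 4 columns
p, n = Omega.shape

# "Uniformly normalised": every row has the same norm.
row_norms = np.linalg.norm(Omega, axis=1)
uniform = np.allclose(row_norms, row_norms[0])

# "Tight frame": Omega.T @ Omega = (p/n) * I.
tight = np.allclose(Omega.T @ Omega, (p / n) * np.eye(n))

is_untf = uniform and tight
```

Without such a constraint the learning problem admits trivial minimisers (e.g. a zero or rank-one operator makes every signal look cosparse), which is exactly why the papers restrict the search to UNTFs.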

While we are on cosparse modeling, you probably recall: Physics-Driven Structured CoSParse Modeling for Source Localization by Sangnam Nam and Remi Gribonval. The abstract reads:

Cosparse modeling is a recent alternative to sparse modeling, where the notion of dictionary is replaced by that of an analysis operator. When a known analysis operator is well adapted to describe the signals of interest, the model and associated algorithms can be used to solve inverse problems. Here we show how to derive an operator to model certain classes of signals that satisfy physical laws, such as the heat equation or the wave equation. We illustrate the approach on an acoustic inverse problem with a toy model of wave propagation and discuss its potential extensions and the challenges it raises.
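A toy version of that physics-driven idea (my own construction, not the paper's code): discretising a physical law yields the analysis operator directly. Below, the steady-state 1D heat equation u'' = 0 becomes the second-difference operator, and any signal obeying the law is annihilated by it, i.e. maximally cosparse.

```python
import numpy as np

# Toy sketch: the physical law *defines* the analysis operator.
# Discrete second-difference operator L (interior rows only):
# row i encodes u[i] - 2*u[i+1] + u[i+2], a discretisation of u''.
n = 50
L = np.zeros((n - 2, n))
for i in range(n - 2):
    L[i, i], L[i, i + 1], L[i, i + 2] = 1.0, -2.0, 1.0

# A linear temperature profile satisfies the steady-state heat
# equation u'' = 0, so L annihilates it (up to float round-off).
u = np.linspace(0.0, 1.0, n)
residual = L @ u
max_resid = float(np.max(np.abs(residual)))   # essentially zero
```

Signals that satisfy the law exactly live in the cosparse set of this operator, which is what lets the model and its algorithms attack the inverse (source localization) problem.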

and The Cosparse Analysis Model and Algorithms by Sangnam Nam, Mike E. Davies, Michael Elad and Remi Gribonval. The abstract reads:
After a decade of extensive study of the sparse representation synthesis model, we can safely say that this is a mature and stable field, with clear theoretical foundations, and appealing applications. Alongside this approach, there is an analysis counterpart model, which, despite its similarity to the synthesis alternative, is markedly different. Surprisingly, the analysis model did not get a similar attention, and its understanding today is shallow and partial. In this paper we take a closer look at the analysis approach, better define it as a generative model for signals, and contrast it with the synthesis one. This work proposes effective pursuit methods that aim to solve inverse problems regularized with the analysis-model prior, accompanied by a preliminary theoretical study of their performance. We demonstrate the effectiveness of the analysis model in several experiments, and provide a detailed study of the model associated with the 2D finite difference analysis operator, a close cousin of the TV norm.
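The 2D finite-difference operator studied in that paper, "a close cousin of the TV norm", is simple to play with. A sketch under my own assumptions: applied to a piecewise-constant image, horizontal and vertical differences vanish everywhere except along edges, so such images are highly cosparse.

```python
import numpy as np

# Sketch (my construction): 2D finite-difference analysis coefficients
# of a piecewise-constant image. Most coefficients are exactly zero.
img = np.zeros((16, 16))
img[4:12, 4:12] = 1.0                  # flat square on a flat background

dx = img[:, 1:] - img[:, :-1]          # horizontal differences
dy = img[1:, :] - img[:-1, :]          # vertical differences

coeffs = np.concatenate([dx.ravel(), dy.ravel()])
zeros = int(np.sum(coeffs == 0.0))
frac_zero = zeros / coeffs.size        # large: the image is very cosparse
```

Summing the magnitudes of those same difference coefficients would give (anisotropic) total variation, which is the sense in which this operator is a close cousin of the TV norm.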



Liked this entry? Subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on LinkedIn.
