## Thursday, April 30, 2015

David Brady, the man behind compressive hyperspectral imaging, is at it again with colleagues. He now investigates hardware and adaptive sensing to perform mass spectrometric measurements. This is important because the next step after the whole-genome sequencing adventure starts with understanding the proteins produced by the genome.

An information-theoretical adaptive sensing and classification framework is proposed for quadrupole mass filter systems. The proposed algorithm designs the most discriminative measurement adaptively by maximizing the mutual information between the class label and the next measurement conditioned on all previous measurements. The proposed adaptive sensing algorithm significantly reduces the number of measurements needed and improves classification accuracy compared with random measurement design. Simulation results on a 76-class mass spectra data library show a 100% positive detection rate using only 7% adaptive measurements. The reduction of measurements shortens the mass analysis time and theoretically can reduce the required amount of compound material present in the sample for analysis, which potentially increases the sensitivity of quadrupole mass filter systems.
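The measurement-design rule can be illustrated with a toy sketch (my own illustration, not the authors' code): with a hypothetical library of binary peak-presence patterns and noiseless observations, greedily measuring the m/z channel whose outcome has maximal entropy under the current posterior is exactly maximizing the mutual information between the class label and the next measurement. All names and sizes below are made up.

```python
import numpy as np

rng = np.random.default_rng(0)
n_classes, n_channels = 16, 60
# Hypothetical library: one binary "peak present" pattern per compound class.
library = (rng.random((n_classes, n_channels)) < 0.3).astype(float)

def binary_entropy(q):
    q = np.clip(q, 1e-12, 1 - 1e-12)
    return -(q * np.log2(q) + (1 - q) * np.log2(1 - q))

def classify(true_class):
    """Adaptively pick channels until the posterior concentrates on one class."""
    post = np.full(n_classes, 1.0 / n_classes)   # uniform prior over classes
    measured = np.zeros(n_channels, dtype=bool)
    for step in range(1, n_channels + 1):
        q = post @ library                  # predicted peak probability per channel
        score = binary_entropy(q)           # = mutual info with the label (noiseless case)
        score[measured] = -np.inf           # never re-measure a channel
        j = int(np.argmax(score))
        measured[j] = True
        y = library[true_class, j]          # noiseless observation at channel j
        post *= (library[:, j] == y)        # Bayes update with a 0/1 likelihood
        post /= post.sum()
        if post.max() > 0.999:              # posterior has concentrated: stop early
            break
    return int(np.argmax(post)), step

hits, steps = 0, []
for c in range(n_classes):
    pred, used = classify(c)
    hits += (pred == c)
    steps.append(used)
```

On this toy library every class is identified correctly after only a handful of adaptive measurements, mirroring the paper's observation that adaptivity sharply cuts the number of quadrupole settings that need to be probed.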

somehow related:

Join the CompressiveSensing subreddit or the Google+ Community and post there !
Liked this entry ? subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on Linkedin.

### Finding a sparse vector in a subspace: Linear sparsity using alternating directions - implementation -

Here is another of Ju's implementations, from an earlier paper: Finding a sparse vector in a subspace: Linear sparsity using alternating directions by Qing Qu, Ju Sun, John Wright
We consider the problem of recovering the sparsest vector in a subspace S ⊂ R^p with dim(S) = n < p. This problem can be considered a homogeneous variant of the sparse recovery problem, and finds applications in sparse dictionary learning, sparse PCA, and other problems in signal processing and machine learning. Simple convex heuristics for this problem provably break down when the fraction of nonzero entries in the target sparse vector substantially exceeds 1/√n. In contrast, we exhibit a relatively simple nonconvex approach based on alternating directions, which provably succeeds even when the fraction of nonzero entries is Ω(1). To our knowledge, this is the first practical algorithm to achieve this linear scaling. This result assumes a planted sparse model, in which the target sparse vector is embedded in an otherwise random subspace. Empirically, our proposed algorithm also succeeds in more challenging data models arising, e.g., from sparse dictionary learning.
The implementation is here: https://github.com/sunju/psv
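The core alternating scheme is simple enough to sketch in a few lines (a toy re-implementation under the planted sparse model, not the authors' code): alternately soft-threshold Yq in the ambient space and project back onto the subspace, with the sphere constraint enforced by normalization.

```python
import numpy as np

rng = np.random.default_rng(1)
p, n, k = 200, 5, 10
# Planted sparse model: a k-sparse direction hidden in an otherwise random subspace.
v0 = np.zeros(p)
v0[rng.choice(p, k, replace=False)] = rng.standard_normal(k)
v0 /= np.linalg.norm(v0)
Y, _ = np.linalg.qr(np.column_stack([v0, rng.standard_normal((p, n - 1))]))

def adm(Y, lam=None, iters=500, seed=0):
    """Alternating directions: soft-threshold in the ambient space, project back."""
    if lam is None:
        lam = 1.0 / np.sqrt(Y.shape[0])
    r = np.random.default_rng(seed)
    q = r.standard_normal(Y.shape[1])
    q /= np.linalg.norm(q)
    for _ in range(iters):
        x = Y @ q
        x = np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)  # promote sparsity
        w = Y.T @ x                                        # project onto the subspace
        if np.linalg.norm(w) == 0:
            break
        q = w / np.linalg.norm(w)                          # back to the sphere
    v = Y @ q
    return v / np.linalg.norm(v)

# A few random restarts; keep the result with smallest l1 norm (sparsest direction).
best = min((adm(Y, seed=s) for s in range(5)), key=lambda v: np.linalg.norm(v, 1))
overlap = abs(best @ v0)
```

With 5% sparsity in a 5-dimensional subspace of R^200, the recovered direction aligns with the planted vector up to sign; the paper's analysis explains why this succeeds well beyond the 1/√n barrier of convex relaxations.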


## Wednesday, April 29, 2015

### Complete Dictionary Recovery over the Sphere - implementation -

Ju just sent me the following:

Dear Igor,

Would you help mention our new paper on dictionary learning in your blog?

http://arxiv.org/abs/1504.06785

We show one can recover complete dictionaries with efficient algorithms when the coefficient matrix has up to linear sparsity (under suitable probability model).

Our earlier work

http://arxiv.org/abs/1412.4659

is also relevant, in the sense that in both papers the core technical problem we try to tackle is how to retrieve the sparsest vector(s) (directions) from a linear subspace.

Both papers demonstrate the power of nonconvex relaxation and optimization.

Thanks,
Absolutely Ju, especially if it includes implementations! Complete Dictionary Recovery over the Sphere by Ju Sun, Qing Qu, John Wright
We consider the problem of recovering a complete (i.e., square and invertible) matrix A0, from Y ∈ R^{n×p} with Y = A0 X0, provided X0 is sufficiently sparse. This recovery problem is central to the theoretical understanding of dictionary learning, which seeks a sparse representation for a collection of input signals, and finds numerous applications in modern signal processing and machine learning. We give the first efficient algorithm that provably recovers A0 when X0 has O(n) nonzeros per column, under a suitable probability model for X0. In contrast, prior results based on efficient algorithms provide recovery guarantees when X0 has only O(√n) nonzeros per column.
Our algorithmic pipeline centers around solving a certain nonconvex optimization problem with a spherical constraint, and hence is naturally phrased in the language of manifold optimization. To show this apparently hard problem is tractable, we first provide a geometric characterization of the high-dimensional objective landscape, which shows that with high probability there are no "spurious" local minima. This particular geometric structure allows us to design a Riemannian trust region algorithm over the sphere that provably converges to one local minimizer with an arbitrary initialization, despite the presence of saddle points. The geometric approach we develop here may also shed light on other problems arising from nonconvex recovery of structured signals.
The implementation is here: https://github.com/sunju/dl_focm
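A minimal numerical sketch of the geometric idea (my own toy code, not the authors' Riemannian trust-region implementation): for an orthogonal A0 and Bernoulli-Gaussian X0, one can find a column of A0 by minimizing a smooth ℓ1 surrogate of q^T Y over the unit sphere with plain projected gradient descent, which works precisely because the landscape has no spurious local minima.

```python
import numpy as np

rng = np.random.default_rng(2)
n, p, theta = 8, 2000, 0.1
A0, _ = np.linalg.qr(rng.standard_normal((n, n)))  # ground-truth orthonormal dictionary
X0 = rng.standard_normal((n, p)) * (rng.random((n, p)) < theta)  # Bernoulli-Gaussian codes
Y = A0 @ X0

def h(q, Y, mu=0.1):
    """Smooth sparsity surrogate: mean of mu*log cosh(q.Y/mu)."""
    return np.mean(mu * np.log(np.cosh(q @ Y / mu)))

def sphere_descent(Y, mu=0.1, step=0.2, iters=500, seed=0):
    r = np.random.default_rng(seed)
    q = r.standard_normal(Y.shape[0])
    q /= np.linalg.norm(q)
    for _ in range(iters):
        g = Y @ np.tanh(q @ Y / mu) / Y.shape[1]  # Euclidean gradient of h
        g -= (g @ q) * q                          # Riemannian gradient (tangent projection)
        q -= step * g
        q /= np.linalg.norm(q)                    # retract back onto the sphere
    return q

# A few restarts; keep the run with the smallest objective value.
q = min((sphere_descent(Y, seed=s) for s in range(3)), key=lambda v: h(v, Y))
# Success means q aligns with a column of A0, so that q.Y is one sparse row of X0.
align = np.max(np.abs(A0.T @ q))
```

The full pipeline then deflates (removes the found direction and repeats) and rounds the results back to a complete dictionary; the trust-region machinery in the paper is what turns this empirical behavior into a guarantee with arbitrary initialization.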


### High-speed flow microscopy using compressed sensing with ultrafast laser pulses

Wow !

High-speed flow microscopy using compressed sensing with ultrafast laser pulses by Bryan T. Bosworth, Jasper R. Stroud, Dung N. Tran, Trac D. Tran, Sang Chin, and Mark A. Foster
We demonstrate an imaging system employing continuous high-rate photonically-enabled compressed sensing (CHiRP-CS) to enable efficient microscopic imaging of rapidly moving objects with only a few percent of the samples traditionally required for Nyquist sampling. Ultrahigh-rate spectral shaping is achieved through chirp processing of broadband laser pulses and permits ultrafast structured illumination of the object flow. Image reconstructions of high-speed microscopic flows are demonstrated at effective rates up to 39.6 Gigapixel/sec from a 720-MHz sampling rate.
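The recovery side of such a system reduces to standard compressed sensing: the pseudorandom illumination patterns play the role of the sensing matrix, and a sparse scene is recovered from a few coded projections. A generic toy sketch (not the authors' reconstruction code, which handles the actual chirped-pulse patterns), using ISTA with a least-squares debiasing step:

```python
import numpy as np

rng = np.random.default_rng(3)
n, m, k = 256, 64, 5                  # pixels per line, measurements (25%), bright features
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.random(k) + 0.5
Phi = rng.choice([-1.0, 1.0], size=(m, n)) / np.sqrt(m)  # pseudorandom illumination patterns
y = Phi @ x_true                      # coded projections of one image line

def ista(Phi, y, lam=0.05, iters=2000):
    """Iterative soft-thresholding for the lasso: min ||Phi x - y||^2/2 + lam ||x||_1."""
    L = np.linalg.norm(Phi, 2) ** 2   # Lipschitz constant of the gradient
    x = np.zeros(Phi.shape[1])
    for _ in range(iters):
        x = x + Phi.T @ (y - Phi @ x) / L
        x = np.sign(x) * np.maximum(np.abs(x) - lam / L, 0.0)
    return x

x_hat = ista(Phi, y)
support = np.abs(x_hat) > 0.1         # debias: least squares on the detected support
x_hat[~support] = 0.0
x_hat[support] = np.linalg.lstsq(Phi[:, support], y, rcond=None)[0]
rel_err = np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true)
```

With 25% of the Nyquist samples the sparse line is recovered essentially exactly; the CHiRP-CS contribution is generating such patterns optically at multi-GHz rates so this arithmetic can keep up with a flowing sample.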


## Tuesday, April 28, 2015

### Statistical Image Recovery: A Message-Passing Perspective by Phil Schniter

Here is a nice review of what we know of AMP solvers so far: Statistical Image Recovery: A Message-Passing Perspective a presentation by Phil Schniter
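For readers new to the topic, the basic AMP iteration the slides build on fits in a few lines (a generic textbook sketch, not taken from the presentation): a soft-thresholding denoiser plus the Onsager correction term that distinguishes AMP from plain iterative thresholding.

```python
import numpy as np

rng = np.random.default_rng(4)
n, m, k = 500, 250, 25
A = rng.standard_normal((m, n)) / np.sqrt(m)   # i.i.d. Gaussian sensing matrix
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
y = A @ x_true

def amp(A, y, iters=30, alpha=1.5):
    m, n = A.shape
    x, z = np.zeros(n), y.copy()
    for _ in range(iters):
        sigma = np.linalg.norm(z) / np.sqrt(m)   # effective noise level estimate
        r = x + A.T @ z                          # pseudo-data: x_true + ~Gaussian noise
        x = np.sign(r) * np.maximum(np.abs(r) - alpha * sigma, 0.0)  # denoiser
        z = y - A @ x + z * np.count_nonzero(x) / m  # residual + Onsager correction
    return x

x_hat = amp(A, y)
rel_err = np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true)
```

The Onsager term keeps the pseudo-data r looking like the true signal in Gaussian noise at every iteration, which is what lets state evolution track the algorithm and what the image-recovery extensions in the slides (e.g., non-separable denoisers) build on.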


### ICML Workshop on Machine Learning Open Source Software 2015: Open Ecosystems

This is just a short notice but since Open Source Machine Learning software development is so important, I have decided to run this here today (please note the deadline).

# Workshop on Machine Learning Open Source Software 2015: Open Ecosystems

The ICML Workshop on Machine Learning Open Source Software (MLOSS) will be held in Lille, France on the 10th of July, 2015.

### Important Dates

• Submission Date: 28 April 2015, 23:59 UTC
• Notification of Acceptance: 11 May 2015
• Workshop date: 10 July 2015
Note that the submission deadline is a few days earlier than the ICML recommended deadline. This is to give our program committee a reasonable amount of time to review your submission.

### Description

Machine learning open source software (MLOSS) is one of the cornerstones of open science and reproducible research. Along with open access and open data, it enables free reuse and extension of current developments in machine learning. The mloss.org site exists to support a community creating a comprehensive open source machine learning environment, mainly by promoting new software implementations. This workshop aims to enhance the environment by fostering collaboration with the goal of creating tools that work with one another. Far from requiring integration into a single package, we believe that this kind of interoperability can also be achieved in a collaborative manner, which is especially suited to open source software development practices.
The workshop is aimed at all machine learning researchers who wish to have their algorithms and implementations included as a part of the greater open source machine learning environment. Continuing the tradition of well received workshops on MLOSS at NIPS 2006, NIPS 2008, ICML 2010 and NIPS 2013, we plan to have a workshop that is a mix of invited speakers, contributed talks and discussion/activity sessions. For 2015, we focus on building open ecosystems. Our invited speakers will illustrate the process for Python and Julia through presenting modern high-level high-performance computation engines, and we encourage submissions that showcase the benefits of multiple tools in the same ecosystem. All software presentations are required to include a live demonstration. The workshop will also include an active session (“hackathon”) for planning and starting to develop infrastructure for measuring software impact.
We have two confirmed invited speakers
• John Myles White (Facebook), lead developer of Julia statistics and machine learning (confirmed): “Julia for machine learning: high-level syntax with compiled-code speed”.
• Matthew Rocklin (Continuum Analytics), developer of Python computational tools, in particular Blaze (confirmed): “Blaze, a modern numerical engine with out-of-core and out-of-order computations”.

### Tentative Programme

• 2 hours of invited talks consisting of 30 minute tutorial (Gaël Varoquaux) and 2*45 min invited talks (John Myles White and Matthew Rocklin). Both invited speakers have confirmed that they will attend.
• 2 hours of submitted projects (contributed talks including a demo or spotlights + parallel demo session, depending on the number of high quality submissions)
• 1 hour unconference-style open discussion (topics voted by workshop participants)
• 1 hour hackathon/activity session on developing measurement of software impact
• 2*30 minute coffee breaks

### Call for Contributions

The organizing committee is currently seeking abstracts for talks at MLOSS 2015. MLOSS is a great opportunity for you to tell the community about your use, development, philosophy, or other activities related to open source software in machine learning. The committee will select several submitted abstracts for 20-minute talks.
All submissions must be made to https://www.easychair.org/conferences/?conf=mloss2015

#### Submission types

1. Software packages
Similar to the MLOSS track at JMLR, this includes (but is not limited to) numeric packages (e.g. R, Octave, Python), machine learning toolboxes and implementations of ML algorithms.
Submission format: a 1-page abstract which must contain a link to the project description on mloss.org. Any bells and whistles can go on your own project page; just be sure to provide the link on mloss.org.
Note: Projects must adhere to a recognized Open Source License (cf. http://www.opensource.org/licenses/ ) and the source code must have been released at the time of submission. Submissions will be reviewed based on the status of the project at the time of the submission deadline. If accepted, the presentation must include a software demo.
2. ML related projects
As the theme for this year is open ecosystems, projects of a more general nature such as software infrastructure or tools for general data analysis are encouraged. This category is open for position papers, interesting projects and ideas that may not be new software themselves, but link to machine learning and open source software.
Submission format: abstract with no page limit. Please note that there will be no proceedings, i.e. the abstracts will not be published.
We look forward to submissions that are novel, exciting and that appeal to the wider community.

### Program Committee

• Asa Ben-Hur (Colorado State University)
• Mathieu Blondel (NTT Communication Science Laboratories)
• Mikio Braun (Technical University of Berlin)
• Ryan Curtin (Georgia Tech)
• Alexandre Gramfort (Telecom ParisTech)
• James Hensman (University of Sheffield)
• Laurens van der Maaten (Facebook AI Research)
• Andreas Müller (New York University)
• Mark Reid (Australian National University)
• Peter Reutemann (University of Waikato)
• Konrad Rieck (University of Göttingen)
• Heiko Strathmann (University College London)
• Ameet Talwalkar (University of California LA)
• Lieven Vandenberghe (University of California LA)
• Aki Vehtari (Aalto University)
• Markus Weimer (Microsoft Research)

#### Organizers:

• Gaël Varoquaux
INRIA, France
• Antti Honkela
University of Helsinki, Helsinki Institute for Information Technology HIIT, Helsinki, Finland
• Cheng Soon Ong
Machine Learning Research Group, NICTA, Canberra, Australia


### Co-L1/Co-IRWL1: Iteratively Reweighted $\ell_1$ Approaches to Sparse Composite Regularization

Iteratively Reweighted $\ell_1$ Approaches to Sparse Composite Regularization by Rizwan Ahmad, Philip Schniter
Motivated by the observation that a given signal $\boldsymbol{x}$ admits sparse representations in multiple dictionaries $\boldsymbol{\Psi}_d$ but with varying levels of sparsity across dictionaries, we propose two new algorithms for the reconstruction of (approximately) sparse signals from noisy linear measurements. Our first algorithm, Co-L1, extends the well-known lasso algorithm from the L1 regularizer $\|\boldsymbol{\Psi x}\|_1$ to composite regularizers of the form $\sum_d \lambda_d \|\boldsymbol{\Psi}_d \boldsymbol{x}\|_1$ while self-adjusting the regularization weights $\lambda_d$. Our second algorithm, Co-IRW-L1, extends the well-known iteratively reweighted L1 algorithm to the same family of composite regularizers. We provide several interpretations of both algorithms: i) majorization-minimization (MM) applied to a non-convex log-sum-type penalty, ii) MM applied to an approximate $\ell_0$-type penalty, iii) MM applied to fully-Bayesian inference under a particular hierarchical prior, and iv) variational expectation-maximization (VEM) under a particular prior with deterministic unknown parameters. A detailed numerical study suggests that our proposed algorithms yield significantly improved recovery SNR when compared to their non-composite L1 and IRW-L1 counterparts.
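The paper's Co-L1/Co-IRW-L1 weight updates are derived in the text; as a baseline, here is the classic iteratively reweighted ℓ1 scheme they extend, sketched with a weighted ISTA inner solver (a generic illustration, not the authors' algorithm; the composite versions would additionally maintain one self-adjusted weight λ_d per dictionary Ψ_d).

```python
import numpy as np

rng = np.random.default_rng(5)
n, m, k = 200, 80, 8
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
y = A @ x_true

def weighted_ista(A, y, w, lam=1e-3, iters=1000):
    """Solve min ||Ax - y||^2/2 + lam * sum_i w_i |x_i| by proximal gradient."""
    L = np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = x + A.T @ (y - A @ x) / L
        x = np.sign(x) * np.maximum(np.abs(x) - lam * w / L, 0.0)
    return x

def irw_l1(A, y, outer=5, eps=0.1):
    w = np.ones(A.shape[1])           # first pass = ordinary lasso
    for _ in range(outer):
        x = weighted_ista(A, y, w)
        w = 1.0 / (np.abs(x) + eps)   # small coefficients get penalized harder
    return x

x_hat = irw_l1(A, y)
rel_err = np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true)
```

The reweighting step is exactly the majorization-minimization view mentioned in the abstract: each outer iteration majorizes a log-sum penalty by a weighted ℓ1 norm at the current estimate.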


## Monday, April 27, 2015

### Compressed Sensing Petrov-Galerkin Approximations for Parametric PDEs

Responding to a question I had on Twitter about the following paper and its connection with uncertainty quantification, Jean-Luc responded with the following:

Dear Igor,

Sorry for the late reply regarding your comment on Twitter. I preferred to reply per email as I'm guessing I'm going to go over the character limitation :)

Our work is directly related to problems in uncertainty quantification. The reason why it's not really obvious in this small note is that we had a 5 page restriction (this is mandatory for SampTA 2015) and decided to focus on the sampling/approximation results.

Why does it relate to uncertainty quantification? Consider the case where the 'parameters' y are in fact outputs of a random field with a certain probability distribution (see http://www.math.tamu.edu/~rdevore/publications/139.pdf or any other publications from Cohen, Devore, Schwab for more details), then you can recast the problem from uncertainty quantification into a parametric approach (in a sense): take a PCA / KL decomposition of the random field, and you get a parametric representation. Hence, yes, our results do apply for uncertainty quantification, even though they are phrased from another point of view right now.

We have other manuscripts in preparation that should be done (at least in a preprint form) by late June - I'll let you know when this is the case, if you have any interest? I will try to write a bit more details regarding the relation to uncertainty quantification and the relevance of our work on this topic.

Let me know if you have any questions,

Best,
Jean-Luc

...I forgot to mention, it might be interesting to link to the following papers:
Starting papers:
* Cohen, Devore, Schwab, Analytic regularity and polynomial approximation of parametric and stochastic elliptic PDEs: http://www.math.tamu.edu/~rdevore/publications/143.pdf
* Cohen, Devore, Schwab, Convergence rates of best N-term Galerkin approximations for a class of elliptic sPDEs: http://www.math.tamu.edu/~rdevore/publications/139.pdf
The previous two papers describe the general ideas and first results behind the compressibility of the polynomial chaos expansions of the solution map.

* Cohen, Chkifa, Schwab, Breaking the curse of dimensionality in sparse polynomial approximation of parametric PDEs: http://e-collection.library.ethz.ch/eserv/eth:47390/eth-47390-01.pdf

Compressed sensing Petrov-Galerkin:
* Doostan et al, A non-adaptive sparse approximation of pdes with stochastic inputs: http://ctr.stanford.edu/ResBriefs09/10_doostan.pdf (first numerical steps)
* Rauhut, Schwab, Compressive sensing Petrov-Galerkin approximation of high-dimensional parametric operator equations: http://www.mathc.rwth-aachen.de/~rauhut/files/csparampde.pdf (theoretical analysis)
* Bouchot et al, Compressed sensing Petrov-Galerkin approximations for Parametric PDEs (direct application from the previous paper)

There has been quite a bit of research lately also on L2 minimization and projections on polynomial spaces. But I guess it gets a little out of the scope here. I'll send you pointers if you're interested.

Cheers,
JL

We consider the computation of parametric solution families of high-dimensional stochastic and parametric PDEs. We review recent theoretical results on sparsity of polynomial chaos expansions of parametric solutions, and on compressed sensing based collocation methods for their efficient numerical computation.
With high probability, these randomized approximations realize best N-term approximation rates afforded by solution sparsity and are free from the curse of dimensionality, both in terms of accuracy and number of sample evaluations (i.e. PDE solves). Through various examples we illustrate the performance of Compressed Sensing Petrov-Galerkin (CSPG) approximations of parametric PDEs, for the computation of (functionals of) solutions of integral and differential operators on high-dimensional parameter spaces. The CSPG approximations reduce the number of PDE solves, as compared to Monte-Carlo methods, while being likewise nonintrusive, and being “embarrassingly parallel”, unlike dimension-adaptive collocation or Galerkin methods.
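The mechanics can be illustrated on a one-dimensional toy problem (my own sketch, far simpler than the PDE setting of the paper): a function with a sparse Chebyshev expansion is sampled at a few random points, and its coefficients are recovered greedily, mirroring how CSPG recovers sparse polynomial chaos coefficients from a small number of PDE solves.

```python
import numpy as np

rng = np.random.default_rng(6)
deg, m, k = 50, 40, 4                  # basis size, random samples, sparsity
support = rng.choice(deg, k, replace=False)
c_true = np.zeros(deg)
c_true[support] = rng.uniform(1.0, 2.0, k) * rng.choice([-1.0, 1.0], k)

pts = rng.uniform(-1, 1, m)            # random "collocation" points
A = np.cos(np.outer(np.arccos(pts), np.arange(deg)))  # Chebyshev: T_j(y) = cos(j arccos y)
u = A @ c_true                         # solution samples at those points

def omp(A, y, k):
    """Orthogonal matching pursuit for a k-sparse coefficient vector."""
    norms = np.linalg.norm(A, axis=0)
    residual, S = y.copy(), []
    for _ in range(k):
        corr = np.abs(A.T @ residual) / norms
        corr[S] = -np.inf              # never re-select an atom
        S.append(int(np.argmax(corr)))
        c_S, *_ = np.linalg.lstsq(A[:, S], y, rcond=None)
        residual = y - A[:, S] @ c_S
    c = np.zeros(A.shape[1])
    c[S] = c_S
    return c

c_hat = omp(A, u, k + 2)               # a couple of extra iterations adds robustness
rel_err = np.linalg.norm(c_hat - c_true) / np.linalg.norm(c_true)
```

In the PDE setting, each "sample" is a full Petrov-Galerkin solve at a randomly drawn parameter, which is why cutting the number of samples directly cuts the dominant cost.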


## Thursday, April 23, 2015

### AutoML Challenge: Python Notebooks for Round 1 and more...

In the Sunday Morning Insight entry entitled The Hardest Challenges We Should be Unwilling to Postpone, I mentioned a challenge set up by Isabelle Guyon entitled the AutoML challenge ( http://codalab.org/AutoML, her presentation is here). In short, the idea is to have a Kaggle-like challenge that features several datasets of increasing difficulty and see how algorithm entries fare with these different datasets. Deep down, the algorithm needs to pay attention to its own running time and have a nice way of automatically selecting relevant features.

With Franck, we decided to use the mighty power of the large membership of the Paris Machine Learning meetup (Top 5 in the world) to help set up a day-long hackathon so that local teams could participate in the challenge. Round 1 of the challenge is now over, and we are in the Tweakathon1 stage, where you can submit code that will eventually be run automatically on May 15 for AutoML2. From here:

Tweakathon1
Continue practicing on the same data (the phase 1 data are now available for download from the 'Get Data' page). In preparation for phase 2, submit code capable of producing predictions on both VALIDATION AND TEST DATA. The leaderboard shows scores on phase 1 validation data only.

AutoML2

Start: May 15, 2015, 11:59 p.m.

Description: INTERMEDIATE phase on multiclass classification problems. Blind test of the code on NEW DATA: There is NO NEW SUBMISSION. The last code submitted in phase 1 is run automatically on the new phase 2 datasets. [+] Prize winning phase.

Tweakathon2
Start: May 16, 2015, 11:59 p.m.

Description: Continue practicing on the same data (the data are now available for download from the 'Get Data' page). In preparation for phase 3, submit code capable of producing predictions on both VALIDATION AND TEST DATA. The leaderboard shows scores on phase 2 validation data only.

Here are some of the presentations made during the hackathon and some of the attendant Python notebooks released for Tweakathon1:

The page for the hackathon is here. A big thank you to Pierre Roussel for hosting us at ESPCI ParisTech and to the coaches.


### FPA-CS: Focal Plane Array-based Compressive Imaging in Short-wave Infrared

George sent me the following a few days ago:

Dear Igor,

My name is Huaijin (George) Chen, a PhD student at Rice working with Ashok Veeraraghavan. We recently got our paper "FPA-CS: Focal Plane Array-based Compressive Imaging in Short-wave Infrared" accepted at CVPR (arXiv pre-print attached). We were able to achieve 1 megapixel video at 32 fps using 64x64 focal plane sensor in our system. We were wondering, if it is possible, you could kindly mention it on your renowned Nuit Blanche website? Thank you so much!

http://arxiv.org/abs/1504.04085

Sincerely,

-George
Sure George, it is important to notice when CS imaging hardware implementations go beyond the traditional plain-vanilla single-pixel camera concept. Without further ado: FPA-CS: Focal Plane Array-based Compressive Imaging in Short-wave Infrared by Huaijin Chen, M. Salman Asif, Aswin C. Sankaranarayanan, Ashok Veeraraghavan

Cameras for imaging in short and mid-wave infrared spectra are significantly more expensive than their counterparts in visible imaging. As a result, high-resolution imaging in those spectra remains beyond the reach of most consumers. Over the last decade, compressive sensing (CS) has emerged as a potential means to realize inexpensive short-wave infrared cameras. One approach for doing this is the single-pixel camera (SPC) where a single detector acquires coded measurements of a high-resolution image. A computational reconstruction algorithm is then used to recover the image from these coded measurements. Unfortunately, the measurement rate of a SPC is insufficient to enable imaging at high spatial and temporal resolutions.

We present a focal plane array-based compressive sensing (FPA-CS) architecture that achieves high spatial and temporal resolutions. The idea is to use an array of SPCs that sense in parallel to increase the measurement rate, and consequently, the achievable spatio-temporal resolution of the camera. We develop a proof-of-concept prototype in the short-wave infrared using a sensor with 64×64 pixels; the prototype provides a 4096× increase in the measurement rate compared to the SPC and achieves a megapixel resolution at video rate using CS techniques.


## Wednesday, April 22, 2015

### CSJob: Post-Doc: Learning Representations for Large Scale Multivariate Data

Jerome Bobin sent me the following opportunity the other day:

POST-DOC : LEARNING REPRESENTATIONS FOR LARGE-SCALE MULTIVARIATE DATA.

The concept of sparsity and sparse signal representations has led to the development of very efficient analysis methods in imaging science. Most state-of-the-art solutions to classical inverse problems in imaging are grounded on sparsity: denoising, deconvolution, inpainting, blind source separation, etc. [SMF10]. Fixed or analytic signal representations, such as the celebrated wavelet transforms, curvelet frames, bandlets, to name only a few [SMF10], allow one to compressively encode the inner geometrical structures of generic signals from a few basis elements or atoms. Since compressibility or sparsity is the key principle, dictionary learning techniques [AEB06,RPE13] have more recently been introduced to provide data-driven, and therefore more efficient, sparse signal representations.

The appeal of dictionary learning techniques lies in their ability to capture a very wide range of signal/image content or morphologies, which makes them the perfect analysis tools for analyzing complex real-world datasets. However, these methods have seldom been extended to learn sparse representations of multivariate data such as multi/hyperspectral data, which play a prominent role in scientific fields as different as remote sensing, biomedical imaging or astrophysics. Studying extensions of dictionary learning techniques to derive sparse representations that are specifically tailored for multispectral data is therefore fundamental in imaging science. In this context, the goals of this research project are:

• Extend dictionary learning techniques to analyze multi/hyperspectral data. We will particularly focus on studying dedicated learning strategies to extract sparse multivariate representations.

• Apply and evaluate the proposed representations for solving key inverse problems in multispectral imaging such as missing data interpolation (inpainting), reconstruction from incomplete and incoherent measurements (compressed sensing), etc.

• A particular attention will be paid to the design of learning procedures that can perform in the large-scale setting. This implies that the project will include investigating computationally efficient learning/solving algorithms, with a specific focus on modern-day methods grounded upon non-smooth convex optimization.

These developments will be applied to analyze real-world datasets in astrophysics, which can include the Planck data 1 and the Fermi/LAT data 2.

Any candidate must have a PhD and have a strong background in image/signal processing, especially in sparse signal processing. A good knowledge of convex optimization is a plus.

• Contact: jbobin@cea.fr or florent.sureau@cea.fr
• Laboratory: CEA/IRFU/Cosmostat in Saclay http://www.cosmostat.org
• Financing: European project DEDALE http://dedale.cosmostat.org
• Duration: 3 years : 2015-2018
• Applications are expected prior to May 31st, 2015.

[AEB06] M. Aharon, M. Elad, and A. Bruckstein. K-SVD: An algorithm for designing overcomplete dictionaries for sparse representation. IEEE Transactions on Signal Processing, 54(11):4311–4322, 2006.

[RPE13] Ron Rubinstein, Tomer Peleg, and Michael Elad. Analysis K-SVD: A Dictionary-Learning Algorithm for the Analysis Sparse Model. IEEE Transactions on Signal Processing, 61(3):661–677, 2013.
[SMF10] J.-L. Starck, F. Murtagh, and M.J. Fadili. Sparse Image and Signal Processing. Cambridge University Press, 2010.

1 http://sci.esa.int/planck/

2 http://fermi.gsfc.nasa.gov


## Tuesday, April 21, 2015

### Thesis: Empirical-Bayes Approaches to Recovery of Structured Sparse Signals via Approximate Message Passing, Jeremy Vila

In recent years, there have been massive increases in both the dimensionality and sample sizes of data due to ever-increasing consumer demand coupled with relatively inexpensive sensing technologies. These high-dimensional datasets bring challenges such as complexity, along with numerous opportunities. Though many signals of interest live in a high-dimensional ambient space, they often have a much smaller inherent dimensionality which, if leveraged, leads to improved recoveries. For example, the notion of sparsity is a requisite in the compressive sensing (CS) field, which allows for accurate signal reconstruction from sub-Nyquist sampled measurements given certain conditions. When recovering a sparse signal from noisy compressive linear measurements, the distribution of the signal’s non-zero coefficients can have a profound effect on recovery mean-squared error (MSE). If this distribution is a priori known, then one could use computationally efficient approximate message passing (AMP) techniques that yield approximate minimum MSE (MMSE) estimates or critical points to the maximum a posteriori (MAP) estimation problem. In practice, though, the distribution is unknown, motivating the use of robust, convex algorithms such as LASSO (which is nearly minimax optimal) at the cost of significantly larger MSE for non-least-favorable distributions. As an alternative, this dissertation focuses on empirical-Bayesian techniques that simultaneously learn the underlying signal distribution using the expectation-maximization (EM) algorithm while recovering the signal. These techniques are well-justified in the high-dimensional setting since, in the large system limit under specific problem conditions, the MMSE version of AMP’s posteriors converge to the true posteriors and a generalization of the resulting EM procedure yields consistent parameter estimates. Furthermore, in many practical applications, we can exploit additional signal structure beyond simple sparsity for improved MSE.
In this dissertation, we investigate signals that are non-negative, obey linear equality constraints, and exhibit amplitude correlation/structured sparsity across their elements. To perform statistical inference on these structured signals, we first demonstrate how to incorporate these structures into our Bayesian model, then employ a technique called "turbo" approximate message passing on the underlying factor graph. Specifically, we partition the factor graph into Markov and generalized linear model subgraphs, the latter of which can be efficiently implemented using approximate message passing methods, and combine the subgraphs using a "turbo" message passing approach. Numerical experiments on compressive sensing and hyperspectral unmixing applications confirm the state-of-the-art performance of our approach, in both reconstruction error and runtime, on both synthetic and real-world datasets.
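For readers new to AMP, the iteration the abstract builds on can be sketched with a generic soft-thresholding denoiser. This is a minimal, hypothetical sketch of plain AMP for sparse recovery, not the EM-GM-AMP machinery of the dissertation; the threshold scaling `alpha` is a tuning knob I chose for the demo:

```python
import numpy as np

def amp_soft_threshold(A, y, iters=50, alpha=1.4):
    """Plain AMP for sparse recovery from y = A @ x + noise.

    Soft thresholding with a residual-scaled threshold; the Onsager
    term is what distinguishes AMP from simple iterative thresholding.
    """
    m, n = A.shape
    x, z = np.zeros(n), y.copy()
    for _ in range(iters):
        tau = alpha * np.linalg.norm(z) / np.sqrt(m)   # threshold from residual energy
        r = x + A.T @ z                                # pseudo-data
        x = np.sign(r) * np.maximum(np.abs(r) - tau, 0.0)
        z = y - A @ x + (np.count_nonzero(x) / m) * z  # Onsager correction
    return x

# Demo: recover a 10-sparse vector from 160 Gaussian measurements.
rng = np.random.default_rng(0)
n, m, k = 400, 160, 10
A = rng.normal(size=(m, n)) / np.sqrt(m)
x0 = np.zeros(n)
x0[rng.choice(n, k, replace=False)] = 3.0 * rng.normal(size=k)
y = A @ x0 + 0.01 * rng.normal(size=m)
x_hat = amp_soft_threshold(A, y)
rel_err = np.linalg.norm(x_hat - x0) / np.linalg.norm(x0)
```

The empirical-Bayes step of the dissertation would, in addition, re-estimate the signal prior between such iterations via EM.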

Join the CompressiveSensing subreddit or the Google+ Community and post there !
Liked this entry ? subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on Linkedin.

### Streaming: Memory Limited Matrix Completion with Noise, Verifiable Stream Computation, Count-Min-Log sketch

Streaming is bound to take center stage if we are talking about Big Data...or Data as we call it in Texas. Today, we have an algorithm for matrix completion, a paper on verifying the properties of certain streams, and an approximation of the count-min sketch. Without further ado:

Streaming, Memory Limited Matrix Completion with Noise by Se-Young Yun, Marc Lelarge, Alexandre Proutiere

In this paper, we consider the streaming memory-limited matrix completion problem when the observed entries are noisy versions of a small random fraction of the original entries. We are interested in scenarios where the matrix is so large that it is hard to store and manipulate. Here, columns of the observed matrix are presented sequentially and the goal is to complete the missing entries after one pass on the data with limited memory space and limited computational complexity. We propose a streaming algorithm which produces an estimate of the original matrix with a vanishing mean square error, uses memory space scaling linearly with the ambient dimension of the matrix, i.e. the memory required to store the output alone, and performs a number of computations proportional to the number of non-zero entries of the input matrix.
Verifiable Stream Computation and Arthur–Merlin Communication by Amit Chakrabarti, Graham Cormode, Andrew McGregor, Justin Thaler, Suresh Venkatasubramanian
In the setting of streaming interactive proofs (SIPs), a client (verifier) needs to compute a given function on a massive stream of data, arriving online, but is unable to store even a small fraction of the data. It outsources the processing to a third party service (prover), but is unwilling to blindly trust answers returned by this service. Thus, the service cannot simply supply the desired answer; it must convince the verifier of its correctness via a short interaction after the stream has been seen.

In this work we study "barely interactive" SIPs. Specifically, we show that two or three rounds of interaction suffice to solve several query problems --- including Index, Median, Nearest Neighbor Search, Pattern Matching, and Range Counting --- with polylogarithmic space and communication costs. Such efficiency with O(1) rounds of interaction was thought to be impossible based on previous work.

On the other hand, we initiate a formal study of the limitations of constant-round SIPs by introducing a new hierarchy of communication models called Online Interactive Proofs (OIPs). The online nature of these models is analogous to the streaming restriction placed upon the verifier in an SIP. We give upper and lower bounds that (1) characterize, up to quadratic blowups, every finite level of the OIP hierarchy in terms of other well-known communication complexity classes, (2) separate the first four levels of the hierarchy, and (3) reveal that the hierarchy collapses to the fourth level. Our study of OIPs reveals marked contrasts and some parallels with the classic Turing Machine theory of interactive proofs, establishes limits on the power of existing techniques for developing constant-round SIPs, and provides a new characterization of (non-online) Arthur–Merlin communication in terms of an online model.

Comment: Some of the results in this paper appeared in an earlier technical report (http://eccc.hpi-web.de/report/2013/180/). That report has been subsumed by this manuscript and an upcoming manuscript by Thaler titled "Semi-Streaming Algorithms for Annotated Graphs Streams".

Count-Min-Log sketch: Approximately counting with approximate counters by Guillaume Pitel, Geoffroy Fouquier

Count-Min Sketch is a widely adopted algorithm for approximate event counting in large scale processing. However, the original version of the Count-Min Sketch (CMS) suffers from some deficiencies, especially if one is interested in the low-frequency items, such as in text-mining related tasks. Several variants of CMS have been proposed to compensate for the high relative error for low-frequency events, but the proposed solutions tend to correct the errors instead of preventing them. In this paper, we propose the Count-Min-Log sketch, which uses logarithm-based, approximate counters instead of linear counters to improve the average relative error of CMS at a constant memory footprint.
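The core idea is easy to prototype: replace each cell of a Count-Min sketch with a Morris-style log counter that is incremented with probability base^(-c), and decode a cell at value c as (base^c - 1)/(base - 1). The sketch below is my simplified reading of that idea (with a conservative update), not the paper's exact variant:

```python
import random

class CountMinLog:
    """Count-Min sketch whose cells hold Morris-style log counters.

    A cell at value c is bumped with probability base**(-c) and decodes to
    (base**c - 1) / (base - 1), so each add increases the decoded value
    by 1 in expectation, at the cost of multiplicative noise.
    """
    def __init__(self, width=2048, depth=4, base=1.5, seed=0):
        self.w, self.d, self.base = width, depth, base
        self.cells = [[0] * width for _ in range(depth)]
        self.seeds = [hash((seed, i)) for i in range(depth)]
        self.rng = random.Random(seed)

    def _cols(self, item):
        return [hash((s, item)) % self.w for s in self.seeds]

    def _decode(self, c):
        return (self.base ** c - 1) / (self.base - 1)

    def add(self, item):
        cols = self._cols(item)
        cmin = min(self.cells[i][j] for i, j in enumerate(cols))
        if self.rng.random() < self.base ** (-cmin):   # probabilistic bump
            for i, j in enumerate(cols):
                if self.cells[i][j] == cmin:           # conservative update
                    self.cells[i][j] += 1

    def query(self, item):
        return self._decode(min(self.cells[i][j]
                                for i, j in enumerate(self._cols(item))))
```

With base close to 1, the log counters approach exact counting; larger bases trade accuracy for fewer bits per cell, which is where the memory savings come from.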
Image Credit: NASA/JPL-Caltech/Space Science Institute

N00238257.jpg was taken on March 28, 2015 and received on Earth March 31, 2015. The camera was pointing toward IAPETUS, and the image was taken using the BL1 and GRN filters. This image has not been validated or calibrated.


## Monday, April 20, 2015

### SLOPE is Adaptive to Unknown Sparsity and Asymptotically Minimax - implementation -

SLOPE is Adaptive to Unknown Sparsity and Asymptotically Minimax by Weijie Su, Emmanuel Candes
We consider high-dimensional sparse regression problems in which we observe $y = X \beta + z$, where $X$ is an $n \times p$ design matrix and $z$ is an $n$-dimensional vector of independent Gaussian errors, each with variance $\sigma^2$. Our focus is on the recently introduced SLOPE estimator (Bogdan et al., 2014), which regularizes the least-squares estimates with the rank-dependent penalty $\sum_{1 \le i \le p} \lambda_i |\hat \beta|_{(i)}$, where $|\hat \beta|_{(i)}$ is the $i$th largest magnitude of the fitted coefficients. Under Gaussian designs, where the entries of $X$ are i.i.d. $\mathcal{N}(0, 1/n)$, we show that SLOPE, with weights $\lambda_i$ just about equal to $\sigma \cdot \Phi^{-1}(1-iq/(2p))$ ($\Phi^{-1}(\alpha)$ is the $\alpha$th quantile of a standard normal and $q$ is a fixed number in $(0,1)$) achieves a squared error of estimation obeying $\sup_{\|\beta\|_0 \le k} \,\, \mathbb{P}\left(\| \hat\beta_{SLOPE} - \beta \|^2 > (1+\epsilon) \, 2\sigma^2 k \log(p/k) \right) \longrightarrow 0$ as the dimension $p$ increases to $\infty$, and where $\epsilon > 0$ is an arbitrary small constant. This holds under weak assumptions on the sparsity level $k$ and is sharp in the sense that this is the best possible error {\em any} estimator can achieve. A remarkable feature is that SLOPE does not require any knowledge of the degree of sparsity, and yet automatically adapts to yield optimal total squared errors over a wide range of sparsity classes. We are not aware of any other estimator with this property.
Code and data are available from the paper's webpage.
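The weight sequence in the abstract is straightforward to compute; here is a minimal sketch using the standard-normal quantile from the Python standard library (the `slope_penalty` helper, which just evaluates the sorted-ℓ1 penalty, is mine):

```python
import numpy as np
from statistics import NormalDist

def slope_weights(p, q=0.05, sigma=1.0):
    """The abstract's weights lambda_i = sigma * Phi^{-1}(1 - i*q/(2p)),
    a Benjamini-Hochberg-style sequence, strictly decreasing in i."""
    ppf = NormalDist().inv_cdf
    return np.array([sigma * ppf(1 - i * q / (2 * p)) for i in range(1, p + 1)])

def slope_penalty(beta, lam):
    """Sorted-L1 penalty sum_i lambda_i * |beta|_(i), largest magnitude first."""
    return float(lam @ np.sort(np.abs(beta))[::-1])

lam = slope_weights(p=1000)   # lam[0] ~ 4.06 down to lam[-1] ~ 1.96
```

Because the largest coefficient is matched with the largest weight, SLOPE penalizes big fitted coefficients more aggressively than LASSO's flat penalty, which is what drives its adaptivity to unknown sparsity.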


### Scale Up Nonlinear Component Analysis with Doubly Stochastic Gradients

Here is a mix of stochastic gradient descent and random features to approximate kernel PCA and more...

Scale Up Nonlinear Component Analysis with Doubly Stochastic Gradients by Bo Xie, Yingyu Liang, Le Song

Nonlinear component analysis such as kernel Principal Component Analysis (KPCA) and kernel Canonical Correlation Analysis (KCCA) are widely used in machine learning, statistics and data analysis, and they serve as invaluable preprocessing tools for various purposes such as data exploration, dimension reduction and feature extraction.
However, existing algorithms for nonlinear component analysis cannot scale up to millions of data points due to prohibitive computation and memory requirements. There are some recent attempts to scale up kernel versions of component analysis using random feature approximations. However, to obtain high quality solutions, the number of required random features can be the same order of magnitude as the number of data points, making such an approach not directly applicable to the regime with millions of data points.
We propose a simple, computationally efficient, and memory friendly algorithm based on the "doubly stochastic gradients" to scale up a range of kernel nonlinear component analysis, such as kernel PCA, CCA, SVD and latent variable model estimation. Despite the non-convex nature of these problems, we are able to provide theoretical guarantees that the algorithm converges at the rate $\tilde{O}(1/t)$ to the global optimum, even for the top $k$ eigen subspace. We demonstrate the effectiveness and scalability of our algorithm on large scale synthetic and real world datasets.
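The random-feature half of the recipe is easy to illustrate: with random Fourier features approximating an RBF kernel, kernel PCA reduces to linear PCA in the feature space. The sketch below uses an exact SVD in place of the paper's doubly stochastic solver (which would also sample minibatches of data), so it only conveys the approximation idea:

```python
import numpy as np

def rff(X, D, gamma, seed=0):
    """Random Fourier features for the RBF kernel
    k(x, y) = exp(-gamma * ||x - y||^2), so z(x) @ z(y) ~= k(x, y)."""
    rng = np.random.default_rng(seed)
    W = rng.normal(scale=np.sqrt(2 * gamma), size=(X.shape[1], D))
    b = rng.uniform(0, 2 * np.pi, D)
    return np.sqrt(2.0 / D) * np.cos(X @ W + b)

# Approximate kernel PCA: linear PCA in random-feature space.
rng = np.random.default_rng(1)
X = rng.normal(size=(500, 10))
gamma = 0.1
Z = rff(X, D=300, gamma=gamma)
Zc = Z - Z.mean(axis=0)                      # center in feature space
_, _, Vt = np.linalg.svd(Zc, full_matrices=False)
components = Vt[:5]                          # top-5 approximate kernel principal axes
scores = Zc @ components.T                   # nonlinear component scores per point
```

The paper's contribution is to avoid materializing `Z` at all: features and data points are both sampled on the fly, with the $\tilde{O}(1/t)$ convergence guarantee to the top eigen subspace.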


## Saturday, April 18, 2015

### Saturday Morning Video: Towards a Learning Theory of Causation - Implementation -

Here is the video:

We pose causal inference as the problem of learning to classify probability distributions. In particular, we assume access to a collection $\{(S_i, l_i)\}_{i=1}^n$, where each $S_i$ is a sample drawn from the probability distribution of $X_i \times Y_i$, and $l_i$ is a binary label indicating whether "$X_i \to Y_i$" or "$X_i \leftarrow Y_i$". Given these data, we build a causal inference rule in two steps. First, we featurize each $S_i$ using the kernel mean embedding associated with some characteristic kernel. Second, we train a binary classifier on such embeddings to distinguish between causal directions. We present generalization bounds showing the statistical consistency and learning rates of the proposed approach, and provide a simple implementation that achieves state-of-the-art cause-effect inference. Furthermore, we extend our ideas to infer causal relationships between more than two variables.
The code is here.
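The two-step rule can be mocked up on toy data: featurize each sample by the empirical mean of a random Fourier feature map (an estimate of the kernel mean embedding), then classify embeddings by direction. The sketch below is a hypothetical illustration that substitutes a nearest-centroid rule for the paper's trained classifier and uses synthetic quadratic cause-effect pairs:

```python
import numpy as np

rng = np.random.default_rng(0)
D = 100                                      # number of random Fourier features
W = rng.normal(size=(2, D))
b = rng.uniform(0, 2 * np.pi, D)

def embed(S):
    """Empirical kernel mean embedding of a sample S (n x 2 array of
    (x, y) pairs) under a random Fourier feature map."""
    return np.sqrt(2.0 / D) * np.cos(S @ W + b).mean(axis=0)

def sample(direction):
    """One labelled sample: direction 1 means X -> Y (y = x^2 + noise);
    direction 0 is the same mechanism with columns swapped (Y -> X)."""
    c = rng.normal(size=200)
    e = c ** 2 + 0.1 * rng.normal(size=200)
    return embed(np.column_stack([c, e] if direction else [e, c]))

train = [(sample(int(d)), int(d)) for d in rng.integers(0, 2, 200)]
mu = [np.mean([f for f, d in train if d == k], axis=0) for k in (0, 1)]

def classify(feat):
    """Nearest-centroid stand-in for the paper's trained binary classifier."""
    return int(np.linalg.norm(feat - mu[1]) < np.linalg.norm(feat - mu[0]))
```

Because each sample is mapped to a single fixed-length vector, any off-the-shelf classifier can replace the centroid rule, which is exactly what makes the "classify distributions" framing practical.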


### Three biographies: Ken Case, Charles Johnson and Leo Breiman

It's always nice to get some context on things that happened in the past. Here are three biographies that I have read recently which fit that bill.


## Friday, April 17, 2015

### The Glitter Telescopes

Dick just let me know of the following:

Dear Igor,
You might want to post:

Landau, E. (2015). Glitter Cloud May Serve as Space Mirror.  http://www.jpl.nasa.gov/news/news.php?feature=4553&utm_source=iContact&utm_medium=email&utm_campaign=NASAJPL&utm_content=daily20150415-1

Yours, -Dick Gordon DickGordonCan@gmail.com
Good catch, thank you Dick !

The NPR piece below mentions computational imaging for reconstructing the image, but the papers on Grover Swartzlander's publication page (including the one below) do not seem to indicate much in the way of L1 or similar regularization. It's just a question of time. Depending on how much they know about the glittering particles, it might be a compressive sensing or a compressive phase retrieval problem. Time will tell, but I love the connection with some of the technologies featured in the "These Technologies Do Not Exist" page.

Image restoration from a sequence of random masks by Xiaopeng Peng ; Garreth J. Ruane ; Alexandra B. Artusio-Glimpse ; Grover A. Swartzlander

We experimentally explored the reconstruction of the image of two point sources using a sequence of random aperture phase masks. The speckled intensity profiles were combined using an improved shift-and-add and multi-frame blind deconvolution to achieve a near diffraction limited image for broadband light (600-670 nm). Using a numerical model we also explored various algorithms in the presence of noise and phase aberration.
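The shift-and-add step mentioned in the abstract is simple to sketch: re-center every short-exposure frame on its brightest pixel and average, which partially undoes the random tip/tilt between frames. This is only the classic baseline; the paper improves on it and couples it with multi-frame blind deconvolution:

```python
import numpy as np

def shift_and_add(frames):
    """Classic shift-and-add: re-center each frame on its brightest pixel
    and average the stack."""
    acc = np.zeros_like(frames[0], dtype=float)
    cy, cx = acc.shape[0] // 2, acc.shape[1] // 2
    for f in frames:
        py, px = np.unravel_index(np.argmax(f), f.shape)
        acc += np.roll(np.roll(f, cy - py, axis=0), cx - px, axis=1)
    return acc / len(frames)

# Demo: a point source randomly translated in each of 50 noisy frames.
rng = np.random.default_rng(0)
truth = np.zeros((32, 32))
truth[16, 16] = 1.0
frames = []
for _ in range(50):
    dy, dx = rng.integers(-5, 6, size=2)
    frames.append(np.roll(np.roll(truth, dy, 0), dx, 1)
                  + 0.05 * rng.random((32, 32)))
sharp = shift_and_add(frames)
```

With a random phase mask in front of the aperture, each frame is a speckle pattern rather than a clean point image, which is why the blind-deconvolution stage is needed on top of the alignment.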

Other resources:
