Thursday, March 31, 2016

4000th post on Nuit Blanche

I just noticed that yesterday's post was the 4,000th blog entry posted on Nuit Blanche since its inception. I know... it's just a number.
 
Potentially of interest:
 
Nuit Blanche community:
@Google+(1743) || @Facebook(151 likes) || @Reddit (1166)
Compressive Sensing @LinkedIn (3501)
Advanced Matrix Factorization @Linkedin (1118)

Paris Machine Learning
@Meetup.com (3489 members) || @archives || @LinkedIn (1059) || @Google+(292) ||
@Facebook (120) || @Twitter(474 followers)
 
 
 
 
 
Join the CompressiveSensing subreddit or the Google+ Community or the Facebook page and post there !
Liked this entry ? subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on Linkedin.

A Comparison between Deep Neural Nets and Kernel Acoustic Models for Speech Recognition

Random Features and Deep Neural Networks comparison in acoustic models:



A Comparison between Deep Neural Nets and Kernel Acoustic Models for Speech Recognition by Zhiyun Lu, Dong Guo, Alireza Bagheri Garakani, Kuan Liu, Avner May, Aurelien Bellet, Linxi Fan, Michael Collins, Brian Kingsbury, Michael Picheny, Fei Sha

We study large-scale kernel methods for acoustic modeling and compare to DNNs on performance metrics related to both acoustic modeling and recognition. Measuring perplexity and frame-level classification accuracy, kernel-based acoustic models are as effective as their DNN counterparts. However, on token error rates, DNN models can be significantly better. We have discovered that this might be attributed to DNN's unique strength in reducing both the perplexity and the entropy of the predicted posterior probabilities. Motivated by our findings, we propose a new technique, entropy regularized perplexity, for model selection. This technique can noticeably improve the recognition performance of both types of models, and reduces the gap between them. While effective on Broadcast News, this technique could also be applicable to other tasks.
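Kernel acoustic models in this line of work are typically built on random Fourier features in the style of Rahimi and Recht. Here is a hedged sketch of that building block; the feature dimension and kernel bandwidth are illustrative choices, not the paper's settings:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_fourier_features(X, n_features, gamma, rng):
    """Approximate the RBF kernel k(x, y) = exp(-gamma * ||x - y||^2)
    with explicit random cosine features (Rahimi & Recht style)."""
    d = X.shape[1]
    W = rng.normal(scale=np.sqrt(2.0 * gamma), size=(d, n_features))
    b = rng.uniform(0.0, 2.0 * np.pi, size=n_features)
    return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)

X = rng.normal(size=(5, 10))
Z = random_fourier_features(X, n_features=20000, gamma=0.5, rng=rng)
K_approx = Z @ Z.T                                # plain inner products of features
sq_dists = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
K_exact = np.exp(-0.5 * sq_dists)
print(np.abs(K_approx - K_exact).max())           # small, shrinks as 1/sqrt(n_features)
```

A linear model trained on Z then behaves like a kernel machine, which is what makes these models competitive with DNNs at scale.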


 
 

Wednesday, March 30, 2016

Miniature Compressive Ultra-spectral Imaging System Utilizing a Single Liquid Crystal Phase Retarder

Ah! Here comes the tsunami. Multispectral was fine, and back in 2007 we already noted that hyperspectral imaging (Hyperion on EO-1) overloaded distribution channels such as TDRSS (hence the issue of Making Hyperspectral Imaging Mainstream). Just imagine what could be done if, instead of 10 or 200 spectral bands, you could get 1,000 spectral bands on a CubeSat. We may not be far from that reality according to today's entry, woohoo!





Miniature Compressive Ultra-spectral Imaging System Utilizing a Single Liquid Crystal Phase Retarder by Isaac August, Yaniv Oiknine, Marwan AbuLeil, Ibrahim Abdulhalim, and Adrian Stern

Spectroscopic imaging has been proved to be an effective tool for many applications in a variety of fields, such as biology, medicine, agriculture, remote sensing and industrial process inspection. However, due to the demand for high spectral and spatial resolution it became extremely challenging to design and implement such systems in a miniaturized and cost effective manner. Using a Compressive Sensing (CS) setup based on a single variable Liquid Crystal (LC) retarder and a sensor array, we present an innovative Miniature Ultra-Spectral Imaging (MUSI) system. The LC retarder acts as a compact wide band spectral modulator. Within the framework of CS, a sequence of spectrally modulated images is used to recover ultra-spectral image cubes. Using the presented compressive MUSI system, we demonstrate the reconstruction of gigapixel spatio-spectral image cubes from spectral scanning shots numbering an order of magnitude less than would be required using conventional systems.

And yes, there is also the issue of Making Hyperspectral Imaging Mainstream.


Tuesday, March 29, 2016

There's something about being positive: Robust Nonnegative Sparse Recovery and the Nullspace Property of 0/1 Measurements

Following up on this morning's entry, the phase transition with some noise seems to show that a nonnegative least squares solver does very well in recovering positive sparse signals with no l1 regularization. Let us note that there is something peculiar about being positive; see Dustin's blog entry on A variant on the compressed sensing of Yves Meyer.

As a side note, I wonder whether some of Phil Schniter et al.'s investigations or Justin et al.'s results in the group-sparse setting should (or should not) be included in the references of this preprint. Let us also note that using a 0/1 ensemble also imparts a nonnegativity constraint on the measurement ensemble that may help the recovery.


Robust Nonnegative Sparse Recovery and the Nullspace Property of 0/1 Measurements by Richard Kueng, Peter Jung

We investigate recovery of nonnegative vectors from non-adaptive compressive measurements in the presence of noise of unknown power. It is known in the literature that under additional assumptions on the measurement design recovery is possible in the noiseless setting with nonnegative least squares without any regularization. We show that such known uniqueness results carry over to the noisy setting. We present guarantees which hold instantaneously by establishing the relation to the robust nullspace property. As an important example, we establish that an m x n random i.i.d. 0/1-valued Bernoulli matrix has with overwhelming probability the robust nullspace property for m = O(s log(n)) and is applicable in the nonnegative case. Our analysis is motivated by applications in wireless network activity detection.
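A hedged toy check of the headline claim, with arbitrary dimensions and noise level of my choosing: scipy's plain nonnegative least squares, with no l1 term at all, on a 0/1 Bernoulli design.

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)
n, s, m = 200, 5, 80                     # ambient dim, sparsity, measurements

A = rng.integers(0, 2, size=(m, n)).astype(float)    # i.i.d. 0/1 Bernoulli design
x_true = np.zeros(n)
x_true[rng.choice(n, size=s, replace=False)] = rng.uniform(1.0, 3.0, size=s)
y = A @ x_true + 0.01 * rng.normal(size=m)           # noise of "unknown" power

x_hat, _ = nnls(A, y)                    # no regularization whatsoever
rel_err = np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true)
print(rel_err)                           # small, on the order of the noise level
```

The positivity constraint alone does the work that l1 regularization usually does, which is the peculiarity of being positive mentioned above.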




Overcoming The Limitations of Phase Transition by Higher Order Analysis of Regularization Techniques

All map makers know that computing a phase transition in the zero-noise limit is only a first step; the second step, especially when it comes to building hardware, involves adding some noise and seeing how the phase transition moves. Today, we have a mathematical approach to that, woohoo!


Overcoming The Limitations of Phase Transition by Higher Order Analysis of Regularization Techniques  by Haolei Weng, Arian Maleki, Le Zheng

We study the problem of estimating $\beta \in \mathbb{R}^p$ from its noisy linear observations $y= X\beta+ w$, where $w \sim N(0, \sigma_w^2 I_{n\times n})$, under the following high-dimensional asymptotic regime: given a fixed number $\delta$, $p \rightarrow \infty$, while $n/p \rightarrow \delta$. We consider the popular class of $\ell_q$-regularized least squares (LQLS) estimators, a.k.a. bridge, given by the optimization problem: \begin{equation*} \hat{\beta} (\lambda, q ) \in \arg\min_\beta \frac{1}{2} \|y-X\beta\|_2^2+ \lambda \|\beta\|_q^q, \end{equation*} and characterize the almost sure limit of $\frac{1}{p} \|\hat{\beta} (\lambda, q )- \beta\|_2^2$. The expressions we derive for this limit do not have explicit forms and hence are not useful in comparing different algorithms, or providing information in evaluating the effect of $\delta$ or sparsity level of $\beta$. To simplify the expressions, researchers have considered the ideal "no-noise" regime and have characterized the values of $\delta$ for which the almost sure limit is zero. This is known as the phase transition analysis.
In this paper, we first perform the phase transition analysis of LQLS. Our results reveal some of the limitations and misleading features of the phase transition analysis. To overcome these limitations, we propose the study of these algorithms under the low noise regime. Our new analysis framework not only sheds light on the results of the phase transition analysis, but also makes an accurate comparison of different regularizers possible.
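The workhorse behind these bridge estimators is the proximal operator of $\lambda|\beta|^q$. As a hedged sketch, the two classical convex cases have closed forms; the nonconvex regime $0 < q < 1$, also covered by the paper's analysis, needs a scalar root-finding step not shown here:

```python
import numpy as np

def prox_lq(z, t, q):
    """Proximal operator of t * |b|^q, i.e.
    argmin_b 0.5 * (b - z)**2 + t * abs(b)**q, for q in {1, 2}."""
    if q == 1:                           # soft thresholding (LASSO)
        return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)
    if q == 2:                           # ridge-style shrinkage
        return z / (1.0 + 2.0 * t)
    raise NotImplementedError("0 < q < 1 needs a nonconvex scalar solve")

# q = 1 kills small coefficients exactly; q = 2 only shrinks them.
print(prox_lq(np.array([0.3, -2.0]), 0.5, 1))   # [ 0.  -1.5]
print(prox_lq(np.array([0.3, -2.0]), 0.5, 2))   # [ 0.15 -1.  ]
```

This exact-zero versus shrink-only behavior is what the phase transition and low-noise analyses are comparing across values of q.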




Saturday, March 26, 2016

It's Friday afternoon, it's Hamming's time: Antipodal Comet Deposits

 
In a tweet stream from the 47th Lunar and Planetary Science Conference (#LPSC2016), Parvathy Prem uses Direct Simulation Monte Carlo (DSMC) to figure out how cometary material gets deposited on a body with no atmosphere, such as the Moon. Here are some of her papers:



Related:

 

Friday, March 25, 2016

XNOR-Net: ImageNet Classification Using Binary Convolutional Neural Networks

Making a deep neural network lightweight enough to be implemented on low-power devices requires new ways of reducing the coefficients of such models.



XNOR-Net: ImageNet Classification Using Binary Convolutional Neural Networks by Mohammad Rastegari, Vicente Ordonez, Joseph Redmon, Ali Farhadi

We propose two efficient approximations to standard convolutional neural networks: Binary-Weight-Networks and XNOR-Networks. In Binary-Weight-Networks, the filters are approximated with binary values resulting in 32x memory saving. In XNOR-Networks, both the filters and the input to convolutional layers are binary. XNOR-Networks approximate convolutions using primarily binary operations. This results in 58x faster convolutional operations and 32x memory savings. XNOR-Nets offer the possibility of running state-of-the-art networks on CPUs (rather than GPUs) in real-time. Our binary networks are simple, accurate, efficient, and work on challenging visual tasks. We evaluate our approach on the ImageNet classification task. The classification accuracy with a Binary-Weight-Network version of AlexNet is only 2.9% less than the full-precision AlexNet (in top-1 measure). We compare our method with recent network binarization methods, BinaryConnect and BinaryNets, and outperform these methods by large margins on ImageNet, more than 16% in top-1 accuracy.
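The binary-weight idea can be sketched in a few lines: each filter W is approximated as alpha * B, with B = sign(W) and alpha the mean absolute value of the filter, which is the least-squares optimal scale. A hedged numpy sketch with illustrative shapes:

```python
import numpy as np

rng = np.random.default_rng(0)

def binarize_filters(W):
    """Approximate each filter (row) as alpha * B, with B = sign(W) and
    alpha = mean(|W|): the per-filter scale minimizing ||W - alpha * B||."""
    B = np.where(W >= 0.0, 1.0, -1.0)
    alpha = np.abs(W).mean(axis=1, keepdims=True)
    return alpha, B

W = rng.normal(size=(64, 3 * 3 * 3))     # 64 conv filters, flattened
alpha, B = binarize_filters(W)
rel_err = np.linalg.norm(W - alpha * B) / np.linalg.norm(W)
print(rel_err)   # about 0.60 for Gaussian weights (sqrt(1 - 2/pi) remains)
```

Since B costs one bit per weight instead of 32, this is where the 32x memory saving comes from; XNOR-Networks additionally binarize the layer inputs so convolutions reduce to XNOR and popcount operations.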


 

Wednesday, March 23, 2016

ACDC: A Structured Efficient Linear Layer - implementation -

This is version 5 of ACDC; we mentioned version 1 recently (and certainly not just because it mentions ours; in fact, the more the merrier). This time they added a few things, including a Torch implementation, woohoo!






ACDC: A Structured Efficient Linear Layer by  Marcin Moczulski, Misha Denil, Jeremy Appleyard, Nando de Freitas

The linear layer is one of the most pervasive modules in deep learning representations. However, it requires O(N²) parameters and O(N²) operations. These costs can be prohibitive in mobile applications or prevent scaling in many domains. Here, we introduce a deep, differentiable, fully-connected neural network module composed of diagonal matrices of parameters, A and D, and the discrete cosine transform C. The core module, structured as ACDC⁻¹, has O(N) parameters and incurs O(N log N) operations. We present theoretical results showing how deep cascades of ACDC layers approximate linear layers. ACDC is, however, a stand-alone module and can be used in combination with any other types of module. In our experiments, we show that it can indeed be successfully interleaved with ReLU modules in convolutional neural networks for image recognition. Our experiments also study critical factors in the training of these structured modules, including initialization and depth. Finally, this paper also provides a connection between structured linear transforms used in deep learning and the field of Fourier optics, illustrating how ACDC could in principle be implemented with lenses and diffractive elements.
 
 An implementation of ACDC is available at: https://github.com/mdenil/acdc-torch
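A hedged sketch of one such module: with A and D diagonal and C the orthonormal DCT, the map x -> C^{-1} D C A x costs O(N log N) yet is equivalent to a dense N x N matrix. This is one natural reading of the composition; the size and scipy's default DCT-II are illustrative choices:

```python
import numpy as np
from scipy.fft import dct, idct

rng = np.random.default_rng(0)
N = 128

a = rng.normal(size=N)                   # diagonal of A: O(N) parameters
d = rng.normal(size=N)                   # diagonal of D: O(N) parameters

def acdc_module(x, a, d):
    """x -> A x -> C (DCT) -> D -> C^-1 (inverse DCT), in O(N log N)."""
    z = dct(a * x, norm='ortho')
    return idct(d * z, norm='ortho')

x = rng.normal(size=N)
y = acdc_module(x, a, d)

# Same map as an explicit dense N x N matrix, for verification only.
C = dct(np.eye(N), axis=0, norm='ortho')
dense = C.T @ np.diag(d) @ C @ np.diag(a)
print(np.allclose(y, dense @ x))   # True
```

Each module alone is a restricted linear map; the paper's point is that deep cascades of them approximate general linear layers at a fraction of the parameter cost.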
 
 
 

Tuesday, March 22, 2016

ICASSP poster: Random Projections through multiple optical scattering: Approximating kernels at the speed of light

So our paper got into ICASSP. It's a poster. If you are in Shanghai, Angélique and Laurent will be answering your questions tomorrow, Wednesday, March 23, 16:00 - 18:00. The paper code is BD-P1.5; it is in the Big Data session, located in poster Area G.

Go ask Angélique and Laurent some questions ! 
The best questions and answers will be featured on Nuit Blanche





Random Projections through multiple optical scattering: Approximating kernels at the speed of light  by Alaa Saade, Francesco Caltagirone, Igor Carron, Laurent Daudet, Angélique Drémeau, Sylvain Gigan, Florent Krzakala

Random projections have proven extremely useful in many signal processing and machine learning applications. However, they often require either to store a very large random matrix, or to use a different, structured matrix to reduce the computational and memory costs. Here, we overcome this difficulty by proposing an analog, optical device, that performs the random projections literally at the speed of light without having to store any matrix in memory. This is achieved using the physical properties of multiple coherent scattering of coherent light in random media. We use this device on a simple task of classification with a kernel machine, and we show that, on the MNIST database, the experimental results closely match the theoretical performance of the corresponding kernel. This framework can help make kernel methods practical for applications that have large training sets and/or require real-time prediction. We discuss possible extensions of the method in terms of a class of kernels, speed, memory consumption and different problems.
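Under the idealized model of the paper, in which the medium acts as a fixed i.i.d. complex Gaussian matrix and the camera records only intensities, the expected inner product of two intensity feature vectors is a simple polynomial kernel in the input correlation. A hedged numerical check with illustrative dimensions:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 64, 20000                         # input size, number of camera pixels

# Idealized scattering medium: i.i.d. standard complex Gaussian entries.
W = (rng.normal(size=(m, n)) + 1j * rng.normal(size=(m, n))) / np.sqrt(2.0)

def intensity_features(x):
    return np.abs(W @ x) ** 2            # the camera only sees intensities

x = rng.normal(size=n); x /= np.linalg.norm(x)
y = rng.normal(size=n); y /= np.linalg.norm(y)

empirical = (intensity_features(x) * intensity_features(y)).mean()
theory = 1.0 + (x @ y) ** 2              # Wick's formula for complex Gaussians
print(empirical, theory)                 # close for large m
```

In hardware, the multiplication by W happens for free as light propagates through the medium; only the intensity readout and the linear classifier run on a computer.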
 


ICASSP: Intensity-only optical compressive imaging using a multiply scattering material and a double phase retrieval approach

At ICASSP in Shanghai, Laurent will be presenting the following paper in the Compressed Sensing I session (Room 3H+3I+3J) tomorrow, Wednesday, March 23 (13:30 - 15:30).

Go ask him questions ! 
The best questions and answers will be featured on Nuit Blanche




In this paper, the problem of compressive imaging is addressed using natural randomization by means of a multiply scattering medium. To utilize the medium in this way, its corresponding transmission matrix must be estimated. To calibrate the imager, we use a digital micromirror device (DMD) as a simple, cheap, and high-resolution binary intensity modulator. We propose a phase retrieval algorithm which is well adapted to intensity-only measurements on the camera, and to the input binary intensity patterns, both to estimate the complex transmission matrix and to reconstruct images. We demonstrate promising experimental results for the proposed algorithm using the MNIST dataset of handwritten digits as example images.




Monday, March 21, 2016

Deep Fully-Connected Networks for Video Compressive Sensing

Here is another example of the great convergence:

Deep Fully-Connected Networks for Video Compressive Sensing by Michael Iliadis, Leonidas Spinoulas, Aggelos K. Katsaggelos

In this work we present a deep learning framework for video compressive sensing. The proposed formulation enables recovery of video frames in a few seconds at significantly improved reconstruction quality compared to previous approaches. Our investigation starts by learning a linear mapping between video sequences and corresponding measured frames which turns out to provide promising results. We then extend the linear formulation to deep fully-connected networks and explore the performance gains using deeper architectures. Our analysis is always driven by the applicability of the proposed framework on existing compressive video architectures. Extensive simulations on several video sequences document the superiority of our approach both quantitatively and qualitatively. Finally, our analysis offers insights into understanding how dataset sizes and number of layers affect reconstruction performance while raising a few points for future investigation.
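The first stage, the linear mapping, can be sketched directly: with training pairs (x, y = Phi x), the least-squares decoder has a closed form, and by construction it is measurement-consistent. A hedged toy version with random data standing in for video patches (real video has structure this toy data lacks, which is what the deeper networks exploit):

```python
import numpy as np

rng = np.random.default_rng(0)
d, m, T = 64, 16, 5000                   # patch dim, measurements, samples

Phi = rng.normal(size=(m, d))            # fixed compressive sensing matrix
X = rng.normal(size=(d, T))              # toy stand-in for video patches
Y = Phi @ X                              # what the camera would record

# Linear decoder minimizing ||X - W Y||_F^2 (the paper then goes deeper
# with fully-connected layers in place of the single matrix W).
W = X @ Y.T @ np.linalg.inv(Y @ Y.T)

x = rng.normal(size=d)
x_hat = W @ (Phi @ x)
print(np.allclose(Phi @ x_hat, Phi @ x))   # decoder is measurement-consistent
```

Once trained, recovery is a handful of matrix multiplications, which is why these decoders run in seconds where iterative solvers take much longer.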


Friday, March 18, 2016

Streaming Algorithms for News and Scientific Literature Recommendation: Submodular Maximization with a d-Knapsack Constraint

The following paper is important. The authors develop a streaming algorithm for maximizing a submodular function.


Streaming Algorithms for News and Scientific Literature Recommendation: Submodular Maximization with a d-Knapsack Constraint by Qilian Yu, Easton Li Xu, Shuguang Cui

Submodular maximization problems belong to the family of combinatorial optimization problems and enjoy wide applications. In this paper, we focus on the problem of maximizing a monotone submodular function subject to a d-knapsack constraint, for which we propose a streaming algorithm that achieves a (1/(1+d) − ε)-approximation of the optimal value, while it only needs one single pass through the dataset without storing all the data in memory. In our experiments, we extensively evaluate the effectiveness of our proposed algorithm via two applications: news recommendation and scientific literature recommendation. It is observed that the proposed streaming algorithm achieves both execution speedup and memory saving by several orders of magnitude, compared with existing approaches.
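The single-pass idea can be sketched with a coverage objective: keep an arriving item only if its marginal gain per unit cost clears a threshold and it still fits the budget. The actual algorithm runs a geometric grid of thresholds in parallel and picks the best; the items, costs, and threshold below are made up for illustration:

```python
def stream_knapsack(stream, costs, budget, tau):
    """One pass of threshold-based streaming submodular maximization
    (coverage objective) under a single knapsack constraint."""
    chosen, covered, spent = [], set(), 0.0
    for i, item in enumerate(stream):
        gain = len(item - covered)               # marginal coverage gain
        if spent + costs[i] <= budget and gain >= tau * costs[i]:
            chosen.append(i)
            covered |= item
            spent += costs[i]
    return chosen, covered

stream = [{0, 1}, {1, 2}, {2, 3, 4}, {5}, {0, 5, 6, 7}]
costs = [1.0, 1.0, 1.0, 1.0, 2.0]
chosen, covered = stream_knapsack(stream, costs, budget=3.0, tau=1.5)
print(chosen, len(covered))   # [0, 2] 5
```

Because each item is examined once and only the current solution is stored, memory stays independent of the stream length, which is where the orders-of-magnitude savings come from.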

Provable Non-convex Phase Retrieval with Outliers: Median Truncated Wirtinger Flow

A phase transition for a phase retrieval problem:

Provable Non-convex Phase Retrieval with Outliers: Median Truncated Wirtinger Flow by  Huishuai Zhang, Yuejie Chi, Yingbin Liang

Solving systems of quadratic equations is a central problem in machine learning and signal processing. One important example is phase retrieval, which aims to recover a signal from only magnitudes of its linear measurements. This paper focuses on the situation when the measurements are corrupted by arbitrary outliers, for which the recently developed non-convex gradient descent Wirtinger flow (WF) and truncated Wirtinger flow (TWF) algorithms likely fail. We develop a novel median-TWF algorithm that exploits robustness of sample median to resist arbitrary outliers in the initialization and the gradient update in each iteration. We show that such a non-convex algorithm provably recovers the signal from a near-optimal number of measurements composed of i.i.d. Gaussian entries, up to a logarithmic factor, even when a constant portion of the measurements are corrupted by arbitrary outliers. We further show that median-TWF is also robust when measurements are corrupted by both arbitrary outliers and bounded noise. Our analysis of performance guarantee is accomplished by development of non-trivial concentration measures of median-related quantities, which may be of independent interest. We further provide numerical experiments to demonstrate the effectiveness of the approach.
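The key step, trimming samples whose residual is large relative to the sample median before each gradient update, can be sketched for real-valued measurements. The initialization below starts near the truth to keep the sketch short; the paper uses a truncated spectral initialization, and its guarantees are for the full algorithm, not this toy:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 20, 400
x_true = rng.normal(size=n)
A = rng.normal(size=(m, n))
y = (A @ x_true) ** 2                        # magnitude-only measurements
out = rng.choice(m, size=m // 10, replace=False)
y[out] = rng.uniform(0.0, 100.0, size=out.size)   # 10% arbitrary outliers

x = x_true + 0.3 * rng.normal(size=n)        # toy init (paper: spectral init)
mu = 0.005
for _ in range(1500):
    r = (A @ x) ** 2 - y
    keep = np.abs(r) <= 5.0 * np.median(np.abs(r))   # median truncation
    grad = ((r * (A @ x))[keep] @ A[keep]) / m
    x -= mu * grad

err = min(np.linalg.norm(x - x_true), np.linalg.norm(x + x_true))  # sign ambiguity
print(err)   # small despite the outliers
```

The median is the robust ingredient: near the solution, the clean residuals collapse to zero, so the median-based threshold automatically excludes the corrupted samples from the gradient.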
 
 
 

Thursday, March 17, 2016

Near-Optimal Sample Complexity Bounds for Circulant Binary Embedding



Near-Optimal Sample Complexity Bounds for Circulant Binary Embedding by Samet Oymak

Binary embedding is the problem of mapping points from a high-dimensional space to a Hamming cube in lower dimension while preserving pairwise distances. An efficient way to accomplish this is to make use of fast embedding techniques involving the Fourier transform, e.g. circulant matrices. While binary embedding has been studied extensively, theoretical results on fast binary embedding are rather limited. In this work, we build upon the recent literature to obtain significantly better dependencies on the problem parameters. A set of N points in R^n can be properly embedded into the Hamming cube {±1}^k with δ distortion, by using k ∼ δ^(-3) log N samples, which is optimal in the number of points N and compares well with the optimal distortion dependency δ^(-2). Our optimal embedding result applies in the regime log N ≲ n^(1/3). Furthermore, if the looser condition log N ≲ n holds, we show that all but an arbitrarily small fraction of the points can be optimally embedded. We believe our techniques can be useful to obtain improved guarantees for other nonlinear embedding problems.
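The fast embedding itself is a few lines: flip random signs, convolve with a fixed random vector via the FFT (that is the circulant multiplication), keep k coordinates, and take signs. A hedged sketch; the final comparison is only a sanity check that Hamming distance tracks the angle, not the paper's distortion guarantee:

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 256, 64

g = rng.normal(size=n)                       # first column of circ(g)
signs = rng.choice([-1.0, 1.0], size=n)      # random sign flips (diagonal)

def binary_embed(x):
    """sign of the first k entries of circ(g) @ (signs * x), in O(n log n)."""
    z = np.fft.ifft(np.fft.fft(g) * np.fft.fft(signs * x)).real
    return np.sign(z[:k])

x = rng.normal(size=n); x /= np.linalg.norm(x)
y = x + 0.05 * rng.normal(size=n); y /= np.linalg.norm(y)

hamming = np.mean(binary_embed(x) != binary_embed(y))
angle = np.arccos(np.clip(x @ y, -1.0, 1.0)) / np.pi
print(hamming, angle)                        # comparable, up to O(1/sqrt(k)) noise
```

Storing g and the signs takes O(n) memory instead of O(nk) for a dense projection, and the FFT gives the O(n log n) embedding time.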
 
 

Wednesday, March 16, 2016

Searching for Topological Symmetry in Data Haystack




Searching for Topological Symmetry in Data Haystack by Kallol Roy, Anh Tong, Jaesik Choi

Finding interesting symmetrical topological structures in high-dimensional systems is an important problem in statistical machine learning. The limited amount of available high-dimensional data and its sensitivity to noise pose computational challenges to finding symmetry. Our paper presents a new method to find local symmetries in a low-dimensional 2-D grid structure which is embedded in a high-dimensional structure. To compute the symmetry in a grid structure, we introduce three legal grid moves, (i) Commutation, (ii) Cyclic Permutation, (iii) Stabilization, on sets of local grid squares, called grid blocks. The three grid moves are legal transformations as they preserve the statistical distribution of Hamming distances in each grid block. We propose and coin the term grid symmetry of data on the 2-D data grid as the invariance of the statistical distribution of Hamming distances after a sequence of grid moves. We have computed and analyzed the grid symmetry of data on multivariate Gaussian distributions and Gamma distributions with noise.






Tuesday, March 15, 2016

Compressive sensing in medical imaging

 
 Emil just sent me the following:
 
  Hello Igor,

I hope all is well with you, and I still read your blog with interest. The article I wrote with Christian Graff on CS in Medical Imaging has passed the embargo period and is
freely available:
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC4669980/

Best wishes,
Emil
 Thanks Emil ! Here is the article:

Compressive sensing in medical imaging by Christian G. Graff and Emil Y. Sidky
The promise of compressive sensing, exploitation of compressibility to achieve high quality image reconstructions with less data, has attracted a great deal of attention in the medical imaging community. At the Compressed Sensing Incubator meeting held in April 2014 at OSA Headquarters in Washington, DC, presentations were given summarizing some of the research efforts ongoing in compressive sensing for x-ray computed tomography and magnetic resonance imaging systems. This article provides an expanded version of these presentations. Sparsity-exploiting reconstruction algorithms that have gained popularity in the medical imaging community are studied, and examples of clinical applications that could benefit from compressive sensing ideas are provided. The current and potential future impact of compressive sensing on the medical imaging field is discussed.
 
 
 
 
 
