
## Saturday, October 10, 2015

### Saturday Morning Videos: Talks at RecSys2015 and RecSys2014

The Twitter feed of the ACM RecSys conference pointed me to its video channel, which features the 2015 and 2014 video presentations. Enjoy!
Among the RecSys 2014 videos (the 2015 talks are untitled in the listing):
• RecSys 2014 Industry Session I Mainstream, Pt. 2
• RecSys 2014 Keynote by Jeff Dean: Large Scale Machine Learning for Predictive Tasks, Pt. 2
• RecSys 2014 Industry Session I Mainstream
• RecSys 2014 Keynote by Hector Garcia-Molina: The Future of Recommender Systems
• RecSys 2014 Keynote by Jeff Dean: Large Scale Machine Learning for Predictive Tasks, Pt. 1
• RecSys 2014 Keynote by Neil Hunt: Quantifying the Value of Better Recommendations
• RecSys 2014 Supporters

Join the CompressiveSensing subreddit or the Google+ Community or the Facebook page and post there !
Liked this entry ? subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on Linkedin.

## Friday, October 09, 2015

### Sketching for Simultaneously Sparse and Low-Rank Covariance Matrices

Laurent Jacques tweeted about these two sketching preprints recently. The second one is the new version of a preprint we mentioned earlier:

Sketching for Simultaneously Sparse and Low-Rank Covariance Matrices
Sohail Bahmani, Justin Romberg

We introduce a technique for estimating a structured covariance matrix from observations of a random vector which have been sketched. Each observed random vector x_t is reduced to a single number by taking its inner product against one of a number of pre-selected vectors a. These observations are used to form estimates of linear observations of the covariance matrix Σ, which is assumed to be simultaneously sparse and low-rank. We show that if the sketching vectors a have a special structure, then we can use a straightforward two-stage algorithm that exploits this structure. We show that the estimate is accurate when the number of sketches is proportional to the maximum of the rank times the number of significant rows/columns of Σ. Moreover, our algorithm takes direct advantage of the low-rank structure of Σ by only manipulating matrices that are far smaller than the original covariance matrix.
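As a toy illustration of the measurement model: each sketch y_t = ⟨a_t, x_t⟩ squares into a (noisy) linear measurement of Σ, since E[y_t² | a_t] = a_tᵀ Σ a_t. The numpy sketch below uses a naive Gaussian moment inversion as a stand-in for the paper's structured two-stage algorithm; all dimensions and names are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
n, T = 20, 50000          # ambient dimension, number of sketches
r, s = 2, 6               # rank and number of significant rows/columns

# Ground truth: a covariance that is simultaneously sparse and low-rank
U = np.zeros((n, r))
U[:s] = rng.standard_normal((s, r))
Sigma = U @ U.T

# Each random vector x_t (here x_t = U z_t, so Cov(x_t) = Sigma) is reduced
# to the single number y_t = <a_t, x_t>.
A = rng.standard_normal((T, n))
X = rng.standard_normal((T, r)) @ U.T
y = np.einsum('ij,ij->i', A, X)

# Naive moment-based estimate (a generic stand-in, not the paper's algorithm):
# for Gaussian a, E[y^2 a a'] = 2 Sigma + tr(Sigma) I, which we invert for Sigma.
M = (A * (y ** 2)[:, None]).T @ A / T
Sigma_hat = (M - np.trace(M) / (n + 2) * np.eye(n)) / 2
```

The moment identity used in the last two lines is exact for Gaussian sketching vectors; the paper's two-stage algorithm instead exploits the special structure of its sketching vectors to work with much smaller matrices.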

In this paper we show that, for the purposes of dimensionality reduction, a certain class of structured random matrices behaves similarly to random Gaussian matrices. This class includes several matrices for which the matrix-vector multiply can be computed in log-linear time, providing efficient dimensionality reduction of general sets. In particular, we show that using such matrices any set from high dimensions can be embedded into lower dimensions with near-optimal distortion. We obtain our results by connecting dimensionality reduction of any set to dimensionality reduction of sparse vectors via a chaining argument.
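A representative member of such a class (my illustration; not necessarily one of the matrices analyzed in the paper) is the random-sign, subsampled unitary FFT: the matrix-vector multiply costs O(n log n), and norms of embedded vectors concentrate around their original values:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 4096, 256          # ambient dimension, embedding dimension

signs = rng.choice([-1.0, 1.0], size=n)       # random diagonal sign flip D
idx = rng.choice(n, size=m, replace=False)    # random row subsampling S

def fast_embed(x):
    # y = sqrt(n/m) * S F D x; the unitary FFT supplies the log-linear multiply
    z = np.fft.fft(signs * x, norm="ortho")
    return np.sqrt(n / m) * z[idx]

# A sparse test vector: the embedding approximately preserves its norm
x = np.zeros(n)
x[rng.choice(n, size=10, replace=False)] = rng.standard_normal(10)
y = fast_embed(x)
ratio = np.linalg.norm(y) / np.linalg.norm(x)
```

The sign flip spreads the energy of sparse inputs across all Fourier coordinates, which is why the random subsample of m coordinates captures the norm so reliably.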


### Linear-time Learning on Distributions with Approximate Kernel Embeddings

We continue using the random feature hack to approximate distributions: Linear-time Learning on Distributions with Approximate Kernel Embeddings by Dougal J. Sutherland, Junier B. Oliva, Barnabás Póczos, Jeff Schneider

Many interesting machine learning problems are best posed by considering instances that are distributions, or sample sets drawn from distributions. Previous work devoted to machine learning tasks with distributional inputs has done so through pairwise kernel evaluations between pdfs (or sample sets). While such an approach is fine for smaller datasets, the computation of an $N \times N$ Gram matrix is prohibitive in large datasets. Recent scalable estimators that work over pdfs have done so only with kernels that use Euclidean metrics, like the $L_2$ distance. However, there are a myriad of other useful metrics available, such as total variation, Hellinger distance, and the Jensen-Shannon divergence. This work develops the first random features for pdfs whose dot product approximates kernels using these non-Euclidean metrics, allowing estimators using such kernels to scale to large datasets by working in a primal space, without computing large Gram matrices. We provide an analysis of the approximation error in using our proposed random features and show empirically the quality of our approximation both in estimating a Gram matrix and in solving learning tasks in real-world and synthetic data.
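For reference, the Euclidean baseline this work generalizes is the Rahimi-Recht random Fourier feature map, whose dot products approximate the RBF kernel; a minimal numpy sketch (parameters illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
d, D = 5, 2000          # input dimension, number of random features
gamma = 0.5             # RBF kernel k(x, y) = exp(-gamma * ||x - y||^2)

# Sample frequencies from the kernel's Fourier transform, w ~ N(0, 2*gamma*I),
# and random phases b ~ Uniform[0, 2*pi]; then z(x)'z(y) ~ k(x, y).
W = rng.normal(scale=np.sqrt(2 * gamma), size=(D, d))
b = rng.uniform(0, 2 * np.pi, size=D)

def features(X):
    return np.sqrt(2.0 / D) * np.cos(X @ W.T + b)

x, y = rng.standard_normal(d), rng.standard_normal(d)
exact = np.exp(-gamma * np.sum((x - y) ** 2))
approx = float(features(x[None]) @ features(y[None]).T)
```

Linear models trained on z(·) then stand in for kernel machines without ever forming the N × N Gram matrix; the paper's contribution is features of this type for non-Euclidean metrics such as total variation and Jensen-Shannon.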


### Stable recovery of low-dimensional cones in Hilbert spaces: One RIP to rule them all

Many inverse problems in signal processing deal with the robust estimation of unknown data from underdetermined linear observations. Low-dimensional models, when combined with appropriate regularizers, have been shown to be efficient at performing this task. Sparse models with the ℓ1-norm or low-rank models with the nuclear norm are examples of such successful combinations. Stable recovery guarantees in these settings have been established using a common tool adapted to each case: the notion of restricted isometry property (RIP). In this paper, we establish generic RIP-based guarantees for the stable recovery of cones (positively homogeneous model sets) with arbitrary regularizers. These guarantees are illustrated on selected examples. For block-structured sparsity in the infinite-dimensional setting, we use the guarantees for a family of regularizers whose efficiency in terms of RIP constant can be controlled, leading to stronger and sharper guarantees than the state of the art.
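For readers who want the inequality in front of them, a generic RIP on a cone Σ reads as follows (schematic notation, mine rather than the paper's):

```latex
% RIP of the measurement operator M on the secant set \Sigma - \Sigma of the cone:
\[
  (1 - \delta)\,\lVert x - y \rVert^2
  \;\le\; \lVert M(x - y) \rVert^2
  \;\le\; (1 + \delta)\,\lVert x - y \rVert^2
  \qquad \text{for all } x, y \in \Sigma .
\]
```

Specializing Σ to sparse vectors or to low-rank matrices recovers the familiar ℓ1 and nuclear-norm settings mentioned in the abstract.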


## Thursday, October 08, 2015

### Efficient scalable compression of sparsely sampled images

Laurent Duval just sent me the following:

Hi Igor

From Québec City, a paper that caught my attention, a nice mix between CS and compression:
Colas Schretter, David Blinder, Tim Bruylants, Peter Schelkens and Adrian Munteanu, Efficient scalable compression of sparsely sampled images, IEEE International Conference on Image Processing 2015, Quebec City, Canada, September 27-30 2015.
Poster: http://homepages.ulb.ac.be/~cschrett/posters/ICIP2015_poster.pdf
Advanced sparse sampling acquisition systems capture only scattered information from the continuous image domain. Unfortunately, conventional image encoders are not yet able to properly compress arbitrarily subsampled image data. This work introduces a system leveraging the JPEG 2000 image compression framework by enabling scalable compression of the selected image samples. Using a complete dictionary of CDF 9/7 wavelets, a minimum l1-norm compressed sensing solution is recovered which can be fed directly into the encoder, producing a bitstream that can be decoded with existing JPEG 2000-compliant implementations. Experiments on standard images with quasi-random subsampling demonstrate that the proposed system outperforms regular JPEG 2000 compression of stacked sample images and quad-tree based compression for point-clouds. We also demonstrate the robustness of the technique for images that infringe the sparsity prior of compressed sensing.

Thanks Laurent for the heads-up. Something is at play here, but I am not sure what it is. In a way, if the CDF dictionary is complete, the initial Dirac-like subsampling mask of the image must be incoherent with that CDF dictionary. The ℓ1 recovery of these coefficients then makes sense in that framework (modulo the recent work on infinite-dimensional compressive sensing), but I wonder what the EBCOT nonlinear coding does to the resulting picture.

### Boolean Matrix Factorization and Completion via Message Passing

Boolean factor analysis is the task of decomposing a binary matrix into the Boolean product of two binary factors. This unsupervised data-analysis approach is desirable due to its interpretability, but hard to perform due to its NP-hardness. A closely related problem is low-rank Boolean matrix completion from noisy observations. We treat these problems as maximum a posteriori inference problems and present message passing solutions that scale linearly with the number of observations and factors. Our empirical study demonstrates that message passing is able to recover low-rank Boolean matrices at the boundaries of theoretically possible recovery, and outperforms existing techniques in real-world applications such as large-scale binary-valued collaborative filtering tasks.
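The forward model here is just the Boolean product; a minimal numpy illustration of it (the message-passing inference itself is not reproduced):

```python
import numpy as np

rng = np.random.default_rng(3)
n, m, r = 30, 20, 3       # matrix shape and Boolean rank

# Random binary factors; X = U o V is the Boolean product:
# X[i, j] = OR over k of (U[i, k] AND V[k, j]).
U = rng.random((n, r)) < 0.4
V = rng.random((r, m)) < 0.4
X = (U.astype(int) @ V.astype(int)) > 0   # integer product > 0 equals OR of ANDs

# Equivalent explicit rank-one form, the forward model behind the MAP inference
X_explicit = np.zeros((n, m), dtype=bool)
for k in range(r):
    X_explicit |= np.outer(U[:, k], V[k, :])
```

The OR in place of ordinary addition is what makes the decomposition interpretable (each factor k is a binary "pattern") and also what makes the problem NP-hard.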


## Wednesday, October 07, 2015

### Tonight: Paris Machine Learning #2 Season 3: Extreme Multi-class, Infosec, RTB and more.

The meetup will take place at SNIPS and is sponsored by Mathworks. Many thanks to them! We will start at 19h15 Paris time.
Right now the program is:

Raphael Puget, LIP6

Extreme multi-class classification with a large number of categories.

Abstract: Extreme multi-class classification concerns classification problems with a very large number of classes, up to several millions. Such problems have now become quite frequent in many practical applications. Until recently, most classification methods had inference complexity at least linear in the number of classes. Several directions have recently been explored for limiting this complexity, but the challenge of learning an optimal compromise between inference complexity and classification accuracy is still largely open. We propose here a novel ensemble learning approach, where classifiers are dynamically chosen among a pre-trained set of classifiers and are iteratively combined in order to achieve an efficient trade-off between inference complexity and classification accuracy. The proposed model uses statistical bounds to discard irrelevant classes during the inference process and to choose the most informative classifier with respect to the information gathered during the previous steps. Experiments on real datasets from recent challenges show that the proposed approach achieves very high classification accuracy, in comparison to baselines and recently proposed approaches, for similar inference time complexity.

Imad Soltani: R, information security, large-scale protocol inspection and state machine analysis

The PRISMA package (http://cran.r-project.org/web/packages/PRISMA/index.html) can load and process huge text corpora preprocessed with the sally toolbox (http://www.mlsec.org/sally/). sally acts as a very fast preprocessor that splits text files into tokens or n-grams. Today, with the deployment of specialized honeypots, we can use PRISMA to implement a protocol inspection and state analysis method, and in combination with the sally toolbox we can provide a deeper analysis.
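In miniature, the n-gram step looks like this (a toy Python stand-in for sally's much faster preprocessor; the sample string is illustrative):

```python
from collections import Counter

def char_ngrams(text, n=3):
    """Slide a window of n characters over the text, producing the kind of
    n-gram count vector an n-gram preprocessor like sally emits."""
    return Counter(text[i:i + n] for i in range(len(text) - n + 1))

# A fragment of a protocol message, as a honeypot might capture it
feats = char_ngrams("GET /index.html HTTP/1.1", n=3)
```

Count vectors like `feats`, one per captured message, are what a package like PRISMA would then factor to expose protocol structure and states.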

"RTB à la Quant", JFT

Abstract:
This talk presents a "quant" approach to RTB. The goal is to show that it is possible to do pricing and strategy hedging via techniques such as stochastic dynamic programming. We show how, under certain hypotheses, one can guarantee performance in advance and know how much the algorithm should cost. This work introduces powerful techniques from financial mathematics into RTB, and points out where a reinforcement-learning-type formulation is needed.

Mouhidine Seiv, Riminder.net

"I would like to present Riminder.net, a startup that I launched during my gap year in 2013-2014 with a deep belief that there are patterns to be discovered and used to improve Employment and Education. At Riminder we believe that Artificial Intelligence is the key to achieve that goal.

Today we are proud to have 200+ companies using Riminder, a robot that analyzes recruiters' habits and job seekers' CVs to enable faster, better-quality recruitment.

On 1 September 2015 we released our own artificially intelligent personal digital assistant: Jarvis (Just A Rather Very Intelligent System).
This new question-answering system addresses two of our main aspirations:
- Design and build software that is smart and fun!

The team is based in Paris and supported by some of the most iconic professors and technology entrepreneurs from Ecole Centrale Paris and Ecole Normale Supérieure.

Website: http://riminder.net
Beta of Jarvis : http://riminder.net/app/jarvis
LeMonde talking about Riminder : http://bit.ly/1eawCTh
Amine El Helou, Mathworks: upcoming webinar "Machine Learning for Sensor Data Analytics", 5/11/2015


## Tuesday, October 06, 2015

### The Communities: Nuit Blanche, Defeating the Data Tsunami one Algorithm at a Time.

Over Twitter, Janessa mentioned that Nuit Blanche was included in a list of 150 data science blogs. That is fine, but what surprised me was the stated number of followers: 4,000.

I think it is half that, but Feedly seems to think otherwise. Oh well, what's more important are the communities we have:

The Paris Machine Learning meetup has its own communities; the Meetup archives are here.


### Memory and Computation Efficient PCA via Very Sparse Random Projections

Memory and Computation Efficient PCA via Very Sparse Random Projections by Farhad Pourkamali Anaraki, Shannon Hughes
Algorithms that can efficiently recover principal components in very high-dimensional, streaming, and/or distributed data settings have become an important topic in the literature. In this paper, we propose an approach to principal component estimation that utilizes projections onto very sparse random vectors with Bernoulli-generated nonzero entries. Indeed, our approach is simultaneously efficient in memory/storage space, efficient in computation, and produces accurate PC estimates, while also allowing for rigorous theoretical performance analysis. Moreover, one can tune the sparsity of the random vectors deliberately to achieve a desired point on the tradeoffs between memory, computation, and accuracy. We rigorously characterize these tradeoffs and provide statistical performance guarantees. In addition to these very sparse random vectors, our analysis also applies to more general random projections. We present experimental results demonstrating that this approach allows for simultaneously achieving a substantial reduction of the computational complexity and memory/storage space, with little loss in accuracy, particularly for very high-dimensional data.
supplementary material
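A minimal sketch of the projection ingredient alone (Achlioptas/Li-style sparse entries; the paper's PC estimators, tradeoff tuning, and guarantees are not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(4)
d, m, s = 2000, 300, 20   # ambient dim, sketch dim, sparsity parameter

# Very sparse random projection: entries are 0 with probability 1 - 1/s and
# +/-1 with probability 1/(2s) each, rescaled so that E[R R'] = I and squared
# norms are preserved in expectation. Only ~1/s of the entries are nonzero,
# which is the source of the memory and computation savings.
R = rng.choice([-1.0, 0.0, 1.0], size=(d, m),
               p=[1 / (2 * s), 1 - 1 / s, 1 / (2 * s)])
R *= np.sqrt(s / m)

x = rng.standard_normal(d)
y = x @ R                 # m-dimensional sketch of the d-dimensional sample
ratio = np.linalg.norm(y) / np.linalg.norm(x)
```

Increasing s makes R sparser (cheaper to store and apply) at the cost of noisier sketches, which is exactly the memory/computation/accuracy tradeoff the paper characterizes.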


## Monday, October 05, 2015

### Compressive Imaging in Scanning Transmission Electron Microscopy and Microwave Ghost Imaging

Here are two new examples of randomized hardware acquisition of signals:

The concept of compressive sensing was recently proposed to significantly reduce the electron dose in scanning transmission electron microscopy (STEM) while still maintaining the main features in the image. Here, an experimental setup based on an electromagnetic shutter placed in the condenser plane of a STEM is proposed. The shutter blanks the beam following a random pattern while the scanning coils are moving the beam in the usual scan pattern. Experimental images at both medium scale and high resolution are acquired and then reconstructed based on a discrete cosine algorithm. The obtained results confirm the predicted usefulness of compressive sensing in experimental STEM even though some remaining artifacts need to be resolved.

Microwave Surveillance based on Ghost Imaging and Distributed Antennas
Xiaopeng Wang, Zihuai Lin

In this letter, we propose a microwave surveillance scheme based on ghost imaging (GI) and distributed antennas. By analyzing its imaging resolution and sampling requirements, we demonstrate the potential of employing microwave GI to achieve high-quality surveillance performance with low system complexity. The theoretical analysis and the effectiveness of the proposed microwave surveillance method are also validated via simulations.
