Please join/comment on the Google+ Community (1502), the CompressiveSensing subreddit (811), the Facebook page, the LinkedIn Compressive Sensing group (3293) or the Advanced Matrix Factorization Group (1017)

Saturday, May 23, 2015

Saturday Morning Videos: Slides and Videos from ICLR 2015

From the conference schedule
 
May 7 0900 0940 keynote Antoine Bordes (Facebook), Artificial Tasks for Artificial Intelligence (slides) Video1 Video2
0940 1000 oral Word Representations via Gaussian Embedding by Luke Vilnis and Andrew McCallum (University of Massachusetts Amherst) (slides) Video
1000 1020 oral Deep Captioning with Multimodal Recurrent Neural Networks (m-RNN) by Junhua Mao, Wei Xu, Yi Yang, Jiang Wang, Zhiheng Huang, Alan Yuille (Baidu and UCLA) (slides) Video
1020 1050 coffee break

1050 1130 keynote David Silver (Google DeepMind), Deep Reinforcement Learning (slides) Video1 Video2
1130 1150 oral Deep Structured Output Learning for Unconstrained Text Recognition by Max Jaderberg, Karen Simonyan, Andrea Vedaldi, Andrew Zisserman (Oxford University and Google DeepMind) (slides) Video
1150 1210 oral Very Deep Convolutional Networks for Large-Scale Image Recognition by Karen Simonyan, Andrew Zisserman (Oxford) (slides) Video
1210 1230 oral Fast Convolutional Nets With fbfft: A GPU Performance Evaluation by Nicolas Vasilache, Jeff Johnson, Michael Mathieu, Soumith Chintala, Serkan Piantino, Yann LeCun (Facebook AI Research) (slides) Video
1230 1400 lunch On your own

1400 1700 posters Workshop Poster Session 1 – The Pavilion

1730 1900 dinner South Poolside – Sponsored by Google



May 8 0730 0900 breakfast South Poolside – Sponsored by Facebook

0900 1230 Oral Session – International Ballroom

0900 0940 keynote Terrence Sejnowski (Salk Institute), Beyond Representation Learning Video1 Video2
0940 1000 oral Reweighted Wake-Sleep (slides) Video
1000 1020 oral The local low-dimensionality of natural images (slides) Video
1020 1050 coffee break

1050 1130 keynote Percy Liang (Stanford), Learning Latent Programs for Question Answering (slides) Video1 Video2
1130 1150 oral Memory Networks (slides) Video
1150 1210 oral Object detectors emerge in Deep Scene CNNs (slides) Video
1210 1230 oral Qualitatively characterizing neural network optimization problems (slides) Video
1230 1400 lunch On your own

1400 1700 posters Workshop Poster Session 2 – The Pavilion

1730 1900 dinner South Poolside – Sponsored by IBM Watson



May 9 0730 0900 breakfast South Poolside – Sponsored by Qualcomm

0900 0940 keynote Hal Daumé III (U. Maryland), Algorithms that Learn to Think on their Feet (slides) Video
0940 1000 oral Neural Machine Translation by Jointly Learning to Align and Translate (slides) Video
1000 1030 coffee break


1030 1330 posters Conference Poster Session – The Pavilion (AISTATS attendees are invited to this poster session)

1330 1700 lunch and break On your own

1700 1800 ICLR/AISTATS Oral Session – International Ballroom

1700 1800 keynote Pierre Baldi (UC Irvine), The Ebb and Flow of Deep Learning: a Theory of Local Learning Video
1800 2000 ICLR/AISTATS reception Fresco's (near the pool)

 
 

Conference Oral Presentations

May 9 Conference Poster Session

Board Presentation
2 FitNets: Hints for Thin Deep Nets, Adriana Romero, Nicolas Ballas, Samira Ebrahimi Kahou, Antoine Chassang, Carlo Gatta, and Yoshua Bengio
3 Techniques for Learning Binary Stochastic Feedforward Neural Networks, Tapani Raiko, Mathias Berglund, Guillaume Alain, and Laurent Dinh
4 Reweighted Wake-Sleep, Jorg Bornschein and Yoshua Bengio
5 Semantic Image Segmentation with Deep Convolutional Nets and Fully Connected CRFs, Liang-Chieh Chen, George Papandreou, Iasonas Kokkinos, Kevin Murphy, and Alan Yuille
7 Multiple Object Recognition with Visual Attention, Jimmy Ba, Volodymyr Mnih, and Koray Kavukcuoglu
8 Deep Narrow Boltzmann Machines are Universal Approximators, Guido Montufar
9 Transformation Properties of Learned Visual Representations, Taco Cohen and Max Welling
10 Joint RNN-Based Greedy Parsing and Word Composition, Joël Legrand and Ronan Collobert
11 Adam: A Method for Stochastic Optimization, Jimmy Ba and Diederik Kingma
13 Neural Machine Translation by Jointly Learning to Align and Translate, Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio
15 Scheduled denoising autoencoders, Krzysztof Geras and Charles Sutton
16 Embedding Entities and Relations for Learning and Inference in Knowledge Bases, Bishan Yang, Scott Yih, Xiaodong He, Jianfeng Gao, and Li Deng
18 The local low-dimensionality of natural images, Olivier Henaff, Johannes Balle, Neil Rabinowitz, and Eero Simoncelli
20 Explaining and Harnessing Adversarial Examples, Ian Goodfellow, Jon Shlens, and Christian Szegedy
22 Modeling Compositionality with Multiplicative Recurrent Neural Networks, Ozan Irsoy and Claire Cardie
24 Very Deep Convolutional Networks for Large-Scale Image Recognition, Karen Simonyan and Andrew Zisserman
25 Speeding-up Convolutional Neural Networks Using Fine-tuned CP-Decomposition, Vadim Lebedev, Yaroslav Ganin, Victor Lempitsky, Maksim Rakhuba, and Ivan Oseledets
27 Deep Captioning with Multimodal Recurrent Neural Networks (m-RNN), Junhua Mao, Wei Xu, Yi Yang, Jiang Wang, and Alan Yuille
28 Deep Structured Output Learning for Unconstrained Text Recognition, Max Jaderberg, Karen Simonyan, Andrea Vedaldi, and Andrew Zisserman
30 Zero-bias autoencoders and the benefits of co-adapting features, Kishore Konda, Roland Memisevic, and David Krueger
31 Automatic Discovery and Optimization of Parts for Image Classification, Sobhan Naderi Parizi, Andrea Vedaldi, Andrew Zisserman, and Pedro Felzenszwalb
33 Understanding Locally Competitive Networks, Rupesh Srivastava, Jonathan Masci, Faustino Gomez, and Juergen Schmidhuber
35 Leveraging Monolingual Data for Crosslingual Compositional Word Representations, Hubert Soyer, Pontus Stenetorp, and Akiko Aizawa
36 Move Evaluation in Go Using Deep Convolutional Neural Networks, Chris Maddison, Aja Huang, Ilya Sutskever, and David Silver
38 Fast Convolutional Nets With fbfft: A GPU Performance Evaluation, Nicolas Vasilache, Jeff Johnson, Michael Mathieu, Soumith Chintala, Serkan Piantino, and Yann LeCun
40 Word Representations via Gaussian Embedding, Luke Vilnis and Andrew McCallum
41 Qualitatively characterizing neural network optimization problems, Ian Goodfellow and Oriol Vinyals
42 Memory Networks, Jason Weston, Sumit Chopra, and Antoine Bordes
43 Generative Modeling of Convolutional Neural Networks, Jifeng Dai, Yang Lu, and Ying-Nian Wu
44 A Unified Perspective on Multi-Domain and Multi-Task Learning, Yongxin Yang and Timothy Hospedales
45 Object detectors emerge in Deep Scene CNNs, Bolei Zhou, Aditya Khosla, Agata Lapedriza, Aude Oliva, and Antonio Torralba

May 7 Workshop Poster Session

Board Presentation
2 Learning Non-deterministic Representations with Energy-based Ensembles, Maruan Al-Shedivat, Emre Neftci, and Gert Cauwenberghs
3 Diverse Embedding Neural Network Language Models, Kartik Audhkhasi, Abhinav Sethy, and Bhuvana Ramabhadran
4 Hot Swapping for Online Adaptation of Optimization Hyperparameters, Kevin Bache, Dennis Decoste, and Padhraic Smyth
5 Representation Learning for cold-start recommendation, Gabriella Contardo, Ludovic Denoyer, and Thierry Artieres
6 Training Convolutional Networks with Noisy Labels, Sainbayar Sukhbaatar, Joan Bruna, Manohar Paluri, Lubomir Bourdev, and Rob Fergus
7 Striving for Simplicity: The All Convolutional Net, Alexey Dosovitskiy, Jost Tobias Springenberg, Thomas Brox, and Martin Riedmiller
8 Learning linearly separable features for speech recognition using convolutional neural networks, Dimitri Palaz, Mathew Magimai Doss, and Ronan Collobert
9 Training Deep Neural Networks on Noisy Labels with Bootstrapping, Scott Reed, Honglak Lee, Dragomir Anguelov, Christian Szegedy, Dumitru Erhan, and Andrew Rabinovich
10 On the Stability of Deep Networks, Raja Giryes, Guillermo Sapiro, and Alex Bronstein
11 Audio source separation with Discriminative Scattering Networks, Joan Bruna, Yann LeCun, and Pablo Sprechmann
13 Simple Image Description Generator via a Linear Phrase-Based Model, Pedro Pinheiro, Rémi Lebret, and Ronan Collobert
15 Stochastic Descent Analysis of Representation Learning Algorithms, Richard Golden
16 On Distinguishability Criteria for Estimating Generative Models, Ian Goodfellow
18 Embedding Word Similarity with Neural Machine Translation, Felix Hill, Kyunghyun Cho, Sebastien Jean, Coline Devin, and Yoshua Bengio
20 Deep metric learning using Triplet network, Elad Hoffer and Nir Ailon
22 Understanding Minimum Probability Flow for RBMs Under Various Kinds of Dynamics, Daniel Jiwoong Im, Ethan Buchman, and Graham Taylor
23 A Group Theoretic Perspective on Unsupervised Deep Learning, Arnab Paul and Suresh Venkatasubramanian
24 Learning Longer Memory in Recurrent Neural Networks, Tomas Mikolov, Armand Joulin, Sumit Chopra, Michael Mathieu, and Marc'Aurelio Ranzato
25 Inducing Semantic Representation from Text by Jointly Predicting and Factorizing Relations, Ivan Titov and Ehsan Khoddam
27 NICE: Non-linear Independent Components Estimation, Laurent Dinh, David Krueger, and Yoshua Bengio
28 Discovering Hidden Factors of Variation in Deep Networks, Brian Cheung, Jesse Livezey, Arjun Bansal, and Bruno Olshausen
29 Tailoring Word Embeddings for Bilexical Predictions: An Experimental Comparison, Pranava Swaroop Madhyastha, Xavier Carreras, and Ariadna Quattoni
30 On Learning Vector Representations in Hierarchical Label Spaces, Jinseok Nam and Johannes Fürnkranz
31 In Search of the Real Inductive Bias: On the Role of Implicit Regularization in Deep Learning, Behnam Neyshabur, Ryota Tomioka, and Nathan Srebro
33 Algorithmic Robustness for Semi-Supervised (ϵ, γ, τ)-Good Metric Learning, Maria-Irina Nicolae, Marc Sebban, Amaury Habrard, Éric Gaussier, and Massih-Reza Amini
35 Real-World Font Recognition Using Deep Network and Domain Adaptation, Zhangyang Wang, Jianchao Yang, Hailin Jin, Eli Shechtman, Aseem Agarwala, Jon Brandt, and Thomas Huang
36 Score Function Features for Discriminative Learning, Majid Janzamin, Hanie Sedghi, and Anima Anandkumar
38 Parallel training of DNNs with Natural Gradient and Parameter Averaging, Daniel Povey, Xiaohui Zhang, and Sanjeev Khudanpur
40 A Generative Model for Deep Convolutional Learning, Yunchen Pu, Xin Yuan, and Lawrence Carin
41 Random Forests Can Hash, Qiang Qiu, Guillermo Sapiro, and Alex Bronstein
42 Provable Methods for Training Neural Networks with Sparse Connectivity, Hanie Sedghi and Anima Anandkumar
43 Visual Scene Representations: sufficiency, minimality, invariance and approximation with deep convolutional networks, Stefano Soatto and Alessandro Chiuso
44 Deep learning with Elastic Averaging SGD, Sixin Zhang, Anna Choromanska, and Yann LeCun
45 Example Selection For Dictionary Learning, Tomoki Tsuchida and Garrison Cottrell
46 Permutohedral Lattice CNNs, Martin Kiefel, Varun Jampani, and Peter Gehler
47 Unsupervised Domain Adaptation with Feature Embeddings, Yi Yang and Jacob Eisenstein
49 Weakly Supervised Multi-embeddings Learning of Acoustic Models, Gabriel Synnaeve and Emmanuel Dupoux

May 8 Workshop Poster Session

Board Presentation
2 Learning Activation Functions to Improve Deep Neural Networks, Forest Agostinelli, Matthew Hoffman, Peter Sadowski, and Pierre Baldi
3 Restricted Boltzmann Machine for Classification with Hierarchical Correlated Prior, Gang Chen and Sargur Srihari
4 Learning Deep Structured Models, Liang-Chieh Chen, Alexander Schwing, Alan Yuille, and Raquel Urtasun
5 N-gram-Based Low-Dimensional Representation for Document Classification, Rémi Lebret and Ronan Collobert
6 Low precision arithmetic for deep learning, Matthieu Courbariaux, Yoshua Bengio, and Jean-Pierre David
7 Theano-based Large-Scale Visual Recognition with Multiple GPUs, Weiguang Ding, Ruoyan Wang, Fei Mao, and Graham Taylor
8 Improving zero-shot learning by mitigating the hubness problem, Georgiana Dinu and Marco Baroni
9 Incorporating Both Distributional and Relational Semantics in Word Representations, Daniel Fried and Kevin Duh
10 Variational Recurrent Auto-Encoders, Otto Fabius and Joost van Amersfoort
11 Learning Compact Convolutional Neural Networks with Nested Dropout, Chelsea Finn, Lisa Anne Hendricks, and Trevor Darrell
13 Compact Part-Based Image Representations: Extremal Competition and Overgeneralization, Marc Goessling and Yali Amit
15 Unsupervised Feature Learning from Temporal Data, Ross Goroshin, Joan Bruna, Jonathan Tompson, David Eigen, and Yann LeCun
16 Classifier with Hierarchical Topographical Maps as Internal Representation, Pitoyo Hartono, Paul Hollensen, and Thomas Trappenberg
18 Entity-Augmented Distributional Semantics for Discourse Relations, Yangfeng Ji and Jacob Eisenstein
20 Flattened Convolutional Neural Networks for Feedforward Acceleration, Jonghoon Jin, Aysegul Dundar, and Eugenio Culurciello
22 Gradual Training Method for Denoising Auto Encoders, Alexander Kalmanovich and Gal Chechik
23 Deep Gaze I: Boosting Saliency Prediction with Feature Maps Trained on ImageNet, Matthias Kümmerer, Lucas Theis, and Matthias Bethge
24 Difference Target Propagation, Dong-Hyun Lee, Saizheng Zhang, Asja Fischer, Antoine Biard, and Yoshua Bengio
25 Predictive encoding of contextual relationships for perceptual inference, interpolation and prediction, Mingmin Zhao, Chengxu Zhuang, Yizhou Wang, and Tai Sing Lee
27 Purine: A Bi-Graph based deep learning framework, Min Lin, Shuo Li, Xuan Luo, and Shuicheng Yan
28 Pixel-wise Deep Learning for Contour Detection, Jyh-Jing Hwang and Tyng-Luh Liu
29 Ensemble of Generative and Discriminative Techniques for Sentiment Analysis of Movie Reviews, Grégoire Mesnil, Tomas Mikolov, Marc'Aurelio Ranzato, and Yoshua Bengio
30 Fast Label Embeddings for Extremely Large Output Spaces, Paul Mineiro and Nikos Karampatziakis
31 An Analysis of Unsupervised Pre-training in Light of Recent Advances, Tom Paine, Pooya Khorrami, Wei Han, and Thomas Huang
33 Fully Convolutional Multi-Class Multiple Instance Learning, Deepak Pathak, Evan Shelhamer, Jonathan Long, and Trevor Darrell
35 What Do Deep CNNs Learn About Objects?, Xingchao Peng, Baochen Sun, Karim Ali, and Kate Saenko
36 Representation using the Weyl Transform, Qiang Qiu, Andrew Thompson, Robert Calderbank, and Guillermo Sapiro
38 Denoising autoencoder with modulated lateral connections learns invariant representations of natural images, Antti Rasmus, Harri Valpola, and Tapani Raiko
40 Towards Deep Neural Network Architectures Robust to Adversarial Examples, Shixiang Gu and Luca Rigazio
41 Explorations on high dimensional landscapes, Levent Sagun, Ugur Guney, and Yann LeCun
42 Generative Class-conditional Autoencoders, Jan Rudy and Graham Taylor
43 Attention for Fine-Grained Categorization, Pierre Sermanet, Andrea Frome, and Esteban Real
44 A Baseline for Visual Instance Retrieval with Deep Convolutional Networks, Ali Sharif Razavian, Josephine Sullivan, Atsuto Maki, and Stefan Carlsson
45 Visual Scene Representation: Scaling and Occlusion, Stefano Soatto, Jingming Dong, and Nikolaos Karianakis
46 Deep networks with large output spaces, Sudheendra Vijayanarasimhan, Jon Shlens, Jay Yagnik, and Rajat Monga
47 Efficient Exact Gradient Update for training Deep Networks with Very Large Sparse Targets, Pascal Vincent
49 Self-informed neural network structure learning, David Warde-Farley, Andrew Rabinovich, and Dragomir Anguelov
 
 
 
 
 
 
Join the CompressiveSensing subreddit or the Google+ Community and post there !
Liked this entry ? subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on Linkedin.

Friday, May 22, 2015

PCANet: A Simple Deep Learning Baseline for Image Classification? - implementation -

Iteration of matrix factorizations as a way to build deep architectures. Interesting !



PCANet: A Simple Deep Learning Baseline for Image Classification? by Tsung-Han Chan, Kui Jia, Shenghua Gao, Jiwen Lu, Zinan Zeng, Yi Ma

In this work, we propose a very simple deep learning network for image classification which comprises only the very basic data processing components: cascaded principal component analysis (PCA), binary hashing, and block-wise histograms. In the proposed architecture, PCA is employed to learn multistage filter banks. It is followed by simple binary hashing and block histograms for indexing and pooling. This architecture is thus named as a PCA network (PCANet) and can be designed and learned extremely easily and efficiently. For comparison and better understanding, we also introduce and study two simple variations to the PCANet, namely the RandNet and LDANet. They share the same topology of PCANet but their cascaded filters are either selected randomly or learned from LDA. We have tested these basic networks extensively on many benchmark visual datasets for different tasks, such as LFW for face verification, MultiPIE, Extended Yale B, AR, FERET datasets for face recognition, as well as MNIST for hand-written digits recognition. Surprisingly, for all tasks, such a seemingly naive PCANet model is on par with the state of the art features, either prefixed, highly hand-crafted or carefully learned (by DNNs). Even more surprisingly, it sets new records for many classification tasks in Extended Yale B, AR, FERET datasets, and MNIST variations. Additional experiments on other public datasets also demonstrate the potential of the PCANet serving as a simple but highly competitive baseline for texture classification and object recognition.
An implementation of PCANet is on Tsung-Han's source code page.
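
For readers who want to see the moving parts, here is a minimal single-stage sketch of the PCANet pipeline (PCA filter learning, binary hashing, block-wise histograms) in NumPy/SciPy. This is not the authors' code: the filter size k, number of filters L and block size are placeholder choices, and the real PCANet cascades two PCA stages.

```python
# Minimal single-stage PCANet-style sketch (NumPy/SciPy only).
# Assumes grayscale images of shape (H, W); k, L, block are illustrative.
import numpy as np
from scipy.signal import convolve2d

def extract_patches(img, k):
    """Collect all k x k patches of an image as mean-removed column vectors."""
    H, W = img.shape
    patches = []
    for i in range(H - k + 1):
        for j in range(W - k + 1):
            p = img[i:i + k, j:j + k].ravel()
            patches.append(p - p.mean())
    return np.array(patches).T                     # shape (k*k, num_patches)

def learn_pca_filters(images, k, L):
    """Stage-1 PCANet filters: top-L eigenvectors of the patch covariance."""
    X = np.hstack([extract_patches(im, k) for im in images])
    _, _, Vt = np.linalg.svd(X @ X.T)
    return Vt[:L].reshape(L, k, k)                 # L convolution filters

def pcanet_features(img, filters, block=8):
    """Binary hashing of the filter responses followed by block histograms."""
    maps = np.array([convolve2d(img, f, mode='same') for f in filters])
    # Heaviside step + binary-to-decimal hashing across the L response maps
    code = sum((maps[l] > 0).astype(int) << l for l in range(len(filters)))
    n_bins = 2 ** len(filters)
    hists = []
    H, W = code.shape
    for i in range(0, H - block + 1, block):
        for j in range(0, W - block + 1, block):
            h, _ = np.histogram(code[i:i + block, j:j + block],
                                bins=n_bins, range=(0, n_bins))
            hists.append(h)
    return np.concatenate(hists)                   # feed this to a linear SVM
```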
 
 

Four million page views: a million here, a million there and soon enough we're talking real readership...

 
 
I know it's just a number, but there is some Long Distance Blogging behind it, with about a million page views per year. A page view is not the same as a "unique visit"; that figure amounts to about 650 unique visits per day on average, a number consistent with Google's session counts.
Here are some of the interesting tags developed over the years:
  • CS (2161) for Compressive Sensing
  • MF (514) for Matrix Factorization
  • implementation (355) for work that has a code implementation associated with it
  • ML (208)  for Machine Learning
There is also the social network "extension" of the blog, and finally the Paris Machine Learning Meetup.
 
 

Thursday, May 21, 2015

The Great Convergence: FlowNet: Learning Optical Flow with Convolutional Networks

The great convergence is upon us; here is clue #734: Andrew Davison mentioning recent work in optical flow using CNNs.

Whoa, this is a wake up call... CNN based learned optical flow (trained on synthetic flying chairs!) running at 10fps on a laptop which claims state of the art accuracy among real-time optical flow methods. So time for those of us working on non learning-based vision to pack up and go home?
This is a pretty powerful statement from one of the specialists in SLAM. Here is the paper:
 
 
 
 

FlowNet: Learning Optical Flow with Convolutional Networks by Philipp Fischer, Alexey Dosovitskiy, Eddy Ilg, Philip Häusser, Caner Hazırbaş, Vladimir Golkov, Patrick van der Smagt, Daniel Cremers, Thomas Brox

Convolutional neural networks (CNNs) have recently been very successful in a variety of computer vision tasks, especially on those linked to recognition. Optical flow estimation has not been among the tasks where CNNs were successful. In this paper we construct appropriate CNNs which are capable of solving the optical flow estimation problem as a supervised learning task. We propose and compare two architectures: a generic architecture and another one including a layer that correlates feature vectors at different image locations.
Since existing ground truth data sets are not sufficiently large to train a CNN, we generate a synthetic Flying Chairs dataset. We show that networks trained on this unrealistic data still generalize very well to existing datasets such as Sintel and KITTI, achieving competitive accuracy at frame rates of 5 to 10 fps.
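
The second architecture mentioned in the abstract includes a layer that explicitly correlates feature vectors between the two frames. Below is a rough NumPy sketch of what such a correlation operation computes; the function name, the max_disp search radius and the normalization are illustrative assumptions, not FlowNet's exact settings.

```python
# Rough NumPy sketch of a correlation layer comparing feature vectors of two
# feature maps over a local search window (illustrative, not FlowNet's code).
import numpy as np

def correlation(f1, f2, max_disp=4):
    """f1, f2: feature maps of shape (C, H, W) from the two input frames.
    Returns a (D, H, W) volume, D = (2*max_disp + 1)**2, where each channel
    holds the dot product between f1 at a pixel and f2 shifted by one
    displacement in the search window."""
    C, H, W = f1.shape
    d = max_disp
    f2p = np.pad(f2, ((0, 0), (d, d), (d, d)))
    out = np.empty(((2 * d + 1) ** 2, H, W))
    k = 0
    for dy in range(-d, d + 1):
        for dx in range(-d, d + 1):
            shifted = f2p[:, d + dy:d + dy + H, d + dx:d + dx + W]
            out[k] = (f1 * shifted).sum(axis=0) / C   # normalized dot product
            k += 1
    return out
```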
 
 

CSjob: Post-Doc on Structured Low-Rank Approximations, Grenoble, France

Julien Mairal just sent me the following announcement:

Hi Igor,


here is a call for a post-doc for an ANR project.
http://lear.inrialpes.fr/people/mairal/resources/pdf/postdoc_macaron.pdf
When you have time, could you advertise it on your blog ? This is about local low-rank approximations for applications in bioinformatics and image processing. Thus, this would be a good match for nuit blanche !


Best regards.

 from the announcement:


Research Topic and Objectives:
The goal of the MACARON project is to use data for solving scientific problems and automatically converting data into scientific knowledge by using machine learning techniques. We propose a research direction motivated by applications in bioinformatics and image processing. Low-rank matrix approximation is a popular tool for building web recommender systems [1] and plays an important role in large-scale classification problems in computer vision [2]. In many applications, we need however a different point of view. Data matrices are not exactly low-rank, but admit local low-rank structures [3]. This shift of paradigm is expected to achieve groundbreaking improvements over the classical low-rank paradigm, but it raises significant challenges that should be solved during the post-doc. The first objective is to develop new methodological tools to efficiently learn local low-rank structures in data. This will require both modeling skills (designing the right model) and good knowledge of optimization techniques (for efficient learning). The second objective is to adapt these tools to genomic imputation problems and inverse problems in image processing.

 

Low-rank Modeling and its Applications in Image Analysis




Xiaowei Zhou sent me the following the other day:


Dear Dr Carron,

We had the following survey paper published several months ago:

X. Zhou, C. Yang, H. Zhao, W. Yu. Low-Rank Modeling and its Applications in Image Analysis. ACM Computing Surveys, 47(2): 36, 2014. (http://arxiv.org/abs/1401.3409)

Could you kindly post it on your matrix factorization jungle website? I hope it will be helpful to some new comers.

Thanks,

Xiaowei

Thanks Xiaowei ! Here is the review that I will shortly add to the Advanced Matrix Factorization Jungle page. 

Low-rank Modeling and its Applications in Image Analysis by Xiaowei Zhou, Can Yang, Hongyu Zhao, Weichuan Yu. ACM Computing Surveys, 47(2): 36, 2014.
Low-rank modeling generally refers to a class of methods that solves problems by representing variables of interest as low-rank matrices. It has achieved great success in various fields including computer vision, data mining, signal processing, and bioinformatics. Recently, much progress has been made in theories, algorithms, and applications of low-rank modeling, such as exact low-rank matrix recovery via convex programming and matrix completion applied to collaborative filtering. These advances have brought more and more attention to this topic. In this article, we review the recent advances of low-rank modeling, the state-of-the-art algorithms, and the related applications in image analysis. We first give an overview of the concept of low-rank modeling and the challenging problems in this area. Then, we summarize the models and algorithms for low-rank matrix recovery and illustrate their advantages and limitations with numerical experiments. Next, we introduce a few applications of low-rank modeling in the context of image analysis. Finally, we conclude this article with some discussions.

From the paper:

In this paper, we have introduced the concept of low-rank modeling and reviewed some representative low-rank models, algorithms and applications in image analysis. For additional reading on theories, algorithms and applications, the readers are referred to online documents such as the Matrix Factorization Jungle and the Sparse and Low-rank Approximation Wiki, which are updated on a regular basis.
Yes !
I also note that in the Robust PCA comparison, GoDec does consistently better than the other solvers, which also happens to be the reason Cable and I used it in "It's CAI, Cable And Igor's Adventures in Matrix Factorization". Here is an example: CAI: A Glimpse of Lana and Robust PCA

 
 
More can be found here.
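
For the curious, here is a minimal sketch of a GoDec-style low-rank plus sparse split. It uses a naive truncated SVD rather than the bilateral random projections of the actual GoDec algorithm, and the rank/cardinality values are arbitrary illustration choices.

```python
# Minimal sketch of a GoDec-style decomposition X ~ L + S (naive truncated
# SVD in place of GoDec's bilateral random projections; parameters arbitrary).
import numpy as np

def godec(X, rank, card, n_iter=20):
    """Approximate X ~ L + S with rank(L) <= rank and roughly card nonzeros in S."""
    S = np.zeros_like(X)
    for _ in range(n_iter):
        # Low-rank update: best rank-r approximation of X - S
        U, s, Vt = np.linalg.svd(X - S, full_matrices=False)
        L = (U[:, :rank] * s[:rank]) @ Vt[:rank]
        # Sparse update: keep the card largest-magnitude entries of X - L
        R = X - L
        thresh = np.partition(np.abs(R).ravel(), -card)[-card]
        S = R * (np.abs(R) >= thresh)
    return L, S

# Example: a rank-2 "background" plus a few gross corruptions
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2)) @ rng.normal(size=(2, 80))
X[rng.integers(0, 100, 50), rng.integers(0, 80, 50)] += 10.0
L, S = godec(X, rank=2, card=50)
```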
 

Wednesday, May 20, 2015

Solving Random Quadratic Systems of Equations Is Nearly as Easy as Solving Linear Systems - implementation -

So phase retrieval can actually be fast and with near-optimal sample complexity ! Wow !
 

We consider the fundamental problem of solving quadratic systems of equations in $n$ variables, where $y_i = |\langle a_i, x \rangle|^2$, $i = 1, \ldots, m$, and $x \in \mathbb{R}^n$ is unknown. We propose a novel method, which starting with an initial guess computed by means of a spectral method, proceeds by minimizing a nonconvex functional as in the Wirtinger flow approach. There are several key distinguishing features, most notably, a distinct objective functional and novel update rules, which operate in an adaptive fashion and drop terms bearing too much influence on the search direction. These careful selection rules provide a tighter initial guess, better descent directions, and thus enhanced practical performance. On the theoretical side, we prove that for certain unstructured models of quadratic systems, our algorithms return the correct solution in linear time, i.e. in time proportional to reading the data $\{a_i\}$ and $\{y_i\}$, as soon as the ratio $m/n$ between the number of equations and unknowns exceeds a fixed numerical constant. We extend the theory to deal with noisy systems in which we only have $y_i \approx |\langle a_i, x \rangle|^2$ and prove that our algorithms achieve a statistical accuracy, which is nearly un-improvable. We complement our theoretical study with numerical examples showing that solving random quadratic systems is both computationally and statistically not much harder than solving linear systems of the same size, hence the title of this paper. For instance, we demonstrate empirically that the computational cost of our algorithm is about four times that of solving a least-squares problem of the same size.
The attendant code is here: http://web.stanford.edu/~yxchen/TWF/
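
As a toy illustration of the overall recipe (spectral initialization followed by gradient-style updates on a nonconvex loss), here is a small real-valued NumPy sketch. It deliberately omits the paper's truncation rules, so it is closer to plain Wirtinger flow than to the actual Truncated Wirtinger Flow code linked above; the step size and iteration count are arbitrary.

```python
# Toy real-valued sketch: spectral initialization + plain gradient updates for
# y_i = <a_i, x>^2 (the paper's truncation rules are omitted here).
import numpy as np

rng = np.random.default_rng(1)
n, m = 100, 800                       # unknowns and equations, m/n well above 1
x = rng.normal(size=n)                # ground-truth signal
A = rng.normal(size=(m, n))           # sensing vectors a_i as rows
y = (A @ x) ** 2                      # quadratic measurements y_i = <a_i, x>^2

# Spectral initialization: leading eigenvector of (1/m) sum_i y_i a_i a_i^T
Y = (A.T * y) @ A / m
w, V = np.linalg.eigh(Y)
z = V[:, -1] * np.sqrt(y.mean())
z0_norm_sq = np.linalg.norm(z) ** 2

# Gradient iterations on f(z) = (1/2m) sum_i ((a_i^T z)^2 - y_i)^2
mu = 0.1
for _ in range(500):
    Az = A @ z
    grad = A.T @ ((Az ** 2 - y) * Az) * (2 / m)
    z = z - (mu / z0_norm_sq) * grad

# Up to a global sign, z should now be close to x
err = min(np.linalg.norm(z - x), np.linalg.norm(z + x)) / np.linalg.norm(x)
print(f"relative error: {err:.2e}")
```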
 

Tuesday, May 19, 2015

Identifiability in Blind Deconvolution with Subspace or Sparsity Constraints

Here are some new sample complexity results for blind deconvolution, a certain kind of matrix factorization technique.


Identifiability in Blind Deconvolution with Subspace or Sparsity Constraints by Yanjun Li, Kiryung Lee, Yoram Bresler

Blind deconvolution (BD), the resolution of a signal and a filter given their convolution, arises in many applications. Without further constraints, BD is ill-posed. In practice, subspace or sparsity constraints have been imposed to reduce the search space, and have shown some empirical success. However, existing theoretical analysis on uniqueness in BD is rather limited. As an effort to address the still mysterious question, we derive sufficient conditions under which two vectors can be uniquely identified from their circular convolution, subject to subspace or sparsity constraints. These sufficient conditions provide the first algebraic sample complexities for BD. We first derive a sufficient condition that applies to almost all bases or frames. For blind deconvolution of vectors in $\mathbb{C}^n$, with two subspace constraints of dimensions $m_1$ and $m_2$, the required sample complexity is $n\geq m_1m_2$. Then we impose a sub-band structure on one basis, and derive a sufficient condition that involves a relaxed sample complexity $n\geq m_1+m_2-1$, which we show to be optimal. We present the extensions of these results to BD with sparsity constraints or mixed constraints, with the sparsity level replacing the subspace dimension. The cost for the unknown support in this case is an extra factor of 2 in the sample complexity.
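
To make the setting concrete, here is a tiny NumPy illustration of the measurement model: a circular convolution of two subspace-constrained vectors, together with the dimension counts from the sufficient conditions quoted in the abstract. The subspace dimensions are arbitrary toy values.

```python
# Small illustration of the blind deconvolution measurement model with
# subspace constraints (toy dimensions; purely illustrative).
import numpy as np

rng = np.random.default_rng(0)
n, m1, m2 = 64, 6, 8                  # ambient dimension and subspace dimensions

B = rng.normal(size=(n, m1))          # basis for the signal subspace
C = rng.normal(size=(n, m2))          # basis for the filter subspace
x = B @ rng.normal(size=m1)           # signal constrained to span(B)
h = C @ rng.normal(size=m2)           # filter constrained to span(C)

# Circular convolution via the FFT (the DFT diagonalizes circular convolution)
y = np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(h)))

# Sufficient conditions quoted above: n >= m1*m2 for generic subspaces, and the
# relaxed n >= m1 + m2 - 1 when one basis carries a sub-band structure.
print(n, ">=", m1 * m2, "and", n, ">=", m1 + m2 - 1)
```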
 

Image Credit: NASA/JPL-Caltech
This image was taken by Navcam: Left B (NAV_LEFT_B) onboard NASA's Mars rover Curiosity on Sol 987 (2015-05-17 08:39:24 UTC).
Full Resolution 
 

Tensor time: Adaptive Higher-order Spectral Estimators / Bayesian Sparse Tucker Models for Dimension Reduction and Tensor Completion

 
Adaptive Higher-order Spectral Estimators by David Gerard, Peter Hoff

Many applications involve estimation of a signal matrix from a noisy data matrix. In such cases, it has been observed that estimators that shrink or truncate the singular values of the data matrix perform well when the signal matrix has approximately low rank. In this article, we generalize this approach to the estimation of a tensor of parameters from noisy tensor data. We develop new classes of estimators that shrink or threshold the mode-specific singular values from the higher-order singular value decomposition. These classes of estimators are indexed by tuning parameters, which we adaptively choose from the data by minimizing Stein's unbiased risk estimate. In particular, this procedure provides a way to estimate the multilinear rank of the underlying signal tensor. Using simulation studies under a variety of conditions, we show that our estimators perform well when the mean tensor has approximately low multilinear rank, and perform competitively when the signal tensor does not have approximately low multilinear rank. We illustrate the use of these methods in an application to multivariate relational data.
Bayesian Sparse Tucker Models for Dimension Reduction and Tensor Completion by Qibin Zhao, Liqing Zhang, Andrzej Cichocki

Tucker decomposition is the cornerstone of modern machine learning on tensorial data analysis, which have attracted considerable attention for multiway feature extraction, compressive sensing, and tensor completion. The most challenging problem is related to determination of model complexity (i.e., multilinear rank), especially when noise and missing data are present. In addition, existing methods cannot take into account uncertainty information of latent factors, resulting in low generalization performance. To address these issues, we present a class of probabilistic generative Tucker models for tensor decomposition and completion with structural sparsity over multilinear latent space. To exploit structural sparse modeling, we introduce two group sparsity inducing priors by hierarchical representation of Laplace and Student-t distributions, which facilitates fully posterior inference. For model learning, we derived variational Bayesian inferences over all model (hyper)parameters, and developed efficient and scalable algorithms based on multilinear operations. Our methods can automatically adapt model complexity and infer an optimal multilinear rank by the principle of maximum lower bound of model evidence. Experimental results and comparisons on synthetic, chemometrics and neuroimaging data demonstrate remarkable performance of our models for recovering ground-truth of multilinear rank and missing entries. 
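
Relating to the first abstract, here is a compact NumPy sketch of mode-wise singular value shrinkage built on the higher-order SVD. The fixed soft-thresholds stand in for the SURE-tuned parameters of the paper, and the sequential mode-by-mode shrinkage is a simplification of the estimators it studies.

```python
# Sketch of mode-wise singular value soft-thresholding on a noisy tensor,
# in the spirit of higher-order spectral estimators (thresholds are ad hoc
# placeholders for the SURE-minimizing tuning parameters of the paper).
import numpy as np

def unfold(T, mode):
    """Mode-k unfolding of a tensor into a matrix."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def fold(M, mode, shape):
    """Inverse of unfold."""
    full = [shape[mode]] + [s for k, s in enumerate(shape) if k != mode]
    return np.moveaxis(M.reshape(full), 0, mode)

def mode_multiply(T, M, mode):
    """Multiply tensor T by matrix M along the given mode."""
    new_shape = T.shape[:mode] + (M.shape[0],) + T.shape[mode + 1:]
    return fold(M @ unfold(T, mode), mode, new_shape)

def hosvd_soft_threshold(Y, lambdas):
    """Soft-threshold the mode-specific singular values (one lambda per mode)."""
    X = Y.copy()
    for mode, lam in enumerate(lambdas):
        U, s, Vt = np.linalg.svd(unfold(X, mode), full_matrices=False)
        s = np.maximum(s - lam, 0.0)          # shrink the mode-k spectrum
        X = fold((U * s) @ Vt, mode, X.shape)
    return X

# Noisy observation of a tensor with low multilinear rank (2, 2, 2)
rng = np.random.default_rng(0)
signal = rng.normal(size=(2, 2, 2))
for k, d in enumerate((20, 25, 30)):
    signal = mode_multiply(signal, rng.normal(size=(d, 2)), k)
Y = signal + 0.1 * rng.normal(size=signal.shape)
Xhat = hosvd_soft_threshold(Y, lambdas=(1.0, 1.0, 1.0))
```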
 


 
 
