
Saturday, April 18, 2015

Saturday Morning Video: Towards a Learning Theory of Causation - Implementation -

Here is the video:

We pose causal inference as the problem of learning to classify probability distributions. In particular, we assume access to a collection {(S_i, l_i)}_{i=1}^n, where each S_i is a sample drawn from the probability distribution of X_i × Y_i, and l_i is a binary label indicating whether "X_i → Y_i" or "X_i ← Y_i". Given these data, we build a causal inference rule in two steps. First, we featurize each S_i using the kernel mean embedding associated with some characteristic kernel. Second, we train a binary classifier on such embeddings to distinguish between causal directions. We present generalization bounds showing the statistical consistency and learning rates of the proposed approach, and provide a simple implementation that achieves state-of-the-art cause-effect inference. Furthermore, we extend our ideas to infer causal relationships between more than two variables.
The code is here.
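For the curious, here is a minimal sketch of the two-step recipe described in the abstract, not the authors' implementation: each sample is featurized with a random Fourier feature approximation of the RBF kernel mean embedding, and an off-the-shelf classifier is trained on those features. The toy data generator and every parameter choice below are my own assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.RandomState(0)

# Random Fourier features approximating an RBF kernel on 2-d inputs (X, Y)
D = 200
W = rng.randn(D, 2)                      # frequencies ~ N(0, I)
b = rng.uniform(0, 2 * np.pi, D)

def mean_embedding(S):
    """Approximate kernel mean embedding of a sample S (n_points x 2)."""
    return np.sqrt(2.0 / D) * np.cos(S @ W.T + b).mean(axis=0)

def draw_sample(n=500, causal=True):
    """Toy additive-noise pair; causal=True means X -> Y, else Y -> X."""
    x = rng.randn(n)
    y = np.tanh(x) + 0.1 * rng.randn(n)
    S = np.column_stack([x, y] if causal else [y, x])
    return (S - S.mean(0)) / S.std(0)    # standardize each coordinate

labels = rng.rand(300) < 0.5
features = np.array([mean_embedding(draw_sample(causal=l)) for l in labels])

clf = LogisticRegression(max_iter=1000).fit(features, labels)
print("training accuracy:", clf.score(features, labels))
```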
 

Three biographies: Ken Case, Charles Johnson and Leo Breiman

Sometimes it is nice to get some context on certain things that happened in the past. Here are three biographies that I have read recently and which fit that bill.
 
 

Friday, April 17, 2015

The Glitter Telescopes



Dick just let me know of the following:

  Dear Igor,
You might want to post:

Landau, E. (2015). Glitter Cloud May Serve as Space Mirror.  http://www.jpl.nasa.gov/news/news.php?feature=4553&utm_source=iContact&utm_medium=email&utm_campaign=NASAJPL&utm_content=daily20150415-1

Yours, -Dick Gordon DickGordonCan@gmail.com
Good catch, thank you Dick !

The NPR piece below mentions computational imaging for reconstructing the image, but the papers on Grover Swartzlander's publication page (including the one below) do not seem to indicate much in the way of ℓ1 or similar regularization. It's just a question of time. Depending on how much they know about the glittering particles, it might be a compressive sensing or a compressive phase retrieval problem. Time will tell, but I love the connection with some of the technologies featured in the "These Technologies Do Not Exist" page.

Image restoration from a sequence of random masks by Xiaopeng Peng, Garreth J. Ruane, Alexandra B. Artusio-Glimpse, and Grover A. Swartzlander

We experimentally explored the reconstruction of the image of two point sources using a sequence of random aperture phase masks. The speckled intensity profiles were combined using an improved shift-and-add and multi-frame blind deconvolution to achieve a near diffraction limited image for broadband light (600-670 nm). Using a numerical model we also explored various algorithms in the presence of noise and phase aberration. © (2015) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
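The "shift-and-add" half of that pipeline is simple enough to sketch. Below is a toy version, my own simplification rather than the authors' improved variant: each speckle frame is re-centered on its brightest pixel before averaging, which partially undoes the random tilt introduced by the masks.

```python
import numpy as np

def shift_and_add(frames):
    """Basic shift-and-add: re-center each frame on its brightest pixel,
    then average. frames has shape (n_frames, H, W)."""
    n, H, W = frames.shape
    out = np.zeros((H, W))
    for f in frames:
        i, j = np.unravel_index(np.argmax(f), f.shape)
        out += np.roll(np.roll(f, H // 2 - i, axis=0), W // 2 - j, axis=1)
    return out / n

# Toy demo: a point source wandering around under random tip/tilt
rng = np.random.RandomState(0)
H = W = 64
frames = np.zeros((32, H, W))
for k in range(32):
    di, dj = rng.randint(-10, 11, size=2)
    frames[k, H // 2 + di, W // 2 + dj] = 1.0    # randomly shifted source
    frames[k] += 0.05 * rng.rand(H, W)           # background noise

recovered = shift_and_add(frames)
print("peak at:", np.unravel_index(np.argmax(recovered), recovered.shape))
```

The multi-frame blind deconvolution step that follows it in the paper is where the real work, and a possible opening for sparse regularization, lies.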



 
 

8 days of Nuit Blanche

 
Since the ClustrMaps counter re-started 8 days ago, I took a screen grab of the visits from the past 8 days to get a sense of where Nuit Blanche is being read. Every time, I am surprised by the diversity of locations. Wow.
 
 

Thesis: Turning Big Data into Small Data, Hardware Aware Approximate Clustering with Randomized SVD and Coresets, Tarik Adnan Moon



Organizing data into groups using unsupervised learning algorithms such as k-means clustering and GMM is one of the most widely used techniques in data exploration and data mining. As these clustering algorithms are iterative by nature, it is increasingly challenging to find clusters quickly for big datasets. The iterative nature of k-means makes it inherently difficult to optimize such algorithms for modern hardware, especially as pushing data through the memory hierarchy is the main bottleneck in modern systems. Therefore, performing on-the-fly unsupervised learning is particularly challenging.

In this thesis, we address this challenge by presenting an ensemble of algorithms that provide hardware-aware clustering, along with a road-map for hardware-aware machine learning algorithms. We move beyond simple yet aggressive parallelization, useful only for the embarrassingly parallel parts of the algorithms, by employing data reduction, re-factoring of the algorithm, and parallelization through the SIMD instructions of a general-purpose processor. We find that careful engineering employing the SIMD instructions available on the processor, together with hand-tuning, reduces response time by a factor of about 4. Further, by reducing both data dimensionality (via PCA) and the number of data points (via coreset-based sampling), we get a very good representative sample of the dataset.
This data reduction technique reduces data dimensionality and the number of data points, effectively reducing the cost of the k-means algorithm by reducing the number of iterations and the total amount of computation. Last but not least, we can save pre-computed data to compute cluster variations on the fly. Compared to the state of the art using k-means++, our approach offers comparable accuracy while running about 14 times faster, by moving less data fewer times through the memory hierarchy.
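As a rough illustration of the pipeline described above, here is a sketch that reduces dimensionality with a randomized SVD and then clusters a small subsample. The uniform subsample below stands in for the thesis' coreset construction, which weights points so as to preserve the k-means cost; everything else is standard scikit-learn.

```python
import numpy as np
from sklearn.decomposition import TruncatedSVD   # randomized SVD under the hood
from sklearn.cluster import KMeans

rng = np.random.RandomState(0)
X = rng.randn(100_000, 50)                       # stand-in for a big dataset

# Step 1: reduce dimensionality with a randomized SVD / PCA
svd = TruncatedSVD(n_components=10, algorithm="randomized", random_state=0)
X_low = svd.fit_transform(X)

# Step 2: reduce the number of points (a coreset would weight the points;
# a uniform subsample is used here for brevity)
idx = rng.choice(len(X_low), size=5_000, replace=False)

# Step 3: run k-means on the small reduced dataset, then assign all points
km = KMeans(n_clusters=8, n_init=10, random_state=0).fit(X_low[idx])
labels = km.predict(X_low)
print("cluster sizes:", np.bincount(labels))
```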
 

Thursday, April 16, 2015

Compressed Sensing Recovery via Nonconvex Shrinkage Penalties

Hat tip to Thomas Arildsen for pointing out this preprint:


Compressed Sensing Recovery via Nonconvex Shrinkage Penalties by Joseph Woodworth, Rick Chartrand

The ℓ0 minimization of compressed sensing is often relaxed to ℓ1, which yields easy computation using the shrinkage mapping known as soft thresholding, and can be shown to recover the original solution under certain hypotheses. Recent work has derived a general class of shrinkages and associated nonconvex penalties that better approximate the original ℓ0 penalty and empirically can recover the original solution from fewer measurements. We specifically examine p-shrinkage and firm thresholding. In this work, we prove that given data and a measurement matrix from a broad class of matrices, one can choose parameters for these classes of shrinkages to guarantee exact recovery of the sparsest solution. We further prove convergence of the algorithm iterative p-shrinkage (IPS) for solving one such relaxed problem.
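For readers who have not met these mappings before, here is a small sketch of the three shrinkages discussed, written from the standard definitions rather than from the authors' code; the p-shrinkage formula follows Chartrand's definition and the parameter names are mine.

```python
import numpy as np

def soft(t, lam):
    """Soft thresholding: the proximal mapping of lam * ||.||_1."""
    return np.sign(t) * np.maximum(np.abs(t) - lam, 0.0)

def p_shrink(t, lam, p=0.5):
    """p-shrinkage: reduces to soft thresholding at p = 1 and
    approaches hard thresholding as p -> -inf."""
    mag = np.abs(t)
    with np.errstate(divide="ignore", invalid="ignore"):
        shrunk = np.maximum(mag - lam**(2.0 - p) * mag**(p - 1.0), 0.0)
    return np.sign(t) * np.where(mag > 0, shrunk, 0.0)

def firm(t, lam, mu):
    """Firm thresholding: zero below lam, identity above mu (mu > lam),
    linear interpolation in between."""
    mag = np.abs(t)
    mid = np.sign(t) * mu * (mag - lam) / (mu - lam)
    return np.where(mag <= lam, 0.0, np.where(mag > mu, t, mid))

t = np.linspace(-3, 3, 7)
print(soft(t, 1.0))
print(p_shrink(t, 1.0, p=0.5))
print(firm(t, 1.0, 2.0))
```

Plugging such a mapping into the usual iterative shrinkage loop gives algorithms like the IPS scheme whose convergence the paper analyzes.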
 
 

CSJob: 18 Funded PhD opportunities (ESRs) in Machine Sensing (Machine Learning, Sparse Representations and Compressed Sensing)

Mark Plumbley just sent me the following:

Dear Igor,

We think that these PhD opportunities might be of interest to people on Nuit Blanche. They are part of a new "MacSeNet" EU-funded Marie Sklodowska-Curie Innovative Training Network in "Machine Sensing" (including Machine Learning, Sparse Representations and Compressed Sensing). They also come with a competitive salary compared to regular PhD scholarships, so might also be of interest to young researchers who had not otherwise thought of taking a PhD.

Best wishes,

Mark Plumbley

----

18 Early Stage Researchers (ESRs) in Machine Sensing (Machine Learning, Sparse Representations and Compressed Sensing)

EU H2020 MSCA Innovative Training Network (ITN) "MacSeNet: Machine Sensing Training Network" (H2020-MSCA-ITN-2014 642685)

Project Website: http://macsenet.eu/

Applications are invited to a number of Early Stage Researcher (ESR) positions as part of the new EU-funded Marie Sklodowska-Curie Actions (MSCA) Innovative Training Network (ITN) "MacSeNet: Machine Sensing Training Network".

The MacSeNet ITN (http://macsenet.eu/) brings together leading academic and industry groups to train a new generation of creative, entrepreneurial and innovative early stage researchers (ESRs) in the research area of measurement and estimation of signals using knowledge or data about the underlying structure. With its combination of ideas from machine learning and sensing, we refer to this research topic as "Machine Sensing". We will apply these new methods to problems such as: advanced brain imaging; inverse imaging problems; audio and music signals; and non-traditional signals such as signals on graphs.

Early Stage Researcher (ESR) positions allow the researcher to work towards a PhD, for a duration of 36 months. ESRs should be within four years of the diploma granting them access to doctorate studies at the time of recruitment, and must not have spent more than 12 months in the host country in the 3 years prior to starting. MSCA ESRs are paid a competitive salary which is adjusted for their host country.

Each of the ESR posts being recruited across MacSeNet has its own application process and closing date. The full list of Early Stage Researcher (ESR) Positions is as follows:

* ESR1: Robust unsupervised learning (INRIA/CNRS/ENS Paris, France)
* ESR2: Non-linear adaptive sensing/learning (INRIA/CNRS/ENS Paris, France)
* ESR3: Beyond sparse representations: efficient structured representations (University of Edinburgh, UK)
* ESR4: Next generation compressed sensing techniques for quantitative MRI (University of Edinburgh, UK)
* ESR5: Next generation compressed sensing techniques for a fast and data-driven reconstruction of multi-contrast MRI (Technical University Munich, Germany)
* ESR6: Next generation compressed sensing techniques for the fast and dynamic MRI (Technical University Munich, Germany)
* ESR7: Blind source separation of functional dynamic MRI signals via distributed dictionary learning (University of Athens/Computer Technology Institute, Athens, Greece)
* ESR8: Functional neuroimaging data characterisation via tensor representations (University of Athens/Computer Technology Institute, Athens, Greece)
* ESR9: Phase imaging via sparse coding in the complex domain (Instituto de Telecomunicações, Portugal)
* ESR10: Patch-based, non-local, and dictionary-based methods for blind image deblurring (Instituto de Telecomunicações, Portugal)
* ESR11: Sparse coding in the complex domain for phase retrieval and lensless coherent diffractive imaging (Tampere University of Technology, Finland)
* ESR12: Non-local HOSVD methods for denoising and super-resolution imaging (Noiseless Imaging, Finland)
* ESR13: Audio Restoration and Inpainting (University of Surrey, UK)
* ESR14: Sound Scene Analysis (University of Surrey, UK)
* ESR15: Music source separation beyond sparse decomposition (Fraunhofer IDMT, Germany)
* ESR16**: Sparse models and algorithms for data on large graphs (EPFL, Switzerland)
* ESR17**: Towards efficient processing of 4D point clouds (EPFL, Switzerland)
* ESR18**: Big data analysis of time series of origin-destination (OD) matrices (VisioSafe, Switzerland)

** - ESRs 16, 17 and 18 will be funded by Swiss national funding

For more details of all ESR positions, and information on how to apply, see http://macsenet.eu/#1

--
Prof Mark D Plumbley
Professor of Signal Processing
Centre for Vision, Speech and Signal Processing (CVSSP)
University of Surrey
Guildford, Surrey, GU2 7XH, UK
Email: m.plumbley@surrey.ac.uk

Thanks Mark !
 
 

Wednesday, April 15, 2015

Paris Machine Learning Meetup #8, Season 2: Deep Learning and more...

 
This is where the meetup will be streamed, starting at around 7:15 PM Paris time.


The theme of today's meetup will be (mostly) about Deep Learning. It will be held in conjunction with the Deep Learning Paris meetup. We should also have the Kiev Deep Learning Meetup as a guest audience. All the presentation slides should be available below by the time the meetup starts. Our host and sponsor in Paris will be Criteo. Here is the tentative schedule/program, which starts at 19:15 Paris time.


and the presentation slides:

+ Yoshua Bengio, Title: Deep Learning Theory (remote from the London Machine Learning meetup)

Although neural networks have long been considered lacking in theory and much remains to be done, theoretical evidence is mounting and will be discussed, supporting distributed representations, depth of representation, the non-convexity of the training objective, and the probabilistic interpretation of learning algorithms (especially of the auto-encoder type, which previously lacked one). The talk will focus on the intuitions behind these theoretical results.

+ Sander Dieleman and Ira Korshunova, Ghent University

Title: Classifying plankton with deep neural networks by the Deep Sea team from Reservoir Lab
Deep learning has become a very popular approach for solving computer vision problems in recent years. In this talk we'll demonstrate how this approach can be applied in practice. We'll show how our team of 7 built a model for the automated classification of plankton based on convolutional neural networks. Using this model, we placed 1st in the National Data Science Bowl competition on Kaggle.
+ Gabriella Contardo, LIP6, UPMC 


Learning to build representations from partial information: Application to cold-start recommendation
Most of the successful machine learning algorithms rely on data representation, i.e. a way to disentangle and extract useful information from data that will help the model in its objective task. Classical approaches build representations based on fully observed data. But in many cases, one wants to build representations "on the fly", based on partially observed information. As an example, representations of users can be learned by progressively gathering information about their profiles. This paper presents an inductive representation-based model to tackle the twofold, more general problem of (i) selecting the right information to collect for building relevant representations, and (ii) updating these representations based on new incoming information. It is developed in this paper to design static interviews for the cold-start collaborative filtering problem, but it can also be used to transition smoothly to the warm context where all information has been gathered.

+ Guillaume Wenzek
Sentiment Analysis With Recursive Neural Tensor Network / Analyse de sentiment à l'aide de réseaux de neurones récursifs
Sentiment analysis is one of the hardest NLP (Natural Language Processing) tasks, due to complex linguistic structures such as negation or double negation. Socher et al. introduced a method that combines a classic NLP tool, a syntactic parser, with a special kind of neural network. We will review this method and introduce a few improvements in order to train on a corpus with fewer annotations than the Stanford Sentiment Treebank used in the paper.
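For the curious, the heart of the Socher et al. model (the Recursive Neural Tensor Network) is a single composition function applied bottom-up along the parse tree. Here is a small sketch of that function; the dimensions, initialization and toy vectors are placeholders of mine.

```python
import numpy as np

rng = np.random.RandomState(0)
d = 10                                  # word/phrase vector dimension

# RNTN parameters: one 2d x 2d tensor slice per output dimension,
# plus the standard recursive-NN linear term and a 5-class softmax.
V = 0.01 * rng.randn(d, 2 * d, 2 * d)
W = 0.01 * rng.randn(d, 2 * d)
Ws = 0.01 * rng.randn(5, d)

def compose(a, b):
    """Combine two child phrase vectors into a parent vector:
    p = tanh(x^T V x + W x) with x = [a; b]."""
    x = np.concatenate([a, b])
    bilinear = np.array([x @ V[i] @ x for i in range(d)])
    return np.tanh(bilinear + W @ x)

def sentiment(p):
    """Softmax over the 5 sentiment classes for a phrase vector."""
    z = Ws @ p
    e = np.exp(z - z.max())
    return e / e.sum()

# e.g. the phrase "not good": compose the two word vectors, then classify
v_not, v_good = rng.randn(d), rng.randn(d)
print("class probabilities:", np.round(sentiment(compose(v_not, v_good)), 3))
```

The syntactic parser supplies the tree that decides which vectors get composed with which, which is exactly why negation and double negation become tractable.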


 
 

Tuesday, April 14, 2015

GPU Accelerated Randomized Singular Value Decomposition and Its Application in Image Compression

Unless I am mistaken, this is the first time I have seen a RandNLA operation performed on a GPU. It will not be the last.



GPU Accelerated Randomized Singular Value Decomposition and Its Application in Image Compression by Hao Ji and Yaohang Li

In this paper, we present a GPU-accelerated implementation of the randomized Singular Value Decomposition (SVD) algorithm on a large matrix to rapidly approximate the top-k dominant singular values and the corresponding singular vectors. The fundamental idea of randomized SVD is to condense a large matrix into a small dense matrix by random sampling while keeping the important information. Performing traditional deterministic SVD on this small dense matrix then reveals an approximation of the top-k dominant singular values/singular vectors. The randomized SVD algorithm is suitable for the GPU architecture; however, our study finds that the key bottleneck lies in the SVD computation of the small matrix. Our solution is to modify the randomized SVD algorithm by applying SVD to a derived small square matrix instead, as well as using a hybrid GPU-CPU scheme. Our GPU-accelerated randomized SVD implementation is around 6~7 times faster than the corresponding CPU version. Our experimental results demonstrate that the GPU-accelerated randomized SVD implementation can be effectively used in image compression.


From the paper:


The elapsed time spent on each primary computational component in randomized SVD is shown in Figure 2 for a 4,096 x 4,096 random matrix where k is 128 and p is 3. Multiplication between A and a "tall-and-skinny" or "short-and-wide" matrix can be carried out efficiently on the GPU's SIMT architecture, and hence the computational time spent generating the matrix Ω and performing matrix-matrix multiplications shrinks to nearly negligible. Nevertheless, deterministic SVD, particularly when the target matrix is small, has difficulty fully taking advantage of the GPU architecture, due to the series of sequential Householder transformations that need to be applied. As a result, deterministic SVD becomes the main bottleneck, and thus this GPU implementation achieves a speedup of only 1.65 over the CPU.
I wonder what happens when the data becomes larger than 4,096 x 4,096, i.e., whether these ratios still hold.
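For readers who want to experiment with that question, here is a minimal NumPy sketch of the randomized SVD in the style of Halko, Martinsson and Tropp, including the small-square-matrix variant the paper exploits: rather than running an SVD on the (k+p) x n matrix B = QᵀA, one can eigendecompose the tiny (k+p) x (k+p) matrix BBᵀ. The parameter names are mine, and details may differ from the paper's implementation.

```python
import numpy as np

def randomized_svd(A, k, p=3, seed=0):
    """Approximate the top-k singular triplets of A; p is oversampling."""
    rng = np.random.RandomState(seed)
    m, n = A.shape
    Omega = rng.randn(n, k + p)          # random Gaussian test matrix
    Q, _ = np.linalg.qr(A @ Omega)       # orthonormal basis for the range
    B = Q.T @ A                          # small (k+p) x n matrix

    # Square-matrix trick: eigendecompose the (k+p) x (k+p) Gram matrix
    # B B^T instead of running an SVD on B directly.
    S2, U = np.linalg.eigh(B @ B.T)
    order = np.argsort(S2)[::-1][:k]     # eigh returns ascending eigenvalues
    sigma = np.sqrt(np.maximum(S2[order], 0.0))
    U = U[:, order]
    Vt = (U.T @ B) / sigma[:, None]
    return Q @ U, sigma, Vt

A = np.random.RandomState(1).randn(1000, 600)
U, s, Vt = randomized_svd(A, k=10)
print("randomized:", np.round(s[:5], 2))
print("exact:     ", np.round(np.linalg.svd(A, compute_uv=False)[:5], 2))
```

Timing this for growing matrix sizes, on CPU and GPU, would be one way to check whether the 6~7x speedup holds past 4,096 x 4,096.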
 

SOOT l1/l2 norm ratio sparse blind deconvolution - implementation -

Laurent just let me know of the release of an implementation for blind deconvolution:

Dear Igor

You have been kind enough to publicize the following paper (and you contributed to the build-up as you are indeed in the acknowledgment section featured in http://nuit-blanche.blogspot.fr/2014/07/euclid-in-taxicab-sparse-blind.html )

Euclid in a Taxicab: Sparse Blind Deconvolution with Smoothed ℓ1/ℓ2 Regularization, http://dx.doi.org/10.1109/LSP.2014.2362861, http://arxiv.org/abs/1407.5465

The ℓ1/ℓ2 ratio regularization function has shown good performance for retrieving sparse signals in a number of recent works, in the context of blind deconvolution. Indeed, it benefits from a scale invariance property much desirable in the blind context. However, the ℓ1/ℓ2 function raises some difficulties when solving the nonconvex and nonsmooth minimization problems resulting from the use of such a penalty term in current restoration methods. In this paper, we propose a new penalty based on a smooth approximation to the ℓ1/ℓ2 function. In addition, we develop a proximal-based algorithm to solve variational problems involving this function and we derive theoretical convergence results. We demonstrate the effectiveness of our method through a comparison with a recent alternating optimization strategy dealing with the exact ℓ1/ℓ2 term, on an application to seismic data blind deconvolution.

After a little delay, the code is made available at Matlab Central:

http://www.mathworks.com/matlabcentral/fileexchange/50481-soot-l1-l2-norm-ratio-sparse-blind-deconvolution

but also here:

http://lc.cx/soot

just a few days before its presentation at ICASSP 2015 in Brisbane, Australia

https://www2.securecms.com/ICASSP2015/Papers/ViewPapers.asp?PaperNum=4910

Thank you

Laurent

Laurent Duval
IFP Energies nouvelles - Direction Mécatronique et Numérique
http://www.laurent-duval.eu
Thanks Laurent !
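For readers who want a feel for the penalty before diving into the Matlab package, here is my reading of the smoothed ℓ1/ℓ2 function in a few lines of Python; the exact form and parameter choices should be checked against the released code.

```python
import numpy as np

def smoothed_l1(x, alpha):
    """Smooth approximation of ||x||_1 (hyperbolic smoothing)."""
    return np.sum(np.sqrt(x**2 + alpha**2) - alpha)

def smoothed_l2(x, eta):
    """Smooth approximation of ||x||_2, bounded away from zero."""
    return np.sqrt(np.sum(x**2) + eta**2)

def soot_penalty(x, alpha=1e-3, beta=1e-3, eta=1e-3):
    """Smoothed l1/l2 penalty: log((l1_alpha(x) + beta) / l2_eta(x))."""
    return np.log((smoothed_l1(x, alpha) + beta) / smoothed_l2(x, eta))

# The l1/l2 ratio is scale invariant, which is what makes it attractive
# in the blind setting: rescaling the signal barely changes the penalty.
x = np.zeros(100)
x[[3, 40, 77]] = [1.0, -2.0, 0.5]
print(soot_penalty(x), soot_penalty(10 * x))
```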
 
