Saturday, February 13, 2016

Upcoming Paris Machine Learning Meetup on Fair and Ethical Algorithms (Wednesday, February 17th, 2016)



How do we go about designing fair and ethical algorithms? It looks like the subject is gaining some traction. Here is an AMA on Reddit by Bart Selman, Moshe Vardi and Wendell Wallach on the societal impact of AI. Coincidentally, the 2nd International Workshop on the Future of AI is taking place right now in Phoenix, AZ (see the lineup at "What is the future of AI? And what should we be doing about it now?").

This week, in Paris, we'll hold the third meetup of the month (here are links to the first and the second). This time the topic will be Fair and Ethical Algorithms. Dojocrea is kindly hosting us (we are still looking for a sponsor for the pizzas and beer). The meetup will be streamed (stay tuned for details). Here is our lineup:

Suresh Venkatasubramanian, An Axiomatic Treatment of Fairness
We propose a mathematical framework for reasoning about fairness in machine learning. 
Here is an Al Jazeera show on the general subject of fair algorithms (with Suresh in it). Suresh comes from the world of TCS (theoretical computer science), so I am sure his presentation will provide an interesting take on fairness.
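As a concrete, and entirely illustrative, example of the kind of quantity such frameworks reason about, here is a toy computation of the disparate-impact ratio. It is a standard fairness measure, not necessarily the axiomatic treatment Suresh will present.

```python
# Illustrative only: the "80% rule" disparate-impact ratio, a common fairness
# measure. This is NOT necessarily the framework presented at the meetup.
import numpy as np

def disparate_impact(y_pred, group):
    """y_pred: binary decisions (1 = favourable outcome).
    group:  binary protected attribute (1 = protected group).
    Returns P(favourable | group = 1) / P(favourable | group = 0)."""
    y_pred, group = np.asarray(y_pred), np.asarray(group)
    p_protected = y_pred[group == 1].mean()
    p_reference = y_pred[group == 0].mean()
    return p_protected / p_reference

# Example: a ratio well below 0.8 is usually flagged as potential disparate impact.
print(disparate_impact([1, 0, 0, 1, 1, 1], [1, 1, 1, 0, 0, 0]))  # -> 0.333...
```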

Michael Benesty, Application of advanced NLP techniques to French legal decisions: demonstration of a significant bias of some French Court of Appeal judges in decisions about the right of asylum.

The presentation will start with a brief overview of the French legal system and of the legal decisions that have been analyzed. The main part will be dedicated to Word2vec and to a custom multi-input, multi-task learning architecture based on bidirectional GRUs and classical deep learning, used to extract information from public law decisions. In the last part, some basic descriptive statistics will be used to analyze the extracted information and reveal the apparent bias of some French Court of Appeal judges.
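For readers unfamiliar with this kind of architecture, here is a minimal sketch of what a bidirectional-GRU, multi-task extractor can look like in PyTorch. Every detail (layer sizes, the two task heads, the label sets) is an assumption for illustration; this is not Michael's actual model.

```python
# Hypothetical sketch: a shared bidirectional GRU encoder feeding two task heads,
# e.g. one tagging tokens of interest and one classifying the decision outcome.
import torch
import torch.nn as nn

class BiGRUMultiTask(nn.Module):
    def __init__(self, vocab_size, emb_dim=100, hidden=128, n_tags=5, n_outcomes=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.gru = nn.GRU(emb_dim, hidden, batch_first=True, bidirectional=True)
        self.tag_head = nn.Linear(2 * hidden, n_tags)          # per-token labels
        self.outcome_head = nn.Linear(2 * hidden, n_outcomes)  # per-document label

    def forward(self, token_ids):
        h, _ = self.gru(self.embed(token_ids))        # (batch, seq, 2*hidden)
        tag_logits = self.tag_head(h)                 # token-level task
        doc_logits = self.outcome_head(h.mean(dim=1)) # document-level task
        return tag_logits, doc_logits

# Training would sum a token-level and a document-level cross-entropy loss.
```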
Michel Blancard, CMAP / EtaLab, A sunny day in the CDO team of the French gov

The OpenSolarMap.org project aims to create a roof orientation map of the French territory. I will present how we crowdsourced a training dataset and how we used deep learning on it.
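As a purely hypothetical sketch of what such a pipeline can look like, here is a small PyTorch CNN mapping an aerial roof crop to an orientation class. The class names, input size and layer sizes are assumptions for illustration, not the OpenSolarMap code.

```python
# Hypothetical roof-orientation classifier trained on crowdsourced labels.
import torch
import torch.nn as nn

class RoofOrientationNet(nn.Module):
    def __init__(self, n_classes=4):   # e.g. north-south, east-west, flat, other (assumed classes)
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 16 * 16, n_classes)  # for 64x64 input crops

    def forward(self, x):               # x: (batch, 3, 64, 64) aerial image crops
        h = self.features(x)
        return self.classifier(h.flatten(1))

# Trained with cross-entropy on the crowdsourced orientation labels.
```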
Pierre Saurel, "L'éthique des algorithmes" (The Ethics of Algorithms)
Let me finally note that one of the Phoenix workshop presenters is Mark Riedl; he will be a speaker at the Paris Machine Learning meetup in June. His current paper is titled "Using Stories to Teach Human Values to Artificial Agents".


Image Credit: NASA/JPL-Caltech
This image was taken by Front Hazcam: Right B (FHAZ_RIGHT_B) onboard NASA's Mars rover Curiosity on Sol 1251 (2016-02-12 14:12:12 UTC).

Join the CompressiveSensing subreddit or the Google+ Community or the Facebook page and post there !
Liked this entry ? subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on Linkedin.

Friday, February 12, 2016

Random Feature Maps via a Layered Random Projection (LaRP) Framework for Object Classification

From the paper:
"...While these state-of-the-art nonlinear random projection methods have been demonstrated to provide significantly improved accuracy and reduced computational costs on large- scale real-world datasets, they have all primarily focused on embedding nonlinear feature spaces into low dimensional spaces to create nonlinear kernels. As such, alternative strategies for achieving low complexity, nonlinear random projection beyond such kernel methods have not been well-explored, and can have strong potential for improved accuracy and reduced complexity. In this work, we propose a novel method for modelling nonlinear kernels using a Layered Random Projection (LaRP) framework. Contrary to existing kernel methods, LaRP models nonlinear kernels as alternating layers of linear kernel ensembles and nonlinearities. This strategy allows the proposed LaRP framework to overcome the curse of dimensionality while producing more compact and discriminative random features...."
Interesting choice of nonlinearity; as it happens, it is the same one we also used. Great work!

Random Feature Maps via a Layered Random Projection (LaRP) Framework for Object Classification by A. G. Chung, M. J. Shafiee, A. Wong

The approximation of nonlinear kernels via linear feature maps has recently gained interest due to their applications in reducing the training and testing time of kernel-based learning algorithms. Current random projection methods avoid the curse of dimensionality by embedding the nonlinear feature space into a low dimensional Euclidean space to create nonlinear kernels. We introduce a Layered Random Projection (LaRP) framework, where we model the linear kernels and nonlinearity separately for increased training efficiency. The proposed LaRP framework was assessed using the MNIST hand-written digits database and the COIL-100 object database, and showed notable improvement in object classification performance relative to other state-of-the-art random projection methods.
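For intuition, here is a minimal NumPy sketch of the layered random projection idea as I read the quoted passage: fixed random linear layers alternating with an elementwise nonlinearity, with only a linear classifier trained on top. The tanh nonlinearity and the layer sizes are placeholders, not the authors' settings or code.

```python
# Minimal sketch of alternating random linear layers and nonlinearities.
import numpy as np
from sklearn.linear_model import LogisticRegression

def make_larp(d_in, layer_sizes=(256, 128), nonlinearity=np.tanh, seed=0):
    """Build a fixed (non-trained) feature map: X -> nonlinear random features."""
    rng = np.random.default_rng(seed)
    weights, d = [], d_in
    for size in layer_sizes:
        weights.append(rng.standard_normal((d, size)) / np.sqrt(d))
        d = size
    def transform(X):
        H = np.asarray(X, dtype=float)
        for W in weights:
            H = nonlinearity(H @ W)   # random linear layer followed by a nonlinearity
        return H
    return transform

# Usage sketch: the same fixed map is applied to train and test data,
# and only the final linear classifier is fit.
# larp = make_larp(d_in=784)
# clf = LogisticRegression(max_iter=1000).fit(larp(X_train), y_train)
# print(clf.score(larp(X_test), y_test))
```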
 
 
 
Join the CompressiveSensing subreddit or the Google+ Community or the Facebook page and post there !
Liked this entry ? subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on Linkedin.

BinaryNet: Training Deep Neural Networks with Weights and Activations Constrained to +1 or -1 - implementation -

Very interesting in terms of eventual hardware implementations, and in line with what we seem to know about the redundancy of usual architectures:


BinaryNet: Training Deep Neural Networks with Weights and Activations Constrained to +1 or -1 by Matthieu Courbariaux, Yoshua Bengio

We introduce BinaryNet, a method which trains DNNs with binary weights and activations when computing parameters' gradient. We show that it is possible to train a Multi Layer Perceptron (MLP) on MNIST and ConvNets on CIFAR-10 and SVHN with BinaryNet and achieve nearly state-of-the-art results. At run-time, BinaryNet drastically reduces memory usage and replaces most multiplications by 1-bit exclusive-not-or (XNOR) operations, which might have a big impact on both general-purpose and dedicated Deep Learning hardware. We wrote a binary matrix multiplication GPU kernel with which it is possible to run our MNIST MLP 7 times faster than with an unoptimized GPU kernel, without suffering any loss in classification accuracy. The code for BinaryNet is available.
 The implementation is here: https://github.com/MatthieuCourbariaux/BinaryNet/tree/master/Train-time
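For readers curious about the core trick, here is a minimal PyTorch sketch of sign binarization with a straight-through estimator, which is my reading of the training scheme described in the abstract; the authors' actual code is at the link above, and the details below (clipping range, handling of zero) are assumptions for illustration.

```python
# Sketch: binarize weights/activations to +1/-1 in the forward pass, keep
# real-valued weights for the optimizer, and pass gradients straight through.
import torch

class BinarizeSTE(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        return torch.sign(x)                       # +1 / -1 (exact zeros map to 0 here)

    @staticmethod
    def backward(ctx, grad_out):
        (x,) = ctx.saved_tensors
        return grad_out * (x.abs() <= 1).float()   # pass gradient only where |x| <= 1

def binary_linear(x, real_weight):
    # Binarize both activations and weights; real_weight keeps full precision
    # so the optimizer can accumulate small updates between binarizations.
    return BinarizeSTE.apply(x) @ BinarizeSTE.apply(real_weight).t()
```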

 
Join the CompressiveSensing subreddit or the Google+ Community or the Facebook page and post there !
Liked this entry ? subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on Linkedin.

Thursday, February 11, 2016

These gravitational waves represent the vaporization of three suns

 


 
 
Join the CompressiveSensing subreddit or the Google+ Community or the Facebook page and post there !
Liked this entry ? subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on Linkedin.

Again tonight: Paris Machine Learning Meetup #7 Season 3: Neural Networks for Predictive Maintenance, Machine Learning in Quantitative Finance, Introduction to scikit-learn



This second meetup of February (videos and slides of yesterday's meetup are here) will be hosted and sponsored by Quantmetry at Village by CA. The live stream can be watched in the video above. Here are the program and the attendant slides:
* Franck Bardol and Igor Carron, introduction
We are currently running an experimental project on predictive maintenance for rolling stock in the railway sector. Several proof-of-concept phases were carried out to identify useful data. We will present a prospective study with several approaches using neural networks to detect equipment failures: methodology, preliminary results and perspectives.
• Delaney Granizo-Mackenzie, Quantopian, "Machine Learning in Quantitative Finance"

We will cover how machine learning techniques might fit into quantitative finance. This includes techniques to rank assets and construct spread-based portfolios, as well as which types of machine learning applications do not work. (A toy sketch of the ranking idea follows the program below.)
Scikit-learn is a popular open-source library for machine learning in Python. This presentation will introduce the project and give a demo of how to use it in conjunction with other tools from the PyData ecosystem, such as NumPy, pandas and Jupyter notebooks. Finally, we will review recent and ongoing developments if time allows. (A small generic usage example follows the program below.)
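As a toy illustration of the asset-ranking idea mentioned in the quantitative finance talk (not Quantopian material), here is one common way to turn model scores into a dollar-neutral long/short portfolio; the quantile and weighting scheme are assumptions.

```python
# Hypothetical ranking portfolio: long the top quantile, short the bottom quantile.
import pandas as pd

def rank_portfolio(scores: pd.Series, quantile: float = 0.2) -> pd.Series:
    """scores: model-predicted expected returns indexed by asset ticker.
    Returns dollar-neutral weights: positive on the top quantile, negative on the bottom."""
    ranks = scores.rank(pct=True)
    longs = ranks >= 1 - quantile
    shorts = ranks <= quantile
    weights = pd.Series(0.0, index=scores.index)
    weights[longs] = 0.5 / longs.sum()     # long book sums to +0.5
    weights[shorts] = -0.5 / shorts.sum()  # short book sums to -0.5
    return weights

# Example with made-up scores:
# weights = rank_portfolio(pd.Series({"A": 0.03, "B": -0.01, "C": 0.02, "D": 0.0, "E": -0.02}))
```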
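And here is a small, generic example of the scikit-learn + pandas workflow the last talk will demo; it is ordinary library usage on a toy dataset, not the speaker's slides.

```python
# Generic scikit-learn workflow: pipeline, train/test split, cross-validation.
from sklearn.model_selection import train_test_split, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.datasets import load_iris

data = load_iris(as_frame=True)            # features come back as a pandas DataFrame
X, y = data.data, data.target

model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=200))
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model.fit(X_train, y_train)
print("test accuracy:", model.score(X_test, y_test))
print("5-fold CV accuracy:", cross_val_score(model, X, y, cv=5).mean())
```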



Join the CompressiveSensing subreddit or the Google+ Community or the Facebook page and post there !
Liked this entry ? subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on Linkedin.

Wednesday, February 10, 2016

Tonight: Paris Machine Learning Meetup #6 Season 3: ASTEC #NecMergitur, Beauty and Danger of Matrix Completion, E-commerce and DL, Topic Modeling on Twitter Streams, and Cross-Lingual Systems

This February, we will have four meetups (on the 10th, 11th, 17th and 22nd)! Maybe it's a sign of the times, or maybe it's because there are 29 days this month, who knows. Let us note that we now have 3,200 members, plus more than 1,000 members in our LinkedIn group. Today, we will have the first of two meetups this week.



 

We will be hosted by Maltem Consulting Group, who are also sponsoring the networking event afterwards. For this meetup we will have the following presentations (slides will be linked here before the actual meetup).

First, a short presentation of a project from the #NecMergitur hackathon:

* Pitch: Manga Zossou, Projet AZTEC: audio sensors for threat detection, at 11 minutes into the video. (in French)
and then:

* Franck Bardol and Igor Carron, introduction.

• Julie Josse, Beauty and danger of matrix completion methods: unveiling a black box's subtleties for better decisions, at 1h14 into the video. (in French)

• Andrei Yigal Lopatenko, Head of Search Quality @ WalmartLabs,  (remote from SF) What problems of ecommerce can deep learning solve? at 52 minutes in the video. (in English)
A short overview of e-commerce problems that can be solved with deep learning methods, with a technical dive into image similarity as a product recommendation problem. (An illustrative sketch follows the lineup below.)

• Alex Perrier (remote from Boston), at 24 minutes into the video. (in French)
Topic modeling with LSA, LDA and STM applied to streams of Twitter followers.
Topic modeling: LSA, LDA, STM with code in Python and R; text mining; how to determine the number of topics; how to visualize the topics. (A minimal LDA example follows the lineup below.)

• Jean-Marc Marty, Proxem, The Quest for Cross-Lingual Systems, at 1h57 into the video. (in French)
At Proxem, our clients ask us to extract information from e-mails, social media, press articles, and basically any type of text you can imagine. In the typical case, the text to process is written in several languages. Building systems that support a wide range of languages and formats is one of the missions of our research team.
During this talk I will focus on a paper we presented at EMNLP 2015, Trans-Gram: Fast Cross-lingual Word Embeddings. It introduces a model that learns aligned word embeddings across a large number of languages in a scalable way. (A small illustration of what aligned embeddings enable follows the lineup below.)
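Here is a generic sketch of image similarity for product recommendation, my reading of the WalmartLabs talk topic rather than their code: embed product images with a pretrained CNN and recommend nearest neighbours in embedding space.

```python
# Hypothetical image-similarity recommendation: cosine nearest neighbours
# over CNN embeddings of product photos.
import numpy as np

def cosine_top_k(query_emb, catalog_embs, k=5):
    """query_emb: (d,) embedding of the query image.
    catalog_embs: (n_products, d) matrix of catalog embeddings."""
    q = query_emb / np.linalg.norm(query_emb)
    C = catalog_embs / np.linalg.norm(catalog_embs, axis=1, keepdims=True)
    sims = C @ q
    return np.argsort(-sims)[:k]    # indices of the k most similar products

# Embeddings would typically come from the penultimate layer of a CNN
# pretrained on ImageNet, applied to each product photo.
```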
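For the topic modeling talk, here is a minimal LDA example in Python on a handful of toy "tweets"; it is generic scikit-learn usage, not Alex Perrier's code.

```python
# Minimal LDA: vectorize short texts, fit two topics, print the top words per topic.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

tweets = [
    "deep learning for image recognition",
    "convolutional networks beat benchmarks",
    "the match last night was great",
    "our team scored in the final minute",
]

vec = CountVectorizer(stop_words="english")
X = vec.fit_transform(tweets)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)

terms = vec.get_feature_names_out()
for i, topic in enumerate(lda.components_):
    top = [terms[j] for j in topic.argsort()[-4:][::-1]]
    print(f"topic {i}:", ", ".join(top))
```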
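Finally, a small illustration of what aligned cross-lingual embeddings make possible (generic, not the Trans-Gram code): once words from two languages live in a shared space, translation candidates are simply nearest neighbours.

```python
# Hypothetical cross-lingual lookup: nearest neighbours in a shared embedding space.
import numpy as np

def nearest_words(word, emb_src, emb_tgt, k=3):
    """emb_src / emb_tgt: dicts mapping words to vectors living in the SAME space."""
    v = emb_src[word]
    v = v / np.linalg.norm(v)
    scored = sorted(
        emb_tgt.items(),
        key=lambda kv: -np.dot(kv[1] / np.linalg.norm(kv[1]), v),
    )
    return [w for w, _ in scored[:k]]

# e.g. nearest_words("chat", french_vectors, english_vectors) would ideally
# return something like ["cat", "kitten", "dog"].
```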
 
Join the CompressiveSensing subreddit or the Google+ Community or the Facebook page and post there !
Liked this entry ? subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on Linkedin.

Tuesday, February 09, 2016

Two-day workshop on "Computational and statistical trade-offs in learning", IHES, France, March 22-23

Francis just let me know of this two-day workshop on "Computational and statistical trade-offs in learning", which will take place at IHES on March 22-23, 2016.

Computational and statistical trade-offs in learning

March 22-23, 2016

Institut des Hautes Études Scientifiques, Centre de conférences Marilyn et James Simons, 35 route de Chartres, 91440 Bures-sur-Yvette

This workshop focuses on the computational and statistical trade-offs arising in various domains (optimization, statistical/machine learning). This is a challenging question, since it amounts to optimizing performance under limited computational resources, which is crucial in the large-scale data context. One main goal is to identify important ideas developed independently in some communities that could benefit the others.

Invited speakers:

  • Pierre Alquier (ENSAE, Paris-Saclay)
  • Alexandre d'Aspremont (D.I., CNRS / ENS Paris)
  • Quentin Berthet (DPMMS, Cambridge Univ., UK)
  • Alain Celisse (Université de Lille)
  • Rémi Gribonval (INRIA, Rennes)
  • Emilie Kaufmann (CNRS, Lille)
  • Vianney Perchet (CREST, ENSAE Paris-Saclay)
  • Garvesh Raskutti (Wisconsin Institute for Discovery, Madison, USA)
  • Ohad Shamir (Weizmann Institute, Rehovot, Israel)
  • Silvia Villa (Istituto Italiano di Tecnologia, Genova & MIT, Cambridge, USA)

The conference is free and open to all, but registration is mandatory before March 19. Please fill in the form at

https://indico.math.cnrs.fr/event/1007/

where you will also find detailed information about the conference.


Join the CompressiveSensing subreddit or the Google+ Community or the Facebook page and post there !
Liked this entry ? subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on Linkedin.

Job: Lectureship in Computer Vision/Image Analysis for Medicine & Healthcare, CVSSP, University of Surrey, UK

Mark just sent the following:


Dear Igor,

The following Lecturer job (similar to Assistant Professor level) may be of interest to Nuit Blanche readers, especially those interested in Medicine & Healthcare applications of vision/image analysis.

Best wishes,
Mark

-----

Lectureship in Computer Vision/Image Analysis for Medicine & Healthcare

Centre for Vision, Speech and Signal Processing, University of Surrey, UK

Closing Date: Thursday 18 February 2016

http://jobs.surrey.ac.uk/003116

The University offers a unique opportunity for an outstanding research leader to join the Centre for Vision, Speech and Signal Processing (CVSSP).

The successful candidate is expected to build a research project portfolio to complement existing CVSSP strengths. The centre seeks to appoint an individual with an excellent research track-record and international profile to lead future growth of research activities in one or more of the following areas:

* Medical Image Analysis
* Image and Sensor Analysis for Healthcare
* Big Data Understanding for Healthcare
* Machine Learning & Pattern Recognition
* Machine Intelligence

We now seek a strong research leader who can develop the existing activities of CVSSP and exploit the synergetic possibilities that exist within the centre, across the University and regionally with UK industry. You will possess proven management and leadership qualities, demonstrating achievements in scholarship and research at a national and international level, and will have substantial experience of teaching within HE.

CVSSP is one of the primary centres for computer vision & audio-visual signal processing in Europe with over 120 researchers, a grant portfolio of £18M and a track-record of pioneering research leading to technology transfer in collaboration with UK industry. Related to this post, CVSSP, in collaboration with the Surrey Centres for Clinical & Sleep Research, has recently been awarded £1.2M equipment funding to support research in sensor networks to monitor & measure people for healthcare in the community. CVSSP forms part of the Department of Electronic Engineering, recognised as a top department for both Teaching and Research. Further details of CVSSP: surrey.ac.uk/cvssp

Closing date for applications: 18th February 2016

For an informal discussion, please contact Professor Adrian Hilton, Director of CVSSP (a.hilton@surrey.ac.uk).

--
Prof Mark D Plumbley
Professor of Signal Processing
Centre for Vision, Speech and Signal Processing (CVSSP)
University of Surrey
Guildford, Surrey, GU2 7XH, UK
Email: m.plumbley@surrey.ac.uk


Join the CompressiveSensing subreddit or the Google+ Community or the Facebook page and post there !
Liked this entry ? subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on Linkedin.

Compressive Spectral Clustering



Compressive Spectral Clustering by Nicolas Tremblay, Gilles Puy, Remi Gribonval, Pierre Vandergheynst

Spectral clustering has become a popular technique due to its high performance in many contexts. It comprises three main steps: create a similarity graph between N objects to cluster, compute the first k eigenvectors of its Laplacian matrix to define a feature vector for each object, and run k-means on these features to separate objects into k classes. Each of these three steps becomes computationally intensive for large N and/or k. We propose to speed up the last two steps based on recent results in the emerging field of graph signal processing: graph filtering of random signals, and random sampling of bandlimited graph signals. We prove that our method, with a gain in computation time that can reach several orders of magnitude, is in fact an approximation of spectral clustering, for which we are able to control the error. We test the performance of our method on artificial and real-world network data.
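For intuition, here is a rough NumPy sketch of the idea summarised in the abstract: low-pass filter a few random signals on the graph and run k-means on the resulting node features instead of on the first k Laplacian eigenvectors. For clarity the filter below is built from an exact eigendecomposition, which is exactly what the paper avoids (it uses polynomial filtering and random sampling to stay fast), so treat it as illustration only.

```python
# Illustrative sketch of compressive spectral clustering (not the authors' code).
import numpy as np
from scipy.sparse.csgraph import laplacian
from sklearn.cluster import KMeans

def compressive_spectral_clustering(A, k, n_random=20, seed=0):
    """A: dense adjacency matrix (n, n); k: number of clusters."""
    rng = np.random.default_rng(seed)
    L = laplacian(A, normed=True)
    w, U = np.linalg.eigh(L)
    h = (w <= w[k - 1]).astype(float)           # ideal low-pass filter keeping the first k modes
    R = rng.standard_normal((A.shape[0], n_random)) / np.sqrt(n_random)
    features = U @ (h[:, None] * (U.T @ R))     # filtered random signals: one feature row per node
    return KMeans(n_clusters=k, n_init=10, random_state=seed).fit_predict(features)
```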
     
     
     
Join the CompressiveSensing subreddit or the Google+ Community or the Facebook page and post there !
Liked this entry ? subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on Linkedin.

Monday, February 08, 2016

ICLR 2016: List of accepted papers

      
From Hugo's twitter feed:
Here is the list of accepted papers:



Release Date: February 4, 2016
Keywords: MVIC, Pluto, Ralph
Credit: NASA/Johns Hopkins University Applied Physics Laboratory/Southwest Research Institute


Join the CompressiveSensing subreddit or the Google+ Community or the Facebook page and post there !
Liked this entry ? subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on Linkedin.
