Monday, November 20, 2017

One Network to Solve Them All --- Solving Linear Inverse Problems using Deep Projection Models - implementation -

Wow, using the same network to perform inpainting, compressive sensing and super-resolution on the precious !




While deep learning methods have achieved state-of-the-art performance in many challenging inverse problems like image inpainting and super-resolution, they invariably involve problem-specific training of the networks. Under this approach, different problems require different networks. In scenarios where we need to solve a wide variety of problems, e.g., on a mobile camera, it is inefficient and costly to use these specially-trained networks. On the other hand, traditional methods using signal priors can be used in all linear inverse problems but often have worse performance on challenging tasks. In this work, we provide a middle ground between the two kinds of methods --- we propose a general framework to train a single deep neural network that solves arbitrary linear inverse problems. The proposed network acts as a proximal operator for an optimization algorithm and projects non-image signals onto the set of natural images defined by the decision boundary of a classifier. In our experiments, the proposed framework demonstrates superior performance over traditional methods using a wavelet sparsity prior and achieves performance comparable to that of specially-trained networks on tasks including compressive sensing and pixel-wise inpainting.
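To see the skeleton of the idea, here is a minimal NumPy sketch (not the authors' TensorFlow code) of a learned projector plugged into a projected-gradient loop; the soft-thresholding `project` below is a stand-in for the trained network, and the measurement setup is made up:

```python
import numpy as np

# Minimal sketch of the framework: one projector P(.) serves as the
# proximal step of an optimization loop for any linear inverse problem
# y = A x. Here P is a soft-thresholding stand-in; in the paper it is
# a deep network trained to project onto the set of natural images.

def project(z, lam=0.02):
    # placeholder projector (assumption: replace with the trained net)
    return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

def solve_inverse(y, A, steps=300):
    lr = 1.0 / np.linalg.norm(A, 2) ** 2   # safe gradient step size
    x = A.T @ y                            # initial estimate
    for _ in range(steps):
        grad = A.T @ (A @ x - y)           # data-fidelity gradient
        x = project(x - lr * grad)         # projection / proximal step
    return x

rng = np.random.default_rng(0)
n, m = 128, 64                             # compressive sensing: m < n
x_true = np.zeros(n)
x_true[rng.choice(n, 8, replace=False)] = 1.0
A = rng.normal(size=(m, n)) / np.sqrt(m)
y = A @ x_true
print("error:", np.linalg.norm(solve_inverse(y, A) - x_true))
```

The point is that only A changes between tasks (a random matrix for compressive sensing, a masked identity for inpainting, a downsampling operator for super-resolution); the projector itself is never retrained.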


A TensorFlow implementation is here: https://github.com/rick-chang/OneNet






Friday, November 17, 2017

DCASE 2017 TASK 1: Acoustic Scene Classification Using Shift-Invariant Kernels and Random Features

Thanks to Mark and Laurent for the heads-up on this work.




DCASE 2017 TASK 1: Acoustic Scene Classification Using Shift-Invariant Kernels and Random Features by Abelino Jimenez, Benjamín Elizalde and Bhiksha Raj 

Acoustic scene recordings are represented by different types of handcrafted or Neural Network features. These features, typically of thousands of dimensions, are classified in state of the art approaches using kernel machines, such as the Support Vector Machines (SVM). However, the complexity of training these methods increases with the dimensionality of these input features and the size of the dataset. A solution is to map the input features to a randomized low-dimensional feature space. The resulting random features can approximate non-linear kernels with faster linear kernel computation. In this work, we computed a set of 6,553 input features and used them to compute random features to approximate three types of kernels: Gaussian, Laplacian and Cauchy. We compared their performance using an SVM in the context of the DCASE Task 1 - Acoustic Scene Classification. Experiments show that both input and random features outperformed the DCASE baseline by an absolute 4%. Moreover, the random features reduced the dimensionality of the input by more than three times with minimal loss of performance, and by more than six times while still outperforming the baseline. Hence, random features could be employed by state of the art approaches to compute low-storage features and perform faster kernel computations.
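For background, the random features in question go back to Rahimi and Recht's random Fourier features for shift-invariant kernels. A small NumPy sketch, where the sampling distributions correspond to the three kernels named in the abstract and the scale parameters are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_features(X, D=512, kernel="gaussian", gamma=1.0):
    # Draw frequencies W from the kernel's spectral density, then
    # z(x) = sqrt(2/D) cos(W x + b) satisfies z(x).z(y) ~= k(x, y).
    d = X.shape[1]
    if kernel == "gaussian":      # Gaussian kernel <-> Gaussian frequencies
        W = rng.normal(scale=np.sqrt(2.0 * gamma), size=(d, D))
    elif kernel == "laplacian":   # Laplacian kernel <-> Cauchy frequencies
        W = gamma * rng.standard_cauchy(size=(d, D))
    else:                         # Cauchy kernel <-> Laplacian frequencies
        W = rng.laplace(scale=gamma, size=(d, D))
    b = rng.uniform(0.0, 2.0 * np.pi, size=D)
    return np.sqrt(2.0 / D) * np.cos(X @ W + b)

# A linear SVM trained on Z then approximates the corresponding kernel SVM.
X = rng.normal(size=(5, 6553))    # e.g. the 6,553-dim features of the paper
Z = random_features(X, D=1024, kernel="laplacian")
print(Z @ Z.T)                    # approximate kernel Gram matrix
```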





CSJob: 2 Postdocs, Computer Vision and Machine Learning for Robotics / Navigation, decision making and teleoperation of autonomous vehicles, INRIA, France

Natalia let me know of two postdoc positions at INRIA on deep learning, reinforcement learning and unsupervised learning for computer vision and robotics at www.robotsthatdream.eu


POSITION 1
Post-doc - Computer Vision and Machine Learning for Robotics, ENSTA ParisTech/INRIA, Palaiseau, France
# Position description #
The Autonomous Systems and Robotics team at ENSTA ParisTech (http://asr.ensta.fr) and INRIA (http://flowers.inria.fr) is opening a 12-month postdoc position in the area of vision and machine learning, in particular representation learning for robotics. The goal is to develop approaches that make it possible to learn, from images, relevant low-dimensional representations that will support behavior learning. The position will take place in the context of the DREAM H2020 European project.
# Context #
The DREAM European project (http://www.robotsthatdream.eu/) is focused on the bootstrap of a developmental process allowing a robot to learn about its environment and the objects it contains. DREAM is one of the five selected projects of the FET call (Future and Emerging Technologies) on « Knowing, doing, being: cognition beyond problem solving ». The position will be located at ENSTA ParisTech, Palaiseau, France, within the ENSTA ParisTech/INRIA FLOWERS team (http://flowers.inria.fr). The position involves robotics experiments that will be done in simulation and on the Baxter robot, in close collaboration with the ISIR laboratory at UPMC (http://www.isir.upmc.fr/).
 
# Requirements #
We are looking for highly motivated candidates with a strong academic record. An excellent background is expected, with a PhD in computer vision for robotics, representation learning (e.g., deep learning), reinforcement learning or, more generally, machine learning. Significant experience in real robotics applications will be appreciated. Programming skills in modern C++ and Python are expected, along with experience with ROS.
 
# To apply #
As part of its work in robotics and intelligent vehicles, ENSTA ParisTech
(http://www.ensta-paristech.fr) is recruiting a post-doctoral researcher to participate in the development of its activities. The areas covered will focus on navigation, path planning, decision making and remote operation of autonomous vehicles.
Inquiries about the position can be directed to david.filliat@ensta-paristech.fr. Applicants should send a CV, a motivation letter (max 1 page), and a list of three references in a single pdf file to david.filliat@ensta-paristech.fr. Please put [DREAM post-doc] in the subject of the mail. Review of applicants will begin immediately, and will continue until the position is filled.


POSITION 2
Post-doc - Navigation, decision making and teleoperation of autonomous vehicles (24 months - ENSTA ParisTech, Palaiseau, France) 
# Job Description # 
# Main missions #
The post-doctoral researcher will join the Robotics and Autonomous Systems team of the Computer Science and Systems Engineering Unit (http://asr.ensta-paristech.fr) as part of the EVAPS project (http://pole-moveo.org/projects/evaps/). In collaboration with Renault and the other members of this project, their role will be:
  • to guide the research work and to propose development directions to the team of engineers working on these projects;
  • to ensure the development and evolution of navigation and remote operation functions for autonomous vehicles;
  • to participate in testing and validation campaigns in partnership with industry; 
# Required Skills # We are looking for candidates with:
  • experience participating in collaborative research projects,
  • proficiency in programming and development languages (Python, C, C++) and Linux,
  • solid experience with the ROS system - http://www.ros.org,
  • proven skills in software development and integration, 

# Profile # 
  • an engineering or Master's degree plus a PhD in robotics / computer science;
  • a PhD in the field of mobile robotics or autonomous vehicles, in particular in navigation, trajectory planning, decision making and/or teleoperation.
Salary based on candidate's experience and profile. Duration of 24 months, to be filled as soon as possible. Send CV and cover letter in one file, in PDF format, to David Filliat - david.filliat@ensta-paristech.fr

Thursday, November 16, 2017

CSJob: Four Postdoctoral Research Assistants, Mathematical Institute and the Oxford-Emirates Data Science Lab (OEDSL)

Jared just sent me the following: 
Dear Igor,
Enjoying following you on twitter. Would you be so kind as to advertise the following postdoc position:
Sure Jared !
Machine and Deep Learning have proven to be some of the most effective tools in data science, now reliably surpassing human ability at tasks such as image classification and sophisticated games. The holder of this post will work to advance our understanding of the efficacy and interpretability of these methods, as well as apply these techniques to real world problems such as those of interest to Emirates Airlines. Methods to be explored are likely to include: the scattering transform, models for data which can be rigorously analysed such as deep neural nets with Gaussian weights, dictionary learning, convolutional sparse coding, reversibility, adversarial nets, optimal approximation, function learning, the information bottleneck, and high dimensional geometry. Previous experience in some of the following topics is expected to be useful: machine and deep learning, sparse approximation or signal processing, random matrix theory, and information theory. The successful candidate will be supervised by Professor Jared Tanner within the Numerical Analysis Group.
For more information please see,
https://www.maths.ox.ac.uk/node/26933

Note the deadline for this position is Dec. 6th at noon UK time.
-------
Jared Tanner
Professor of the Mathematics of Information
Oxford University Liaison Director to the Alan Turing Institute
Mathematics Institute
University of Oxford
01865615311 (office - Andrew Wiles Building S2.40)
http://people.maths.ox.ac.uk/tanner




Wednesday, November 15, 2017

Ce soir: Paris Machine Learning #3 Season 5, PokémonGO, Unsupervised ML in high dimension, Prevision.io, Learning to program



Thanks to Cap Digital for hosting us. The streaming of the event is here:



The capacity of the room is about 100 seats. Admission is first-come-first-served, and the doors close once the room is full. Registration is here. Here is the tentative schedule: 

6:45PM doors open / 7-9:00PM talks / 9-10:00PM drinks/food / 10:00PM end

and the presentations

Franck Bardol, Igor Carron, Welcome.

Short presentation (1-2 minutes):

David Perlmutter, Implicity
IMPLICITY analyzes pacemaker data with machine learning and web semantics. We are hiring data scientists. You're welcome !
Longer presentations:


This talk uses Pokémon GO to illustrate how data science can help metagaming. In particular, it uses game theory (payoff matrix, minimax criterion), machine learning and operations research to solve practical problems in Pokémon GO. 


In this talk, I will present some work carried out during my CIFRE PhD at Artefact on unsupervised learning: high-dimensional clustering with Gaussian mixture models (GMMs) and mixture density estimation. 


A working machine learning system involves a lot of moving parts, and the typical workflow, from data acquisition to industrialization, is not well standardized and will usually take you through a variety of languages and frameworks. We introduce Prevision.io, a platform that reduces this complexity by automatically managing model training and deployment, so that data scientists can focus on solving problems and creating value for customers. 


BeepAI, a multiplatform, "connected" artificial intelligence that learns to program. BeepAI is able to find the most efficient algorithm to solve logic, mathematics and other computer problems in a network of "connected virtual machines". Once a solution is found, BeepAI shares it with the other instances of the network through a library of "solutions" that is built up as results are found.
http://robot.beepmaster.com/beepai.php








Tuesday, November 14, 2017

Biologically Inspired Random Projections


At LightOn, we are often asked by different stakeholders about Random Projections. One way we explain them is through their universal compression properties derived from the Johnson-Lindenstrauss lemma. Everyone knows about the benefits of the mp3, jpeg and mpeg file formats, which makes compression a simple and easy concept to explain.
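For the curious, the compression side fits in a few lines of NumPy; the sizes below are purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, k = 100, 10000, 400                  # n points, ambient dim d, target dim k
X = rng.normal(size=(n, d))

R = rng.normal(size=(d, k)) / np.sqrt(k)   # dense Gaussian random projection
Y = X @ R                                  # compressed data

# Johnson-Lindenstrauss: pairwise distances survive, up to 1 +/- eps,
# as soon as k grows like log(n) / eps^2
i, j = 0, 1
print(np.linalg.norm(Y[i] - Y[j]) / np.linalg.norm(X[i] - X[j]))  # ~ 1.0
```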


Sometimes, we also have to explain why it is a good thing to "blow up" the initial data into a higher-dimensional space so that it becomes easily separable. While this is a perfectly good explanation, this "blowing up" of the data still requires some getting used to.


As it turns out, these sorts of random projections seem to have a biological analog in the brains of fruit flies and rats, as explained in this recent paper published in Science by Sanjoy Dasgupta, Charles F Stevens and Saket Navlakha (featured earlier on bioRxiv and below). These techniques can be used in Deep neural network approaches as well (see the second paper below).




A neural algorithm for a fundamental computing problem by Sanjoy Dasgupta, Charles F Stevens, Saket Navlakha
Similarity search, such as identifying similar images in a database or similar documents on the Web, is a fundamental computing problem faced by many large-scale information retrieval systems. We discovered that the fly's olfactory circuit solves this problem using a novel variant of a traditional computer science algorithm (called locality-sensitive hashing). The fly's circuit assigns similar neural activity patterns to similar input stimuli (odors), so that behaviors learned from one odor can be applied when a similar odor is experienced. The fly's algorithm, however, uses three new computational ingredients that depart from traditional approaches. We show that these ingredients can be translated to improve the performance of similarity search compared to traditional algorithms when evaluated on several benchmark datasets. Overall, this perspective helps illuminate the logic supporting an important sensory function (olfaction), and it provides a conceptually new algorithm for solving a fundamental computational problem.
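Here is a toy NumPy sketch of the fly-inspired hash described in the abstract: a sparse binary random expansion followed by a winner-take-all step that keeps the indices of the most active units as the tag. The dimensions, sparsity and tag length below are illustrative assumptions, not the paper's exact settings.

```python
import numpy as np

rng = np.random.default_rng(0)

def fly_hash(x, M, k=16):
    # expand, then winner-take-all: the hash is the set of indices
    # of the k most active output units (the "Kenyon cells")
    return np.argsort(M @ x)[-k:]

d, m = 50, 2000                            # expand ~40x (illustrative)
# sparse binary projection: each output unit samples a few inputs
M = (rng.random((m, d)) < 0.1).astype(float)

x = rng.random(d)
x_similar = x + 0.01 * rng.normal(size=d)  # a slightly perturbed "odor"
h1, h2 = fly_hash(x, M), fly_hash(x_similar, M)
print("tag overlap:", len(np.intersect1d(h1, h2)), "of 16")  # high overlap
```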



Current deep learning architectures are growing larger in order to learn from complex datasets. These architectures require giant matrix multiplication operations to train millions of parameters. Conversely, there is another growing trend to bring deep learning to low-power, embedded devices. The matrix operations, associated with both training and testing of deep networks, are very expensive from a computational and energy standpoint. We present a novel hashing based technique to drastically reduce the amount of computation needed to train and test deep networks. Our approach combines recent ideas from adaptive dropouts and randomized hashing for maximum inner product search to select the nodes with the highest activation efficiently. Our new algorithm for deep learning reduces the overall computational cost of forward and back-propagation by operating on significantly fewer (sparse) nodes. As a consequence, our algorithm uses only 5% of the total multiplications, while keeping on average within 1% of the accuracy of the original model. A unique property of the proposed hashing based back-propagation is that the updates are always sparse. Due to the sparse gradient updates, our algorithm is ideally suited for asynchronous and parallel training leading to near linear speedup with increasing number of cores. We demonstrate the scalability and sustainability (energy efficiency) of our proposed algorithm via rigorous experimental evaluations on several real datasets.
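The core trick of the second paper is easy to caricature: pre-hash the neurons' weight vectors, hash each input, and only compute the neurons that land in the same bucket. The sketch below uses plain SimHash (sign patterns against random hyperplanes) as the hash family and made-up sizes; the paper actually uses asymmetric hashing tailored to maximum inner product search, so treat this as a simplification.

```python
import numpy as np

rng = np.random.default_rng(0)

d, n_neurons, n_planes = 64, 1000, 12
W = rng.normal(size=(n_neurons, d))        # one weight vector per neuron

# SimHash: the sign pattern against random hyperplanes buckets together
# vectors with small angles, i.e. likely-large inner products/activations
H = rng.normal(size=(n_planes, d))
powers = 1 << np.arange(n_planes)

def bucket(v):
    return int(((H @ v) > 0) @ powers)

table = {}
for i, w in enumerate(W):                  # hash all neurons once, up front
    table.setdefault(bucket(w), []).append(i)

x = rng.normal(size=d)
active = table.get(bucket(x), [])          # candidate high-activation neurons
out = W[active] @ x                        # sparse forward pass
print(f"computed {len(active)} of {n_neurons} activations")
```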
h/t Peter and Iacopo




Monday, November 13, 2017

Phase Transitions, Optimal Errors and Optimality of Message-Passing in Generalized Linear Models

Theoretical phase transitions for nonlinear problems !




We consider generalized linear models (GLMs) where an unknown n-dimensional signal vector is observed through the application of a random matrix and a non-linear (possibly probabilistic) componentwise output function. We consider the models in the high-dimensional limit, where the observation consists of m points, and m/n → α where α stays finite in the limit m, n → ∞. This situation is ubiquitous in applications ranging from supervised machine learning to signal processing. A substantial amount of theoretical work analyzed the model case when the observation matrix has i.i.d. elements and the components of the ground-truth signal are taken independently from some known distribution. While statistical physics provided a number of explicit conjectures for special cases of this model, results existing for non-linear output functions were so far non-rigorous. At the same time GLMs with non-linear output functions are used as a basic building block of powerful multilayer feedforward neural networks. Therefore rigorously establishing the formulas conjectured for the mutual information is a key open problem that we solve in this paper. We also provide an explicit asymptotic formula for the optimal generalization error, and confirm the prediction of phase transitions in GLMs. Analyzing the resulting formulas for several non-linear output functions, including the rectified linear unit or modulus functions, we obtain quantitative descriptions of information-theoretic limitations of high-dimensional inference. Our proof technique relies on a new version of the interpolation method with an adaptive interpolation path and is of independent interest. Furthermore we show that a polynomial-time algorithm referred to as generalized approximate message-passing reaches the optimal generalization error for a large set of parameters.
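To fix notation, here is a minimal NumPy sketch of the observation model in question; the signal prior, the channels and the value of α are illustrative choices, while the paper's results hold for general componentwise (possibly probabilistic) outputs.

```python
import numpy as np

rng = np.random.default_rng(0)

# Generalized linear model: y = phi(A x), with alpha = m/n held fixed
# as m, n -> infinity
n = 2000
alpha = 0.7
m = int(alpha * n)

x = rng.choice([-1.0, 1.0], size=n)        # ground truth from a known prior
A = rng.normal(size=(m, n)) / np.sqrt(n)   # i.i.d. observation matrix

z = A @ x
y_relu = np.maximum(z, 0.0)                # rectified linear unit channel
y_mod = np.abs(z)                          # modulus channel (phase retrieval)
print("alpha =", m / n)
```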












Friday, November 10, 2017

Globally and Locally Consistent Image Completion

Using GANs for inpainting or matrix completion whichever field you are in !  



We present a novel approach for image completion that results in images that are both locally and globally consistent. With a fully-convolutional neural network, we can complete images of arbitrary resolutions by filling in missing regions of any shape. To train this image completion network to be consistent, we use global and local context discriminators that are trained to distinguish real images from completed ones. The global discriminator looks at the entire image to assess if it is coherent as a whole, while the local discriminator looks only at a small area centered at the completed region to ensure the local consistency of the generated patches. The image completion network is then trained to fool both context discriminator networks, which requires it to generate images that are indistinguishable from real ones with regard to overall consistency as well as in their details. We show that our approach can be used to complete a wide variety of scenes. Furthermore, in contrast with patch-based approaches such as PatchMatch, our approach can generate fragments that do not appear elsewhere in the image, which allows us to naturally complete images of objects with familiar and highly specific structures, such as faces.
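To make the two-discriminator training concrete, here is a toy NumPy sketch of how the global and local adversarial losses combine; the linear scorers, image size and patch geometry are stand-in assumptions (the paper uses convolutional discriminators, and its full objective also includes a reconstruction term on the completed region).

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

# Stand-in discriminators: linear scorers keep the sketch self-contained.
w_global = rng.normal(size=256 * 256)
w_local = rng.normal(size=128 * 128)

def d_global(img):                 # sees the whole image
    return sigmoid(img.ravel() @ w_global * 1e-3)

def d_local(img, cy, cx):          # sees a patch around the completed region
    patch = img[cy - 64:cy + 64, cx - 64:cx + 64]
    return sigmoid(patch.ravel() @ w_local * 1e-3)

real, fake = rng.random((256, 256)), rng.random((256, 256))
cy = cx = 128                      # center of the inpainted hole
eps = 1e-12

# discriminators learn to tell real from completed, globally and locally
d_loss = -(np.log(d_global(real) + eps) + np.log(1 - d_global(fake) + eps)
           + np.log(d_local(real, cy, cx) + eps)
           + np.log(1 - d_local(fake, cy, cx) + eps))
# the completion network is trained to fool both discriminators at once
g_adv_loss = -(np.log(d_global(fake) + eps) + np.log(d_local(fake, cy, cx) + eps))
print(d_loss, g_adv_loss)
```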
The project webpage is here: http://hi.cs.waseda.ac.jp/~iizuka/projects/completion/en/. It will eventually host the attendant implementation, but it currently features a video !


Job: Harvard University Data Science Initiative Postdoctoral Fellows Program

Kevin just sent me the following:

Dear Igor,
 
Harvard University has opened its Data Science Postdoctoral Fellows program this year and I would be grateful for your help in making sure it reaches interested applicants via your blog. The details are below.
 
Best,
Kevin Doyle
-- 
Kevin Doyle
Program Coordinator
 
 
Call for Postdoctoral Fellowship Applications
 
The Harvard University Data Science Initiative is seeking applications for its Harvard Data Science Postdoctoral Fellows Program for the 2018-2019 academic year. The normal duration of the Fellowship is two years. Fellows will receive a generous salary as well as an annual allocation for research and travel expenses.
 
We are looking for researchers whose interests are in data science, broadly construed, and including researchers with both a methodological and applications focus. Fellows will be provided with the opportunity to pursue their research agenda in an intellectually vibrant environment with ample mentorship. We are looking for independent researchers who will seek out collaborations with other fellows and with Harvard faculty.
 
The Data Science Postdoctoral Fellows Program is supported by the Harvard Data Science Initiative. The Data Science Initiative involves faculty from across the university. The Fellows program will concentrate some of its activity in new physical spaces provided by the Computer Science-Statistics Data Science Lab, the Longwood Data Science Lab, as well as space in the Institute for Quantitative Social Sciences.
 

From the page:


The Harvard Data Science Postdoctoral Fellows
Funding Priorities
The Data Science Postdoctoral Fellows Program will support outstanding researchers whose interests relate to the following themes:
  1. Methodological foundations, including for example, causal inference, data systems design, deep learning, experimental design, modeling of structured data, random matrix theory, non-parametric Bayesian methods, scalable inference, statistical computation, and visualization.
  2. Development of data science approaches tailored to analytical challenges in substantive fields that span the full intellectual breadth of Harvard’s faculties.  To give some purely illustrative examples, these fields include health sciences (e.g. life and population sciences), earth systems (e.g. climate change research); society (e.g. data that can affect the experience of individuals, or policy and ethical questions); and the economy (e.g. automation, Internet of Things, digital economy). This list is by no means exhaustive.
Successful applicants will be expected to lead their own research agenda, but also work collaboratively with others including with members of the Harvard faculty, and to contribute to building the data science intellectual community. The Fellows program will offer numerous opportunities to engage with the broader data science community, including through seminar series, informal lunches, mentoring opportunities, opportunities for fellow-led programming, and other networking events. Fellows should expect to spend most of their time in residence at Harvard.
Available Funding
Stipend: $80,000 is available in salary support per year for an initial two-year appointment. Appointments may be extended for a third year, budget and performance allowing. 
Travel: An additional $10,000 will be allocated for research and travel expenses each year.
Eligibility
Applicants must be outstanding, intellectually curious researchers at an early stage of their scholarly career. Applicants are required to have a doctorate in a related area by the expected start date. Applicants should have demonstrated a capacity for independent work, and will be expected to engage with researchers and faculty and participate in activities convened by the Data Science Initiative. We recognize that strength comes through diversity and actively seek and welcome people with diverse backgrounds, experiences, and identities.
Application
The deadline for applications is December 4, 2017. Applicants should apply through the application portal (http://bit.ly/2yTNw6D).  Required application documents include:
  1. A CV
  2. A cover letter that identifies up to five (and at least two) Harvard faculty members with whom the applicant would like to work.
  3. A statement of research interests of up to three pages that succinctly describes the applicant’s research interests. The statement should explain the importance and potential impact of this research. If the research is anticipated to require significant space and/or equipment, the candidate should explore arrangements with relevant faculty prior to submitting an application.
  4. Up to three representative papers.
  5. Names and contact information for at least two and up to five references (the application is complete only when two letters have been submitted). Referees will be provided with a link to the submission portal.
All materials should be submitted as PDF documents. We will strive to make decisions by February 1, 2018.
We are an equal opportunity employer and all qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability status, protected veteran status, or any other characteristic protected by law.







Wednesday, November 08, 2017

Learned D-AMP: Principled Neural Network based Compressive Image Recovery - implementation -


Chris just sent me the following item to be immediately classified in The Great Convergence list:

Hi Igor,

Thanks for maintaining Nuit Blanche, it continues to be a great resource.

We recently unrolled the D-AMP compressive sensing recovery algorithm to form the Learned D-AMP neural network. This network is interpretable, is easy to train, isn't specific to a particular measurement matrix, and offers great run-times and accuracy. We thought it might be of interest to your readers.

The paper can be found here: https://arxiv.org/abs/1704.06625

Matlab and TensorFlow implementations can be found here: https://github.com/ricedsp/D-AMP_Toolbox

Best regards,
Chris





Compressive image recovery is a challenging problem that requires fast and accurate algorithms. Recently, neural networks have been applied to this problem with promising results. By exploiting massively parallel GPU processing architectures and oodles of training data, they can run orders of magnitude faster than existing techniques. However, these methods are largely unprincipled black boxes that are difficult to train and often-times specific to a single measurement matrix.
It was recently demonstrated that iterative sparse-signal-recovery algorithms can be "unrolled" to form interpretable deep networks. Taking inspiration from this work, we develop a novel neural network architecture that mimics the behavior of the denoising-based approximate message passing (D-AMP) algorithm. We call this new network Learned D-AMP (LDAMP).
The LDAMP network is easy to train, can be applied to a variety of different measurement matrices, and comes with a state-evolution heuristic that accurately predicts its performance. Most importantly, it outperforms the state-of-the-art BM3D-AMP and NLR-CS algorithms in terms of both accuracy and run time. At high resolutions, and when used with sensing matrices that have fast implementations, LDAMP runs over 50× faster than BM3D-AMP and hundreds of times faster than NLR-CS.
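For readers who want a feel for what gets unrolled, here is a minimal NumPy sketch of the D-AMP iteration that LDAMP mimics, with a soft-thresholding stand-in for the denoiser (D-AMP plugs in BM3D, LDAMP a learned network) and a Monte Carlo estimate of the Onsager correction; the sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def denoise(r, lam=0.1):
    # placeholder denoiser; D-AMP uses BM3D, LDAMP a trained network
    return np.sign(r) * np.maximum(np.abs(r) - lam, 0.0)

def damp(y, A, iters=20):
    m, n = A.shape
    x, z = np.zeros(n), y.copy()
    for _ in range(iters):
        r = x + A.T @ z                  # effective noisy signal
        x = denoise(r)
        # Monte Carlo estimate of the denoiser divergence (Onsager term)
        eps = 1e-3 * (np.linalg.norm(r) / np.sqrt(n) + 1e-12)
        b = rng.normal(size=n)
        div = b @ (denoise(r + eps * b) - denoise(r)) / eps
        z = y - A @ x + z * div / m      # Onsager-corrected residual
    return x

n, m = 256, 128
x_true = np.zeros(n)
x_true[rng.choice(n, 10, replace=False)] = 1.0
A = rng.normal(size=(m, n)) / np.sqrt(m)
print("error:", np.linalg.norm(damp(A @ x_true, A) - x_true))
```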



Monday, November 06, 2017

Job: Post-doctoral fellow/PhD position in statistical genomics and imaging analysis, Tulane University, LA

Yu-Ping just sent me the following:

Hi Igor, 
Have been enjoying Nuit Blanche. Could you please post the following postdoc opportunity at Facebook and LinkedIn? We are developing sparse modeling, matrix factorization and other signal processing and machine learning approaches for imaging and genomics data analysis.
http://www.tulane.edu/~wyp/Opportunity.html
Thanks!
Yu-Ping


Sure Yu-Ping, here it is:

Position:
Post-doctoral fellow/PhD study in genomics and imaging data analysis

Organization:
Tulane University

Location:
New Orleans, USA

Description:
Several post-doctoral fellow or PhD graduate research assistant positions are available immediately to work on the development of machine learning, image processing, statistical and signal processing approaches for the analysis and integration of genomic and medical imaging data. More information about our research work at the Multiscale Bioimaging and Bioinformatics Laboratory of the Tulane Biomedical Engineering Department can be found at our website (http://www.tulane.edu/~wyp/). The position will be funded by both NIH and NSF. The candidate will have a chance to collaborate with people at the Tulane School of Science and Engineering, School of Public Health and Tropical Medicine and School of Medicine. Tulane is a private university and a member of the prestigious Association of American Universities (AAU), a group of 63 leading research universities. Tulane was ranked the 39th best national university in 2016 by U.S. News & World Report, providing a unique environment for learning and research. The salary for the post-doctoral fellow is negotiable, commensurate with the experience of the candidate.

Qualifications:
(1) A degree in Applied and Computational Mathematics, Biomedical Engineering, Electrical Engineering, Computer Science, Statistics or other related fields; (2) Programming skills with MATLAB or C; (3) Experience and knowledge of signal processing, machine learning and statistical analysis; (4) Knowledge of biology and genomics is desirable but not required.
To apply for the position, please send CV with a list of three references to wyp@tulane.edu






Jobs: Two Postdocs, MIT Institute for Foundations of Data Science (MIFODS)

Piotr just sent me the following:

Hi Igor, 
Hope things are well with you. I was wondering if you could post the announcement below, about two postdoc positions available in our new MIT Institute for Foundations of Data Science (MIFODS). Thanks! 
Piotr
Sure Piotr ! here is the announcement:

The new MIT Institute for Foundations of Data Science (MIFODS), funded by the National Science Foundation TRIPODS program, is announcing two competitive postdoctoral fellowships. MIFODS (at mifods.mit.edu) is an interdisciplinary entity across the Department of Mathematics, Computer Science and Artificial Intelligence Lab (CSAIL), and the Institute for Data Systems and Society (IDSS) at MIT. It provides a structured environment for exploring interdisciplinary research spanning Mathematics, Statistics, and Theoretical Computer Science.
Qualified candidates will have interest in the aforementioned three areas, with expertise in at least one of them. Duties include performing independent, interdisciplinary research in collaboration with any of the MIFODS members, as well as leading activities within MIFODS.
More details and the application form available at https://academicjobsonline.org/ajo/fellowship/10371





Wednesday, November 01, 2017

CfP: Workshop on Approximating high dimensional functions, Alan Turing Institute, London, 18-19 December 2017.


Hemant just sent me the following:

Dear Igor,

Aretha Teckentrup and I are organizing a workshop on “Approximating high dimensional functions” at the Alan Turing Institute, London in December. We were wondering if it would be possible for you to post the announcement for this workshop (below) on your blog Nuit blanche? We would very much appreciate it, thanks a lot in advance!
Many thanks,
Hemant Tyagi
https://hemant-tyagi.github.io/


Sure Hemant, here it is:


================= Announcement ==============================


A workshop on "Approximating high dimensional functions" will be held at the Alan Turing Institute, London, on 18-19 December 2017.
The workshop will focus on problems centered around approximating a high dimensional function from limited information, featuring talks by eminent researchers in the fields of multivariate approximation theory, ridge functions, stochastic PDEs and non-parametric regression.


Confirmed speakers are:
Pierre Alquier (ENSAE, Universite Paris-Saclay, France),
Albert Cohen (Université Pierre et Marie Curie, France),
Sergey Dolgov (University of Bath, UK),
Arthur Gretton / Dougal Sutherland (UCL, UK),
Sandra Keiper (Technische Universität Berlin, Germany),
Sebastian Mayer (Universität Bonn, Germany),
Richard Samworth (University of Cambridge, UK),
Jan Vybiral (Czech Technical University, Czech Republic),
Sören Wolfers (KAUST, Saudi Arabia)


The complete program can be found here: https://www.turing.ac.uk/approximating-high-dimensional-functions-agenda/
Registration for the workshop is free, but mandatory, and can be done here: https://www.turing.ac.uk/events/approximating-high-dimensional-functions/
Best regards,
Aretha Teckentrup,
Hemant Tyagi









Tuesday, October 31, 2017

Linear Dimensionality Reduction in Linear Time: Johnson-Lindenstrauss-type Guarantees for Random Subspace / Structure-aware error bounds for linear classification with the zero-one loss

Bob sent me the following:

Hi Igor,

I have a couple of recent reports posted to the arxiv, which may be of interest to you? One is a data-dependent Johnson-Lindenstrauss type result for feature subsampling, the other is some theory for linear classifiers and randomly-projected linear classifiers.
Links follow: JLL: https://arxiv.org/abs/1705.06408Learning theory: https://arxiv.org/abs/1709.09782

Hope you like them.

ATB Bob

Thanks Bob  !



We consider the problem of efficient randomized dimensionality reduction with norm-preservation guarantees. Specifically we prove data-dependent Johnson-Lindenstrauss-type geometry preservation guarantees for Ho's random subspace method: When data satisfy a mild regularity condition -- the extent of which can be estimated by sampling from the data -- then random subspace approximately preserves the Euclidean geometry of the data with high probability. Our guarantees are of the same order as those for random projection, namely the required dimension for projection is logarithmic in the number of data points, but have a larger constant term in the bound which depends upon this regularity. A challenging situation is when the original data have a sparse representation, since this implies a very large projection dimension is required: We show how this situation can be improved for sparse binary data by applying an efficient `densifying' preprocessing, which neither changes the Euclidean geometry of the data nor requires an explicit matrix-matrix multiplication. We corroborate our theoretical findings with experiments on both dense and sparse high-dimensional datasets from several application domains.
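The construction being analyzed is refreshingly cheap; a NumPy sketch with made-up sizes (note: no matrix multiplication at all, hence the linear time in the title):

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, k = 50, 5000, 500
X = rng.normal(size=(n, d))                 # dense data: the mild regularity
                                            # condition of the paper holds

idx = rng.choice(d, size=k, replace=False)  # random subspace: keep k coordinates
Y = X[:, idx] * np.sqrt(d / k)              # rescale so norms are unbiased

i, j = 0, 1
print(np.linalg.norm(X[i] - X[j]))          # original distance
print(np.linalg.norm(Y[i] - Y[j]))          # approximately preserved
```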


We prove risk bounds for binary classification in high-dimensional settings when the sample size is allowed to be smaller than the dimensionality of the training set observations. In particular, we prove upper bounds for both 'compressive learning' by empirical risk minimization (ERM) (that is when the ERM classifier is learned from data that have been projected from high-dimensions onto a randomly selected low-dimensional subspace) as well as uniform upper bounds in the full high-dimensional space. A novel tool we employ in both settings is the 'flipping probability' of Durrant and Kaban (ICML 2013) which we use to capture benign geometric structures that make a classification problem 'easy' in the sense of demanding a relatively low sample size for guarantees of good generalization. Furthermore our bounds also enable us to explain or draw connections between several existing successful classification algorithms. Finally we show empirically that our bounds are informative enough in practice to serve as the objective function for learning a classifier (by using them to do so).



