Thursday, September 21, 2017

CSHardware: InView Multi-Pix Camera Demonstrates 1FPS SWIR Imaging



It is rare to see compressive sensing and deep learning ideas embodied in production sensing hardware. We mentioned the development at InView a while back; here is a new announcement using compressive sensing technology and neural networks in the SWIR sensing realm. From the press release:
"...Having already harnessed the computational power of the famous Single-Pixel Camera architecture of the InView210 SWIR imager, InView has now enhanced its speed and image processing capability by incorporating a small array of pixels and new compressive computational methods. InView takes advantage of parallel measurements, matrix processing and efficient reconstruction algorithms to produce the highest resolution SWIR images at rates of just a few seconds per frame. As shown below, multi-pixel Compressive Sensing magnifies the resolution of a small pixel array. On the left, is a low-resolution image directly measured from a 64 x 64 InGaAs pixel array. When that same 64 x 64 array is used with compressive sensing, the image is transformed computationally into a detailed 512 x 512 image...."
The rest is here.
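For readers who want the gist of the reconstruction step: a single-pixel or multi-pixel compressive camera takes m coded measurements of an n-pixel scene with m much smaller than n, and recovers the image by exploiting sparsity. Here is a minimal sketch with made-up sizes, a random Gaussian measurement matrix and plain ISTA; InView's actual measurement patterns and solver are proprietary and not shown here:

```python
import numpy as np

rng = np.random.default_rng(0)

# Sparse scene: n "pixels", only k nonzero (a toy stand-in for a SWIR image
# that is sparse in some basis).
n, k, m = 256, 5, 80
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.normal(size=k)

# m << n compressive measurements, as a single-pixel camera would take.
Phi = rng.normal(size=(m, n)) / np.sqrt(m)
y = Phi @ x_true

# ISTA: iterative soft-thresholding for min ||y - Phi x||^2 + lam ||x||_1.
lam, step = 0.01, 1.0 / np.linalg.norm(Phi, 2) ** 2
x = np.zeros(n)
for _ in range(500):
    x = x + step * Phi.T @ (y - Phi @ x)              # gradient step
    x = np.sign(x) * np.maximum(np.abs(x) - step * lam, 0.0)  # shrinkage

err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
```

Despite measuring only 80 coefficients of a 256-pixel scene, the relative error `err` comes out small, which is the whole point of the approach.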


Join the CompressiveSensing subreddit or the Google+ Community or the Facebook page and post there !

Tuesday, September 19, 2017

Stabilizing GAN Training with Multiple Random Projections - implementation -

Iacopo recently pointed out the following to us. How can you use the fact that most data distributions lie on low-dimensional manifolds when training generative adversarial networks ? Random projections look like the answer !


Training generative adversarial networks is unstable in high-dimensions when the true data distribution lies on a lower-dimensional manifold. The discriminator is then easily able to separate nearly all generated samples leaving the generator without meaningful gradients. We propose training a single generator simultaneously against an array of discriminators, each of which looks at a different random low-dimensional projection of the data. We show that individual discriminators then provide stable gradients to the generator, and that the generator learns to produce samples consistent with the full data distribution to satisfy all discriminators. We demonstrate the practical utility of this approach experimentally, and show that it is able to produce image samples with higher quality than traditional training with a single discriminator.

Source codes and models are here: http://www.cse.wustl.edu/~ayan/rpgan/
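The core mechanism is simple to sketch: each discriminator is tied to one fixed random low-dimensional projection and only ever sees its own view of a batch. A toy illustration of that projection layer (the dimensions are made up, and the actual generator/discriminator networks are omitted):

```python
import numpy as np

rng = np.random.default_rng(1)

d, d_proj, n_disc = 1024, 32, 12   # data dim, projection dim, # discriminators

# One fixed random low-dimensional projection per discriminator.
projections = [rng.normal(size=(d_proj, d)) / np.sqrt(d_proj)
               for _ in range(n_disc)]

def discriminator_inputs(batch):
    """Each discriminator only ever sees its own low-dim view of the batch."""
    return [batch @ P.T for P in projections]

batch = rng.normal(size=(64, d))   # a batch of (real or generated) samples
views = discriminator_inputs(batch)
```

The generator then receives gradients from all twelve low-dimensional critics at once, which is what the paper argues stabilizes training.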



Liked this entry ? subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on Linkedin.

Friday, September 15, 2017

Deep Null Space, Deep Factorization and the Last Image of Cassini


Deep Null Space property: what an aptly named property for a blog entry featuring the last image of Cassini taken before it entered Saturn's atmosphere.

We have followed Cassini since 2004 here on Nuit Blanche.

In a different direction, Deep Factorization is another aspect of The Great Convergence. Here are instances of it in the following three papers:

We study a deep matrix factorization problem. It takes as input a matrix X obtained by multiplying K matrices (called factors). Each factor is obtained by applying a fixed linear operator to a vector of parameters satisfying a sparsity constraint. We provide sharp conditions on the structure of the model that guarantee the stable recovery of the factors from the knowledge of X and the model for the factors. This is crucial in order to interpret the factors and the intermediate features obtained when applying a few factors to a datum. When K = 1: the paper provides compressed sensing statements; K = 2 covers (for instance) Non-negative Matrix Factorization, Dictionary learning, low rank approximation, phase recovery. The particularity of this paper is to extend the study to deep problems. As an illustration, we detail the analysis and provide (entirely computable) guarantees for the stable recovery of a (non-neural) sparse convolutional network.


We study a deep matrix factorization problem. It takes as input a matrix X obtained by multiplying K matrices (called factors). Each factor is obtained by applying a fixed linear operator to a short vector of parameters satisfying a model (for instance sparsity, grouped sparsity, non-negativity, constraints defining a convolution network...). We call the problem deep or multi-layer because the number of factors is not limited. In the practical situations we have in mind, we can typically have K=10 or 100. This work aims at identifying conditions on the structure of the model that guarantees the stable recovery of the factors from the knowledge of X and the model for the factors. We provide necessary and sufficient conditions for the identifiability of the factors (up to a scale rearrangement). We also provide a necessary and sufficient condition called Deep Null Space Property (because of the analogy with the usual Null Space Property in the compressed sensing framework) which guarantees that even an inaccurate optimization algorithm for the factorization stably recovers the factors. We illustrate the theory with a practical example where the deep factorization is a convolutional network.
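The forward model shared by these two abstracts can be written out in a few lines: X is the product of K factors, each factor being a fixed linear operator applied to a sparse parameter vector. A toy instantiation (sizes and operators are arbitrary, not taken from the papers):

```python
import numpy as np

rng = np.random.default_rng(2)

K, p, n = 3, 20, 8          # depth, parameters per factor, factor size

# Fixed linear operators M_k mapping a parameter vector to an n x n factor.
operators = [rng.normal(size=(n * n, p)) for _ in range(K)]

def sparse_params(s=3):
    """A parameter vector with only s nonzeros (the sparsity model)."""
    theta = np.zeros(p)
    theta[rng.choice(p, s, replace=False)] = rng.normal(size=s)
    return theta

thetas = [sparse_params() for _ in range(K)]
factors = [(M @ th).reshape(n, n) for M, th in zip(operators, thetas)]

# The observed matrix X is the product of the K factors; the recovery
# problem studied in the papers is to get the thetas back from X alone.
X = factors[0]
for F in factors[1:]:
    X = X @ F
```

The papers' question is when the sparse thetas can be stably recovered from X; the Deep Null Space Property is the condition that makes this possible.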

Speech signals are complex intermingling of various informative factors, and this information blending makes decoding any of the individual factors extremely difficult. A natural idea is to factorize each speech frame into independent factors, though it turns out to be even more difficult than decoding each individual factor. A major encumbrance is that the speaker trait, a major factor in speech signals, has been suspected to be a long-term distributional pattern and so not identifiable at the frame level. In this paper, we demonstrated that the speaker factor is also a short-time spectral pattern and can be largely identified with just a few frames using a simple deep neural network (DNN). This discovery motivated a cascade deep factorization (CDF) framework that infers speech factors in a sequential way, and factors previously inferred are used as conditional variables when inferring other factors. Our experiment on an automatic emotion recognition (AER) task demonstrated that this approach can effectively factorize speech signals, and using these factors, the original speech spectrum can be recovered with high accuracy. This factorization and reconstruction approach provides a novel tool for many speech processing tasks.





Image Credit: NASA/JPL-Caltech/Space Science Institute
File name: W00110282.jpg, https://saturn.jpl.nasa.gov/raw_images/426594
Taken: Sep. 14, 2017 7:59 PM
Received: Sep. 15, 2017 7:04 AM

The camera was pointing toward SATURN, and the image was taken using the CL1 and CL2 filters. This image has not been validated or calibrated. A validated/calibrated image will be archived with the NASA Planetary Data System.


Random Subspace with Trees for Feature Selection Under Memory Constraints / Learning Mixture of Gaussians with Streaming Data


Probably the last Image of Titan by the Cassini spacecraft. Taken: Sep. 12, 2017 9:26 PM. Received: Sep. 13, 2017 10:19 AM. Image Credit: NASA/JPL-Caltech/Space Science Institute


As our capability to produce features from data gets larger every day, we are now getting to the stage where we have to learn/infer under a streaming constraint: i.e., we get to see the features once and then have to produce some inference. The first paper tries to do this with a random forest approach while the second paper looks at it when building a mixture of Gaussians (relevant: Compressive Statistical Learning with Random Feature Moments, Sketching for Large-Scale Learning of Mixture Models, SketchMLbox). Enjoy !



Dealing with datasets of very high dimension is a major challenge in machine learning. In this paper, we consider the problem of feature selection in applications where the memory is not large enough to contain all features. In this setting, we propose a novel tree-based feature selection approach that builds a sequence of randomized trees on small subsamples of variables mixing both variables already identified as relevant by previous models and variables randomly selected among the other variables. As our main contribution, we provide an in-depth theoretical analysis of this method in infinite sample setting. In particular, we study its soundness with respect to common definitions of feature relevance and its convergence speed under various variable dependance scenarios. We also provide some preliminary empirical results highlighting the potential of the approach.
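The structure of the method (rounds over memory-sized subsets that mix variables already deemed relevant with freshly sampled ones) can be sketched as follows; a simple absolute-correlation score stands in for the randomized-tree relevance measure of the paper:

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy data where only features 0..4 are informative for the target y.
n, p, mem = 500, 200, 20    # samples, total features, features fitting in memory
X = rng.normal(size=(n, p))
y = X[:, :5] @ np.ones(5) + 0.1 * rng.normal(size=n)

def importance(cols):
    # Stand-in for the paper's tree-based relevance: |correlation| with y.
    return {c: abs(np.corrcoef(X[:, c], y)[0, 1]) for c in cols}

relevant = set()
for _ in range(150):
    keep = sorted(relevant)                    # variables already found relevant
    pool = [c for c in range(p) if c not in relevant]
    fresh = list(rng.choice(pool, mem - len(keep), replace=False))
    scores = importance(keep + fresh)          # never score more than `mem` at once
    relevant = {c for c, s in scores.items() if s > 0.3}
```

At no point are more than `mem` features held in memory, yet the informative features accumulate in `relevant` over the rounds.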



In this paper, we study the problem of learning a mixture of Gaussians with streaming data: given a stream of $N$ points in $d$ dimensions generated by an unknown mixture of $k$ spherical Gaussians, the goal is to estimate the model parameters using a single pass over the data stream. We analyze a streaming version of the popular Lloyd's heuristic and show that the algorithm estimates all the unknown centers of the component Gaussians accurately if they are sufficiently separated. Assuming each pair of centers are $C\sigma$ distant with $C=\Omega((k\log k)^{1/4}\sigma)$ and where $\sigma^2$ is the maximum variance of any Gaussian component, we show that asymptotically the algorithm estimates the centers optimally (up to constants); our center separation requirement matches the best known result for spherical Gaussians \citep{vempalawang}. For finite samples, we show that a bias term based on the initial estimate decreases at $O(1/{\rm poly}(N))$ rate while variance decreases at nearly optimal rate of $\sigma^2 d/N$.
Our analysis requires seeding the algorithm with a good initial estimate of the true cluster centers for which we provide an online PCA based clustering algorithm. Indeed, the asymptotic per-step time complexity of our algorithm is the optimal $d\cdot k$ while space complexity of our algorithm is $O(dk\log k)$.
In addition to the bias and variance terms which tend to $0$, the hard-thresholding based updates of streaming Lloyd's algorithm is agnostic to the data distribution and hence incurs an approximation error that cannot be avoided. However, by using a streaming version of the classical (soft-thresholding-based) EM method that exploits the Gaussian distribution explicitly, we show that for a mixture of two Gaussians the true means can be estimated consistently, with estimation error decreasing at nearly optimal rate, and tending to $0$ for $N\rightarrow \infty$.
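The streaming Lloyd's heuristic analyzed in the second paper amounts to, for each incoming point, a hard assignment to the nearest current center followed by a running-mean update. A toy sketch (a random perturbation of the true centers stands in for the paper's online-PCA-based seeding):

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy stream from a mixture of k well-separated spherical Gaussians.
k, d = 3, 2
true_centers = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])

# Seed with rough initial estimates (the paper uses an online-PCA-based init).
centers = true_centers + rng.normal(scale=1.0, size=(k, d))
counts = np.ones(k)

for t in range(20000):
    x = true_centers[rng.integers(k)] + rng.normal(size=d)  # one stream point
    j = int(np.argmin(np.linalg.norm(centers - x, axis=1)))  # hard assignment
    counts[j] += 1
    centers[j] += (x - centers[j]) / counts[j]               # running mean

err = np.linalg.norm(centers - true_centers, axis=1).max()
```

Each point is touched exactly once and the per-step cost is O(dk), matching the single-pass, memory-light regime the paper analyzes.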




Thursday, September 14, 2017

Overviews: Deep Learning on Reinforcement Learning, Music Generation and Recommender Systems

Cassini is taking its last image right now. 



Today we have three overviews/reviews/tutorials on different aspects of Deep Learning. The first one is about Reinforcement Learning, the second is a book on music generation and the third is on recommender systems (as taught at the latest RecSys meeting at Lake Como). 

We give an overview of recent exciting achievements of deep reinforcement learning (RL). We discuss six core elements, six important mechanisms, and twelve applications. We start with background of machine learning, deep learning and reinforcement learning. Next we discuss core RL elements, including value function, in particular, Deep Q-Network (DQN), policy, reward, model, planning, and exploration. After that, we discuss important mechanisms for RL, including attention and memory, unsupervised learning, transfer learning, multi-agent RL, hierarchical RL, and learning to learn. Then we discuss various applications of RL, including games, in particular, AlphaGo, robotics, natural language processing, including dialogue systems, machine translation, and text generation, computer vision, neural architecture design, business management, finance, healthcare, Industry 4.0, smart grid, intelligent transportation systems, and computer systems. We mention topics not reviewed yet. After listing a collection of RL resources, we present a brief summary, and close with discussions. 
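As a refresher on the value-function machinery behind DQN, here is tabular Q-learning on a toy corridor; the deep version replaces the table with a neural network and adds experience replay and a target network:

```python
import numpy as np

rng = np.random.default_rng(6)

# Tabular Q-learning on a 1-D corridor: states 0..4, terminal reward at state 4.
n_states, n_actions = 5, 2          # actions: 0 = left, 1 = right
Q = np.zeros((n_states, n_actions))
alpha, gamma, eps = 0.5, 0.9, 0.2

for episode in range(300):
    s = int(rng.integers(4))        # random non-terminal start state
    for _ in range(50):             # cap episode length
        a = int(rng.integers(n_actions)) if rng.random() < eps else int(np.argmax(Q[s]))
        s2 = min(s + 1, 4) if a == 1 else max(s - 1, 0)
        r = 1.0 if s2 == 4 else 0.0
        target = r if s2 == 4 else r + gamma * Q[s2].max()
        Q[s, a] += alpha * (target - Q[s, a])   # the TD update behind DQN's loss
        s = s2
        if s == 4:
            break

greedy = [int(np.argmax(Q[s])) for s in range(4)]
```

After training, the greedy policy marches right toward the reward from every state, which is the expected optimum for this corridor.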


This book is a survey and an analysis of different ways of using deep learning (deep artificial neural networks) to generate musical content. At first, we propose a methodology based on four dimensions for our analysis: - objective - What musical content is to be generated? (e.g., melody, accompaniment...); - representation - What are the information formats used for the corpus and for the expected generated output? (e.g., MIDI, piano roll, text...); - architecture - What type of deep neural network is to be used? (e.g., recurrent network, autoencoder, generative adversarial networks...); - strategy - How to model and control the process of generation (e.g., direct feedforward, sampling, unit selection...). For each dimension, we conduct a comparative analysis of various models and techniques. For the strategy dimension, we propose some tentative typology of possible approaches and mechanisms. This classification is bottom-up, based on the analysis of many existing deep-learning based systems for music generation, which are described in this book. The last part of the book includes discussion and prospects. 








Deep Learning and Inverse Problems


Photojournal: PIA21345
September 11, 2017
Credit
NASA/JPL-Caltech/Space Science Institute


Much like what is happening in compressive sensing, where sparse reconstruction solvers are being learned as if they were deep neural networks (LISTA, ...), the more general field of inverse problems (with a larger variety of regularizers) is also falling into this Great Convergence vortex (see previous entries here or here). Today we have the following two approaches:

We propose a new method that uses deep learning techniques to accelerate the popular alternating direction method of multipliers (ADMM) solution for inverse problems. The ADMM updates consist of a proximity operator, a least squares regression that includes a big matrix inversion, and an explicit solution for updating the dual variables. Typically, inner loops are required to solve the first two sub-minimization problems due to the intractability of the prior and the matrix inversion. To avoid such drawbacks or limitations, we propose an inner-loop free update rule with two pre-trained deep convolutional architectures. More specifically, we learn a conditional denoising auto-encoder which imposes an implicit data-dependent prior/regularization on ground-truth in the first sub-minimization problem. This design follows an empirical Bayesian strategy, leading to so-called amortized inference. For matrix inversion in the second sub-problem, we learn a convolutional neural network to approximate the matrix inversion, i.e., the inverse mapping is learned by feeding the input through the learned forward network. Note that training this neural network does not require ground-truth or measurements, i.e., it is data-independent. Extensive experiments on both synthetic data and real datasets demonstrate the efficiency and accuracy of the proposed method compared with the conventional ADMM solution using inner loops for solving inverse problems.
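For orientation, here is the plain ADMM iteration for a linear inverse problem y = Ax, with comments marking the two sub-steps that the paper replaces with pre-trained networks; the soft-thresholding denoiser below is a simple stand-in, not the learned auto-encoder:

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy inverse problem: recover sparse x from y = A x, A underdetermined.
n, m = 128, 64
A = rng.normal(size=(m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, 6, replace=False)] = 1.0
y = A @ x_true

rho = 1.0
# The paper trains a CNN to avoid this explicit matrix inversion.
AtA_inv = np.linalg.inv(A.T @ A + rho * np.eye(n))

def denoise(v, tau=0.05):
    # Stand-in for the learned denoising auto-encoder (the implicit prior).
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

x = np.zeros(n); z = np.zeros(n); u = np.zeros(n)
for _ in range(200):
    x = AtA_inv @ (A.T @ y + rho * (z - u))   # least-squares sub-problem
    z = denoise(x + u)                        # prior / proximal sub-problem
    u = u + x - z                             # dual update

err = np.linalg.norm(z - x_true) / np.linalg.norm(x_true)
```

Both marked sub-steps are exactly the ones the paper makes "inner-loop free" by swapping in pre-trained convolutional networks.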

Much of the recent research on solving iterative inference problems focuses on moving away from hand-chosen inference algorithms and towards learned inference. In the latter, the inference process is unrolled in time and interpreted as a recurrent neural network (RNN) which allows for joint learning of model and inference parameters with back-propagation through time. In this framework, the RNN architecture is directly derived from a hand-chosen inference algorithm, effectively limiting its capabilities. We propose a learning framework, called Recurrent Inference Machines (RIM), in which we turn algorithm construction the other way round: Given data and a task, train an RNN to learn an inference algorithm. Because RNNs are Turing complete [1, 2] they are capable to implement any inference algorithm. The framework allows for an abstraction which removes the need for domain knowledge. We demonstrate in several image restoration experiments that this abstraction is effective, allowing us to achieve state-of-the-art performance on image denoising and super-resolution tasks and superior across-task generalization.
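Structurally, a recurrent inference machine unrolls a fixed number of update steps, each consuming the gradient of the data likelihood, and the update rule itself is what gets learned. A toy sketch where a fixed momentum-style rule stands in for the trained RNN cell:

```python
import numpy as np

rng = np.random.default_rng(8)

# Toy restoration task: recover x from the noisy measurement y = x + noise.
n = 64
x_true = np.sin(np.linspace(0, 4 * np.pi, n))
y = x_true + 0.3 * rng.normal(size=n)

def learned_update(x, grad, h):
    # Stand-in for the RIM's recurrent cell: in the paper this is a trained
    # RNN; here it is a fixed damped gradient step with a running-average state.
    h = 0.9 * h + 0.1 * grad
    return x - 0.5 * h, h

# Unrolled inference: T steps, each fed the gradient of the data likelihood.
x, h = np.zeros(n), np.zeros(n)
for _ in range(100):
    grad = x - y                   # gradient of 0.5 * ||x - y||^2
    x, h = learned_update(x, grad, h)

err = np.linalg.norm(x - y) / np.linalg.norm(y)
```

In the actual RIM, back-propagation through this unrolled loop is what trains the cell, jointly learning the model and the inference procedure.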





Wednesday, September 13, 2017

Paris Machine Learning #1 Season 5: Code Mining, Mangas, Drug Discovery, Open Law, RAMP


So Season 5 of the Paris Machine Learning meetup starts today, woohoo ! The video of the streaming for the meetup can be found below. 

 
 



Thanks to Deep Algo for hosting this meetup and sponsoring the food and drinks afterwards. 

LightOn is sponsoring the streaming.

Schedule:

6:30 PM doors opening ; 6:45 PM : talks beginning ; 9:00 PM : talks ending
10:00 PM : end

TALKS:

Short: Franck Bardol, Igor Carron, We know what the AI world did Last Summer 
Short: Xavier Lagarrigue, Presentation of Deep Algo, La Piscine
As part of the Journées nationales de l'ingénieur, IESF is organizing with the CDS a computer vision challenge consisting of classifying the different species of pollinators. A prize will be awarded at the JNI on October 19 at UNESCO. The small number of examples in most of the classes makes the challenge technically interesting (one-shot learning / domain adaptation). Link: http://bee-o-diversity-challenge.strikingly.com/
Short: Open Law: AI and law, training datasets, Olivier Jeulin, Lefebvre-sarrut.eu
Abstract: The association Open Law Le droit ouvert, with the support of the CNIL and the Cour de Cassation, has decided to create a training dataset for the legal domain. The objective is to zone appellate court decisions (discourse parsing). Annotation is under way and the dataset will be made public in early December.
http://openlaw.fr/travaux/communs-numeriques/ia-droit-datasets-dapprentissage
15-minute presentations:

Challenges in code mining, an information theoretic approach, Jérôme Forêt, Head of R&D at Deep Algo - English -

The mission of Deep Algo is to make code understandable by anyone. This involves automatically extracting the business logic from a code base. One of the challenges is to understand the developer's intentions that led to a specific organization of this business logic.
Using Posters to Recommend Anime and Mangas by Jill-Jênn Vie (livestream from Japan) - English -
The classic recommendation problem is the following: given a user and the items (mangas) that they like, how can we recommend new items (mangas) that they are also likely to enjoy? Typically this is done via collaborative filtering, i.e. people with similar taste also enjoy other mangas, so we recommend these to the original user. A very common problem occurs when you have a new or obscure manga, aka the cold-start problem. There are no reviews to use for this manga, so a cooler option is to build a system that actually understands the content it recommends. We propose extracting visual information from the posters of these little-known mangas, using a deep neural net called Illustration2Vec. The theory is that users that like mangas with "girl with sword" will also like other mangas that have "girl with sword" or perhaps "girl with bow" but probably not "multiple boys in a swimming pool".
Site: http://research.mangaki.fr
Relevant ArXiv: Using Posters to Recommend Anime and Mangas in a Cold-Start Scenario, https://arxiv.org/abs/1709.01584
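The cold-start recommendation then reduces to content similarity in tag space. A toy sketch with made-up tag scores (Illustration2Vec actually produces scores over thousands of tags):

```python
import numpy as np

# Hypothetical tag scores, as Illustration2Vec might output for manga posters.
tags = ["girl with sword", "girl with bow", "swimming pool", "multiple boys"]
posters = {
    "known_hit": np.array([0.9, 0.1, 0.0, 0.0]),  # a manga the user liked
    "new_manga": np.array([0.8, 0.3, 0.0, 0.1]),  # cold-start candidate, no ratings
    "other":     np.array([0.0, 0.0, 0.9, 0.8]),
}

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Rank the unrated titles by visual similarity to what the user liked.
liked = posters["known_hit"]
scores = {name: cosine(liked, v) for name, v in posters.items()
          if name != "known_hit"}
best = max(scores, key=scores.get)
```

The "girl with sword" title wins over the "swimming pool" one even though neither has any ratings, which is exactly the cold-start behavior described above.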

Early-stage drug discovery requires a constant supply of new molecules, to be fed into High Throughput Screening robots. To increase this supply, virtual molecules can be generated on-demand with neural networks. In this talk, I present a Reinforcement Learning generative model, and a variant using Generative Adversarial Networks. I also present two challenges that both are facing: 1. multitasking between different objectives and 2. generating chemically diverse molecules. Finally, I sketch how these generative models could become a useful proof-of-work for a 'Drugcoin' crypto-currency, in place of the 'useless' Hashcash proof-of-work of Bitcoin. 

Motivated by the shortcomings of traditional data challenges, we have developed a unique concept and platform, called Rapid Analytics and Model Prototyping (RAMP), based on modularization and code submission.
Open code submission allows participants to build on each other’s ideas, provides the organizers with a fully functioning prototype, and makes it possible to build complex machine learning workflows while keeping the contributions simple. Besides running public data challenges, the tool may also be useful for managing the building of data science workflows internally in a data science team. In the presentation I will focus on what you can use the tool for if you are a data scientist, a student, or a data science instructor. Links: https://www.ramp.studio https://github.com/paris-saclay-cds/ramp-workflow https://medium.com/@balazskegl



Tuesday, September 12, 2017

NIPS 2017 accepted papers


So it looks like it is going to be difficult to get a ticket to NIPS if you are not buying one of these in the coming days !

The workshops are still open for submissions. The list is here.

Accepted papers are now showing up on
Oh! And if you are a company: the sponsorship for NIPS is already oversubscribed. They cannot take your money anymore.






Modeling random projection for tensor objects




In this investigation, we discuss high order data structure (called tensor) for efficient information retrieval and show especially how well reduction techniques of dimensionality goes while preserving Euclid distance between information. High order data structure requires much amount of space. One of the effective approaches comes from dimensionality reduction such as Latent Semantic Indexing (LSI) and Random Projection (RP) which allows us to reduce complexity of time and space dramatically. The reduction techniques can be applied to high order data structure. Here we examine High Order Random Projection (HORP) which provides us with efficient information retrieval keeping feasible dimensionality reduction.
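The dimensionality-reduction claim rests on the Johnson-Lindenstrauss flavor of random projection: a suitably scaled random Gaussian matrix approximately preserves Euclidean distances between (flattened) tensor objects. A quick numerical check with toy sizes:

```python
import numpy as np

rng = np.random.default_rng(5)

# Flattened "tensor" objects: think of each row as an 8x8x8 array
# viewed as a 512-dimensional vector.
n_items, dim, proj_dim = 50, 512, 128
X = rng.normal(size=(n_items, dim))

# Gaussian random projection; the 1/sqrt(proj_dim) scaling preserves
# Euclidean distances in expectation.
R = rng.normal(size=(dim, proj_dim)) / np.sqrt(proj_dim)
Y = X @ R

# Check the distortion of the distance between two projected items.
d_orig = np.linalg.norm(X[0] - X[1])
d_proj = np.linalg.norm(Y[0] - Y[1])
ratio = d_proj / d_orig
```

The distance ratio stays close to 1 even after a 4x reduction in dimension, which is the property that makes retrieval in the projected space meaningful.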







Monday, September 11, 2017

CfP: "Tensor Image Processing" in Signal Processing: Image Communication, 2018.

Yipeng sent me the following this past week:

Dear Igor, 
We are currently accepting submissions for the special issue titled "Tensor Image Processing" which will be published in Signal Processing: Image Communication in 2018.
I guess it might be interesting for some of the Nuit-Blanche readers. If you agree it is suitable, could you help to put it on Nuit Blanche, please? Thank you very much!

Best Regards,

Yipeng 
Sure Yipeng ! Here is the call:


Special Issue on Tensor Image Processing


A tensor (i.e. a multidimensional array) is a natural representation for image and video. The related advances in applied mathematics allow us to gradually move from classical matrix based methods to tensor methods for image processing methods and applications. The resulting new research topic, called tensor image processing, offers new tools to exploit the multi-dimensional and intrinsic structures in image data. In this inter-disciplinary research field, there are fast emerging works on tensor theory, tensor based models, numerical computation and efficient algorithms, and applications on image and video processing.
This special issue aims to collect the latest original contributions in tensor image processing, and offer new ideas, experiences and discussions by experts in this field. We encourage the submission of papers with new theory, analysis, methods, and applications in tensor image processing. The list of possible topics of interest include, but are not limited to:
  • tensor factorization/decomposition and its analysis
  • tensor computation
  • low rank tensor approximation
  • tensor regression and classification
  • tensor independent component analysis
  • tensor principal component analysis
  • tensor dictionary learning
  • tensor subspace clustering
  • tensor based blind source separation
  • tensor image data fusion
  • tensor image compression
  • tensor image completion
  • tensor image denoising/deblurring
  • tensor image segmentation
  • tensor image registration
  • tensor image feature extraction
  • tensor image interpolation
  • tensor image quality assessment
Submission Guideline: 
Original papers to report the latest advances on the relevant topics are invited to be submitted through Elsevier Editorial System (EES) http://ees.elsevier.com/image/ by selecting “SI: Tensor Image Processing” as the article type. All the papers will be peer-reviewed following the journal’s reviewing procedures. All the accepted papers should be prepared according to the guidelines set out by the journal.
Important dates:
  • Paper submission due: Feb 9, 2018
  • First notification: May 9, 2018
  • Revision: Jul 9, 2018
  • Final decision: Sept 10, 2018
  • Publication date: Nov 10, 2018
Guest Editors:





  

Friday, September 08, 2017

CSjob: Assistant Professor, High Dimensional Data Analysis, University of Colorado at Boulder

Stephen let me know of the following opportunity

Hi Igor,
We have a faculty search in the area of high dimensional data analysis that might interest Nuit Blanche readers. The main text of the ad is below. We start considering applications October 16, 2017, so the deadline is coming up quickly.
Best,
Stephen
Sure Stephen! Randomized projections and theoretical deep learning, what's not to like !

The Department of Applied Mathematics at the University of Colorado at Boulder invites applications for a tenure track faculty position at the Assistant Professor level to begin August 2018. The position is in the area of high dimensional data analysis (big data), with possible areas of emphasis including nonlinear optimization, analysis in high-dimensional spaces, randomized projections, probabilistic numerics, harmonic analysis, theoretical deep learning, and related areas. However, exceptional candidates in all fields of Applied Mathematics may be considered.
For the full advertisement as well as instructions on how to apply, please see: https://tinyurl.com/APPMCU2017

High Dimensional Data Analysis - 10148
 
Faculty
 
Description
 
The Department of Applied Mathematics at the University of Colorado at Boulder invites applications for a tenure track faculty position at the Assistant Professor level to begin August 2018. The position is in the area of high dimensional data analysis (big data), with possible areas of emphasis including nonlinear optimization, analysis in high-dimensional spaces, randomized projections, probabilistic numerics, harmonic analysis, theoretical deep learning, and related areas. However, exceptional candidates in all fields of Applied Mathematics may be considered. Note that the most competitive candidates will have likely had postdoctoral training.
 
Description of Department/Program:
The Department of Applied Mathematics is home to 22 tenured and tenure track faculty members whose research spans computational mathematics, nonlinear mathematics including nonlinear dynamics and waves, mathematical biology, physical applied mathematics, probability, and statistics. The Department sits at the crossroads of the university's two largest colleges, being rostered in the College of Arts and Sciences while teaching students in the College of Engineering. Our faculty have won numerous awards, including: four being named fellows of the Society for Industrial and Applied Mathematics, two being named fellows of the American Mathematical Society, two Guggenheim Fellows, an American Statistical Association Fellow, and a Sloan Fellow. For more information see http://www.colorado.edu/amath.
 
Overview of Job Duties:
The candidate is expected to take an active role in undergraduate and graduate teaching, conduct a vigorous externally funded research program, advise graduate students, and participate in department and university governance. We also seek candidates who demonstrate effectiveness in teaching, mentoring, nurturing, and inspiring diverse students of all ethnicities, nationalities and genders, including first generation college undergraduates. We encourage applications from women, racial and ethnic minorities, individuals with disabilities and veterans.
 
Alternative formats of this ad can be provided upon request for individuals with disabilities by contacting the ADA Coordinator at hr-ada@colorado.edu. The University of Colorado, an Equal Opportunity Employer, is one of the largest employers in Boulder County and offers an inspiring higher education environment and excellent benefits. Learn more about the University of Colorado by visiting https://www.cu.edu/cu-careers.
 
Qualifications
 
Minimum Qualifications:
Applicants must have a Ph.D. in applied mathematics or a related area, a strong record of research accomplishments, and have excellent teaching and communication skills.
   
Special Instructions to Applicants: All applications completed before October 16, 2017 will receive full consideration; applications will be accepted until the position is filled.
If you have technical difficulties submitting application information, please contact the CU Careers help desk at 303-860-4200, extension 2 or cucareershelp@cu.edu. All other job related inquiries should be directed to the posting contact for this posting.
  
Application Materials Required: Cover Letter, Resume/CV, List of References, Statement of Research Philosophy, Statement of Teaching Philosophy
 
Application Materials Instructions: To apply, please submit the materials listed below in PDF format to this posting at www.cu.edu/cucareers, using the naming convention “LastName_FirstName_NameofDocument”, e.g., Smith_John_CV:
1. Cover Letter, which specifically addresses the job requirements and outlines qualifications (LastName_FirstName_Cover.pdf).
2. Current Curriculum Vitae (LastName_FirstName_CV.pdf).
3. List of References: at least three individuals who, upon review of your application, may be contacted to submit written letters of reference (LastName_FirstName_References.pdf).
4. Statement of Research Philosophy (LastName_FirstName_Research.pdf).
5. Statement of Teaching Philosophy (LastName_FirstName_Teaching.pdf).
Job Category: Faculty (full-time, ongoing)
Primary Location: Boulder
Department: B0001 -- Boulder Campus - 10159 - Applied Mathematics
Posting Date: Aug 28, 2017
Posting Contact Name: David Bortz
Posting Contact Email: dmbortz@colorado.edu
Position Number: 00735508





Join the CompressiveSensing subreddit or the Google+ Community or the Facebook page and post there !
Liked this entry ? subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on Linkedin.

Super-Resolution Imaging Through Scattering Medium Based on Parallel Compressed Sensing / Cell Detection with Deep Convolutional Neural Network and Compressed Sensing / Exploit imaging through opaque wall via deep learning

So the great convergence between sensing and deep learning continues. The first paper is a potential improvement to our approach, the second mixes compressive sensing and deep learning, while the last paper uses our approach and builds a deep-learning solver (several other groups have done similar things in the past; see some of the blog entries under the MLHardware hashtag). Enjoy !


Recent studies show that compressed sensing (CS) can recover a sparse signal with far fewer measurements than the traditional Nyquist theorem requires. From another point of view, it provides a new idea for super-resolution imaging, as in the emergence of the single-pixel camera. However, traditional methods implemented the measurement matrix with a digital micromirror device (DMD) or spatial light modulator, which is a serial imaging process and makes the method inefficient. In this paper, we propose a super-resolution imaging system based on parallel compressed sensing. The proposed method first measures the transmission matrix of the scattering sheet and then recovers high-resolution objects by “two-step phase shift” technology and a CS reconstruction algorithm. Unlike traditional methods, the proposed method realizes a parallel measurement matrix with a simple scattering sheet. Parallel means that a charge-coupled device camera can obtain enough measurements at once instead of changing the patterns on the DMD repeatedly. Simulations and experimental results show the effectiveness of the proposed method.
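To see the core CS idea at work — recovering a signal from far fewer random measurements than its length — here is a minimal numpy sketch using ISTA, a standard l1 solver. This is a generic compressed-sensing reconstruction, not the paper's transmission-matrix pipeline: the measurement matrix, signal size, and solver parameters are all illustrative assumptions.

```python
import numpy as np

def ista(A, y, lam=0.05, n_iter=500):
    """Recover a sparse x from y = A @ x by iterative soft-thresholding,
    a standard l1 solver (illustrative, not the paper's algorithm)."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        z = x - A.T @ (A @ x - y) / L      # gradient step on the data term
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # shrinkage
    return x

rng = np.random.default_rng(0)
n, m, k = 256, 80, 5                       # signal length, measurements, sparsity
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
A = rng.standard_normal((m, n)) / np.sqrt(m)   # random measurement matrix
y = A @ x_true                             # m << n measurements
x_hat = ista(A, y)
rel_err = np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true)
```

With 80 measurements of a length-256 signal holding only 5 nonzeros, the relative reconstruction error is small; the paper's contribution is to obtain all such measurements in parallel from a scattering sheet rather than serially from a DMD.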


The ability to automatically detect certain types of cells in microscopy images is of significant interest to a wide range of biomedical research and clinical practices. Cell detection methods have evolved from employing hand-crafted features to deep learning-based techniques to locate target cells. The essential idea of these methods is that their cell classifiers or detectors are trained in the pixel space, where the locations of target cells are labeled. In this paper, we seek a different route and propose a convolutional neural network (CNN)-based cell detection method that uses encoding of the output pixel space. For the cell detection problem, the output space is the sparsely labeled pixel locations indicating cell centers. Consequently, we employ random projections to encode the output space into a compressed vector of fixed dimension. Then, a CNN regresses this compressed vector from the input pixels. Using L1-norm optimization, we recover sparse cell locations in the output pixel space from the predicted compressed vector. In the past, output space encoding using compressed sensing (CS) has been used in conjunction with linear and non-linear predictors. To the best of our knowledge, this is the first successful use of a CNN with CS-based output space encoding. We experimentally demonstrate that the proposed CNN + CS framework (referred to as CNNCS) exceeds the accuracy of the state-of-the-art methods on many benchmark datasets for microscopy cell detection. Additionally, we show that CNNCS can exploit ensemble averaging by using more than one random encoding of the output space.
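The output-space encoding described above can be mimicked end-to-end without the CNN: compress a sparse cell-center label vector with a fixed random projection, then recover the locations by l1 minimization. In the sketch below the grid size, number of cells, encoding dimension, and the ISTA decoder are all illustrative assumptions, and the compressed vector is computed exactly rather than regressed by a network as in the paper.

```python
import numpy as np

def l1_decode(Phi, c, lam=0.05, n_iter=2000):
    """ISTA decoder for the compressed label vector (an illustrative
    stand-in for the paper's L1-norm optimization step)."""
    L = np.linalg.norm(Phi, 2) ** 2
    z = np.zeros(Phi.shape[1])
    for _ in range(n_iter):
        w = z - Phi.T @ (Phi @ z - c) / L
        z = np.sign(w) * np.maximum(np.abs(w) - lam / L, 0.0)
    return z

rng = np.random.default_rng(1)
grid = 32 * 32                    # flattened output pixel grid (hypothetical size)
cells = rng.choice(grid, 6, replace=False)
z_true = np.zeros(grid)
z_true[cells] = 1.0               # sparse labels: ones at cell centers
Phi = rng.standard_normal((120, grid)) / np.sqrt(120)  # fixed random encoding
c = Phi @ z_true                  # 120-dim target a CNN would regress
z_hat = l1_decode(Phi, c)
detected = set(np.flatnonzero(z_hat > 0.5))
```

The 1024-pixel label map is squeezed into a 120-dimensional regression target, yet the 6 cell centers are recovered exactly; this dimension reduction of the output space is what makes the CNN's regression task tractable.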



Imaging through scattering media is encountered in many disciplines, ranging from biology and mesoscopic physics to astronomy. But it is still a big challenge because light suffers multiple scattering in such media and can be totally decorrelated. Here, we propose a deep-learning-based method that can retrieve the image of a target behind a thick scattering medium. The method uses a trained deep neural network to fit the mapping from objects at one side of a thick scattering medium to the corresponding speckle patterns observed at the other side. For demonstration, we retrieve the images of a set of objects hidden behind a 3 mm thick white polystyrene slab, the optical depth of which is 13.4 times the scattering mean free path. Our work opens up a new way to tackle this longstanding challenge using the technique of deep learning.
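The learning problem here is a regression from speckle intensities back to objects. As a toy sketch of that idea (not the authors' network or data), the code below simulates a random "medium" producing phaseless speckle intensities from synthetic objects, then fits a tiny two-layer numpy network by full-batch gradient descent; every size and hyperparameter is an assumption chosen only to make the toy run.

```python
import numpy as np

rng = np.random.default_rng(2)
n_obj, n_spk, hidden = 16, 64, 32
T = rng.standard_normal((n_spk, n_obj)) / np.sqrt(n_obj)  # toy "scattering medium"

X = rng.random((512, n_obj))          # synthetic non-negative objects
S = (X @ T.T) ** 2                    # phaseless "speckle" intensities

# Tiny two-layer regression network, trained by full-batch gradient descent.
W1 = 0.1 * rng.standard_normal((n_spk, hidden))
W2 = 0.1 * rng.standard_normal((hidden, n_obj))
lr = 1e-3

def forward(S):
    H = np.tanh(S @ W1)               # hidden features
    return H, H @ W2                  # reconstructed objects

_, Y = forward(S)
loss0 = np.mean((Y - X) ** 2)         # loss before training
for _ in range(2000):
    H, Y = forward(S)
    G = 2.0 * (Y - X) / len(S)        # dLoss/dY
    gW2 = H.T @ G
    gH = (G @ W2.T) * (1.0 - H ** 2)  # backprop through tanh
    gW1 = S.T @ gH
    W1 -= lr * gW1
    W2 -= lr * gW2
_, Y = forward(S)
loss_final = np.mean((Y - X) ** 2)    # loss after training
```

The squared intensity makes the forward map nonlinear and phaseless, which is the essential difficulty; the network nonetheless learns an approximate inverse from example pairs, which is the mechanism the paper exploits at much larger scale.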



Friday, September 01, 2017

Nuit Blanche in Review (August 2017)

NASA’s Lunar Reconnaissance Orbiter shows the shadow of the Moon cast on the United States during the Aug. 21, 2017, total solar eclipse.
Credits: NASA/GSFC/Arizona State University


Many things have happened in the month since the last Nuit Blanche in Review (July 2017), including an eclipse and Hurricane Harvey. We also have had several implementations made available by their respective authors, a thesis, a survey, more in-depth articles, some interesting videos, and some job openings... Enjoy !



Survey

In-depth


Conferences:

Videos:

Job:
Other:


