Monday, March 19, 2018

Institute for Advanced Study - Princeton University Joint Symposium on "The Mathematical Theory of Deep Neural Networks", Tuesday, March 20th



Adam just sent me the following:
Hi Igor,

I'm a long-time reader of your blog from back in the day when compressed sensing was still up-and-coming. I wanted to bring to your attention a workshop a few of my fellow post-docs at Princeton and I are hosting this Tuesday at the Princeton Neuroscience Institute: The "Institute for Advanced Study - Princeton University Joint Symposium on 'The Mathematical Theory of Deep Neural Networks'". I thought that this symposium would be of interest to both you and your readers. Since space is limited, we are going to be live-streaming the talks online (and will post videos once the dust settles). The link to the live-stream is available on the symposium website:

https://sites.google.com/site/princetondeepmath/

Cheers!

-Adam

----------------------------
Adam Charles
Post-doctoral associate
Princeton Neuroscience Institute
Princeton, NJ, 08550

Awesome, Adam! I love the streaming bit. Here are the announcement and the program:

Institute for Advanced Study - Princeton University Joint Symposium on "The Mathematical Theory of Deep Neural Networks" 
Tuesday, March 20th, Princeton Neuroscience Institute
PNI Lecture Hall A32
This event will be live-streamed at: https://mediacentrallive.princeton.edu. Additionally, video recordings of the talks will be posted after the event.
Registration is now open: register here. 
Recent advances in deep networks, combined with open, easily accessible implementations, have pushed the field's empirical results far ahead of its formal understanding. The lack of rigorous analysis for these techniques limits their use in addressing scientific questions in the physical and biological sciences, and prevents systematic design of the next generation of networks. Recently, long-past-due theoretical results have begun to emerge. These results, and those that will follow in their wake, will begin to shed light on the properties of large, adaptive, distributed learning architectures, and stand to revolutionize how computer science and neuroscience understand these systems.
This intensive one-day technical workshop will focus on the state-of-the-art theoretical understanding of deep learning. We aim to bring together researchers from the Princeton Neuroscience Institute (PNI) and the Center for Statistics and Machine Learning (CSML) at Princeton University, and from the theoretical machine learning group at the Institute for Advanced Study (IAS), who are interested in a more rigorous understanding of deep networks, in order to foster increased discussion and collaboration across these intrinsically related groups.

10:00-10:15: Adam Charles (PNI), "Introductory remarks"
10:15-11:15: Sanjeev Arora (IAS), "Why do deep nets generalize, that is, predict well on unseen data?"
11:15-12:15: Sebastian Musslick (PNI), "Multitasking Capability Versus Learning Efficiency in Neural Network Architectures"
12:15-01:30: Lunch
01:30-02:30: Joan Bruna (NYU), "On the Optimization Landscape of Neural Networks"
02:30-03:30: Andrew Saxe (Harvard), "A theory of deep learning dynamics: Insights from the linear case"
03:30-04:00: Break
04:00-05:00: Anna Gilbert (U Mich), "Towards Understanding the Invertibility of Convolutional Neural Networks"
05:00-06:00: Nadav Cohen (IAS), "Expressiveness of Convolutional Networks via Hierarchical Tensor Decompositions"
06:00-06:15: Michael Shvartsman and Ahmed El Hady (PNI), "Outgoing remarks"
06:15-08:00: Reception


