Thursday, January 18, 2018

Towards Understanding the Invertibility of Convolutional Neural Networks

Ok, here is a "connection between a particular model of model-based compressive sensing (and its recovery algorithms) and random-weight CNNs". This is great! I would have expected to see the LISTA paper by Gregor and LeCun cited in there somewhere. Regardless, this type of analysis brings us closer to figuring out what sort of layer does or does not keep information (see Sunday Morning Insight: Sharp Phase Transitions in Machine Learning ?). Enjoy !
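
For readers who have not seen LISTA: it unrolls a fixed number of ISTA iterations into a feed-forward network and learns the matrices instead of deriving them from the dictionary. Below is a minimal NumPy sketch of my own (not code from either paper); the names W_e, S and theta for the learned quantities follow Gregor and LeCun's notation, and the dimensions are arbitrary toy choices.

import numpy as np

def soft_threshold(v, theta):
    """Elementwise soft-thresholding, the proximal step of ISTA."""
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)

def ista(A, y, lam=0.1, n_iter=100):
    """Classical ISTA for min_x 0.5*||y - A x||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2              # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = soft_threshold(x + A.T @ (y - A @ x) / L, lam / L)
    return x

def lista_layer(y, x, W_e, S, theta):
    """One LISTA layer: algebraically the same form as an ISTA step, but
    W_e (~ A^T/L), S (~ I - A^T A / L) and theta are free parameters that
    would be trained by backpropagation rather than computed from A."""
    return soft_threshold(W_e @ y + S @ x, theta)

# Toy demo: recover a sparse vector from random Gaussian measurements.
rng = np.random.default_rng(0)
A = rng.normal(size=(50, 100)) / np.sqrt(50)
x_true = np.zeros(100)
x_true[rng.choice(100, 5, replace=False)] = 1.0
x_hat = ista(A, A @ x_true)
print("support recovered:", sorted(np.argsort(-np.abs(x_hat))[:5]))

# An untrained LISTA layer initialized at the ISTA values acts like one ISTA step:
L = np.linalg.norm(A, 2) ** 2
x0 = lista_layer(A @ x_true, np.zeros(100), A.T / L, np.eye(100) - A.T @ A / L, 0.1 / L)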


Several recent works have empirically observed that Convolutional Neural Nets (CNNs) are (approximately) invertible. To understand this approximate invertibility phenomenon and how to leverage it more effectively, we focus on a theoretical explanation and develop a mathematical model of sparse signal recovery that is consistent with CNNs with random weights. We give an exact connection between a particular model of model-based compressive sensing (and its recovery algorithms) and random-weight CNNs. We show empirically that several learned networks are consistent with our mathematical analysis and then demonstrate that with such a simple theoretical framework, we can obtain reasonable reconstruction results on real images. We also discuss gaps between our model assumptions and the CNN trained for classification in practical scenarios.
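
To make the flavor of that connection concrete, here is a hedged toy sketch of my own, not the paper's actual model (the paper works with convolutional filters and pooling; I use a fully-connected random matrix and skip pooling, and all names and dimensions are mine). The idea illustrated: if random filters come in +/- pairs, the ReLU loses no information, since ReLU(w.x) - ReLU(-w.x) = w.x, and the resulting linear measurements can be inverted by a sparse recovery algorithm such as iterative hard thresholding (IHT) when the input is sparse.

import numpy as np

rng = np.random.default_rng(1)
n, m, k = 200, 60, 4                       # signal dim, filter pairs, sparsity
W = rng.normal(size=(m, n)) / np.sqrt(m)   # random "filters"
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.normal(size=k)

# Forward pass of a random-weight layer with paired filters +w and -w.
y_pos = np.maximum(W @ x, 0)
y_neg = np.maximum(-W @ x, 0)
z = y_pos - y_neg                          # equals W @ x exactly: the ReLU was lossless

def iht(W, z, k, n_iter=200):
    """Iterative hard thresholding: gradient step, then keep the k largest entries."""
    x_hat = np.zeros(W.shape[1])
    step = 1.0 / np.linalg.norm(W, 2) ** 2
    for _ in range(n_iter):
        x_hat = x_hat + step * W.T @ (z - W @ x_hat)
        x_hat[np.argsort(-np.abs(x_hat))[k:]] = 0.0   # project onto k-sparse vectors
    return x_hat

x_hat = iht(W, z, k)
print("relative reconstruction error:", np.linalg.norm(x_hat - x) / np.linalg.norm(x))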




