Ali (who recently wrote about the parallel between neural networks and optics) mentioned on his Twitter feed the following work, MythBusters: A Deep Learning Edition by Sasha Rakhlin:
Here is the attendant paper for Myth 3: Size-Independent Sample Complexity of Neural Networks, by Noah Golowich, Alexander Rakhlin, and Ohad Shamir:
We study the sample complexity of learning neural networks, by providing new bounds on their Rademacher complexity assuming norm constraints on the parameter matrix of each layer. Compared to previous work, these complexity bounds have improved dependence on the network depth, and under some additional assumptions, are fully independent of the network size (both depth and width). These results are derived using some novel techniques, which may be of independent interest.
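As a rough illustration of the kind of norm-based quantity such bounds control, here is a minimal sketch computing the product of per-layer Frobenius norms divided by the square root of the sample size, the flavor of capacity term that can be independent of width and depth. The weights and the helper `frobenius_capacity` are hypothetical, for illustration only, and are not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical weights of a small two-layer network (illustrative only).
W1 = rng.standard_normal((64, 10)) * 0.1   # first layer: 10 inputs -> 64 units
W2 = rng.standard_normal((1, 64)) * 0.1    # second layer: 64 units -> 1 output

def frobenius_capacity(weights, n):
    """Norm-based capacity proxy: (prod_j ||W_j||_F) / sqrt(n).

    Terms of this shape depend on the layer norms and the sample
    size n, not directly on the network's width or depth.
    """
    prod = 1.0
    for W in weights:
        prod *= np.linalg.norm(W, ord="fro")
    return prod / np.sqrt(n)

print(frobenius_capacity([W1, W2], n=1000))
```

Shrinking the layer norms or growing the sample size drives this proxy down, which is the intuition behind norm-constrained generalization bounds.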