Tuesday, April 17, 2007

Compressed Sensing in the Primary Visual Cortex?


[Update, please note that the Compressed Sensing Blog is at this address: http://nuit-blanche.blogspot.com/search/label/CS]

Thomas Serre, Aude Oliva and Tomaso Poggio just came out with a paper showing that the brain processes information in a feedforward fashion, i.e., that in most of the brain architecture there is no feedback loop. It is a breakthrough because, even though the biology seemed to point in that direction, there was little computational modeling that could support the hypothesis. Most of the modeling is featured in Thomas Serre's Ph.D. thesis entitled:

Learning a dictionary of shape-components in visual cortex: Comparison with neurons, humans and machines, Ph.D. thesis, CBCL Paper #260 / MIT-CSAIL-TR #2006-028, Massachusetts Institute of Technology, Cambridge, MA, April 2006

I hinted at this earlier, but compressed sensing seems to be such a robust technique that there is little reason to believe it is not part of some biological process at work in the brain. I then found the following statement on page 3 of the preprint of the PNAS paper (thanks to Thomas Serre, I found it in the final paper as well; it is in the footnote section here):

Functional organization:

Layers in the model are organized in feature maps which may be thought of as columns or clusters of units with the same selectivity (or preferred stimulus) but with receptive fields at slightly different scales and positions (see Fig. S 1). Within one feature map all units share the same selectivity, i.e., synaptic weight vector w which is learned from natural images (see subsection A.1.2).

There are several parameters governing the organization of individual layers: K_X is the number of feature maps in layer X. Units in layer X receive their inputs from a topologically related N_X × N_X × S_X grid of possible afferent units from the previous layer, where N_X defines a range of positions and S_X a range of scales.
Simple units pool over afferent units at the same scale, i.e., S_Sk contains only a single scale element. Also note that in the current model implementation, while complex units pool over all possible afferents such that each unit in layer Ck receives n_Ck = N_Ck × N_Ck × S_Ck afferents, simple units receive only a subset of the possible afferent units (selected at random) such that n_Sk < N_Sk × N_Sk (see Table S 1 for parameter values).

Finally, there is a downsampling stage from the Sk to the Ck stage. While S units are computed at all possible locations, C units are only computed every Ck possible locations. Note that there is a high degree of overlap between units in all stages (to guarantee good invariance to translation). The number of feature maps is conserved from the Sk to the Ck stage, i.e., K_Sk = K_Ck. The value of all parameters is summarized in Table S 1.
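
To make the random selection of afferents more concrete, here is a minimal numpy sketch (mine, not the authors') of what a single simple unit does under this description. The grid size, the number of sampled afferents, the Gaussian tuning width and the weight vector w are illustrative assumptions, not the values of Table S 1 or weights learned from natural images.

import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes only -- not the values of Table S 1.
N_Sk = 10        # the unit could draw from an N_Sk x N_Sk grid of afferents
n_Sk = 40        # but it actually receives only n_Sk < N_Sk * N_Sk of them
sigma = 1.0      # assumed tuning width of the Gaussian-like S-unit response

# Toy responses of the previous layer over the full grid (a single scale,
# since simple units pool over afferents at the same scale).
afferents = rng.random((N_Sk, N_Sk)).ravel()

# Random subset of the possible afferents, as in the quoted passage.
subset = rng.choice(afferents.size, size=n_Sk, replace=False)
x = afferents[subset]

# Synaptic weight vector w; in the model it is learned from natural images,
# here it is random for the sake of the sketch.
w = rng.random(n_Sk)

# Gaussian-like tuning of HMAX-style simple units; a complex unit would then
# take a max over a pool of such responses at nearby positions and scales.
response = np.exp(-np.sum((x - w) ** 2) / (2.0 * sigma ** 2))
print(response)

The line to focus on is the call to rng.choice: the unit never sees the full N_Sk × N_Sk grid of possible afferents, only a random subset of size n_Sk.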



So it looks like, in this layered approach to understanding stimuli, the current modeling allows for randomly picking a few afferents out of many possible ones in order to move to a higher level of synthesis. This is very similar to compressed sensing and to some of the concepts behind the Uniform Uncertainty Principle developed by Terry Tao, Emmanuel Candes and Justin Romberg (recall that natural images can, for the most part, be considered sparse in the Fourier domain). Two features of this model can be mapped to the compressed sensing approach. A feedback mechanism would correspond to the usual transform coding approach (compute all the wavelet coefficients, then keep only the largest ones), a nonlinear selection step that compressed sensing avoids. And the random selection of afferents corresponds to random sampling, which is currently the best known way to obtain a uniform sampling strategy irrespective of most known bases (sines, cosines, wavelets, curvelets, ...).
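
To illustrate the analogy, here is a small numpy sketch (again mine, not from the paper) contrasting the two routes: transform coding, which must first compute all the coefficients before keeping the K largest, versus a compressed-sensing-style recovery of a signal sparse in a cosine basis from a few random samples. The sizes N, M and K are arbitrary, and orthogonal matching pursuit is used as a simple greedy stand-in for the l1-minimization solvers of the compressed sensing literature.

import numpy as np

rng = np.random.default_rng(1)

# Orthonormal DCT-II basis (columns are the basis vectors), built by hand so
# the sketch only needs numpy.
N = 256
n = np.arange(N)
Psi = np.sqrt(2.0 / N) * np.cos(np.pi * (n[:, None] + 0.5) * n[None, :] / N)
Psi[:, 0] /= np.sqrt(2.0)

# A signal that is K-sparse in this basis.
K = 8
support = rng.choice(N, size=K, replace=False)
coeffs = np.zeros(N)
coeffs[support] = rng.normal(size=K)
signal = Psi @ coeffs

# Transform coding: needs the whole signal, computes every coefficient, then
# keeps only the K largest (the nonlinear selection step).
all_coeffs = Psi.T @ signal
kept = np.argsort(np.abs(all_coeffs))[-K:]
print("transform coding finds the support:", set(kept) == set(support))

# Compressed sensing: keep only M random samples of the signal instead.
M = 60
rows = rng.choice(N, size=M, replace=False)
A = Psi[rows, :]          # sensing matrix = random rows of the basis
y = signal[rows]

# Orthogonal matching pursuit: greedily pick the column most correlated with
# the residual, re-fit by least squares, repeat K times.
residual, idx = y.copy(), []
for _ in range(K):
    idx.append(int(np.argmax(np.abs(A.T @ residual))))
    sol, *_ = np.linalg.lstsq(A[:, idx], y, rcond=None)
    residual = y - A[:, idx] @ sol
rec = np.zeros(N)
rec[idx] = sol
print("CS relative error:", np.linalg.norm(rec - coeffs) / np.linalg.norm(coeffs))

With M = 60 random samples out of N = 256, the greedy recovery should identify the 8 active coefficients most of the time, whereas transform coding had to look at all 256 samples before it could decide which coefficients to keep.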

