Breaking news
The latest results achieved by the project consortium.
Learning receptive fields from natural scenes with a biologically motivated model of rate-coded neurons
Fig 1: a) Sketch of the model. The LGN responses benefit from a feature-based attentional feedback signal. Cells in the second layer are excited linearly according to the match between input and receptive field (RF); they are inhibited by non-linear lateral connections and excite themselves non-linearly. b) Properties of the model RFs compared to electrophysiological monkey data. Each dot represents one cell (model or monkey). Each axis shows the product of the RF's spatial frequency and the size of the underlying Gaussian along the corresponding direction. Example RFs of the model are shown (left) together with the best Gabor filter fit (right).
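The quantities plotted in Fig. 1b are dimensionless shape parameters of the best-fitting Gabor filter: the product of the Gaussian envelope's size along one direction and the spatial frequency. A minimal sketch of how such parameters could be computed from a fitted Gabor (function and parameter names are illustrative, not taken from the model's code):

```python
import numpy as np

def gabor(x, y, sigma_x, sigma_y, f, theta=0.0, phi=0.0):
    """2-D Gabor filter: Gaussian envelope times a cosine grating.

    sigma_x, sigma_y: envelope sizes along and across the grating,
    f: spatial frequency of the grating, theta: orientation, phi: phase.
    """
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr**2 / (2 * sigma_x**2) + yr**2 / (2 * sigma_y**2)))
    return envelope * np.cos(2 * np.pi * f * xr + phi)

def shape_params(sigma_x, sigma_y, f):
    """Dimensionless RF shape parameters: envelope size times frequency,
    one value per direction, as plotted on the two axes of Fig. 1b."""
    return sigma_x * f, sigma_y * f
```

In practice the Gabor parameters would be obtained by a least-squares fit of `gabor` to each learned RF; the resulting pair of products places the cell as one dot in the scatter plot.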
A major question in visual cortex research is why V1 receptive fields (RFs) have their particular structure. Simulations of the early visual cortex are used to improve the understanding of this question. The most prominent approach to learning receptive fields relies on a generative model that minimizes the error between the top-down predicted signal and the actual input. Such an objective function minimizes the loss of information, which is in principle a desirable property. However, it is at present unclear how the generative model can be reconciled with invariant representations and with the dynamic attenuation of irrelevant scene information by attention. Moreover, the objective function of the generative model requires the computation of a global error signal, which the brain is unlikely to perform. Consistent with our earlier models of attention, we have developed an alternative approach that integrates attentive feedback with Hebbian learning and demonstrated that receptive fields converge to localized, oriented and bandpass filters similar to those found in V1 (Hamker & Wiltschut, Neural Comput, 20:1261-1284, 2007).
This approach has recently been developed further to improve the fit to primary visual cortex (V1) data (Wiltschut & Hamker, Vis. Neurosci., 10:1-14, 2009). The model consists of two layers, the first simulating LGN neurons and the second simulating V1 (Fig. 1a). All connection weights (the feedforward RFs, the feedback signals and the lateral inhibition) are learned unsupervised according to a Hebbian rule. With this model we obtained RFs of simulated V1 cells that show properties similar to RFs recorded in the primary visual cortex of monkeys (Ringach, J Neurophysiol, 88:455–463, 2002) (Fig. 1b). The similarity between the model and these data is among the highest reported (compare e.g. Ringach, J Neurophysiol, 88:455–463, 2002; Rehn & Sommer, J Comput Neurosci, 22:135-46, 2007; Weber & Triesch, Neural Comput, 20:1261–1284, 2008), perhaps even the highest so far, although earlier studies did not provide a quantitative measure.
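The text states that all weights are learned with a Hebbian rule, i.e. a weight change proportional to the product of pre- and postsynaptic activity. The following is a generic sketch of one such update step with weight normalization, not the model's actual learning rule (the rectified-linear response and the unit-norm constraint are simplifying assumptions for illustration):

```python
import numpy as np

def hebbian_step(W, x, r, lr=0.01):
    """One generic Hebbian update: dW_ij is proportional to the product of
    postsynaptic rate r_i and presynaptic input x_j. Each RF (row of W) is
    renormalized to unit length to keep the weights bounded."""
    W = W + lr * np.outer(r, x)
    W = W / np.linalg.norm(W, axis=1, keepdims=True)
    return W

rng = np.random.default_rng(0)
W = rng.standard_normal((4, 16))   # 4 model cells, 16 LGN inputs (toy sizes)
x = rng.standard_normal(16)        # one input patch
r = np.maximum(W @ x, 0.0)         # rectified linear responses (assumption)
W = hebbian_step(W, x, r)
```

Repeating such steps over many natural-scene patches, together with the competition provided by lateral inhibition, is what drives the RFs toward localized, oriented filters in models of this family.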
Additionally, we determined how far varying key parameters changes the properties with respect to efficient coding. We therefore looked for correlations between receptive field properties and "quality" measures of efficient coding, such as sparseness and independence (estimated by the average mutual information between two neurons). We found that sparseness above all depends strongly on the amount of competition among the cells, and that there is a strong linear correlation between the degree of sparseness and the quality of the simulation results (as indicated by the similarity to the monkey data). Similar but weaker effects were observed with other estimates of efficient coding, such as the mutual information, the variance of the conditional distributions (variance-distribution) and the variance of the mean firing rate (variance-mean).
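Sparseness of a population response can be quantified in several ways; the text does not specify which measure was used, so as one common example here is the Treves-Rolls sparseness (in the normalization of Vinje & Gallant), which is 0 for a uniform response and approaches 1 when activity concentrates on a single cell:

```python
import numpy as np

def lifetime_sparseness(r):
    """Treves-Rolls sparseness of a non-negative response vector r,
    normalized so that a uniform response gives 0 and a one-hot
    response gives 1 (Vinje & Gallant style normalization)."""
    r = np.asarray(r, dtype=float)
    n = r.size
    s = 1.0 - (r.mean() ** 2) / np.mean(r ** 2)
    return s / (1.0 - 1.0 / n)
```

Correlating such a scalar per parameter setting with a similarity score to the monkey data is one straightforward way to obtain the kind of sparseness-quality relationship described above.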
F.H. Hamker, J. Wiltschut
Department of Psychology, University of Münster (Germany)