Sparse Coding Models Can Exhibit Decreasing Sparseness while Learning Sparse Codes for Natural Images
A SAILnet simulation was performed in which the RFs were initially randomized, and the recurrent inhibitory connection strengths and firing thresholds were initialized with smaller random values than those used in the simulation described in Fig. 3 (see Methods section for details). (A) The initial RFs are shown for 196 randomly selected model neurons. As in Fig. 3, each box on the grid depicts the RF of one neuron, with lighter tones corresponding to positive pixel values and darker tones corresponding to negative values. (B) After training on natural images, these same SAILnet neurons have oriented, localized RFs. (C) All three of our multi-unit sparseness measures increase during the training period. Aside from the initial conditions, the network used to generate these data was identical to the one from Fig. 3: both networks have the same learning rates, the same number of neurons, and the same target mean firing rate, and both are trained on the same database of whitened natural images.
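The caption tracks multi-unit sparseness measures over training without defining them here. As a minimal sketch of what such a measure looks like, the Treves–Rolls population sparseness index is a standard choice for quantifying how unevenly activity is distributed across a population of neurons; the function below is illustrative and is not claimed to be one of the paper's three specific measures.

```python
import numpy as np

def treves_rolls_sparseness(rates):
    """Treves-Rolls population sparseness for a single stimulus.

    rates: 1-D array of non-negative firing rates, one entry per neuron.
    Returns a value in [0, 1]: 0 for a perfectly uniform population
    response, approaching 1 when a single neuron carries all activity.
    """
    rates = np.asarray(rates, dtype=float)
    n = rates.size
    # a = <r>^2 / <r^2>, where <.> averages over neurons;
    # sparseness S = (1 - a) / (1 - 1/n) rescales a into [0, 1].
    a = rates.mean() ** 2 / np.mean(rates ** 2)
    return (1.0 - a) / (1.0 - 1.0 / n)

# A uniform response across 196 neurons is minimally sparse,
# while a response carried by one neuron is maximally sparse.
dense = np.ones(196)
single = np.zeros(196)
single[0] = 5.0
print(treves_rolls_sparseness(dense))   # 0.0
print(treves_rolls_sparseness(single))  # 1.0
```

Averaging this index over the image ensemble at each point in training would produce a single learning curve of the kind plotted in panel (C).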