
Computational origins of shape perception

Fig 5

Ablating head movements and transitional views.

(a) In the ‘no head movements’ condition, the agent moved freely around the virtual chamber, but rather than performing head movements at each location (as in the dense exploration condition, left), it stared directly at the object (right). (b) In the ‘no transitional views’ condition, the chamber was divided into 49 equally spaced locations, and the agent teleported to each location. The colored circles and associated images show sample views obtained from five locations. Unlike in the dense exploration condition, the agent did not acquire transitional views while moving from place to place. (c) Generic fitting models trained in the no head movements condition developed only partial sensitivity to shape, as shown in the RDMs (top) and color/shape scores (bottom). This differs from models trained in the dense exploration condition, which developed robust shape perception (Fig 4d). (d) Generic fitting models trained in the no transitional views condition likewise developed only partial sensitivity to shape, again in contrast to models trained in the dense exploration condition (Fig 4d). In both conditions, the models partially reduced their weighting of color features (compared with untrained models, Fig 4c). All RDMs were computed from the same images used in Fig 1. Error bars denote standard error for each model across the color cells and shape cells shown in Fig 1b, c.
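The RDM comparisons in panels (c) and (d) rest on representational dissimilarity matrices. The exact metric is not given in this excerpt, but a common choice is correlation distance between the model's feature vectors for each pair of images; the sketch below illustrates that construction (the function name, metric, and toy data are assumptions, not the authors' pipeline):

```python
import numpy as np

def compute_rdm(features):
    """Correlation-distance RDM: entry (i, j) is 1 minus the Pearson
    correlation between the feature vectors for images i and j."""
    features = np.asarray(features, dtype=float)
    # np.corrcoef treats each row as one variable (here, one image)
    return 1.0 - np.corrcoef(features)

# Toy example: feature vectors for 4 images, 8 dimensions each
rng = np.random.default_rng(0)
feats = rng.normal(size=(4, 8))
rdm = compute_rdm(feats)
print(rdm.shape)                        # (4, 4)
print(np.allclose(np.diag(rdm), 0.0))   # True: each image matches itself
```

A model with robust shape perception would show low dissimilarity within the shape cells of such a matrix and high dissimilarity elsewhere, which is what the color/shape scores summarize.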

doi: https://doi.org/10.1371/journal.pcbi.1013674.g005