
Computational origins of shape perception

Fig 6

Ablating lateral and depth transitions between views.

(a) In the ‘no side-to-side transitions’ condition, the agent teleported only to locations within the horizontal red stripe, so it could not collect views showing side-to-side transitions of the object. (b) In the ‘no depth transitions’ condition, the agent teleported only to locations within the vertical red stripe, so it could not collect views showing depth transitions of the object. (c) Generic fitting models trained in the no side-to-side transitions condition developed only partial sensitivity to shape, as shown in the RDMs (top) and color/shape scores (bottom); this differs from models trained in the dense exploration condition, which developed robust shape perception (Fig 4d). (d) Generic fitting models trained in the no depth transitions condition likewise developed only partial sensitivity to shape, again in contrast to models trained in the dense exploration condition (Fig 4d). The one exception was the 3H model, which did develop shape perception. In both conditions, the models partially reduced their weighting of color features relative to untrained models (Fig 4c). All RDMs were computed from the same images as those used in Fig 1. Error bars denote the standard error for each model across the color cells and shape cells shown in Fig 1b and 1c.
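As a minimal sketch of how such scores might be computed (the exact cell definitions and scoring procedure are not given here; the function name `cell_score` and the example indices are illustrative, assuming a score is the mean dissimilarity over a designated set of RDM cells, with error bars as the standard error across those cells):

```python
import numpy as np

def cell_score(rdm, cells):
    """Mean and standard error of RDM values at the given (row, col) cells.

    Hypothetical helper: `cells` would list the color cells or shape cells
    of the representational dissimilarity matrix (cf. Fig 1b, c).
    """
    vals = np.array([rdm[i, j] for i, j in cells], dtype=float)
    sem = vals.std(ddof=1) / np.sqrt(len(vals))  # standard error across cells
    return vals.mean(), sem

# Toy symmetric RDM with zero diagonal, standing in for a model's RDM.
rng = np.random.default_rng(0)
rdm = rng.random((8, 8))
rdm = (rdm + rdm.T) / 2
np.fill_diagonal(rdm, 0.0)

color_cells = [(0, 1), (2, 3), (4, 5)]  # illustrative off-diagonal cells
score, sem = cell_score(rdm, color_cells)
```

A bar for each model would then show `score` with `sem` as its error bar.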


doi: https://doi.org/10.1371/journal.pcbi.1013674.g006