
An image-computable model of human visual shape similarity

Fig 5

ShapeComp predicts human shape similarity across small sets of shapes.

(A) Example shape pairs varying as a function of ShapeComp distance. (B) Shape similarity ratings, averaged across 14 observers for 250 shape pairs, correlate highly with distance in ShapeComp’s 22-dimensional space. Inset: the variance in the similarity ratings accounted for by the individual ShapeComp dimensions; many dimensions on their own account for some of the variance in human shape similarity ratings. Shaded error bars were estimated via 1000 bootstrap samples across participant responses. (C) Pixel similarity was defined as the standard Intersection-over-Union (IoU; [37, 72]). (D) Observers viewed shape triads and judged which test shape appeared more similar to the sample. (E) The ShapeComp distance between test and sample was varied parametrically while pixel similarity was held constant. (F) Mean probability across participants that the closer of the two test stimuli was perceived as more similar to the sample, as a function of the relative proximity of the closer test shape. Blue: psychometric function fit; orange: prediction of the IoU model. (G) Results of an experiment in which distances from test to sample were equated on one ShapeComp dimension at a time. Mean psychometric function slopes were much steeper than predicted if observers relied only on the respective dimension. Together with the finding that many ShapeComp dimensions account for variance in the similarity ratings (inset in B), these results support the idea that human shape perception is based on a high-dimensional feature space.
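The pixel-similarity baseline in panel C is the standard Intersection-over-Union between two binary shape masks. A minimal sketch of that computation (the 4×4 example masks here are illustrative, not the study's stimuli):

```python
import numpy as np

def iou(mask_a, mask_b):
    """Intersection-over-Union between two binary shape masks."""
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    union = np.logical_or(a, b).sum()
    if union == 0:
        return 0.0  # two empty masks: define IoU as 0 to avoid 0/0
    return np.logical_and(a, b).sum() / union

# Two overlapping 3x3 squares rendered on a 4x4 grid
a = np.zeros((4, 4), dtype=bool); a[:3, :3] = True  # 9 pixels
b = np.zeros((4, 4), dtype=bool); b[1:, 1:] = True  # 9 pixels
# intersection = 4 pixels, union = 14 pixels
print(iou(a, b))  # 4/14 ≈ 0.286
```

Holding IoU constant while varying ShapeComp distance (panel E) is what lets the experiment dissociate perceptual shape similarity from raw pixel overlap.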
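The shaded error bars in panel B come from resampling participant responses 1000 times. A generic bootstrap-of-the-mean sketch along those lines (function name, seed, and toy ratings are hypothetical; the authors' exact resampling scheme may differ):

```python
import random

def bootstrap_sem(ratings, n_boot=1000, seed=0):
    """Bootstrap estimate of the standard error of the mean rating.

    Resamples the ratings with replacement n_boot times and returns
    the standard deviation of the resampled means.
    """
    rng = random.Random(seed)
    n = len(ratings)
    means = []
    for _ in range(n_boot):
        sample = [ratings[rng.randrange(n)] for _ in range(n)]
        means.append(sum(sample) / n)
    mu = sum(means) / n_boot
    var = sum((m - mu) ** 2 for m in means) / (n_boot - 1)
    return var ** 0.5

# Toy similarity ratings from a handful of observers
print(bootstrap_sem([3.1, 4.0, 2.8, 3.6, 3.9]))
```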


doi: https://doi.org/10.1371/journal.pcbi.1008981.g005