Figure 1.
Overview of the data analysis framework.
Figure 2.
Schematic of the voxel-based encoding model used in this study.
(a) Model estimation. We used the training images as input channels to estimate the encoding model of each voxel with a gradient descent algorithm. (b) Model prediction. The correlation r was calculated between the observed activity pattern (i.e., the fMRI response to each training image in the early visual cortex) and the activity patterns predicted by the encoding models. The dotted box represents the most closely matched model. (c) Correlation matrix and prediction performance. The color of the (n, m) element represents the correlation between the observed activity pattern for the mth image and the predicted activity pattern for the nth image. The maximum correlation in each column is designated by an enlarged circle of the appropriate color, which indicates the image selected by the prediction algorithm. If the diagonal element was the maximum value in its column, we marked it as a correct prediction. The prediction performance of the encoding model was defined as the ratio of the number of correct predictions to the total number of training images. For this participant, the performance was 88.3% (106/120).
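The identification rule described in panel (c) can be sketched as follows. This is a minimal illustration, not the authors' code: it assumes the observed and predicted responses are stored as NumPy arrays of shape (images × voxels), and that Pearson correlation is used, as stated in the caption.

```python
import numpy as np

def identification_accuracy(observed, predicted):
    """Correlation-based image identification (cf. Figure 2c).

    observed:  (n_images, n_voxels) measured fMRI activity patterns
    predicted: (n_images, n_voxels) patterns predicted by the encoding models
    Returns the fraction of images whose observed pattern correlates
    most strongly with their own predicted pattern.
    """
    n = observed.shape[0]
    # corr[i, j] = Pearson r between predicted pattern i and observed pattern j
    corr = np.corrcoef(predicted, observed)[:n, n:]
    # An image is correctly identified when the diagonal element
    # is the maximum of its column.
    correct = np.sum(np.argmax(corr, axis=0) == np.arange(n))
    return correct / n
```

With 120 training images, the 88.3% reported for this participant corresponds to 106 correct columns out of 120, well above the 1/120 chance level.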
Figure 3.
Performance of different voxel-based encoding models.
(a) Summary of identification performance. The bars indicate the performance obtained from the set of 120 images, and the dashed green line indicates chance performance. Note that for both participants S1 and S2, all three methods performed above chance, and the combination outperformed either the stimulus VBEM or the LOC AVBEM alone. (b) The stimulus VBEM and the LOC AVBEM showed inverse modulations across the sub-areas of ERC. VBEM, voxel-based encoding model; AVBEM, analogical voxel-based encoding method.