Fig 1.
Stimulus set used in the experiment.
The 72 images used in this study comprise twelve images from each of six categories: Human Body, Human Face, Animal Body, Animal Face, Fruit Vegetable, and Inanimate Object. The stimuli divide most broadly into Animate and Inanimate categories. Within the Animate category, images are either Human or Animal, and either Body or Face. Inanimate images are either Natural or Man-made. Colored borders were added for visualization purposes only and were not shown during experimental sessions.
Fig 2.
Summary table of classification results.
Classifier accuracies, along with p-value (p), effect size (d), and sample standard deviation (s) across the ten participants, for classifications incorporating data from all electrodes into the feature vector. Classifications using all time points together are shown in the “0–496 ms” column; temporally resolved classifications are shown in the subsequent columns. Chance level was 1/6 = 16.67% for six-class (category-level) classifications, 1/72 = 1.39% for 72-class (exemplar-level), 1/12 = 8.33% for twelve-class (within-category), and 1/2 = 50.00% for two-class (between-category). Statistical significance and effect size were calculated under a Binomial null distribution parameterized by the number of observations in one test fold. Some classifications could not be performed for certain participants’ data because the SVD failed to converge during the computation of principal components. Results are from all ten participants unless otherwise indicated: † indicates nine participants; ‡ indicates eight participants; ⊕ indicates seven participants. A ⋆ indicates missing data from one participant in our results file.
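As a concrete illustration of the Binomial null described above, the accuracy threshold for significance can be computed directly from the chance level and the fold size. This is a minimal sketch, not the authors' code; the fold size `n_obs = 300` is an arbitrary assumption, since the actual number of observations per test fold is not given in this caption.

```python
from math import comb

def binomial_p_value(n_correct, n_obs, chance):
    """One-sided p-value: P(X >= n_correct) for X ~ Binomial(n_obs, chance)."""
    return sum(comb(n_obs, k) * chance**k * (1 - chance)**(n_obs - k)
               for k in range(n_correct, n_obs + 1))

def significance_threshold(n_obs, chance, alpha=0.01):
    """Smallest accuracy whose one-sided Binomial p-value falls below alpha."""
    for k in range(n_obs + 1):
        if binomial_p_value(k, n_obs, chance) < alpha:
            return k / n_obs
    return 1.0

# Example: six-class chance level with an assumed fold size of 300 observations.
threshold = significance_threshold(300, 1/6)
```

Larger folds pull the threshold closer to the chance level, which is why the thresholds quoted in later figures depend on the classification being performed.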
Fig 3.
Category-level classification results.
All electrodes and time samples of the brain response were used together in the six-class category-level classification. An equal number of observations from each category were used. Left: Confusion matrix showing proportions of classifier output. Rows represent actual labels and columns represent predicted labels. Values along the diagonal indicate proportions of correct classifications. Mean accuracy for this classification was 40.68%, compared to chance-level accuracy of 16.67% (Fig 2). Middle: Multidimensional scaling (MDS) plot derived from the confusion matrix, visualizing the non-hierarchical structure of the representational space. MDS dimensions are sorted in descending order of variance explained. Right: Dendrogram visualizing the hierarchical structure of the representation. The Human Face category is most separate from the other categories, while the two Inanimate categories form the tightest category cluster.
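The MDS step can be sketched as follows: the row-normalized confusion matrix is symmetrized into a dissimilarity matrix, which is then embedded with classical (Torgerson) MDS, whose dimensions come naturally sorted by variance explained. The conversion `1 - (C + Cᵀ)/2` is an assumption for illustration; the caption does not specify how confusions were turned into distances.

```python
import numpy as np

def confusion_to_dissimilarity(conf):
    """Symmetrize a confusion matrix into dissimilarities (assumed
    conversion: frequent mutual confusion = low dissimilarity)."""
    d = 1.0 - 0.5 * (conf + conf.T)
    np.fill_diagonal(d, 0.0)
    return d

def classical_mds(d, n_dims=2):
    """Classical (Torgerson) MDS via double centering; columns are sorted
    in descending order of eigenvalue, i.e. variance explained."""
    n = d.shape[0]
    j = np.eye(n) - np.ones((n, n)) / n   # centering matrix
    b = -0.5 * j @ (d ** 2) @ j           # double-centered Gram matrix
    vals, vecs = np.linalg.eigh(b)
    order = np.argsort(vals)[::-1][:n_dims]
    return vecs[:, order] * np.sqrt(np.clip(vals[order], 0.0, None))

# Toy 6x6 confusion matrix (not from the study): strong diagonal,
# one mutually confusable category pair.
conf = np.full((6, 6), 0.06)
np.fill_diagonal(conf, 0.5)
conf[0, 1] = conf[1, 0] = 0.2
coords = classical_mds(confusion_to_dissimilarity(conf))
```

For genuinely Euclidean input distances, this embedding reproduces them exactly, which is the sense in which the leading MDS dimensions summarize the representational space.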
Fig 4.
Topographic map of category-level classifier accuracies for individual electrodes.
Independent category-level classifications were performed on the data from each of the 124 electrodes, and resulting classifier accuracies were plotted over a scalp map. Accuracies of 20.66% or higher are statistically significant at the α = 0.01 level.
Fig 5.
Temporally resolved category-level classification results.
Separate classifications were performed on temporal subsets of the brain response. Each temporal window was 80 ms long and advanced in 48-ms increments. Classifications used data from all electrodes together. Chance level was 1/6 = 16.67%. (A) Classifier accuracies as a function of time for the nine temporal windows. The width of each black bar specifies the time interval used in classification, and the height indicates the overall accuracy of that classification. The line plots through the center of each temporal window display proportions of correct classifications for each of the six categories. The dotted horizontal line indicates the statistical significance threshold of the overall classifier accuracy at α = 0.01. Peak accuracy in the fourth temporal window (144–224 ms) is 39.12%, compared to 40.68% accuracy when all time points were used together in classification (Fig 2). (B) The nine confusion matrices corresponding to the classifications performed in (A). Composition of the matrices is as described in Fig 3. The diagonal values of each confusion matrix make up the line plots in (A). (C) MDS plots derived from the confusion matrices in (B). As in Fig 3, Dimension 1 of the MDS is plotted along the x-axis, and Dimension 2 is plotted along the y-axis. (D) Dendrograms derived from the confusion matrices in (B).
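The window boundaries stated above (80 ms windows advancing in 48 ms steps) can be enumerated with a small helper; the 0–496 ms epoch limits are taken from the "0–496 ms" column of Fig 2.

```python
def temporal_windows(t_start=0, t_end=496, width=80, step=48):
    """Enumerate overlapping (start, stop) temporal windows in milliseconds."""
    windows = []
    start = t_start
    while start + width <= t_end:
        windows.append((start, start + width))
        start += step
    return windows

windows = temporal_windows()
```

This yields nine windows, with the fourth spanning 144–224 ms, matching the peak-accuracy window reported in the caption.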
Fig 6.
Temporally resolved rate maps for single-electrode category-level classifications.
For each overlapping temporal window described in Fig 5, independent category-level classifications were performed on single-electrode data (as described in Fig 4) in order to reveal the topography of the representation over time. Resulting classifier accuracies are plotted over separate scalp maps for each temporal window. 20.66% is the accuracy threshold for statistical significance at α = 0.01.
Fig 7.
Exemplar-level classification results.
The classifier attempted to predict image exemplar labels from brain responses in a 72-class classification. Mean accuracy for the classification was 14.46%. (A) Line plot of the proportion of correct classifications for each of the 72 image exemplars. (B) Confusion matrix from the classification. The matrix diagonal, visualized in (A), has been set to zero for better display of off-diagonal elements.
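The display step in (B), zeroing the diagonal so that the much smaller off-diagonal confusions remain visible on a shared color scale, is straightforward with NumPy; a minimal sketch:

```python
import numpy as np

def split_diagonal(conf):
    """Return the per-exemplar accuracies (the diagonal) and a copy of the
    confusion matrix with its diagonal zeroed for display."""
    accuracies = np.diag(conf).copy()
    display = conf.copy()
    np.fill_diagonal(display, 0.0)
    return accuracies, display

# Toy 3-class example (not from the study).
conf = np.array([[0.6, 0.3, 0.1],
                 [0.2, 0.7, 0.1],
                 [0.1, 0.1, 0.8]])
acc, disp = split_diagonal(conf)
```

Copying before zeroing keeps the original matrix intact, so the diagonal can still be plotted separately, as in panel (A).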
Fig 8.
Multidimensional scaling plots for exemplar-level classification.
MDS coordinates were derived from the 72-class confusion matrix (Fig 7). (A) The first four MDS dimensions, plotted as scatterplots of pairs of dimensions. Boxplots show the distribution of image exemplar coordinates along each dimension, grouped by the category labels used previously (as in Fig 3). (B) Statistical significance of category separability along each of the four principal MDS dimensions plotted in (A). Nonparametric tests were performed on the exemplar coordinates of MDS Dimensions 1–4 to assess category separability; every category pair except the two Inanimate categories is separable at the α = 0.01 level along at least one of these dimensions.
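The per-dimension separability test might be sketched as below, using a two-sided rank-sum (Mann–Whitney U) test as the nonparametric test. Both the specific test and the absence of a multiple-comparisons correction here are assumptions, since the caption names neither.

```python
import numpy as np
from scipy.stats import mannwhitneyu

def category_separability(coords, labels, cat_a, cat_b, n_dims=4, alpha=0.01):
    """Test, per MDS dimension, whether exemplar coordinates of two
    categories differ; the pair counts as separable if any dimension passes."""
    a = coords[labels == cat_a]
    b = coords[labels == cat_b]
    pvals = [mannwhitneyu(a[:, d], b[:, d], alternative="two-sided").pvalue
             for d in range(n_dims)]
    return pvals, any(p < alpha for p in pvals)

# Toy example: 12 exemplars per category, separated along Dimension 1 only.
rng = np.random.default_rng(0)
coords = rng.normal(0.0, 0.5, size=(24, 4))
coords[:12, 0] -= 5.0
coords[12:, 0] += 5.0
labels = np.repeat([0, 1], 12)
pvals, separable = category_separability(coords, labels, 0, 1)
```

With twelve exemplars per category, as in the study, a rank-based test of this kind has enough observations per group to reach the α = 0.01 level when the categories are well separated on a dimension.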
Fig 9.
Dendrogram and reordered confusion matrix for exemplar-level classification.
The dendrogram was derived from the 72-class confusion matrix shown in Fig 7. The principal split in the dendrogram separates Human Face images from the other images. To the right of the dendrogram is the 72-class confusion matrix, whose rows and columns have been reordered to match the ordering of elements in the dendrogram.
Fig 10.
Temporally resolved exemplar-level classification results.
Exemplar-level classification rates are plotted as a function of time. The temporal windows used for classification, and the composition of the plot, are as described in Fig 5. As in the category-level case (Fig 5), peak accuracy occurs in the fourth temporal window (144–224 ms). Here, exemplar-level peak accuracy is 13.17%, compared to 14.46% when all time samples were used at once (Fig 2).
Fig 11.
Within-category classification results.
Separate within-category classifications were performed on Human Face and Inanimate Object responses. A two-class face-versus-object classification was also performed for reference. (A) Confusion matrices from classifications using data from all electrodes and time points together for Human Face (Left), Inanimate Object (Center), and face versus object (Right). Matrix layout is as described in Fig 3. Classifier accuracy for Human Face was 18.30%; for Inanimate Object was 28.87%; and for face versus object was 81.06% (summarized in Fig 2). (B) Maps of single-electrode classifier accuracies for Human Face (Left), Inanimate Object (Center), and face versus object (Right). Significance thresholds (α = 0.01) are 16.28% for twelve-class and 58.72% for two-class. (C) Classification rates as a function of time, with exemplar or category accuracies overlaid (as described in Fig 5) for Human Face (Left), Inanimate Object (Center), and face versus object (Right). Dotted horizontal lines designate α = 0.01 significance thresholds. The best-classifying Human Face image (Left) is highlighted. (D) Classifier accuracy maps for temporally resolved single-electrode Human Face (Top), Inanimate Object (Middle), and face-versus-object (Bottom) classifications, using spatiotemporal subsets described in Fig 6.
Fig 12.
Stimulus set, averaged by category.