
Searching through functional space reveals distributed visual, auditory, and semantic coding in the human brain

Fig 2

Enhanced performance of multivariate analysis with functional searchlight.

We calculated the percent improvement of functional searchlight over anatomical searchlight for every subject from the top-performing 1% of searchlights of each type. (A) Each dot represents the percent improvement for one subject from an example layer in the AlexNet visual network (conv2) and the KellNet auditory network (fc7_W), as well as for annotation vector decoding. Error bars depict 95% confidence intervals (CIs) from bootstrapping. Raw performance levels for each searchlight type and non-parametric chance baselines can be found in Fig 3A. (B) For the visual and auditory analyses, we visualize which voxels contained model-based information by depicting the number of subjects for whom each voxel contributed to one or more of their top 1% of functional and anatomical searchlights. For the semantic analysis, we do the same but visualize only the center voxels of the top 1% of searchlights to avoid clutter. (C) We compared functional vs. anatomical searchlight in a localizer task by classifying brain activity evoked by images from six categories: bodies, faces, houses, objects, landscapes, and scrambled images. Each dot represents the percent improvement over chance of the mean accuracy of the top 1% of searchlights. Error bars depict 95% CIs from bootstrapping. (D) We visualize the locations of all voxels that contributed to the top-performing searchlights for category decoding.
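The sketch below illustrates the statistic summarized in panels A and C: for each subject, average the scores of the top-performing 1% of searchlights of each type, compute the percent improvement of functional over anatomical searchlight, then bootstrap a 95% CI across subjects. This is a minimal illustration, not the authors' code; the array shapes, the toy scores, and the top1pct_mean helper are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def top1pct_mean(scores):
    """Mean score of the top-performing 1% of searchlights."""
    k = max(1, int(np.ceil(0.01 * scores.size)))
    return np.sort(scores)[-k:].mean()

# Hypothetical per-subject searchlight scores: (n_subjects, n_searchlights).
functional = rng.normal(0.12, 0.05, size=(16, 20000))
anatomical = rng.normal(0.10, 0.05, size=(16, 20000))

func_top = np.array([top1pct_mean(s) for s in functional])
anat_top = np.array([top1pct_mean(s) for s in anatomical])

# One percent-improvement value per subject (each dot in panel A).
pct_improvement = 100 * (func_top - anat_top) / anat_top

# Bootstrap the 95% CI of the mean improvement by resampling subjects.
boot = np.array([
    rng.choice(pct_improvement, size=pct_improvement.size, replace=True).mean()
    for _ in range(10000)
])
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"mean improvement = {pct_improvement.mean():.1f}%, 95% CI [{lo:.1f}, {hi:.1f}]")
```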

doi: https://doi.org/10.1371/journal.pcbi.1008457.g002