Fig 1.
Brain mapping with a cognitive ontology.
Our approach characterizes the task conditions that correspond to each brain image with terms from a cognitive ontology. Forward inference maps differences between brain responses for a given term and its neighbors in the ontology, i.e., closely related psychological notions. Reverse inference is achieved by predicting the terms associated with the task from brain activity. The figure depicts the analysis of visual object perception tasks with a motor response. Forward inference captures brain responses in motor, primary visual, and high-level visual areas. Reverse inference captures which regions or neural substrates are predictive of the different terms, discarding responses common to different tasks, here in the primary visual cortex.
Table 1.
Contrasts used to characterize task effects in our database.
We used CogPO categories for task-related description, and added terms from the Cognitive Atlas as needed to describe higher-level cognitive aspects. Here we report only terms that were present in more than one study, aside from “left foot”, which appears in the analysis because it maps to the “feet” task category, unlike “right foot”. The task categories group terms typically used as conditions, together with their controls, to test a hypothesis. The stimulus modality category serves as both a CogPO category and a task category. Some terms do not belong to any task category and are reported as such. The arithmetic task category spans the response modality and instructions CogPO categories.
Fig 2.
The hierarchical decoding procedure reduces dimensionality by stacking the decision functions of several simple binary classifiers, which mimic study-level contrasts by opposing each term to matching ones. A second level of one-versus-all (OvA) classifiers predicts the presence of terms from the output of the first level. The first layer may be seen as capturing whether a given brain activity map looks more like face or place recognition, objects or scrambled images, visual or motor stimuli. The second layer combines this information to determine which cognitive terms best describe the given activity. Final linear classifiers may be recovered by combining the coefficients of the first- and second-level classifiers.
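The two-level stacking described above can be sketched as follows. This is a minimal illustration with synthetic data, not the study's actual pipeline: the term pairs, classifier choice (logistic regression), and data shapes are all assumptions made for the example.

```python
# Hedged sketch of the hierarchical decoding scheme of Fig 2.
# All data and term pairings below are synthetic/illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier

rng = np.random.default_rng(0)
n_maps, n_voxels, n_terms = 200, 500, 3
X = rng.normal(size=(n_maps, n_voxels))       # brain activity maps
Y = rng.integers(0, 2, size=(n_maps, n_terms))  # term presence, e.g. face/place/visual

# First level: binary classifiers opposing pairs of matching terms,
# mimicking study-level contrasts (the pairs here are illustrative).
pairs = [(0, 1), (0, 2), (1, 2)]
first_level = []
for a, b in pairs:
    mask = Y[:, a] != Y[:, b]  # maps where exactly one of the two terms applies
    clf = LogisticRegression().fit(X[mask], Y[mask, a])
    first_level.append(clf)

# Stack the decision functions: dimensionality drops from n_voxels to len(pairs).
Z = np.column_stack([clf.decision_function(X) for clf in first_level])

# Second level: one-vs-all classifiers predicting each term from Z.
second_level = OneVsRestClassifier(LogisticRegression()).fit(Z, Y)
pred = second_level.predict(Z)

# The final linear classifier for each term can be recovered by composing
# second-level coefficients with the stacked first-level coefficients.
W1 = np.vstack([clf.coef_ for clf in first_level])   # (n_pairs, n_voxels)
W_final = second_level.estimators_[0].coef_ @ W1     # (1, n_voxels) for term 0
```

The key design point is that each first-level classifier only sees maps where its two terms disagree, so its decision function approximates a study-level contrast; the second level then operates in a space whose dimension is the number of contrasts rather than the number of voxels.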
Fig 3.
Maps for the different inference types.
Left (a–c): maps of the different inferences on our database for the “place” concept. The consensus between reverse inference and forward inference based on contrasts defined from the ontology singles out the parahippocampal place area (PPA) for the “place” concept. Right (d): the NeuroSynth reverse-inference map for this concept. Reverse inference with NeuroSynth also localizes well to the PPA, but is noisier.
Fig 4.
Regions outlined using different functional mapping approaches, from left to right: a. forward term mapping; b. forward inference with ontology contrasts (standard analysis); c. reverse inference with logistic regression; d. NeuroSynth reverse inference; and e. our approach, mapping with decoding and an ontology. The top part shows visual regions, and the bottom part auditory regions, in the left hemisphere. Forward term mapping outlines overlapping regions, as brain responses capture side effects such as the stimulus modality: for visual and auditory regions, every cognitive term is represented in the corresponding primary cortex. Forward mapping with contrasts removes the overlap in primary regions, but a large overlap persists in mid-level regions, as control conditions are not well matched across studies. Standard reverse inference, specific to a term, creates overly sparse regions, though with little overlap. Reverse inference with NeuroSynth also displays large overlap in mid-level regions. Finally, ontology-based decoding maps recover known functional areas in the visual and auditory cortices.
Fig 5.
Prediction scores for different methods.
Area under the ROC curve (1 is perfect prediction, while chance is at 0.5); a. score for each term; b. score relative to the per-term average for each decoding approach. As the terms in NeuroSynth do not fully overlap with the terms used in our database, not every term has a prediction score with NeuroSynth. The ontology-informed decoder is almost always able to assign the right cognitive concepts to an unknown task and clearly outperforms standard decoders: logistic regression and a naive Bayes classifier trained on our database. It also outperforms NeuroSynth decoding based on meta-analysis.
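The per-term AUC scoring underlying both panels can be sketched as below. The data are synthetic and the term names are placeholders; only the scoring logic (per-term AUC, then the deviation from the per-term mean shown in panel b) reflects the figure.

```python
# Hedged sketch: per-term ROC-AUC scoring as in Fig 5, on synthetic data.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
terms = ["face", "place", "auditory"]  # placeholder term names
y_true = rng.integers(0, 2, size=(100, len(terms)))           # true term presence
y_score = y_true + rng.normal(scale=0.8, size=y_true.shape)   # decoder scores

# Panel a: one AUC per term (chance = 0.5, perfect = 1.0).
per_term = {t: roc_auc_score(y_true[:, i], y_score[:, i])
            for i, t in enumerate(terms)}

# Panel b: score relative to the average across terms for this decoder.
mean_auc = np.mean(list(per_term.values()))
relative = {t: auc - mean_auc for t, auc in per_term.items()}
```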
Fig 6.
Functional atlases with decoding in an ontology.
Regions linked to the various cognitive terms by our mapping approach. They are displayed in five panels depending on their location in the brain: a. visual regions; b. auditory regions; c. motor regions; d. parietal regions; e. cerebellar regions.