
Fig 1.

Experimental design and sample trials of action execution and observation.

A) The actor performed four different actions in the scanner. The actor in the control experiment kept her eyes closed and could not see her own hand actions. B) Trial structure for the actor subjects. C) Trial structure for the observer subjects. Green outline shows the trial portions used for classification. The photographer identified themselves and the purpose of the photograph to the person shown, who agreed to have the photograph taken and potentially published.


Table 1.

MNI coordinates of clusters included in the overlap localizer ROI.

The individual-subject data from the action-execution and action-observation localizers were thresholded at T > 2, and voxels that were supra-threshold in both localizers were preserved. Labels are from the Harvard-Oxford cortical and subcortical structural atlases (FSL).
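The ROI construction described above (threshold each localizer map at T > 2, binarize, keep the voxels active in both) can be sketched as follows; the T-maps here are random stand-ins rather than real localizer data:

```python
import numpy as np

# Toy stand-ins for one subject's localizer T-maps (real maps are 3-D brain volumes).
rng = np.random.default_rng(0)
t_execution = rng.normal(size=(4, 4, 4))    # action-execution localizer T-map
t_observation = rng.normal(size=(4, 4, 4))  # action-observation localizer T-map

exec_mask = t_execution > 2      # threshold at T > 2 and binarize
obs_mask = t_observation > 2
overlap_roi = exec_mask & obs_mask  # keep voxels supra-threshold in BOTH localizers

print(overlap_roi.sum(), "overlap voxels")
```

In the actual analysis this overlap mask is computed per subject and the resulting clusters are labeled with the Harvard-Oxford atlases.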


Table 2.

MNI coordinates of clusters included in the meta-analytic ROI.

Labels are from the Harvard-Oxford cortical and subcortical structural atlases (FSL).


Fig 2.

Schematic description of data preprocessing and analysis for hyperclassification.

Bayesian canonical correlation analysis was applied to the preprocessed data to learn a mapping between the actor's and the observer's BOLD signals. The mapping was learned in a cross-validated fashion: a model was trained on four runs from the actor and the observer, and the observer's left-out run was used in the subsequent analysis, where the shared representation between actor and observer was mapped to the actor's functional space and used to test the classifier. Bayesian logistic regression served as the pattern classifier. In within-subject classification, training and testing were done on data from the same individual; in hyperclassification, training was done on the actor's data and testing on the corresponding observer's data.


Fig 3.

Brain regions showing activation during action execution and observation localizers.

For each subject, the statistical image was thresholded at T > 2 and binarized. Warm colors indicate the number of subjects showing activation at each voxel during action observation, and cold colors the number during action execution. Purple shows overlap between action execution and observation. Abbreviations: SPL – superior parietal lobule; SOP – superior occipital pole; SMG – supramarginal gyrus; LOC – lateral occipital cortex; SI – primary somatosensory cortex; SII – secondary somatosensory cortex.


Fig 4.

Action-specific brain activation.

Brain regions whose activity increased during specific actions (A: Power grip, B: Precision grip, C: Slap, D: Point) in the main experiment. Warm colors indicate activation for observer subjects, cold colors activation for actor subjects, and purple the overlap between the activation maps of actors and observers.


Fig 5.

Brain activation during closed-eyes and open-eyes experiments.

Brain regions showing significant activation during action execution and observation in the main experiment for the closed-eyes (top) and open-eyes (bottom) conditions, for the actor and all observers. Warm colors indicate activation for observer subjects; cold colors indicate activation for actor subjects. Purple indicates overlap between the activation maps of actors and observers.


Fig 6.

Within-subject classification accuracies.

Means and 95% confidence intervals for within-subject classification of seen actions in different regions of interest (ROIs). Dashed line indicates the chance level.


Fig 7.

Hyperclassification and between-subject classification accuracies.

Means and 95% confidence intervals for hyperclassification and between-subject classification accuracy of seen actions in different regions of interest (ROIs) in the closed-eyes (left) and open-eyes (right) conditions. Dashed line indicates the chance level.


Fig 8.

Confusion matrix for functionally realigned hyperclassification with meta-analytic ROI.

Numbers in the cells indicate the percentage of the classifier's guesses for each action category. Black frames denote action categories that were classified with above-chance accuracy.
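A row-normalized (percentage) confusion matrix like the one shown in Fig 8 can be computed as below; the true and predicted labels here are made up for illustration, with the four rows standing in for the four action categories:

```python
import numpy as np
from sklearn.metrics import confusion_matrix

# Hypothetical classifier output: 5 test trials per action category (0-3).
true_idx = np.repeat([0, 1, 2, 3], 5)
pred_idx = true_idx.copy()
pred_idx[[1, 7, 13]] = [2, 0, 3]   # inject a few misclassifications

cm = confusion_matrix(true_idx, pred_idx)
cm_pct = 100 * cm / cm.sum(axis=1, keepdims=True)  # each row sums to 100%
print(cm_pct.round(1))
```

Row i, column j then gives the percentage of trials from true category i that the classifier labeled as category j; the diagonal holds the per-category accuracies.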


Fig 9.

Increases in intersubject correlation and classification accuracy after functional realignment.

A) Cortical regions showing a significant increase in intersubject correlation (ISC) between actors and observers after functional realignment. Colorbar denotes the difference in ISC, expressed as a T-statistic. Green outline indicates the regions included in the meta-analytic ROI. B) Cortical regions showing a significant increase in searchlight kNN classification accuracy after functional realignment, thresholded at an accuracy increase of more than 5 percentage points. Colorbar denotes the difference in kNN accuracy. Green outline as in A).
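Voxelwise ISC, as referenced in panel A, is the Pearson correlation between the actor's and the observer's BOLD time series at each voxel. A minimal sketch on synthetic data (the real analysis computes ISC before and after functional realignment and tests the difference statistically):

```python
import numpy as np

# Synthetic time series: 200 time points x 10 voxels, with a shared signal
# component plus independent noise for actor and observer.
rng = np.random.default_rng(2)
shared = rng.normal(size=(200, 10))
actor = shared + 0.5 * rng.normal(size=(200, 10))
observer = shared + 0.5 * rng.normal(size=(200, 10))

def isc(a, b):
    """Per-voxel Pearson correlation between two (time x voxel) arrays."""
    a = (a - a.mean(axis=0)) / a.std(axis=0)
    b = (b - b.mean(axis=0)) / b.std(axis=0)
    return (a * b).mean(axis=0)

r = isc(actor, observer)
print(r.shape)   # one correlation value per voxel
```

Mapping these per-voxel r values (or the realigned-minus-original difference) back onto the cortical surface yields the kind of image shown in panel A.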
