Simultaneously Uncovering the Patterns of Brain Regions Involved in Different Story Reading Subprocesses

Figure 2

Illustration of the model and the classification task.

a- (1) Diagram showing 7 of the 195 story features used to annotate a typical story passage. The size of each square indicates the magnitude of the feature. (2) Diagram of our generative model. The model assumes that the fMRI neural activity at each voxel at each point in time depends potentially on the values of every story feature for every word read during the preceding 8 s. Parameters learned during training determine which features actually influence which voxels' activity at which times. The response signatures shown here are hypothetical. A rectangle around 4 consecutive feature values indicates that these values correspond to one time point and that their magnitudes were summed. (3) Time course of fMRI volumes acquired from one subject while reading this specific story passage. Only 6 slices are shown per volume.

b- Classification task. We test the predictive model by its ability to determine which of two candidate story passages is being read, given a time series of real fMRI activity held out during training. The trained model first predicts the fMRI time series segment for each of the two candidate story passages. It then selects the candidate whose predicted time series is most similar (in Euclidean distance) to the held-out real fMRI time series. The model's accuracy on this binary task is 74%, significantly higher than chance performance (50%).

c- Diagram illustrating the approach used to discover what type of information is processed by different regions. We choose one feature set at a time to annotate the text, and we run the entire classification task using only a subset of voxels centered around one location. If classification accuracy is significantly higher than chance, we establish a relationship between that feature set and that voxel location. We repeat this for every feature set and every location, and use these relationships to build representation maps.
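The two-alternative decision rule in panel b can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function name, array shapes, and the toy data are hypothetical, and the time series are assumed to be NumPy arrays of shape (time points, voxels).

```python
import numpy as np

def classify_passage(real_ts, pred_ts_1, pred_ts_2):
    """Return 1 or 2: the candidate passage whose predicted fMRI
    time series is closest (in Euclidean distance) to the held-out
    real time series."""
    d1 = np.linalg.norm(real_ts - pred_ts_1)
    d2 = np.linalg.norm(real_ts - pred_ts_2)
    return 1 if d1 <= d2 else 2

# Toy check: candidate 1's prediction is a slightly noisy copy of the
# real signal, candidate 2's is unrelated noise, so 1 should be chosen.
rng = np.random.default_rng(0)
real = rng.standard_normal((20, 5))            # 20 time points x 5 voxels
pred_1 = real + 0.1 * rng.standard_normal((20, 5))
pred_2 = rng.standard_normal((20, 5))
choice = classify_passage(real, pred_1, pred_2)
print(choice)  # -> 1
```

Reported accuracy is then just the fraction of held-out segments on which this rule picks the passage that was actually read; chance is 50% because the choice is binary.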
