Fig 1.

Two complementary perspectives on population activity.

(A) The multivariate activity data can be viewed as a set of activity profiles (columns) or as a set of activity patterns (rows). An activity profile is a vector of responses of a single channel across experimental conditions. An activity pattern is a vector of responses across all channels for a single condition. Activity data can be visualized by plotting activity profiles as points in a space defined by the experimental conditions (B,D), or by plotting the activity patterns as points in a space defined by the measurement channels (C,E). (B) If the activities are uncorrelated between conditions, then (C) the corresponding activity patterns of all three conditions are equidistant from each other, and can be equally well distinguished. (D) If the activities are positively correlated for two conditions that elicit similar regional-mean activation (conditions 2 and 3 here), then (E) the activity patterns for these conditions are closer to each other and can be less well distinguished.
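The two views sketched in panels B-E can be reproduced numerically. Everything below is an illustrative stand-in (the matrix `U`, the correlation strength, and the random seed are assumptions, not the paper's data):

```python
import numpy as np

# Toy data matrix U: rows = conditions (activity patterns),
# columns = measurement channels (activity profiles).
rng = np.random.default_rng(0)
n_conditions, n_channels = 3, 100
U = rng.standard_normal((n_conditions, n_channels))  # uncorrelated case (B, C)

# Positively correlated conditions 2 and 3 (as in D, E): make pattern 3
# mostly a copy of pattern 2 plus a little independent noise.
U_corr = U.copy()
U_corr[2] = 0.9 * U_corr[1] + 0.1 * rng.standard_normal(n_channels)

def dist(a, b):
    return np.linalg.norm(a - b)

# Correlation pulls the two patterns together, making them harder to distinguish.
print(dist(U[1], U[2]) > dist(U_corr[1], U_corr[2]))  # True
```

With independent standard-normal patterns the expected distance is about sqrt(2 * n_channels), whereas the correlated pair differs only by the small residual noise, mirroring the geometry of panels C and E.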


Table 1.

Comparison of encoding analysis with regularization, pattern component modelling (PCM), and representational similarity analysis (RSA).


Fig 2.

Three approaches to testing representational models.

(A) In encoding analysis, the distribution of activity profiles is described by the underlying features (red vectors). The direction of a feature vector determines the associated activity profile, and its length determines the strength with which the feature is encoded in the representation. (B) PCM models the distribution of the activity profiles as a multivariate Gaussian. This model is parametrized by the second moment of the activity profiles, which determines at what signal-to-noise ratio any feature is linearly decodable from the population. (C) RSA uses the representational distances (or, more generally, dissimilarities) between activity patterns as a summary statistic to describe decodability and hence the second moment of the underlying distribution.
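The link between the second moment (panel B) and the representational distances (panel C) can be shown in a short sketch. The matrix `U`, its dimensions, and the normalization by the number of channels are assumptions chosen for illustration:

```python
import numpy as np

# Toy activity patterns: conditions x channels.
rng = np.random.default_rng(1)
n_cond, n_chan = 4, 500
U = rng.standard_normal((n_cond, n_chan))

# Second moment of the activity profiles (panel B).
G = U @ U.T / n_chan

# Squared Euclidean distance between two conditions, computed two ways:
# directly from the patterns, and from the second moment alone (panel C).
i, j = 0, 1
d_direct = np.sum((U[i] - U[j]) ** 2) / n_chan
d_from_G = G[i, i] + G[j, j] - 2 * G[i, j]

print(np.isclose(d_direct, d_from_G))  # True
```

The identity d(i, j) = G_ii + G_jj - 2 G_ij holds exactly, which is why the distances used by RSA carry information about the same second moment that PCM models directly.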


Table 2.

Notation used.

For non-scalars, the second column indicates the vector/matrix size.


Fig 3.

Adjudicating between encoding models with and without regularization.

The axes of the three-dimensional space are formed by the response to three experimental conditions. The activity profile of each unit defines a point in this space. Models are defined by their features (blue arrows) and (when using regularization) a prior distribution of the weights for these features. The features and the prior, together, define a distribution of activity profiles (ellipsoids indicate iso-probability-density contours of the Gaussian distributions). To predict the activity profile of a single measurement channel, the model is fitted to the training data set (cross). Simple regression finds the shortest projection (black dot) onto the subspace defined by the features, whereas regression with regularization (red dot) biases the prediction towards the predicted distribution. Two models (A, B) with features that span different model subspaces are distinguishable using regression without regularization. (C) This model spans the same subspace as model A. Unregularized regression results in the same projection as for model A, whereas regression with regularization leads to a different projection. (D) A saturated model with as many features as conditions. Unregularized regression can perfectly fit any data point (cross and black dot coincide). With regularization, the prediction is biased towards the predicted distribution (iso-probability-density ellipsoid).
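The contrast between the unregularized projection (black dot) and the regularized prediction (red dot) can be illustrated with ridge-style regularization. The feature matrix `M`, the profile `y`, and the ridge constant `lam` are arbitrary stand-ins, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(2)
M = rng.standard_normal((3, 2))   # two features spanning a plane in condition space
y = rng.standard_normal(3)        # training activity profile of one channel (cross)

# Simple regression: orthogonal projection onto the model subspace (black dot).
w_ols = np.linalg.lstsq(M, y, rcond=None)[0]

# Ridge regression: a zero-mean Gaussian prior on the weights shrinks them,
# biasing the prediction towards the model's distribution (red dot).
lam = 1.0
w_ridge = np.linalg.solve(M.T @ M + lam * np.eye(2), M.T @ y)

y_ols, y_ridge = M @ w_ols, M @ w_ridge

# The regularized weights always have smaller norm than the unregularized ones.
print(np.linalg.norm(w_ridge) < np.linalg.norm(w_ols))  # True
```

Because the shrinkage depends on the prior and not just the spanned subspace, two models that span the same subspace (panels A and C) can make different regularized predictions even though their unregularized projections coincide.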


Fig 4.

Representational dissimilarity matrices (RDMs) for the models used in simulation.

Each entry of an RDM shows the dissimilarity between the patterns associated with two experimental conditions. RDMs are symmetric about a diagonal of zeros. Note that while zero is meaningfully defined (no difference between conditions), the scaling of the distances is arbitrary. For Experiment 1, the distances between the activity patterns for the five fingers are predicted from the structure of (A) muscle activity and (B) the natural statistics of movement. In Experiment 2 (C, D), the same models predict the representational dissimilarities between finger movements for 31 piano-like chords. For Experiment 3 (E, F), model predictions come from the activity of the seven layers of a deep convolutional neural network in response to 96 visual stimuli. The 1st convolutional layer and the 1st fully connected layer are shown as examples.
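An RDM with the properties described here (symmetric, zero diagonal, arbitrary overall scale) can be assembled from any set of activity patterns. The patterns below are random placeholders, not the muscle, movement-statistics, or network data:

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform

# Toy patterns: e.g. 5 finger conditions x 50 measurement channels.
rng = np.random.default_rng(3)
patterns = rng.standard_normal((5, 50))

# Pairwise distances between condition patterns, arranged as a square RDM.
rdm = squareform(pdist(patterns, metric='euclidean'))

# Symmetric about a diagonal of zeros, as in panels A-F.
print(np.allclose(rdm, rdm.T), np.allclose(np.diag(rdm), 0))
```

Because the scale of the distances is arbitrary, multiplying `rdm` by any positive constant leaves its interpretation as a model prediction unchanged.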


Fig 5.

Dependence of encoding model analysis on regularization and the number of included model features.

(A-C) Percent correct model selections using either predictive R² (solid line) or correlation (dashed line) for encoding models without a prior (blue lines) and with a prior (red lines). (D-F) Correlation between predicted and observed patterns. (G-I) Predictive R² for the encoding models with prior. All values for models without prior are negative, and therefore not visible.


Fig 6.

Sensitivity of the predictive R² (solid line) and correlation (dashed line) to the choice of the regularization coefficient.

Simulations come from Experiment 2 with a true signal strength of s = 0.2 and a noise variance of 1. For this combination, the optimal regularization coefficient is indicated by the dashed vertical line. The correlation criterion is generally robust against a non-optimal choice of regularization coefficient.
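A toy illustration of why the correlation criterion is robust where predictive accuracy is not: an overly large regularization coefficient shrinks the predictions, which leaves the Pearson correlation unchanged but lowers an R²-style criterion. All quantities below are illustrative assumptions, not the simulation from Experiment 2:

```python
import numpy as np

rng = np.random.default_rng(4)
y = rng.standard_normal(200)                 # observed (zero-mean) activity
y_hat = y + 0.5 * rng.standard_normal(200)   # a reasonable prediction

def r2(y, y_hat):
    """Simple R2-style criterion around zero (toy definition)."""
    return 1 - np.sum((y - y_hat) ** 2) / np.sum(y ** 2)

def corr(y, y_hat):
    return np.corrcoef(y, y_hat)[0, 1]

# Over-regularization roughly scales the prediction towards zero.
shrunk = 0.2 * y_hat

print(np.isclose(corr(y, y_hat), corr(y, shrunk)))  # True: scale-invariant
print(r2(y, shrunk) < r2(y, y_hat))                 # True: R2 degrades
```

Correlation is invariant to positive rescaling of the prediction, so it cannot penalize the shrinkage that a mis-set regularization coefficient induces, which is exactly the pattern of the dashed versus solid lines in this figure.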


Fig 7.

RSA model selection accuracies for different criteria of RDM fit.

Data sets for all three experiments were generated with varying signal strength (horizontal axis). The percentage of correct decisions using different criteria is shown. Models were selected based on the Spearman rank correlation (purple), Pearson correlation (green), fixed-intercept correlation (blue), or likelihood under the multinormal approximation (red). For comparison, the model selection accuracy for PCM is shown as a dotted line.
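The rank-based and linear correlation criteria can be contrasted on a toy pair of RDM vectors. The lower-triangular entries below are made up; a model RDM that is a monotone but nonlinear transform of the observed RDM gets a perfect Spearman score while its Pearson score stays below 1:

```python
import numpy as np
from scipy.stats import spearmanr, pearsonr

# Toy lower-triangular RDM entries (observed) and a model prediction that
# preserves the rank order but distorts the linear relationship.
observed = np.array([0.1, 0.4, 0.9, 0.3, 0.7, 0.5])
model = observed ** 2  # monotone, nonlinear transform

rho = spearmanr(observed, model)[0]  # rank correlation (purple criterion)
r = pearsonr(observed, model)[0]     # linear correlation (green criterion)

print(rho, r)  # rho = 1.0, r < 1.0
```

The Spearman criterion therefore only tests the predicted rank order of dissimilarities, whereas Pearson and fixed-intercept correlation additionally penalize deviations from a linear (or proportional) relationship.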


Fig 8.

Model selection accuracy and execution time for likelihood-based RSA, PCM, and encoding analysis with regularization.

(A-C) Model-selection accuracy was inferentially compared between the three techniques on the basis of N = 3,000 simulations, using a likelihood-ratio test of counts of correct model decisions [51]. The signal-strength parameter for the simulation was set to s = 0.3 for Exp. 1, s = 0.15 for Exp. 2, and s = 0.5 for Exp. 3. All resulting significant differences (two-tailed, p<0.01, uncorrected) are indicated by a horizontal line above the bars. (D-F) Execution times for the evaluation of a single data set under a single model. For encoding, the time is split into the time required to estimate regression coefficients (dark blue) and the time to determine the regularization constant (light blue).
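The count-based comparison in (A-C) can be sketched as a two-sample binomial likelihood-ratio test on counts of correct decisions. The function and the counts below are illustrative assumptions, not the paper's code or results:

```python
import numpy as np
from scipy.stats import chi2

def lr_test(k1, k2, n=3000):
    """Likelihood-ratio test that two methods have equal accuracy,
    given k1 and k2 correct decisions out of n simulations each."""
    def loglik(k, n, p):
        return k * np.log(p) + (n - k) * np.log(1 - p)

    p1, p2 = k1 / n, k2 / n          # unrestricted accuracy estimates
    p0 = (k1 + k2) / (2 * n)         # pooled estimate under the null
    lr = 2 * (loglik(k1, n, p1) + loglik(k2, n, p2)
              - loglik(k1, n, p0) - loglik(k2, n, p0))
    return chi2.sf(lr, df=1)         # p-value from the chi-squared reference

# Hypothetical counts: 90% vs. 85% correct over N = 3000 simulations each.
print(lr_test(2700, 2550) < 0.01)  # True: a clearly significant difference
```

With N = 3000 simulations per method, even accuracy differences of a few percentage points yield very small p-values, which is why many of the pairwise comparisons in (A-C) reach significance.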
