
Fig 1.

Sloppiness vs. identifiability.

Although sloppiness and parameter identifiability are closely related, they are two distinct concepts. Sloppiness refers to an approximately uniform spacing of FIM eigenvalues spread over many orders of magnitude. In the most common case (first column), this means that many eigenvalues will be small and will correspond to unidentifiable parameter combinations. However, it is possible (in principle) for all the eigenvalues to be large (second column), so that sloppy models can be identifiable (as in references [13, 14]). It is also possible for model parameters to be unidentifiable and not sloppy (third column) or identifiable and not sloppy (fourth column). We here take λ ∼ 1 as the cutoff between identifiable and unidentifiable, motivated by arguments in Fig 2.
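A minimal numerical sketch of this eigenvalue structure (the model, parameter values, and finite-difference step are illustrative assumptions, not taken from the article): a sum of two nearly degenerate exponential decays already produces FIM eigenvalues separated by orders of magnitude, with the small eigenvalue corresponding to the poorly constrained difference of decay rates.

```python
import numpy as np

# Hypothetical sloppy model: y(t, theta) = exp(-theta[0]*t) + exp(-theta[1]*t),
# a classic toy example whose FIM eigenvalues spread over many decades.
def model(theta, t):
    return np.exp(-theta[0] * t) + np.exp(-theta[1] * t)

def fim(theta, t, eps=1e-6):
    """Fisher Information Matrix J^T J from a central-difference Jacobian
    (unit data variance assumed)."""
    J = np.empty((len(t), len(theta)))
    for i in range(len(theta)):
        dp = np.zeros(len(theta))
        dp[i] = eps
        J[:, i] = (model(theta + dp, t) - model(theta - dp, t)) / (2 * eps)
    return J.T @ J

t = np.linspace(0, 5, 50)
theta = np.array([1.0, 1.2])            # nearly degenerate decay rates
lam = np.linalg.eigvalsh(fim(theta, t)) # ascending eigenvalues
print(lam)                              # eigenvalues separated by orders of magnitude
```

The sloppy direction here is the difference of the two rates: the two time-series derivatives are nearly parallel, so one eigenvalue of J^T J is orders of magnitude smaller than the other.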

Fig 2.

Model manifold widths define relevant and irrelevant parameters.

(Left) The set of all possible model outputs defines a manifold of predictions. The true model ideally corresponds to a point near the manifold (red dot). For typical sloppy models, the manifold is bounded by a hierarchy of widths that are approximately given by the square roots of the FIM eigenvalues (when parameterized in natural units). Widths of the model manifold are measured in units of the standard deviation of the data, so that widths much less than one are practically indistinguishable from noise. Widths larger than one, on the other hand, are distinguishable from noise and must be tuned to reproduce the observations. This suggests describing parameter combinations corresponding to large eigenvalues and large widths as relevant or important for the model. In contrast, parameter combinations corresponding to small eigenvalues and widths are irrelevant or unimportant. We describe widths comparable to the experimental noise as marginal.
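The width-based classification described above can be sketched as follows; the numerical cutoffs defining the "marginal" band are illustrative assumptions, not values from the article.

```python
import numpy as np

# Sketch: classify FIM eigendirections by manifold width, taking the width
# along each direction to be sqrt(eigenvalue) in units of the data standard
# deviation sigma (the approximation described in the caption above).
# The marginal_band thresholds are arbitrary illustrative choices.
def classify_directions(eigenvalues, sigma=1.0, marginal_band=(0.3, 3.0)):
    labels = []
    for lam in eigenvalues:
        width = np.sqrt(lam) / sigma      # width in units of the noise
        if width < marginal_band[0]:
            labels.append("irrelevant")   # indistinguishable from noise
        elif width > marginal_band[1]:
            labels.append("relevant")     # must be tuned to fit the data
        else:
            labels.append("marginal")     # comparable to the noise
    return labels

print(classify_directions([1e-6, 1.0, 1e4]))
# widths 1e-3, 1, 100 -> ['irrelevant', 'marginal', 'relevant']
```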

Fig 3.

Experimental design in a sloppy system.

Sloppy models are characterized by an exponential distribution of FIM eigenvalues (left). Black lines are FIM eigenvalues for the model in question. Red lines represent additional eigenvalues that would be introduced by using a more realistic model. Optimal experimental design selects experiments so as to shift all the black eigenvalues above some desired threshold (dashed line). Under these new experimental conditions, the red eigenvalues could (1) remain irrelevant, (2) become relevant, or (3) become marginally relevant.
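One way to see how new experiments can shift eigenvalues upward: for independent measurements the FIM is additive across experiments, so a well-chosen experiment adds information along a previously sloppy direction. The 2×2 matrices below are toy assumptions for illustration, not taken from the article.

```python
import numpy as np

# Toy illustration: the FIM of independent experiments adds.
F_old = np.array([[1.0, 0.999],
                  [0.999, 1.0]])           # nearly singular: one tiny eigenvalue
F_new_experiment = np.array([[1.0, -1.0],
                             [-1.0, 1.0]]) # probes the sloppy (difference) direction
F_total = F_old + F_new_experiment

print(np.linalg.eigvalsh(F_old))    # smallest eigenvalue ~1e-3 (sloppy)
print(np.linalg.eigvalsh(F_total))  # both eigenvalues now order one
```

Whether the eigenvalues contributed by a more realistic model (the red lines) cross the threshold under the new design depends on how much the added experiments probe those extra directions, which is the distinction between cases (1)-(3) in the figure.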

Fig 4.

FIM for the four EGFR models.

Both the approximate Michaelis-Menten kinetics and the mechanistic mass-action kinetics are unidentifiable when fit to the data in reference [7]. Although the optimal experiments in reference [13] lead to an identifiable (but still sloppy) model for the approximate Michaelis-Menten kinetics, the mechanistic mass-action kinetics remain unidentifiable. Furthermore, the FIM of the mass-action model suggests that a minimal model should include at least 60 parameters to explain the expanded observations, i.e., the manifold has approximately 60 widths larger than the experimental noise. The approximate Michaelis-Menten kinetics do not contain all of the relevant physics. The red dashed line corresponds to a relative standard error of 1/e in the inferred parameters.

Fig 5.

Fit of approximate Michaelis-Menten kinetics to mechanistic mass-action data.

Because the approximate model does not contain all of the relevant mechanisms for the expanded observations, it cannot give a reasonable fit (i.e., within the expected variance of the experimental noise) to all of the experiments simultaneously. We see here that several time series are fit quite badly, which could guide a modeler in identifying the missing relevant mechanisms.

Fig 6.

An example of a sloppy system.

Observations of an EGFR signaling network can be explained by a model that is identifiable and not sloppy. The 18-parameter model has FIM eigenvalues that span fewer than 4 orders of magnitude and are all larger than one. By including additional mechanisms in the model (more parameters), the models become increasingly sloppy and less identifiable. The FIM eigenvalues ultimately span more than 16 orders of magnitude, leading to the large parameter uncertainties reported in reference [7].

Fig 7.

Quantifying model error.

As in Fig 2, the model of interest forms a statistical manifold in data space, represented by the black dashed line. Another more realistic model also forms a statistical manifold of higher dimension (red surface). Experimental observations (blue dot) are generated by adding Gaussian noise of size σ to a “true” model (red dot). The least squares estimate is the point on the approximate model (black dot) nearest to the experimental observations. However, the distance from the best fit to the observed data has contributions from both the experimental noise and the model error.
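A toy numerical illustration of this decomposition (the line and quadratic models, noise level, and seed are assumptions for illustration, not from the article): fitting an approximate model to data generated from a richer "true" model leaves a least-squares residual that mixes experimental noise with model error, so it systematically exceeds the pure-noise expectation.

```python
import numpy as np

# Fit an approximate model (a line) to data from a "truer" model
# (a quadratic) plus Gaussian noise of size sigma.
rng = np.random.default_rng(0)
t = np.linspace(-1, 1, 200)
sigma = 0.05
y_true = 1.0 + 0.5 * t + 2.0 * t**2          # "true" model (red dot)
y_obs = y_true + sigma * rng.normal(size=t.size)  # observations (blue dot)

A = np.vstack([np.ones_like(t), t]).T        # approximate model: y = a + b*t
coef, *_ = np.linalg.lstsq(A, y_obs, rcond=None)  # best fit (black dot)
residual = y_obs - A @ coef

noise_floor = sigma * np.sqrt(t.size)        # expected ||residual|| from noise alone
print(np.linalg.norm(residual), noise_floor) # model error dominates the residual
```

In this sketch the residual norm is driven almost entirely by the unmodeled quadratic term rather than by the noise, mirroring the caption's point that the distance from the best fit to the data contains both contributions.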

Fig 8.

Uncertainty ellipses and approximate models.

Parameters inside the ellipses are consistent with the data. Experimental design identifies complementary experiments to minimize the region of consistent parameters. If the approximate model does not include this region, the model will be non-predictive for the collection of experiments.
