Fig 1.
Uncertainty-relevance model of preference shift.
Before information about the ‘other’ is seen, beliefs about the reference distribution are uninformative, so the original beliefs about the self are proportional to the likelihood p(ds; ks). Once data do about the other are seen, the likelihood of ko combines with the conditional probabilities that ks and ko are jointly drawn from the reference distribution; this combination multiplies the beliefs about the self to yield the posterior (shifted) ks. This is a schematic representation of Eq 5 (see e.g. its penultimate line).
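The update sketched in this caption can be illustrated numerically. The following is a minimal sketch, not the paper's implementation: the choice-model likelihoods p(ds; ks) and p(do; ko) are stood in for by Gaussians on a grid of ln-k values, and the reference-distribution coupling between ks and ko is an assumed bivariate Gaussian (all means, dispersions, and variable names here are illustrative assumptions).

```python
import numpy as np

def gauss(x, mu, sd):
    return np.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))

k = np.linspace(-8, 2, 401)             # grid over ln-k values (assumed range)
lik_self  = gauss(k, -4.0, 0.8)         # stand-in for p(ds; ks)
lik_other = gauss(k, -2.0, 0.8)         # stand-in for p(do; ko)

# Coupling p(ks, ko): ks and ko drawn i.i.d. from a reference distribution
# whose mean is itself uncertain, giving a bivariate Gaussian whose
# off-diagonal covariance reflects that shared uncertainty (values assumed).
m0, sig_r, sig_mu = -3.0, 1.0, 2.0
v = sig_r**2 + sig_mu**2                # marginal variance of each draw
c = sig_mu**2                           # covariance between ks and ko
det = v * v - c * c
KS, KO = np.meshgrid(k, k, indexing="ij")
dS, dO = KS - m0, KO - m0
joint = np.exp(-0.5 * (v * dS**2 - 2 * c * dS * dO + v * dO**2) / det)

# Posterior over ks, in the spirit of the caption:
# p(ks | ds, do) ∝ p(ds; ks) * ∫ p(do; ko) p(ks, ko) dko
evidence_other = joint @ lik_other      # integrate over the ko grid
post = lik_self * evidence_other
post /= post.sum()

# The posterior mode shifts from the self likelihood toward the other's
shift = k[np.argmax(post)] - k[np.argmax(lik_self)]
print(f"posterior mode shifted by {shift:.2f} ln units toward the other")
```

With a broad reference distribution (large sig_r) the other-evidence term flattens and the shift shrinks, which is the qualitative behaviour examined in Fig 4.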
Fig 2.
a. Correlation between kb and Ts in the population. ln(Ts) is plotted against kb, as the latter is already in ln units and the two enter Eq 2 on the same footing. Pearson r = 0.55, p < 1e-70. b. A similar plot for the KU parameterisation; r = -0.03, p = 0.37. Note the two 'clumps' near ub ≈ 0 (i.e. ln(Ts) approximately -4 to -2), which appear separate from the main cloud of points.
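The reason for ln-transforming Ts before correlating can be illustrated with synthetic data (a sketch only; the generating relationship and all parameter values below are assumptions, not the study's data): when one variable is already in ln units, correlating against the raw, heavily skewed counterpart attenuates the linear association.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 1000

# Assumed linear relationship in ln units between kb and ln(Ts)
kb = rng.normal(-3.0, 1.0, n)
ln_Ts = 0.6 * kb + rng.normal(0, 0.9, n)
Ts = np.exp(ln_Ts)

r_ln  = np.corrcoef(kb, np.log(Ts))[0, 1]  # matched (ln) units
r_raw = np.corrcoef(kb, Ts)[0, 1]          # attenuated by the skew of raw Ts
print(f"r(kb, ln Ts) = {r_ln:.2f}  vs  r(kb, Ts) = {r_raw:.2f}")
```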
Fig 3.
The difference between m-for-self after learning and before learning as a function of partner’s preference.
This difference (ordinate) is plotted against the difference between m-for-other and m-for-self-before-learning. Two clusters form because we exposed participants to others whose modal preference was 2.3 ln units away (in either direction). Red is the identity line (fully adopting the other's preference); green is the linear regression line. It has a positive slope, as expected (p ≈ 0), but a negative intercept, denoting a slight overall bias towards shifting to more patient preferences.
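The comparison between the fitted line and the identity line can be sketched as follows (synthetic data only; the slope, intercept, and noise level are assumptions chosen to mimic the caption's description, not the study's estimates). A slope between 0 and 1 indicates partial adoption of the other's preference; a negative intercept indicates the overall patience bias.

```python
import numpy as np

rng = np.random.default_rng(0)

# Others were placed 2.3 ln units away in modal preference, either direction
delta_other = rng.choice([-2.3, 2.3], size=400)   # m-for-other minus m-for-self-before
true_slope, true_intercept = 0.4, -0.15           # assumed partial adoption + bias
shift = true_slope * delta_other + true_intercept + rng.normal(0, 0.5, 400)

# Least-squares fit; the identity line (slope 1, intercept 0) would mean
# fully adopting the other's preference.
slope, intercept = np.polyfit(delta_other, shift, 1)
print(f"slope = {slope:.2f} (identity would be 1), intercept = {intercept:.2f}")
```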
Fig 4.
The apparent discounting shift ma − mb, considered in the direction of the ‘other’, was regressed against σr and u in the whole sample (N = 738).
This shift is plotted against each variable after removing the variance predicted by the other. We focused on the inter-relationships between variables, and thus ignored y-intercept terms. a. Shift vs. reference dispersion σr: the larger the likely distance (σr), the smaller the shift. b. Shift vs. preference uncertainty u: the relationship is also in the direction predicted by Bayesian reasoning. In each case the population consists of a denser core of points plus penumbrae that slightly dilute the overall fits (coloured lines). Here we report the more conservative whole-sample regression; see S1 Text for post-hoc quality-controlled analyses.
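The "plotted against each variable after removing the variance predicted by the other" construction is an added-variable (partial regression) view, which can be sketched as below. The data-generating coefficients, distributions, and the helper `residualise` are all illustrative assumptions, not the study's pipeline; the point is only that residualising both axes on the other predictor recovers each partial slope.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 738  # sample size as in the caption

# Synthetic stand-ins for the two predictors and the shift (assumed scales)
sigma_r = rng.gamma(2.0, 1.0, n)      # reference dispersion
u       = rng.gamma(2.0, 0.5, n)      # preference uncertainty
shift   = -0.3 * sigma_r + 0.5 * u + rng.normal(0, 0.4, n)

def residualise(y, x):
    """Remove from y the variance linearly predicted by x (hypothetical helper)."""
    b, a = np.polyfit(x, y, 1)
    return y - (b * x + a)

# Added-variable view for sigma_r: partial u out of both axes, then fit
b_sigma, _ = np.polyfit(residualise(sigma_r, u), residualise(shift, u), 1)

# Added-variable view for u: partial sigma_r out of both axes, then fit
b_u, _ = np.polyfit(residualise(u, sigma_r), residualise(shift, sigma_r), 1)

print(f"partial slope for sigma_r = {b_sigma:.2f} (negative: bigger distance, smaller shift)")
print(f"partial slope for u = {b_u:.2f} (positive: more uncertainty, bigger shift)")
```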