The Affective Ising Model: A computational account of human affect dynamics

The human affect system is responsible for producing the positive and negative feelings that color and guide our lives. At the same time, when disrupted, its workings lie at the basis of mood disorders. Understanding the functioning and dynamics of the affect system is therefore crucial for understanding the feelings that people experience on a daily basis, their dynamics across time, and how they can become dysregulated in mood disorder. In this paper, a nonlinear stochastic model for the dynamics of positive and negative affect, called the Affective Ising Model (AIM), is proposed. It incorporates principles of statistical mechanics, is inspired by neurophysiological and behavioral evidence about auto-excitation and mutual inhibition of the positive and negative affect dimensions, and is intended to better explain empirical phenomena such as skewness, multimodality, and non-linear relations of positive and negative affect. The AIM is applied to two large experience sampling studies on the occurrence of positive and negative affect in daily life in both normality and mood disorder. It is examined to what extent the model is able to reproduce the aforementioned non-Gaussian features observed in the data, using two slightly different continuous-time vector autoregressive (VAR) models as benchmarks. The predictive performance of the models is also compared by means of leave-one-out cross-validation. The results indicate that the AIM is better at reproducing non-Gaussian features, while the models perform comparably for strictly Gaussian features. The predictive performance of the AIM is also shown to be better for the majority of the affect time series. The potential and limitations of the AIM as a computational model approximating the workings of the human affect system are discussed.

We used a DE population size of NP = 50, a crossover probability CR = 0.7, and the binomial DE/rand/1 crossover strategy. The number of DE iterations typically lay around 1,000.
The DE algorithm is only a heuristic; there is no guarantee that the global minimum has been found. The optimization procedure was therefore repeated 50 times for each dataset, each time with a different initialization. Out of the 50 runs, the estimate with the smallest negative log-likelihood was retained.
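As an illustration, the multi-start procedure could be sketched as follows. This is a minimal sketch using SciPy's differential_evolution; neg_log_likelihood is a hypothetical stand-in for the simulation-based objective described in the paper, and note that SciPy's popsize is a multiplier of the number of parameters, whereas the NP = 50 above is an absolute population size.

```python
import numpy as np
from scipy.optimize import differential_evolution

def neg_log_likelihood(theta, data):
    """Hypothetical stand-in for the simulation-based negative
    log-likelihood of the AIM described in the text."""
    raise NotImplementedError

def fit_aim(data, bounds, n_restarts=50, seed=0):
    """Repeat DE with different initializations; keep the best run."""
    best = None
    for child in np.random.SeedSequence(seed).spawn(n_restarts):
        result = differential_evolution(
            neg_log_likelihood,
            bounds,                  # box constraints per parameter
            args=(data,),
            strategy='rand1bin',     # DE/rand/1 with binomial crossover
            recombination=0.7,       # crossover probability CR = 0.7
            popsize=50,              # NB: SciPy multiplies this by the number
                                     # of parameters; NP = 50 in the text is
                                     # an absolute population size
            maxiter=1000,            # iterations typically around 1,000
            seed=np.random.default_rng(child),
            polish=False,
        )
        if best is None or result.fun < best.fun:
            best = result            # retain the smallest objective value
    return best
```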
Testing the procedure on simulated data
A recovery study and a coverage study were conducted to test the fitting procedure. By means of the recovery study, it was examined whether the exact parameters with which the data had been simulated could be retrieved. This was done for various sample sizes.
The coverage study, on the other hand, was used to examine the confidence intervals around the parameters.
Both studies relied on data simulation. Therefore, the simulation method is explained first. Then, the actual studies and their results are discussed.

Simulating data
To simulate data, parameters are required. In order to cover a (more) relevant part of the parameter space, 300 datasets were randomly selected and the corresponding maximum-likelihood estimates were computed using the fitting procedure discussed above. Although the trustworthiness of the procedure had not yet been verified when obtaining these estimates, they still ensured a more realistic simulation study; at least more realistic than a random selection of parameters. These 300 parameter vectors were subsequently treated as true parameter vectors (denoted as Θ) and they were used to simulate data.
Because the data format is that of a time series, a second ingredient that is required to simulate data is a schedule of measurement moments (i.e., beeps). Again, to ensure relevant schedules, those of the actual datasets were used.
To simulate data, the observed data points in the dataset, except the last one, were used as starting points. Eq (3), in combination with the parameter vector Θ corresponding to the dataset, was used to simulate a data point at the time of the ensuing beep. An observation at the time of the very first beep was obtained using the equilibrium distribution Eq (1). To avoid burdensome computations, the first measurement of each day was also drawn from the equilibrium distribution instead of running a simulation from the last beep of the previous day.
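A minimal sketch of this scheme is given below. Here sample_equilibrium and simulate_transition are hypothetical helpers standing in for draws from the equilibrium distribution Eq (1) and the transition simulation of Eq (3), respectively; the beep times are assumed to be grouped per day, as in the real schedules.

```python
import numpy as np

def sample_equilibrium(theta, rng):
    """Hypothetical: draw one observation from the equilibrium
    distribution of Eq (1)."""
    raise NotImplementedError

def simulate_transition(x0, dt, theta, rng):
    """Hypothetical: simulate Eq (3) from state x0 over a time span dt."""
    raise NotImplementedError

def simulate_dataset(beep_times_per_day, observed_per_day, theta, rng):
    """Simulate one dataset on an observed beep schedule.

    beep_times_per_day : list of 1-D arrays of beep times, one per day
    observed_per_day   : list of arrays with the observed data points,
                         used as starting points for each transition
    """
    simulated = []
    for day_times, day_obs in zip(beep_times_per_day, observed_per_day):
        # first beep of the day: draw from the equilibrium distribution
        day_sim = [sample_equilibrium(theta, rng)]
        for i in range(1, len(day_times)):
            dt = day_times[i] - day_times[i - 1]
            # simulate forward from the *observed* previous point,
            # not from the previously simulated one
            day_sim.append(simulate_transition(day_obs[i - 1], dt, theta, rng))
        simulated.append(np.array(day_sim))
    return simulated
```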

Manipulating the sample size
For both the recovery study and the coverage study, a range of sample sizes was used.
The larger the number of data points, the easier it should become to recover the parameters with which the data were simulated; for an infinite amount of data, retrieval should be exact. Similarly, for the coverage study, not only should the confidence intervals around the parameters become smaller with an increasing number of data points, but any discrepancy between the actual and the nominal coverage probabilities should shrink as well.

Recovery
The results of the recovery study for the AIM are shown in Fig A. The recovered parameters are depicted as a function of the true parameters used to simulate the data, for N = 1 (lighter dots) and N = 100 (darker dots). If the recovery is exact, the dots lie on the main diagonal (red line). We can see that the recovery becomes exact as the sample size increases, except for the parameter D. This is because D becomes poorly identified once the process has fully relaxed to its equilibrium (i.e., almost no autocorrelation). When D becomes too large, the model will always relax towards its equilibrium distribution before the next beep is reached. When this happens, the actual value of D no longer matters; it only matters that it is sufficiently large.
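For reference, a recovery plot of this kind can be sketched as follows; matplotlib is assumed, and the inputs are the true values and the estimates obtained at the two sample sizes.

```python
import matplotlib.pyplot as plt

def recovery_plot(true_vals, est_n1, est_n100, param_name):
    """Scatter recovered estimates against true values, with the main
    diagonal (exact recovery) as a red reference line."""
    fig, ax = plt.subplots()
    ax.scatter(true_vals, est_n1, color='lightgray', label='N = 1')
    ax.scatter(true_vals, est_n100, color='black', label='N = 100')
    lims = [min(true_vals), max(true_vals)]
    ax.plot(lims, lims, color='red')   # exact recovery: main diagonal
    ax.set_xlabel(f'true {param_name}')
    ax.set_ylabel(f'recovered {param_name}')
    ax.legend()
    return fig
```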

Coverage
We used a coverage study to examine the frequency with which the true parameter values were contained in the estimated confidence intervals at a predetermined confidence level. If, for instance, the confidence level is set at .95, the confidence intervals should cover the true parameters in 95% of the cases. This frequency is referred to as the actual coverage probability. The actual coverage probabilities are set out against the nominal coverage probabilities, with lighter lines corresponding to N = 1 and darker lines to N = 10. All of these lines are expected to lie on the main diagonal (indicated in red). As the sample size increases, the lines shift towards the main diagonal. In general, though, the actual coverage probability is larger than the nominal coverage probability, meaning that the computed confidence intervals are somewhat too broad. However, this overestimation is small in the upper right corner, which matters most in practice. Hence, we can conclude that the coverage performance of the AIM is sufficient.
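To make the check concrete, the actual coverage probability can be computed as sketched below; ci_low and ci_high are hypothetical arrays holding the interval bounds obtained in each simulation replication.

```python
import numpy as np

def actual_coverage(true_value, ci_low, ci_high):
    """Fraction of replications whose interval contains the true value."""
    ci_low = np.asarray(ci_low)
    ci_high = np.asarray(ci_high)
    return np.mean((ci_low <= true_value) & (true_value <= ci_high))

# Usage: recompute the intervals at each nominal level (e.g., .80, .90, .95)
# and compare actual_coverage(...) against that level; points on the main
# diagonal indicate well-calibrated intervals.
```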

The bounded OU model
To compute the likelihoods of the bounded OU model, the same simulation-based technique was used as for the AIM. As a sanity check, we also conducted a recovery study for this model. The results turned out to be similar to those of the AIM.