
Conceived and designed the experiments: AS TRK JH. Performed the experiments: AS KS. Analyzed the data: AS KS FMA. Wrote the paper: AS TRK KS JH FMA.

The authors have declared that no competing interests exist.

Stimulation with rhythmic light flicker (photic driving) plays an important role in the diagnosis of schizophrenia, mood disorder, migraine, and epilepsy. In particular, the adjustment of spontaneous brain rhythms to the stimulus frequency (entrainment) is used to assess the functional flexibility of the brain. We aim to gain deeper understanding of the mechanisms underlying this technique and to predict the effects of stimulus frequency and intensity. For this purpose, a modified Jansen and Rit neural mass model (NMM) of a cortical circuit is used. This mean field model has been designed to strike a balance between mathematical simplicity and biological plausibility. We reproduced the entrainment phenomenon observed in EEG during a photic driving experiment. More generally, we demonstrate that such a single area model can already yield very complex dynamics, including chaos, for biologically plausible parameter ranges. We chart the entire parameter space by means of characteristic Lyapunov spectra and Kaplan-Yorke dimension as well as time series and power spectra. Rhythmic and chaotic brain states were found virtually next to each other, such that small parameter changes can give rise to switching from one to another. Strikingly, this characteristic pattern of unpredictability generated by the model was matched to the experimental data with reasonable accuracy. These findings confirm that the NMM is a useful model of brain dynamics during photic driving. In this context, it can be used to study the mechanisms of, for example, perception and epileptic seizure generation. In particular, it enabled us to make predictions regarding the stimulus amplitude in further experiments for improving the entrainment effect.

Neuroscience aims to understand the enormously complex function of the normal and diseased brain. This, in turn, is the key to explaining human behavior and to developing novel diagnostic and therapeutic procedures. We develop and use models of mean activity in a single brain area, which provide a balance between tractability and plausibility. We use such a model to explain the resonance phenomenon in a photic driving experiment, which is routinely applied in the diagnosis of various diseases including epilepsy, migraine, schizophrenia and depression. Based on the model, we make predictions on the outcome of similar resonance experiments with periodic stimulation of the patients or participants. Our results are important for researchers and clinicians analyzing brain or behavioral data following periodic input.

Electrophysiological measurements such as

Basic properties of the alpha rhythm during photic driving have been investigated by electroencephalographic methods

In order to gain further insight into mechanisms underlying such brain resonance effects and their relevance to brain function and pathology, as well as to make predictions concerning the stimulation parameters, generative models can be used. Such models are called biologically plausible if their state variables and parameters are biophysically meaningful. By fitting the model parameters to measurements, one can test hypotheses on the implementation of brain function. To ensure that this inversion is mathematically tractable and at the same time physically meaningful, the model must strike a balance between mathematical simplicity and biological realism. One class of models designed to meet these criteria is referred to as neural mass models (NMMs).

However, this type of modeling also involves a number of simplifications that may lead to limitations. First of all, it is based on a simplified notion of the function of a neuron, namely the firing rate model: the neuron convolves the rate of incoming spikes with an alpha-shaped function, thereby generating a change in membrane potential (the postsynaptic potential, PSP), and produces an output spike rate that is a nonlinear (e.g., sigmoid) function of the membrane potential. These are the most important aspects of neuronal function. However, in the brain, things are usually more complicated. For example, modeling is made difficult by the feedback influence of action potentials on the dendritic membrane potentials (back-propagation)

In this work, we use a particular local network of NMs first described by Jansen and Rit

To date, the dynamics of this system has been systematically investigated only under the assumption of constant extrinsic input levels, thereby allowing the system to settle in a stable state (e.g., fixed point or limit cycle)

In this paper, we use a continuous-time periodic function as model input approximating a periodic train of pulses. In this continuous function, each single pulse is similar (but not equal) to the single event used by Jansen and Rit for eliciting visual evoked potentials

To our knowledge, this is the first study to investigate a photic driving experiment using an NMM. We demonstrate that with this NMM, one can explain the complex effects observed in such an experiment. The results also indicate that a relatively simple model of a local neural circuit is capable of producing surprisingly complex and diverse phenomena, which are observable in brain data and relevant to the explanation of brain function.

In our previous work on the extended Jansen and Rit neural mass model (NMM) for a cortical area

Indeed, we observe frequency entrainment, that is, the cortical area responds at the stimulus frequency instead of the intrinsic frequency, thus forming a plateau in the frequency-detuning curves. These entrainment plateaus are interrupted by complex behavior such as chaos (see below). Outside the plateaus, the general trend of frequency detuning seems to be shifted towards the intrinsic frequency f_int by the response frequencies f_resp, which explains the experimental data.

A frequency-detuning curve refers to the ratio of stimulus to characteristic mean response frequency plotted against the ratio of stimulus to intrinsic frequency, for the normalized stimulus amplitude of 1.5. The entrainment ranges around the intrinsic frequency and its subharmonics are shown in red and in blue/green. Such an entrainment around the intrinsic frequency can be found in photic driving experiments (e.g.,

For the experiment, the mean over subjects is shown as white lines and the region between the 5% and 95% quantiles is covered by red areas. For the model (black dots), the amplitude configuration that best fits the largest Lyapunov exponents of the experiments is used (see Comparison section in

Outside the entrainment ranges, more complex dynamics are observed, including periodic, quasi-periodic and chaotic behavior. Periodic and quasi-periodic states are distinguished by their frequencies f_i: commensurable (i.e., ∑_i n_i f_i = 0 for some non-zero integers n_i) in the periodic state, and incommensurable (i.e., ∑_i n_i f_i ≠ 0 for any set of non-zero integers n_i) in the quasi-periodic case. Chaotic behavior is indicated by non-closed bounded trajectories in state space, broadband continuous spectra and positive Lyapunov exponents (see

Orbits, time series, and power spectra (columns) are shown for three configurations (rows) displaying (top-down) periodic (normalized input amplitude 3.6301; normalized input frequency 9.33·10^{−2}), quasi-periodic (1.5; 7.59·10^{−2}) and chaotic behavior (3.6301; 7.05·10^{−2}). The orbits are in the state space of the normalized postsynaptic potentials of pyramidal cells caused by both interneurons (x_30) as well as at both excitatory and inhibitory interneurons caused by pyramidal cells (x_31 and x_32). The red circle represents the stable limit cycle (i.e., harmonic oscillation) arising from the Andronov-Hopf bifurcations performed by the unperturbed system. The time series and the power spectra are shown for the normalized postsynaptic potentials of pyramidal cells (which are related to M/EEG). Periodic behavior is characterized by a closed orbit (limit cycle) and a discrete power spectrum with peaks at commensurable frequencies. Quasi-periodic behavior is characterized by trajectories forming an invariant torus.

The map shows the largest Lyapunov exponent λ_1 as a function of stimulus amplitude and frequency, indicating the sensitivity of the periodically forced neural mass model to initial conditions. Positive exponents (magenta to black) reflect diverging trajectories irrespective of how close they are, and thus chaos in the system. They also measure the entropy of an attracting set for all cases because the maximum number of positive Lyapunov exponents for any parameter configuration is one. Zero exponents (white) indicate neutral stability, and negative exponents (cyan to yellow) reflect frequency locking. Arnol'd tongue structures (i.e., resonance zones) are indicated by negative largest Lyapunov exponents due to the phase locking between system kinetics and the stimulus. The red arrow indicates the amplitude for which the experimental data fits best (see

By studying the Lyapunov spectra, configurations are discovered where the system has two zero Lyapunov exponents and evolves on a two-dimensional invariant torus, indicating quasi- and bi-periodicity (see

Two-dimensional tori are indicated by two zero Lyapunov exponents (shown in black dots) in the parameter space of stimulus amplitude and frequency. The red arrow indicates the amplitude for which the experimental data fits best (see

The Kaplan-Yorke dimension D_KY given by Equations (8) and (9) never exceeds 1.7; thus, hyperchaos does not exist in the model. The red arrow indicates the amplitude for which the experimental data fits best (see

Furthermore, we find that the model is indeed able to explain frequency entrainment that is observable during a photic driving experiment (see also _{1}

The LEDs were powered for half of each period. The rise and decay times of the LEDs were measured to be 100 µs.

The largest normalized Lyapunov exponents calculated from the model show very good agreement with those obtained from experimental time series. The largest Lyapunov exponents for the average over all subjects and for the nearest neighbors of the model with the stimulus amplitude

The vertical axis is the normalized postsynaptic potential x_32 on pyramidal cells caused by inhibitory interneurons, that is, the coordinate of the intersection points (black dots) of trajectories with the Poincaré hyperplane after neglecting initial transients. The horizontal axis is the normalized stimulus frequency. The regimes are color-coded and indicated by the horizontal line. Periodic regimes (red) exist, for instance, for frequencies ranging from 0 to 5.34·10^{−2}. In this range, the system appears to undergo a period-adding bifurcation cascade as the normalized stimulus frequency decreases. Chaotic (blue) and quasi-periodic regimes (green) occur, for example, for frequencies ranging from 5.3·10^{−2} to 6.23·10^{−2} (scattered dots) and between 17.21·10^{−2} and 18.72·10^{−2}. The classification can also be taken from

Participant | Normalized amplitude | Mean error | Significance

1 | 3.67 | 19 | yes |

2 | 3.67 | 22 | yes |

3 | 2.89 | 30 | no |

4 | 3.67 | 17 | yes |

5 | 3.67 | 16 | yes |

6 | 3.47 | 22 | no |

7 | 2.34 | 26 | no |

8 | 3.67 | 23 | yes |

9 | 3.67 | 20 | yes |

10 | 3.67 | 22 | yes |

Mean | 3.6301 | 13 | yes |

Median | 3.6301 | 13 | yes |

Orbit | Limit cycle | Two-torus | Strange attractor
Occurrence in frequency range (·10^{−2}) | >0 to 18.72 | 17.21 to 18.72 | 5.34 to 6.23; 7 to 7.18; 7.52 to 8.04; 8.92 to 9.29; 10.33 to 11; 11.96 to 17.21

In this study, we analyzed the behavior of the periodically forced extended Jansen and Rit neural mass model (NMM) as a function of amplitude and frequency of the stimulus within biologically plausible ranges. The system investigated exhibits interesting and complex dynamics, including chaos. As an important result, the model was able to account for the EEG dynamics of a photic driving experiment. Photic driving paradigms are of great importance in clinical practice

It should be pointed out that many aspects of our results are in close agreement with previous studies of other types of periodically driven oscillators (see, for example,

While reverse

We applied the concept of a periodically forced oscillator to model brain resonance effects. In the brain, such periodic input might stem from rhythmic stimulation of the brain, such as in the photic driving paradigm, or from the output of other oscillating brain areas. Such coupling between (oscillating) processes inside and outside the brain has been discussed as important for the processing of information (e.g.,

In our simulations, we found that the dynamics of the periodically forced extended Jansen and Rit NMM feature a rich mosaic of complex behavior, with the response frequencies f_resp lying above and below the diagonal for stimulus-to-intrinsic frequency ratios f/f_int<1 and f/f_int>1, respectively.

In regions of the parameter space without entrainment, complicated interaction between stimulus and intrinsic kinetics leads to periodic, quasi-periodic, and chaotic behavior, as indexed by the largest Lyapunov exponents and the Kaplan-Yorke dimension. Areas with different dynamic behavior form fractal structures in parameter space (

It should, however, be pointed out that this result has been obtained from a purely deterministic model without any added or modulating noise. If noise is added to the input, this would cause jitter in its amplitude and frequency, and thereby impose a blur on the pattern depicted in

Consequently, we predict that a decrease in stimulus intensity in photic driving experiments would shrink and an increase would broaden the ranges of frequency entrainment (i.e., the plateaus in the frequency-detuning curve). Our model also predicts that saturation effects become important starting with approximately 1.3 times the currently applied stimulus intensity and for intensities close to zero. A stimulus increase between 1 and 1.3 times the current intensity could lead to an improved entrainment effect (i.e., broadened range). Such broadening of the entrainment range is particularly important because in clinical practice, the individual alpha frequency is usually unknown. It is important to know how great an increase in the stimulus intensity still improves the entrainment effect and hence makes the photic driving more reliable.

Although the effect of photic driving has long been known, and standard examination in neurology includes intermittent photic stimulation in patients with suspected photosensitive epilepsy, the exact pathomechanism is not well understood. It is known that the

In short, we show that our model is capable of accounting for major aspects of the photic driving paradigm. This sets the scene for future work that will explore the predictions of the model in health and disease in more detail based on additional experimental data. Furthermore, a systematic exploration of the parameter space of the model with respect to brain resonance is needed. All this requires substantial efforts and is beyond the scope of the current proof-of-principle paper.

A principal limitation of our study is the modeling of the thalamus as an independent signal generator, neglecting the cortico-thalamic feedback loop. However, we tested a model of the thalamus according to Robinson et al.

Another issue which must be discussed is whether and to what extent our results support the idea of chaotic dynamics in the brain. The model investigated here describes complex, partially chaotic, dynamics at the mesoscopic spatial scale, which captures the mass action of ensembles of 10^{5} to 10^{9}, mainly cortical, neurons

Apart from brain rhythms in characteristic frequency bands (e.g., the alpha rhythm), complex behavior with noise-like characteristics causes the continuous spectral components in these data. This can be interpreted as filtered noise (e.g., stochastic sensory input) or described by nonlinear deterministic processes. We have shown that periodically driven NMMs may explain the continuous spectral components of M/EEG without having to consider noisy input processes. Other NMM studies often apply a stochastic input process with the effect that the spectra are more realistically widened around an intrinsic frequency of interest (e.g., alpha band) (e.g.,

Furthermore, the complex behavior of the NMM for certain parameter sets or ranges could be used to explain ordered sequences of dynamic regimes and multi-stability in M/EEG data by producing a temporal hierarchy

One way to achieve ordered sequences of dynamic regimes that are sufficiently robust against noise is to adequately change the state space through parameter changes that are slower than the state dynamics producing a temporal hierarchy

Note that the directions of parameter changes play an important role in parameter ranges of the system where a hysteresis occurs (see, for example, in

The present study is the first to find complex types of behavior like entrainment, chaos, and periodic and quasi-periodic motion in a periodically forced Jansen and Rit NMM for a single cortical area for biologically plausible parameter ranges without considering noise processes. Such dynamics are observable in brain data and relevant to the explanation of brain function. We demonstrate that with the NMM, one can explain brain resonance phenomena like frequency entrainment in a clinically relevant photic driving experiment. It should be pointed out that, at this stage, the aim of our model has not been to directly improve the diagnostics of mental illnesses, but rather to allow deeper understanding of the mechanisms underlying a diagnostic tool and thereby pave the way for future new treatments and diagnosis techniques. As a logical next step, the model should be applied to pathological cases in order to specify what disease-specific inferences can be made.

Like any model, ours involves a number of simplifications with respect to reality. The mean-field model studied embodies structural (e.g., local neural circuitry) as well as functional approximations (e.g., mean postsynaptic potential (PSP), mean firing rates and their conversions) of neural circuits to describe brain dynamics at the mesoscopic and the macroscopic levels, which are accessible, for instance, to LFP and M/EEG. Simplifications appear at all levels of modeling: the description of single cell behavior, the modeling of neural masses (NMs) based on a single cell description (i.e., the firing rate neuron), the description of the local neural circuitry and the description of networks of brain areas.

On the single cell level, we consider the firing rate instead of individual action potentials. Moreover, only two types of synaptic kinetics are modeled, which leads to two types of neurons that either excite or inhibit other neurons. In the brain, there is a great diversity of electrophysiological neuron types that differ in their specific input and output operations

On the population level, the distribution of states (i.e., PSPs and firing rates) is described by their means, while variances and higher-order statistics are left out of consideration.

The local neural circuitry of the cortex is characterized by a wealth of distinguishable populations and their interconnections (see, for example,

Finally, in this work, we deal with a cortical area mean-field model, without considering projections to the rest of the brain. In particular, the thalamus, which might play a role here, is modeled only in terms of its output.

Moreover, since the retina is fully illuminated by the flicker that drives much of the visual cortex (see

In this study, we show that a simple local cortical area model is already capable of producing functionally relevant complex dynamics, particularly in response to periodic inhibitory feed-forward stimulation. Given that such local cortical circuitry of neural populations is embedded in large-scale networks that can span the whole brain and also include subcortical structures such as the thalamus, the question arises to what extent network interactions might contribute to the complexity of brain signals such as M/EEG.

The present work might also contribute to the understanding of large-scale networks. In particular, the present results can be applied to inhibitory feed-forward interactions in networks between two local area models, where one model periodically performs spikes that drive the other model. The frequency entrainment or locking phenomena we found here can thus be interpreted as an effect of network interactions, which might have an impact on functional or effective brain connectivity measures such as the phase correlation (e.g.,

From the modeling perspective, one can obtain a dynamic regime of a local cortical area such as quasi-periodic behavior within a network by frequency locking through feed-forward inhibition from another local area by considering the following steps: (i) tuning the driving cortical area so that it performs (spiky) rhythms, (ii) selecting the dynamic regime depending on stimulus amplitude and frequency (see

The use of these approaches to control or set up a network depends on the complexity of the graph. For instance, several bidirectional connections or feedback loops usually make a setting more difficult. In such complex graphs, one can expect more complicated behavior than for a single local area model, such as hyperchaos or phase locking of several (chaotic) regimes. However, one might have to perform a separate analysis for the network.

A generative model for brain measurements such as M/EEG can be specified by two separate systems: the state system, describing the hidden neuronal dynamics, and the observer system, mapping the states onto the measurements.

The NMM of Jansen and Rit

Note that the feedback loops may also be modeled dynamically (see, for example,

With this NMM, the mean neuronal states can be described by a system of six nonlinearly coupled first-order ordinary differential equations:

pyramidal cells (3) to (excitatory and inhibitory) interneurons (1 and 2, combined to 0)

excitatory interneurons (1) to pyramidal cells (3)

inhibitory interneurons (2) to pyramidal cells (3)

where the state vector x = (x_03, x_31, x_32, ẋ_03, ẋ_31, ẋ_32)^T contains the normalized mean PSPs x_ba (the PSP at NM b caused by NM a) and their derivatives with respect to the normalized time; in Equations (3) to (5), the dot denotes this derivative. The dimensionless couplings ν_ba lump the average synaptic gains H_{e,i} and the average numbers of synaptic contacts established between the two NMs, the characteristic time constants enter through the kinetic ratio τ_e/τ_i, and the sigmoid converting the mean PSP of an NM into its mean firing rate is specified by its threshold and slope.
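Since the normalized, extended variant used here is only summarized in the text, the following sketch implements the classic (dimensional) Jansen and Rit equations instead, with the standard parameter values from the literature; the smooth periodic drive at the end is an illustrative stand-in for the pulse input and not the exact stimulus of this study.

```python
import numpy as np

# Classic (dimensional) Jansen-Rit parameters; the paper uses a
# normalized, extended variant, so these values are illustrative only.
A, B = 3.25, 22.0        # excitatory/inhibitory synaptic gains (mV)
a, b = 100.0, 50.0       # inverse synaptic time constants (1/s)
C = 135.0
C1, C2, C3, C4 = C, 0.8 * C, 0.25 * C, 0.25 * C
e0, v0, r = 2.5, 6.0, 0.56

def sigmoid(v):
    """Mean-potential-to-firing-rate conversion."""
    return 2.0 * e0 / (1.0 + np.exp(r * (v0 - v)))

def jansen_rit(y, p):
    """Right-hand side of the six coupled first-order ODEs.

    y[0:3]: PSP outputs of the pyramidal, excitatory and inhibitory
    populations; y[3:6]: their time derivatives.
    p: extrinsic input firing rate (pulse density).
    """
    y0, y1, y2, y3, y4, y5 = y
    return np.array([
        y3,
        y4,
        y5,
        A * a * sigmoid(y1 - y2) - 2 * a * y3 - a**2 * y0,
        A * a * (p + C2 * sigmoid(C1 * y0)) - 2 * a * y4 - a**2 * y1,
        B * b * C4 * sigmoid(C3 * y0) - 2 * b * y5 - b**2 * y2,
    ])

def rk4_step(y, p, dt):
    """One classical Runge-Kutta step (the paper uses an adaptive 4/5 scheme)."""
    k1 = jansen_rit(y, p)
    k2 = jansen_rit(y + 0.5 * dt * k1, p)
    k3 = jansen_rit(y + 0.5 * dt * k2, p)
    k4 = jansen_rit(y + dt * k3, p)
    return y + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0

dt, f_stim = 1e-4, 10.0   # 0.1 ms step, 10 Hz flicker (assumed values)
y = np.zeros(6)
trace = []
for i in range(int(2.0 / dt)):            # 2 s of simulated time
    t = i * dt
    p = 120.0 + 100.0 * np.sin(np.pi * f_stim * t) ** 2  # smooth periodic drive
    y = rk4_step(y, p, dt)
    trace.append(y[1] - y[2])             # EEG-like output: net PSP on pyramidal cells
trace = np.array(trace)
```

The quantity `y[1] - y[2]` is the net PSP on the pyramidal cells, which is the EEG-related observable in this model class.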

In this work, we explore the dynamics of the single-area model as a function of the amplitude and frequency of a periodic input. This input consists of brief pulses similar (but not equal) to the ones used by Jansen and Rit for eliciting visual evoked potentials

In the following, we specify the parameter space to be investigated. The system described by Equations (3) to (5) has nine parameters, including the couplings ν_ba, the kinetic ratio, and the extrinsic inputs p_bT.

The parameter values of Jansen and Rit lead to the following dimensionless parameters in our model: couplings ν_13 = 12.285, ν_23 = ν_13/4, ν_31 = 4ν_13/5 and ν_32 = −11ν_13/13; time-invariant inputs to the EINs, p_1T = 0, and to the PCs, p_3T = 3.36; and a time-variant input to the IINs in the form of periodic pulses p_2T(t), parameterized by its normalized amplitude and frequency.

In the absence of stimulation (i.e., p_2T = 0), the system intrinsically performs limit cycle oscillations arising from Andronov-Hopf bifurcations, appearing as harmonic oscillations with a normalized frequency of approximately f_intr = 0.108 (see the bifurcation diagram and phase portraits, Figure 2 and Figure 3, in our previous work). The stimulus frequency is therefore sampled on a grid around this intrinsic frequency, with a fixed sampling interval along the frequency axis.

In summary, for the analysis we consider a system of seven first-order ordinary differential equations (Equations (3) to (6)) describing the (neuronal) states x* = (x_03, x_31, x_32, ẋ_03, ẋ_31, ẋ_32, φ)^T, where φ is the phase of the periodic stimulus, specified by the two stimulus parameters, amplitude and frequency.

We study the differential equations (3) to (6) numerically using a fourth-fifth order Runge-Kutta method over 3·10^4 units of normalized time (which equals 5 minutes for τ_JR = 10 ms, according to Jansen and Rit), with an error tolerance of 10^{−11}; the solution is then linearly sampled at a fixed interval for further analysis. From the last 6·10^3 samples (the last minute if τ_JR = 10 ms), the histograms of each state are computed using the optimal number of bins, and the power spectra are estimated (from the same 6·10^3 samples) using the fast Fourier transform, especially for the time series of the PSPs of the PCs, which are reflected in M/EEG.
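The post-processing steps (transient removal, state histograms, FFT-based power spectrum) can be sketched as follows. The signal here is a synthetic stand-in for the simulated PSP of the pyramidal cells, and the bin rule is an assumption, since the exact "optimal number of bins" criterion is not specified in the text.

```python
import numpy as np

rng = np.random.default_rng(0)
dt = 1e-2                                   # sampling interval (normalized time)
t = np.arange(60_000) * dt
# surrogate "PSP" trace: intrinsic rhythm at f_intr = 0.108 plus weak noise
x = np.sin(2 * np.pi * 0.108 * t) + 0.1 * rng.standard_normal(t.size)

steady = x[-6_000:]                         # keep only the last stretch (transient removed)

# Histogram with an automatic bin count (Freedman-Diaconis rule via numpy;
# the paper's bin-selection rule is assumed, not stated).
counts, edges = np.histogram(steady, bins="fd")

# One-sided power spectrum of the stationary segment.
window = np.hanning(steady.size)
spectrum = np.abs(np.fft.rfft(steady * window)) ** 2
freqs = np.fft.rfftfreq(steady.size, d=dt)
peak = freqs[np.argmax(spectrum[1:]) + 1]   # dominant frequency, DC excluded
```

For the synthetic signal, the detected spectral peak lies at the grid frequency closest to the intrinsic frequency 0.108.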

We also compute the characteristic Lyapunov spectra, that is, all six Lyapunov exponents λ_1 > λ_2 > … > λ_6, directly from the differential equations (3) to (6), using the Fortran algorithm by Chen et al. with a step size of 10^{−3}; Chen et al. report an accuracy of the order of 10^{−6} for this method. The Lyapunov spectrum gives a quantitative measure of the sensitivity of the states of the system to the initial conditions or, more precisely, of the average rate of divergence or convergence of two neighboring trajectories in state space. Furthermore, the whole Lyapunov spectrum enables statements about hyperchaos.
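The Fortran code of Chen et al. is not reproduced here, but the underlying idea, propagating a full tangent basis along the trajectory and re-orthonormalizing it by QR decomposition (the standard Benettin method), can be sketched and checked against a linear system whose exponents are known exactly:

```python
import numpy as np

def lyapunov_spectrum(f, jac, x0, dt=1e-3, n_steps=5000):
    """Full Lyapunov spectrum via the standard QR (Benettin) method.

    The state is advanced with RK4; the tangent basis is propagated with
    the Jacobian frozen over each step (a truncated matrix exponential)
    and re-orthonormalized by QR, accumulating the log growth rates.
    """
    x = np.asarray(x0, dtype=float)
    n = x.size
    Q = np.eye(n)
    sums = np.zeros(n)
    for _ in range(n_steps):
        J = jac(x)
        # RK4 step for the state
        k1 = f(x); k2 = f(x + 0.5*dt*k1); k3 = f(x + 0.5*dt*k2); k4 = f(x + dt*k3)
        x = x + dt * (k1 + 2*k2 + 2*k3 + k4) / 6.0
        # 4th-order propagator for the tangent vectors: expm(dt*J) truncated
        dJ = dt * J
        P = np.eye(n) + dJ + dJ @ dJ / 2 + dJ @ dJ @ dJ / 6 + dJ @ dJ @ dJ @ dJ / 24
        Q, R = np.linalg.qr(P @ Q)
        sums += np.log(np.abs(np.diag(R)))
    return sums / (n_steps * dt)

# Sanity check on a linear system x' = A x, whose Lyapunov exponents are
# the real parts of the eigenvalues of A (here exactly -1 and -2).
A = np.array([[-1.0, 0.0], [0.0, -2.0]])
exps = lyapunov_spectrum(lambda x: A @ x, lambda x: A, x0=[1.0, 1.0])
```

Freezing the Jacobian over a step is adequate for small step sizes; production code (such as the algorithm cited above) integrates the variational equations alongside the flow.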

Hyperchaos is a higher form of chaos with at least two rather than one directions of hyperbolic instability on the attractor

Due to the computational effort required, only the largest exponent or a few of the largest ones are calculated in most of the existing literature. Here, we compute the whole characteristic Lyapunov spectra on a massively parallel cluster system of the advanced computing unit at the Computer Center, Ilmenau University of Technology. We also select several regions of the parameter space with scattered, presumably fractal, patterns of chaotic regimes (i.e., positive largest Lyapunov exponents) for recomputation at a finer stimulus amplitude and frequency resolution (see

We probe the stability of the characteristic Lyapunov spectra by adding a stochastic term to the stimulus p_2T. Although the Gaussian noise process that we used is not autocorrelated and could lead to errors due to the constant integration step size of the Adams-Bashforth method, the estimation of the characteristic Lyapunov spectra (for the stimulus amplitude that fits the experimental data best) is stable up to a signal-to-noise ratio (SNR) of 10 dB, especially for the 1∶1 entrainment region (i.e., stimulus frequencies near the intrinsic frequency f_int). However, the stochastic term does change the characteristic Lyapunov spectra in specific ranges, for instance, at stimulus frequencies below f_int. For this stimulus frequency range (0.5817 < f/f_int < 0.7632), the profile is qualitatively preserved for mild noise with an SNR of up to 17 dB. We determined the SNR as the ratio of the variances of the deterministic and stochastic portions of the stimulus; the variance of the deterministic term (i.e., the periodic pulses p_2T) can be given in closed form in terms of I_0, the modified Bessel function of the first kind.

The knowledge of the whole spectrum enables us to derive the Kaplan-Yorke dimension

The Kaplan-Yorke dimension measures the upper bound of the Hausdorff dimension and is similar to the information dimension (entropy) or correlation dimension of an attractor. The Hausdorff dimension quantifies the complexity of the geometry of the attractor. For example, the Hausdorff dimension of a point is zero, of a line is one, of a plane is two, but irregular sets, such as fractals or the attractors found in this work, can feature non-integer Hausdorff dimensions. We divide the state space by classifying the behavior of the system qualitatively. To that end, we specify a Poincaré map
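Equations (8) and (9) are not reproduced in this excerpt; assuming they are the standard Kaplan-Yorke formula, D_KY = k + (λ_1 + … + λ_k)/|λ_{k+1}|, the computation from a full spectrum can be sketched as:

```python
def kaplan_yorke_dimension(exponents):
    """Kaplan-Yorke (Lyapunov) dimension from a full Lyapunov spectrum.

    D_KY = k + (lam_1 + ... + lam_k) / |lam_{k+1}|, where k is the largest
    index for which the partial sum of the (descending) exponents is
    still non-negative.
    """
    lam = sorted(exponents, reverse=True)
    partial, k = 0.0, 0
    for i, l in enumerate(lam):
        if partial + l >= 0:
            partial += l
            k = i + 1
        else:
            break
    if k == 0:                 # stable fixed point: zero-dimensional attractor
        return 0.0
    if k == len(lam):          # no contracting direction left to divide by
        return float(len(lam))
    return k + partial / abs(lam[k])
```

For example, a chaotic flow with spectrum (0.5, 0, −1.0) gives D_KY = 2 + 0.5/1.0 = 2.5, a fractal ("strange") attractor between a surface and a volume, while a limit cycle (λ_1 = 0, rest negative) correctly yields dimension 1.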

Experimental data were obtained by performing a photic driving experiment that was adapted to the individual alpha frequency of the subjects. Data were previously published by Schwab et al.

One EEG channel located in the occipital region (O_{1}) was examined per participant. Data were filtered and down-sampled to 200 Hz. For each participant, periods of 62.5 s (
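The filtering and down-sampling step might look as follows; the raw sampling rate and the band edges are assumptions for illustration, since only the target rate of 200 Hz is stated above.

```python
import numpy as np
from scipy import signal

# Preprocessing sketch: band-limit and down-sample one occipital EEG
# channel to 200 Hz. The raw rate (1000 Hz) and the 1-45 Hz pass band
# are assumed values, not taken from the study.
fs_in, fs_out = 1000, 200
rng = np.random.default_rng(1)
raw = rng.standard_normal(10 * fs_in)     # 10 s of surrogate "EEG"

# Zero-phase band-pass to remove slow drifts and line noise.
sos = signal.butter(4, [1.0, 45.0], btype="bandpass", fs=fs_in, output="sos")
filtered = signal.sosfiltfilt(sos, raw)

# Down-sample by an integer factor; decimate applies its own
# anti-aliasing filter before taking every 5th sample.
eeg = signal.decimate(filtered, fs_in // fs_out)
```

Zero-phase filtering (`sosfiltfilt`) avoids phase distortion, which matters when phase relations between stimulus and response are subsequently analyzed.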

The estimation of the largest Lyapunov exponent was based on the approach of Wolf et al.

Our periodically driven deterministic model exhibits chaos or otherwise complex behavior in certain parameter ranges (see

The 15 largest Lyapunov exponents calculated from our experimental data are all positive due to the background noise. For our model, this is the case only if chaos arises (see above). To make model and data comparable, the model-based largest Lyapunov exponents (−9.6972·10^{−2} ≤ λ_1,Model ≤ −9.4435·10^{−5}) are therefore mapped onto the range of the experimental exponents by a shift-and-scale transformation of λ_1.

Since the ratio between the sampling rates on the frequency axis is 4.6 between the model (69 samples) and the experimental data (15 samples), we compare an experimental data point with the four nearest neighbors in the model. The comparison and detection of the model configuration that fits the experimental data best comprise seven steps: (i) select the four nearest neighbors along the stimulus frequency axis of an amplitude configuration of our model to a query point in the experiment (i.e., the response to an experimentally applied ratio of stimulus to intrinsic alpha frequency), (ii) calculate the Euclidean distances in the plane spanned by frequency and largest normalized Lyapunov exponent between each of the nearest neighbors and the experimental data point (this way, an agreement between model and data Lyapunov exponents is weighted according to the agreement between the frequencies they belong to), (iii) determine the maximum Euclidean distance between the nearest neighbors and the experimental data point, where the largest normalized Lyapunov exponent of a nearest neighbor is set to the lower (min(λ_1,Model) = −9.6972·10^{−2}) or to the upper bound (max(λ_1,Model) = −9.4435·10^{−5}) of the model-based largest Lyapunov exponent if the query point in the experiment is greater than −4.8533·10^{−2} (i.e., min(λ_1,Model)/2+max(λ_1,Model)/2) or not, respectively, (iv) calculate the relative error as the ratio of the distance of a nearest neighbor to its maximum distance, (v) detect the nearest neighbor with the minimum relative error for each experimental data point and average these errors over all 15 data points (for the different experimental frequencies), (vi) repeat steps (i) to (v) for each amplitude of the model stimulation, and (vii) find the model configuration with the stimulus amplitude that fits the data best by detecting the minimum of the averaged minimum relative errors (i.e., the mean error).
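A simplified sketch of this nearest-neighbor fit is given below. All arrays are synthetic stand-ins for the real data, and the worst-case distance bound of step (iii) is simplified relative to the description above.

```python
import numpy as np

rng = np.random.default_rng(2)

exp_freq = np.linspace(0.4, 1.6, 15)                  # 15 experimental frequency ratios
exp_lyap = np.sin(exp_freq * 3.0) * 0.05              # surrogate normalized exponents

model_freq = np.linspace(0.4, 1.6, 69)                # 69 model frequency samples
amplitudes = np.linspace(1.0, 4.0, 10)                # candidate stimulus amplitudes
# surrogate model exponents, one row per amplitude
model_lyap = np.array([
    np.sin(model_freq * 3.0) * 0.05
    + 0.01 * (a - 2.0) * rng.standard_normal(model_freq.size)
    for a in amplitudes
])

def mean_fit_error(m_freq, m_lyap, e_freq, e_lyap, k=4):
    """Mean over experimental points of the best relative error among
    the k nearest model neighbors on the frequency axis."""
    errors = []
    # simplified worst-case spread used to normalize distances
    span = np.ptp(np.concatenate([m_lyap, e_lyap])) or 1.0
    for f, lam in zip(e_freq, e_lyap):
        idx = np.argsort(np.abs(m_freq - f))[:k]      # step (i): k nearest neighbors
        d = np.hypot(m_freq[idx] - f, m_lyap[idx] - lam)          # step (ii)
        d_max = np.hypot(np.max(np.abs(m_freq[idx] - f)), span)   # step (iii), simplified
        errors.append(np.min(d / d_max))              # steps (iv)-(v)
    return float(np.mean(errors))

# steps (vi)-(vii): repeat per amplitude and pick the best fit
errs = [mean_fit_error(model_freq, row, exp_freq, exp_lyap) for row in model_lyap]
best_amplitude = float(amplitudes[int(np.argmin(errs))])
```

By construction, each relative error lies in [0, 1], so the mean errors of different amplitude configurations are directly comparable.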

In order to test the significance of the comparison results, we (i) compute Pearson's (linear) correlation coefficient for each stimulus amplitude

Moreover, in order to further substantiate our findings, we performed a bootstrap test. We randomized the sequence of experimental Lyapunov exponents along the frequency axis so that their distribution remained the same but any putative frequency-dependence was destroyed. Then we applied our fit method and recorded the fit error. We repeated this 5000 times and obtained an estimate of the error distribution. Counting the occurrences of errors that are below the one obtained with the true sequence of frequencies, we obtained an estimate of the probability that such an error could have been achieved by chance.
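The bootstrap test can be sketched as follows; `fit_error` is a placeholder for the actual fitting procedure, and the data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(3)

# surrogate model curve and "experimental" exponents over 15 frequencies
model_curve = np.sin(np.linspace(0.4, 1.6, 15) * 3.0)
true_data = model_curve + 0.1 * rng.standard_normal(15)

def fit_error(data):
    # placeholder cost: mean squared mismatch to the model curve
    return float(np.mean((data - model_curve) ** 2))

true_error = fit_error(true_data)
null_errors = []
for _ in range(5000):
    shuffled = rng.permutation(true_data)   # destroys frequency-dependence,
    null_errors.append(fit_error(shuffled)) # preserves the distribution
null_errors = np.array(null_errors)

# Probability that a random ordering fits at least as well as the data.
p_value = float(np.mean(null_errors <= true_error))
```

Shuffling along the frequency axis keeps the marginal distribution of exponents intact while breaking any frequency structure, so a small p-value indicates that the model captures the frequency-dependence itself, not merely the range of exponent values.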


We would like to thank Henning Schwanbeck (Computer Center, Ilmenau University of Technology), who graciously provided timely computational support, and Stefan J. Kiebel for very helpful comments on an earlier version of this manuscript.