Abstract
Neural network dynamics are governed by the interaction of spiking neurons. Stochastic aspects of single-neuron dynamics propagate up to the network level and shape the dynamical and informational properties of the population. Mean-field models of population activity disregard the finite-size stochastic fluctuations of network dynamics and thus offer a deterministic description of the system. Here, we derive a stochastic partial differential equation (SPDE) describing the temporal evolution of the finite-size refractory density, which represents the proportion of neurons in a given refractory state at any given time. The population activity—the density of active neurons per unit time—is easily extracted from this refractory density. The SPDE includes finite-size effects through a two-dimensional Gaussian white noise that acts both in time and along the refractory dimension. For an infinite number of neurons the standard mean-field theory is recovered. A discretization of the SPDE along its characteristic curves allows direct simulations of the activity of large but finite spiking networks; this constitutes the main advantage of our approach. Linearizing the SPDE with respect to the deterministic asynchronous state allows the theoretical investigation of finite-size activity fluctuations. In particular, analytical expressions for the power spectrum and autocorrelation of activity fluctuations are obtained. Moreover, our approach can be adapted to incorporate multiple interacting populations and quasi-renewal single-neuron dynamics.
Author summary
In the brain, information about stimuli is encoded in the timing of action potentials produced by neurons. An understanding of this neural code is facilitated by the use of a well-established method called mean-field theory. Over the last two decades or so, mean-field theory has brought important added value to the study of emergent properties of neural circuits. Nonetheless, the mean-field framework requires taking the thermodynamic limit, that is, postulating an infinite number of neurons. In doing so, small fluctuations are neglected, and the randomness that is so prominent at the cellular level disappears from the description of the circuit dynamics. The origin and functional implications of variability at the network scale are ongoing questions of interest in neuroscience. It is therefore crucial to go beyond the mean-field approach and to propose a description that fully captures the stochastic aspects of network dynamics. In this manuscript, we address this issue by showing that the dynamics of finite-size networks can be represented by stochastic partial differential equations.
Citation: Dumont G, Payeur A, Longtin A (2017) A stochastic-field description of finite-size spiking neural networks. PLoS Comput Biol 13(8): e1005691. https://doi.org/10.1371/journal.pcbi.1005691
Editor: Bard Ermentrout, University of Pittsburgh, UNITED STATES
Received: August 30, 2016; Accepted: July 20, 2017; Published: August 7, 2017
Copyright: © 2017 Dumont et al. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Data Availability: All relevant data are within the paper.
Funding: AL thanks the Natural Sciences and Engineering Research Council of Canada (http://www.nserc-crsng.gc.ca) for funding. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.
Competing interests: The authors have declared that no competing interests exist.
Introduction
Neurons communicate by sending and receiving pulses called spikes, which occur in a rather stochastic fashion. A stimulus is thus translated by neurons into spike trains with a certain randomness [1]. At the microscopic scale, this variability is mostly attributed to the probabilistic nature of the opening and closing of the ion channels underlying the emission of an action potential. At a mesoscopic scale, variability typically stems from the seemingly random barrage of synaptic inputs. This variability is fundamentally noise [2, 3]. Many papers have been devoted to establishing its origin [4, 5], and the mathematical formalization needed to suitably describe this effect has been an intense subject of research over the past decades. Nowadays, models are capable of reproducing both the statistics of the spiking activity and the subthreshold dynamics of different cell types.
In neuroscience, it is believed that information—about external stimuli, internal states, motor outputs, etc.—is encoded in the timing of spikes produced by populations of neurons [6]. An understanding of this high-dimensional response, from an information-theoretic or dynamical point of view, is facilitated by various dimensionality-reduction methods [7]. The simplest is to consider the population activity, i.e. the proportion of neurons firing in small time windows, which is assumed to be a meaningful coarse-grained dynamical variable [6, 8–10]. The importance of the population activity is manifest in its extensive use in macroscopic models of neural activity, and in the constant effort put forth to derive its dynamics from single-neuron and network properties.
Most attempts to produce analytically tractable population rate models have made use (directly or indirectly) of mean-field theory [11–14]. The population activity obtained by solving mean-field models is deterministic, since this theory neglects finite-size fluctuations. In the thermodynamic limit this is unproblematic. Real neural circuits, however, necessarily have a finite size. For a system made up of N independent units, the relative magnitude of fluctuations should scale as $1/\sqrt{N}$. The thermodynamic limit (N goes to infinity) neglects those small fluctuations, and the randomness that is so prominent at the cellular level disappears from the description of the circuit. These finite-size effects should nevertheless be taken into account because they can drastically affect both the synchronization [15, 16] and the stability [17, 18] of neural systems.
Various analytical methods from statistical physics have been used to describe such activity fluctuations. A first type of approach consists in adapting the Fokker-Planck formalism to incorporate the finite-size rate fluctuations as a source term [18, 19]. The spiking processes of the neurons are then assumed to be Poisson processes. One can also apply the so-called linear response theory (LRT) [20] and compute spectral quantities characterizing the neuronal spiking processes. This theory relies on the ansatz that the spiking response of a neuron embedded in a network can be written as the sum of the neuron’s unperturbed spiking activity—i.e., when the neuron is isolated from the fluctuations of the network activity—and its first-order response to the fluctuations of synaptic currents. Finite-size effects do not need special treatment in that theory, but it can only handle wide-sense stationary external inputs that keep the network a comfortable distance from bifurcation points. LRT has in fact been successfully used to study finite-size fluctuations in the context of the decorrelating effect of inhibitory feedback [21], the interplay between correlation structures and network topology [22], and the effect of correlations on coding [23].
A more systematic approach to studying finite-size effects is to construct a master equation describing the time evolution of the probability distribution of the states of the network. In that case the formalism borrows heavily from reaction kinetics in chemistry [24, 25]. The network navigates between states via activation and decay functions, which embody the stochastic transitions between quiescent and active spiking states. Using the master equation, one can then construct a system involving moments of the network’s activity. A troublesome feature of this approach is that lower-order moments typically depend on higher-order moments, thus constituting a hierarchy of moments that must be truncated in one way or another. The chosen truncation scheme depends on the assumptions about the underlying neural activity. One possibility is to assume that the network is in an asynchronous state—defined by a constant infinite-size mean field—so that spike-spike correlations are of order 1/N, and a direct expansion of the moments to that order becomes possible [26–28]. In the same spirit, one can also perform a system-size expansion—to order $1/\sqrt{N}$—of the master equation, and then construct the hierarchy of moments that can be subsequently truncated [29]. Another truncation based on normal-ordered cumulants assumes near-Poisson spiking [30].
The truncation of the hierarchy of moment equations breaks down near criticality, i.e., near a bifurcation. One way to circumvent this problem is to use path integral methods [31]—borrowed from statistical field theory—which themselves are amenable to the use of renormalization group methods that can extract critical exponents near the bifurcation. These field theoretic methods have also been applied to describe the statistics of deterministic phase neurons [32]. All the approaches discussed above assume a network that is either far below criticality or near criticality.
As we see from this overview, analytical treatments are emerging and hold the promise of understanding the nontrivial effects of variability on neural circuits. To move toward the formulation of models that keep track of the intrinsic randomness, our challenge is to correct the usual mean-field equations to account for the inescapable stochastic nature of spike initiation. The aim of the present paper is then to take up the problem of finite-size fluctuations and to show that one can formulate it in the framework of stochastic partial differential equations (SPDEs).
Contrary to other treatments of finite-population dynamics, our main objective is to describe the finite-size activity itself, and not its moments. To this end, we derive a SPDE (see Eq 7 below) that gives the activity of a finite population of spike-response model neurons with escape noise [33] for a fully connected inhibitory network. The equation describes the dynamics of the finite-size refractory density [34], i.e. the density of neurons in a given refractory state at a given time. The boundary condition (Eq 8) or the conservation law (Eq 10) for the refractory density is used to extract the activity. Finite-size fluctuations appear in the SPDE through a two-dimensional Gaussian white noise—in time and along the refractory dimension—whose prefactor vanishes in the infinite-size (thermodynamic) limit. Importantly, the Gaussian white noise acting in the network’s description naturally emerges from the intrinsic randomness of spike initiation present at the cellular level.
The SPDE can be solved numerically via a discretization along its characteristic curves (see Eq 55 in the Methods section), and thus provides a direct means to simulate finite-size networks, both below and above the bifurcation towards the oscillatory state. Importantly, the simulation time does not depend on the size of the population. Bifurcation analysis of the associated mean-field counterpart reveals how delayed inhibitory feedback permits the emergence of macroscopic rhythms. More insight into finite-size effects is obtained by applying a linear noise approximation, followed by a study of the spectral and autocorrelation properties of fluctuations in the asynchronous activity regime (see Eqs 18 and 20).
Perhaps the approaches closest in spirit to the one adopted in the following are those of Meyer and van Vreeswijk [27], and Deger, Schwalger and coworkers [35, 36]. Meyer and van Vreeswijk treated finite-size fluctuations in homogeneous, fully-connected networks of renewal spike-response neurons using a refractory density approach, as in the current paper. However, their main goal was to derive equations for the temporal cross correlations in the stable stationary state. Thus, even though their framework is akin to ours, the objectives differ: instead of focusing on the moments, here we analytically build a stochastic dynamical equation that captures the temporal evolution of the activity itself, including finite-size fluctuations.
The paper by Deger, Schwalger et al. [35] dealt with the finite-size activity of randomly connected spike-response model neurons with adaptation. With their approximations, they ended up with quasi-renewal neurons [37] connected in an effective all-to-all fashion. In that context, they established an integral equation for the activity, and obtained the temporal autocorrelation function of the activity. However, this formulation makes it impossible to actually simulate the network’s activity; this is because their correlation function includes the probability density that a spike in the past causes a spike in the future, which cannot be computed in general because the future activity—which is yet unknown—is required.
In a recent work, however, Schwalger et al. [36] solved that problem by proposing an error-minimizing method that permits rapid and accurate simulations of the firing statistics of the network, for a single population or multiple populations. They derived stochastic differential equations that only involve the finite-size activity; contrary to our approach, information about the refractory distribution is thus purposefully disregarded. A byproduct of this approximation is that the neuron number is no longer conserved, unlike in our approach. Moreover, using a Gaussian approximation they obtained a stochastic equation—Eq 41 in their appendix—that is similar in spirit to ours, although it differs somewhat in its details. Their stochastic equation also involves the temporal evolution of the refractory distribution, but it is not analyzed thoroughly there.
This paper is organized as follows. First, we present the network and neuron model that will be used throughout. Then, we obtain the SPDE and perform a linear noise approximation, which is then used to study both the stability of the deterministic (mean-field) part of the activity, and the statistical properties of the finite-size fluctuations. These include spectral and autocorrelation properties.
Results
The neural network
We consider a fully connected (all-to-all) homogeneous network of N neurons with inhibitory synapses. Neuronal dynamics are described using the spike-response model with escape noise [33]. For this model, neuronal spiking occurs according to an instantaneous firing intensity or hazard function that depends on the difference between the membrane potential and a fixed threshold. The membrane potential of neuron i at time t is given by

$$u_i(t) = (\eta_r * S_i)(t) + h_i(t), \qquad (1)$$

where $h_i(t) = (\kappa * I_{\mathrm{ext}})(t) - \frac{J_s}{N}\sum_{j=1}^{N}(\kappa * S_j)(t)$ is the change in potential caused by inputs (input potential), $\eta_r$ is a refractory kernel representing the reset of the membrane potential following a spike, and κ is a filter kernel encompassing synaptic and membrane filtering of synaptic and external inputs. The spike train of any neuron is given by $S_i(t) = \sum_f \delta(t - t_i^f)$, where the sum is over all of its spike times $t_i^f$. Hence, the membrane potential of neuron i at time t is given by the convolution of its own spike train with the refractory kernel [$(\eta_r * S_i)(t)$], to which are added a filtered external input [$(\kappa * I_{\mathrm{ext}})(t)$] and the total inhibitory synaptic input coming from all neurons within the network, with uniform synaptic weight $-J_s/N$. We shall restrict ourselves to non-adaptive single-neuron dynamics, meaning that only the most recent spike of a neuron affects its potential. Thus,

$$u_i(t) = \eta_r(r_i(t)) + h_i(t), \qquad (2)$$

where $r_i(t) = t - \hat{t}_i$ is the refractory state or age of neuron i, and $\hat{t}_i$ denotes its most recent spike.
We are interested in the dynamics and statistics of the population activity

$$A(t) = \frac{1}{N}\sum_{i=1}^{N} S_i(t). \qquad (3)$$

The homogeneity of the network implies that the index i can be dropped in Eq 1: all neurons with the same refractory state have the same subsequent dynamics. Using the definition of the population activity, Eq 1 becomes

$$u(t, r) = \eta_r(r) + h(t), \qquad h(t) = (\kappa * I_{\mathrm{ext}})(t) - J_s\,(\kappa * A)(t). \qquad (4)$$

The hazard function ρ is the probability per unit time of emitting a spike, conditional on the refractory state and, through the input potential h(t), on the past history of the activity and of the external signal. For concreteness we will use an exponential hazard function,

$$\rho(t, r) = \lambda_0 \exp\!\left(\frac{u(t, r)}{\delta u}\right), \qquad (5)$$

where λ0 and δu = 1 mV are constants. This choice has no impact on the theory presented herein, other than simplifying some computations. Also, the refractory kernel is taken to be $\eta_r(r) = \delta u\,\ln\!\left(1 - e^{-r/\tau}\right)$, where τ is the recovery time scale. The synaptic filter is

$$\kappa(t) = H(t - \Delta)\,\frac{1}{\tau_s}\,e^{-(t - \Delta)/\tau_s}, \qquad (6)$$

with H(t) the Heaviside step function, τs the synaptic decay and Δ the conduction delay. This specific expression for the escape rate is widely used because it combines relative mathematical simplicity with the capacity to realistically reproduce observed neuronal behaviors. Note that Js and Iext have units mV⋅ms and mV, respectively, because κ has units ms−1. The synaptic kernel is normalized so that the average input potential is independent of the time scale τs. The hazard function and the synaptic kernel are depicted in Fig 1. We also provide examples of network dynamics in Fig 2. The dynamics of every neuron follow the non-adaptive version of Eq 1 (i.e., with Eq 2), together with the escape-rate firing mechanism and the hazard function of Eq 5. As Js increases, the network develops an oscillatory instability (see below) and oscillations appear.
A) Hazard function ρ(r), Eq 5, when h(t) ≡ 5 and λ0 = 1 kHz. B) Synaptic filter κ(t) for different values of the synaptic decay τs, with Δ = 10 ms (cf. Eq 6). The integral of κ over time is normalized to 1.
The network contains N = 200 neurons. Each panel shows the spiking activity of every neuron in a raster plot (dots represent spikes). The solid red line represents the activity A(t) of the network, obtained by counting the total number of spikes in a time window Δt = 0.2 ms, and dividing by NΔt. For all panels, λ0 = 1 kHz, τ = τs = 10 ms, Δ = 3 ms and Iext = 7 mV. Panel A: Js = 3; panel B: Js = 4; panel C: Js = 5.
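For concreteness, the model just described is straightforward to simulate directly. The following is a minimal Python sketch of the full network dynamics (Eqs 1–6) with the parameters of Fig 2; the escape rule 1 − e^{−ρΔt} per time step and the Euler update of the synaptic filter are standard discretization choices rather than prescriptions of the text, and all variable names are ours.

```python
import numpy as np

# Minimal sketch: fully connected inhibitory network of spike-response
# neurons with escape noise (Eqs 1-6). With delta_u = 1 mV, the hazard
# rho = lam0 * exp(u/delta_u) with u = eta_r(r) + h(t) reduces to
# lam0 * exp(h) * (1 - exp(-r/tau)).
rng = np.random.default_rng(0)

N, dt, T = 200, 0.2, 500.0            # neurons, time step (ms), duration (ms)
lam0, delta_u = 1.0, 1.0              # base rate (kHz), noise scale (mV)
tau, tau_s, Delta = 10.0, 10.0, 3.0   # recovery, synaptic decay, delay (ms)
Js, I_ext = 3.0, 7.0                  # inhibitory coupling (mV*ms), drive (mV)

steps, delay = int(T / dt), int(Delta / dt)
age = rng.uniform(0.0, 50.0, size=N)  # refractory states r_i (ms)
syn = 0.0                             # filtered recurrent input (kappa * A)
A = np.zeros(steps)                   # population activity (kHz)

for k in range(steps):
    h = I_ext - Js * syn                                   # input potential (Eq 4)
    rho = lam0 * np.exp(h / delta_u) * (1.0 - np.exp(-age / tau))
    spikes = rng.random(N) < 1.0 - np.exp(-rho * dt)       # escape noise
    A[k] = spikes.sum() / (N * dt)
    age += dt
    age[spikes] = 0.0                                      # reset ages to zero
    A_delayed = A[k - delay] if k >= delay else 0.0
    syn += dt * (A_delayed - syn) / tau_s                  # synaptic filter (Eq 6)
```

Raising Js toward the values of Fig 2B and 2C should move the raster from asynchronous firing to collective oscillations, mirroring the instability analyzed below.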
The stochastic-field equation
The continuous deterministic mean-field approach to modeling neural networks fails to capture many important details. The missing detail manifests itself as small, unpredictable finite-size fluctuations present at the network level. Our main challenge is then to define an equation that fully captures the stochastic aspects of the network. To do so, we consider the finite-size refractory density q(t, r), such that Nq(t, r)dr gives the number of neurons with age in [r − dr, r) at time t. A time dt later, all neurons that did not fire will have aged by an amount dr = dt, whereas the number of neurons that did fire is given by a Poisson random variable with rate Nρ(t, r)q(t, r)dtdr [25, 35]. This idea encompasses the presence of fluctuations that are proportional to the mean number of firing events and therefore retains the full random character of the spiking probability. Using a Gaussian approximation of this Poisson distribution, i.e. making the approximation $\mathrm{Pois}(\mu) \sim \mu + \sqrt{\mu}\,\mathcal{N}(0, 1)$, where $\mathcal{N}(0, 1)$ denotes the standard normal distribution and ∼ means “is distributed like”, we show (see Methods) that q(t, r) obeys the following stochastic partial differential equation (SPDE):

$$\frac{\partial}{\partial t} q(t, r) + \frac{\partial}{\partial r} q(t, r) = -\rho(t, r)\,q(t, r) + \sqrt{\frac{\rho(t, r)\,q(t, r)}{N}}\;\eta(t, r), \qquad (7)$$

where η is a Gaussian (sheet) white noise with $\langle \eta(t, r)\rangle = 0$ and $\langle \eta(t, r)\,\eta(t', r')\rangle = \delta(t - t')\,\delta(r - r')$. The brackets denote an ensemble average over realizations of the stochastic process. The boundary condition is naturally given by the reset mechanism. Indeed, once a neuron triggers a spike, its age is reset to zero. Therefore, the boundary condition is

$$q(t, 0) = A(t), \qquad (8)$$

where A(t) is the finite-size activity of the network (see for instance Fig 2). This activity A(t) is also given by the total rate at which neurons of all ages escape their trajectories in the (t, r)-plane:

$$A(t) = \int_0^\infty \rho(t, r)\,q(t, r)\,dr + \int_0^\infty \sqrt{\frac{\rho(t, r)\,q(t, r)}{N}}\;\eta(t, r)\,dr. \qquad (9)$$

The integral above is to be understood in the Itô sense. Finally, the refractory density must obey a conservation law,

$$\int_0^\infty q(t, r)\,dr = 1, \qquad (10)$$

since each neuron possesses a given age at any given time. Note that both q(t, r) and A(t) are stochastic processes in this formulation.
From the derivation above we observe that for a network with N cells, the relative magnitude of fluctuations scales as $1/\sqrt{N}$. The stochastic parts in Eq 7 as well as in Eq 9 disappear in the thermodynamic limit, when the network is taken infinitely large. In this limit, the equation reduces to the classical refractory-density equation [33]. However, the thermodynamic limit does not allow for any characterization of fluctuations around the mean activity, because it is an entirely deterministic description, as illustrated in Figs 3 and 4.
A) Time evolution of the stimulus. B) Activity obtained from simulations of the full network. C) Activity obtained from simulations of the mean-field equation (Eq 13). D) Activity obtained from the SPDE (Eq 7). The same initial condition was used in all cases. Parameters are as in Fig 2C, except that N = 500 and Δt = 0.1 ms.
The same initial condition was used in all cases. Parameters are as in Fig 2A, except Δt = 0.1 ms and t = 1000 ms.
Fig 3A shows the time evolution of the external stimulus, whereas panel B gives the spiking activity obtained from a simulation of the full network, together with the corresponding population activity. The activity given by the standard mean field in the thermodynamic limit (N taken infinitely large, see Eq 13 below) is shown in Fig 3C. Although in this case the mean-field approximation captures the essential “shape” of the activity of the full network shown in panel B, it completely ignores the substantial finite-size fluctuations. Finally, Fig 3D shows the activity as generated via a simulation of the stochastic-field equation (Eq 7). The stochastic-field equation effectively captures both the shape and the variability of the full neural activity, which could be described as a noisy version of the mean-field activity. Similar observations can be made regarding the refractory density q(t, r), as can be seen in Fig 4. The numerical integration of Eq 7 is discussed in the Methods section.
Note that multiple populations can be modeled straightforwardly using the above formalism. To each population n would be assigned a refractory density qn(t, r) obeying the SPDE. The respective membrane potentials would be given by $u_n(t, r) = \eta_r(r) + h_n(t)$ with $h_n(t) = (\kappa * I_{\mathrm{ext},n})(t) - \sum_k J_{nk}\,(\kappa * A_k)(t)$, where Jnk is the total synaptic strength connecting population k to population n. Different hazard functions ρn can be chosen as well.
An analytical solution of the SPDE is exceedingly difficult, if not impossible, to obtain. However, information about the statistics of activity fluctuations can be extracted via a system-size expansion, as discussed in the next section.
Linear noise approximation
To investigate the effects of fluctuations for large but finite network size N, we perform a linear noise approximation (LNA) when Iext is constant. In our situation, the LNA states that the density function as well as the neural activity can be approximated by the sum of a deterministic and a stochastic process. The fluctuating part is scaled by a factor $1/\sqrt{N}$, which is justified by the van Kampen system-size expansion [24]. This system-size expansion is usually pursued only up to first order, hence the “linear” qualifier. We write

$$q(t, r) = q_0(t, r) + \frac{1}{\sqrt{N}}\,q_\xi(t, r), \qquad (11)$$

with q0(t, r) and qξ(t, r) the deterministic and stochastic parts, respectively. Similarly, the population activity reads

$$A(t) = A_0(t) + \frac{1}{\sqrt{N}}\,A_\xi(t), \qquad (12)$$

again with A0(t) the deterministic part and Aξ(t) the stochastic part. After algebraic manipulations—including a linearization that only keeps the first-order term in $1/\sqrt{N}$—we find that the deterministic part obeys the usual mean-field description

$$\frac{\partial}{\partial t} q_0(t, r) + \frac{\partial}{\partial r} q_0(t, r) = -\rho_0(t, r)\,q_0(t, r), \qquad (13)$$

with boundary condition $q_0(t, 0) = A_0(t)$, where A0(t) is the deterministic part of the activity. The hazard is $\rho_0(t, r) = \rho(u_0(t, r))$ with $u_0(t, r) = \eta_r(r) + h_0(t)$ and $h_0(t) = (\kappa * I_{\mathrm{ext}})(t) - J_s\,(\kappa * A_0)(t)$. Note that the mean-field equation is strikingly similar to the standard age-structured system known as the McKendrick–von Foerster model in mathematical biology [38, 39].
The stochastic component solves a SPDE similar to Eq 7, namely (see Methods)

$$\frac{\partial}{\partial t} q_\xi(t, r) + \frac{\partial}{\partial r} q_\xi(t, r) = -\rho_0(t, r)\,q_\xi(t, r) - \rho_\xi(t, r)\,q_0(t, r) + \sqrt{\rho_0(t, r)\,q_0(t, r)}\;\eta(t, r), \qquad (14)$$

where $\rho_\xi(t, r) = \frac{d\rho}{du}\,h_\xi(t)$ with $h_\xi(t) = -J_s\,(\kappa * A_\xi)(t)$, and the derivative is evaluated at u0. This equation is now linear in the function qξ—contrary to Eq 7—and therefore an analytical solution is possible (this is done in Methods in the context of a computation of the power spectrum of activity fluctuations).
When the deterministic component exhibits multiple stable states, the LNA fails to capture transitions between these states. While the LNA is inadequate for this type of situation, it is pertinent when the network fluctuates around a unique stable equilibrium, as illustrated in Fig 5. The stochastic activity is given to first order by Eq 12. After a short transient (∼10 ms), the deterministic part of the activity, A0(t), asymptotically reaches its steady state value A∞ (see black curve in Fig 5C and 5E). Since the neuron number is finite, fluctuations around the deterministic activity are observed, both in the transient and steady states (red curve Fig 5C and 5E). As illustrated in panels B and D of this figure, the activity of the relaxed network is asynchronous.
A) Steady state activity in the asynchronous regime as a function of the synaptic coupling Js. Since Js actually represents the strength of inhibition, A∞ decreases when Js increases. The black curve is obtained by computing A∞ from Eq 15. The small red points and the two large blue dots are steady-state activities computed from simulations of the stochastic neural network. B) and D) The spiking activity is depicted as raster plots. The two simulations correspond to the parameters given by the two large blue dots in panel A. C) and E) Comparison between the neural activity of the fully stochastic neural network (red curve) and the deterministic activity (constant black line) obtained by solving Eq 15. Parameters were N = 100, Iext = 2 mV, τ = 7 ms, τs = 5 ms, Δ = 3 ms and λ0 = 1 kHz. The discretization time step for computing the activity (red curves) was Δt = 0.2 ms.
The average activity in the asynchronous state, A∞, can be computed as the time-independent solution of the infinite-size refractory density equation, Eq 13. This means that the partial derivative with respect to time is zero. After algebraic manipulations, we find that A∞ is given by (see Methods)

$$A_\infty = \left[\int_0^\infty \exp\!\left(-\int_0^r \rho_\infty(s)\,ds\right) dr\right]^{-1}, \qquad (15)$$

where $\rho_\infty(r) = \lambda_0 \exp\!\left(\frac{h_\infty + \eta_r(r)}{\delta u}\right)$ and $h_\infty = I_{\mathrm{ext}} - J_s A_\infty$. The equation above must be solved self-consistently for A∞. From Fig 5A, the agreement between analytical and numerical results is excellent.
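As an illustration, Eq 15 can be solved numerically by damped fixed-point iteration on a discretized age axis; the sketch below does exactly that. The function name and the iteration scheme are ours, not part of the original analysis.

```python
import numpy as np

def steady_activity(Js, I_ext=2.0, lam0=1.0, tau=7.0, delta_u=1.0,
                    r_max=500.0, dr=0.01, iters=200):
    """Solve Eq 15 self-consistently: A_inf is the reciprocal of the
    integrated steady-state survivor function exp(-int_0^r rho_inf)."""
    r = np.arange(0.0, r_max, dr)
    recovery = 1.0 - np.exp(-r / tau)        # hazard recovery factor
    A = 0.1                                  # initial guess (kHz)
    for _ in range(iters):
        h_inf = I_ext - Js * A               # steady input potential (mV)
        rho = lam0 * np.exp(h_inf / delta_u) * recovery
        S = np.exp(-np.cumsum(rho) * dr)     # survivor function
        A = 0.5 * A + 0.5 / (S.sum() * dr)   # damped fixed-point step
    return A

print(steady_activity(Js=4.0))               # steady activity A_inf (kHz)
```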
Deterministic oscillatory instability
The asynchronous state can lose stability through an oscillatory instability. Finite-size fluctuations will potentially influence the presence of oscillations. To address this issue, we performed a linearization of the deterministic field equation (Eq 13) around the steady state (see Methods). This allowed us to write the characteristic equation, whose solutions give the eigenvalues λ of the deterministic system. The characteristic equation is denoted $\mathcal{C}(\lambda) = 0$ (Eq 16), where $\hat\kappa(\lambda)$ is the Laplace transform of κ and P∞(r) is the steady-state interspike interval probability density of the asynchronous infinite-size network, i.e. the probability density of obtaining an interval of length r (see Eq 46). Also, q∞(r) is the steady state of the infinite-size refractory density. The eigenvalues are thus given by the roots of $\mathcal{C}$. The time-independent solution A∞ will be stable if all eigenvalues have negative real parts. The bifurcation line—which separates the oscillatory regime from the asynchronous one—can be drawn numerically, as depicted in Fig 6A. The red line constitutes the boundary between the stability and instability regions for the chosen Iext. The shaded area defines the region in parameter space where self-sustained oscillations emerge. Well below the transition line, in the asynchronous regime, damped oscillations may occur; however, the network activity will eventually settle into finite-size fluctuations around a constant mean activity. Above the transition, network oscillations are clearly noticeable (Fig 6C).
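Numerically, the ingredients of Eq 16 are easy to evaluate on the line λ = iω: the Laplace transform of the synaptic filter of Eq 6 evaluates in closed form to $\hat\kappa(\lambda) = e^{-\lambda\Delta}/(1 + \lambda\tau_s)$, and the transform of P∞ (Eqs 46 and 47) can be computed by quadrature. A sketch, with parameter defaults taken from Fig 6 and with the assembly of these pieces into $\mathcal{C}(\lambda)$ left to Eq 16:

```python
import numpy as np

def characteristic_pieces(omega, A_inf, Js=4.0, I_ext=2.0, lam0=1.0,
                          tau=10.0, tau_s=10.0, Delta=3.0, delta_u=1.0,
                          r_max=500.0, dr=0.01):
    """Return kappa_hat(i*omega) and P_inf_hat(i*omega), the two
    transforms entering the characteristic equation (Eq 16)."""
    lam = 1j * omega
    kappa_hat = np.exp(-lam * Delta) / (1.0 + lam * tau_s)   # transform of Eq 6
    r = np.arange(0.0, r_max, dr)
    rho = lam0 * np.exp((I_ext - Js * A_inf) / delta_u) * (1 - np.exp(-r / tau))
    S = np.exp(-np.cumsum(rho) * dr)           # survivor function (Eq 44)
    P = rho * S                                # ISI density P_inf (Eq 46)
    P_hat = np.sum(P * np.exp(-lam * r)) * dr  # Laplace transform (Eq 47)
    return kappa_hat, P_hat
```

Scanning ω for the minima of |C(iω)| built from these pieces then locates the weakly damped modes that dominate the finite-size spectra discussed next.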
If the synaptic strength Js and the conduction delay Δ are large enough, then the system undergoes a Hopf bifurcation. A) Bifurcation line in parameter space (red curve). The grey shaded region corresponds to an oscillatory regime of the neural network. B) and C) Raster plots of the spiking activity for 50 cells. Panel B corresponds to the black asterisk lying in the asynchronous (white) region in panel A, whereas panel C depicts the activity in the oscillatory state. Parameters were Iext = 2 mV, λ0 = 1 kHz, τ = 10 ms and τs = 10 ms.
The existence of oscillations in fully connected, delayed inhibitory networks is well known [40], and underlies the ING (Interneuron Network Gamma) mechanism for generating gamma rhythms (see for instance [41]). The above analysis serves the purpose of delineating the oscillatory and asynchronous states in the phase diagram of Fig 6 with the help of the characteristic function. Both the phase diagram and the characteristic function will come in handy in the next section, where we study the fluctuations of the asynchronous state. Note, however, that we shall not study the exact nature of the bifurcation; for the network studied in [40], the Hopf bifurcation can be either subcritical or supercritical, depending on the inhibitory coupling strength.
Fluctuations in the asynchronous regime
Asynchronous firing has been repeatedly associated with the background activity in cortical networks [42]. Finite-size effects generate fluctuations whose basic statistics are closely related to the response of the associated infinite-size dynamics. To characterize the statistical content of fluctuations we compute the power spectrum of the activity in the asynchronous regime. The power spectrum describes how the variance of A(t) is distributed over the frequency domain and accordingly helps to identify the dominant frequencies, if any.
By definition, the power spectrum is given by $S(\omega) = \lim_{T\to\infty} \frac{1}{T}\left\langle |\tilde{A}_T(\omega)|^2\right\rangle$, where $\tilde{A}_T(\omega)$ is the Fourier transform of the neural activity restricted to a time interval $0 < t < T$ and the brackets 〈⋅〉 denote an average over the noisy realizations of the network dynamics. To derive an analytical expression for this quantity, we assume that the deterministic part of the activity has reached its equilibrium state (below the threshold of instability). This means that Eq 12 becomes $A(t) = A_\infty + \frac{1}{\sqrt{N}}\,A_\xi(t)$, neglecting terms of order 1/N and above. Hence the spectrum decomposes into a DC component at ω = 0 and the fluctuation spectrum scaled by 1/N (Eq 17), with $S_\xi(\omega)$ the power spectrum of the fluctuations. In Methods, $S_\xi(\omega)$ is shown to be given by Eq 18, where ℜ means to take the real part of the expression in curly brackets and P∞(r) is the interspike interval distribution in the asynchronous state. The stability of the deterministic system—embodied in its characteristic equation $\mathcal{C}(\lambda) = 0$—appears explicitly in that expression. Of course, in the asynchronous regime there are no purely imaginary solutions to this equation. However, the minima of $|\mathcal{C}(i\omega)|$ dictate the position of the dominant frequencies, as illustrated in Fig 7. This figure also compares the power spectrum obtained from Eq 18 to the power spectrum obtained from both an average over different realizations of the fully stochastic neural network and the SPDE; it shows excellent agreement. We carried out the comparison for more simulations than are shown here, and our results hold over a wide range of parameters.
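In practice we estimate this quantity exactly as in its definition: finite-time Fourier transforms of the activity, squared moduli averaged over realizations, divided by T. A minimal sketch (subtracting the time average removes the DC component of Eq 17; the function name is ours):

```python
import numpy as np

def averaged_spectrum(realizations, dt):
    """Power spectrum S(omega) = <|A_T(omega)|^2> / T from an ensemble of
    activity traces with shape (trials, time steps)."""
    n_trials, n_steps = realizations.shape
    T = n_steps * dt
    x = realizations - realizations.mean(axis=1, keepdims=True)  # drop DC
    X = np.fft.rfft(x, axis=1) * dt          # finite-time Fourier transforms
    freqs = np.fft.rfftfreq(n_steps, d=dt)   # in kHz if dt is in ms
    return freqs, (np.abs(X) ** 2).mean(axis=0) / T
```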
We display the theoretical PS (solid red curve), omitting the DC component at ω = 0 (cf. Eqs 17 and 18), as well as an average spectrum (black trace) over 50 realizations of the SPDE. The blue curve is the PS of the full spiking network activity. Parameters were Iext = 2 mV, λ0 = 1 kHz, τ = 10 ms and τs = 10 ms. A-B-C correspond to the three stars of Fig 6A.
We note that there is a discrepancy between theory and numerics mainly at lower frequencies as the bifurcation is approached from below (Fig 7B). The theoretical power spectrum of Eq 18 does not capture the emergence of harmonics (for instance, the small peak around f = 400 Hz in Fig 7B). The LNA is known to fail as soon as parameters are chosen to be close to a bifurcation point. The spectra computed from the SPDE agree very well with the spectra from the whole network for all the regimes shown. Thus the SPDE can be used to provide a good picture of a spiking network’s spectral properties without having to resort to the much longer full network simulations, at least when the network size is not too small.
Another important remark concerns the discrepancy between the SPDE and the full network for very small system sizes. To illustrate this effect, we show in Fig 8 comparisons between the power spectra of the full network and of the SPDE for different sizes. The simulations presented in Fig 8 correspond to parameters below the bifurcation line (top row), close to the bifurcation line (middle row), and above the bifurcation (bottom row). The chosen parameter values within these regimes correspond to the three stars of Fig 6A. Panels A, B and C are illustrations for different numbers of cells. We clearly see that the SPDE gives very good results for large N, for all regimes. However, discrepancies appear when the number of cells becomes too small (of order ∼10; panels C). This is no surprise since our theory is expected to fail for very small networks. Indeed, the Gaussian approximation to the Poisson variable (see Methods) that is used to arrive at the SPDE (Eq 7) implicitly assumes the number of cells to be large enough. Hence the observed discrepancy when the number of cells is too small.
We display the average spectrum over 50 realizations of the SPDE (black traces) and of the full spiking network (blue traces). Panels A, B and C correspond to simulations with network sizes N = 1000, 100 and 10, respectively. The subscripts 1 to 3 correspond to the three stars in Fig 6A: number 1 corresponds to the star in the non-oscillatory regime, 2 represents the red star and 3 corresponds to the rightmost star in the oscillatory regime. Other parameters as in Fig 6.
Another way to obtain the power spectrum would be to first compute the autocorrelation function of Aξ(t) and then use the Wiener–Khintchin theorem. However, solving Eq 14 provides a self-consistent expression (see Methods), namely

$$A_\xi(t) = \int_0^\infty Q(r)\,A_\xi(t - r)\,dr + \Sigma(t). \qquad (19)$$

Here, Q represents the response to finite-size perturbations of the activity in the past, whereas Σ gives the effect of the Gaussian noise η(t, r) on the stochastic part of the activity (see Eq 54 and following in Methods for expressions for Q and Σ). Eq 19 can be solved by taking the Fourier transform, thus forcing us to first compute the power spectrum to get the autocorrelation function. Interestingly, however, the autocorrelation function of Σ(t) can be obtained directly in terms of the interspike interval distribution in the asynchronous state, giving Eq 20 (see Methods), which corresponds to the result obtained in [35].
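The Wiener–Khintchin route is equally direct to implement numerically: the autocorrelation of an activity trace is the inverse transform of its periodogram. A minimal sketch (the zero-padding and the unbiased normalization are standard estimator choices, not part of the theory above):

```python
import numpy as np

def autocorrelation(A, dt):
    """Autocorrelation of activity fluctuations via the Wiener-Khintchin
    theorem: inverse FFT of the periodogram."""
    x = A - A.mean()
    n = len(x)
    X = np.fft.rfft(x, n=2 * n)              # zero-pad to avoid wrap-around
    c = np.fft.irfft(np.abs(X) ** 2)[:n]     # raw lagged sums
    c /= np.arange(n, 0, -1)                 # unbiased normalization
    return np.arange(n) * dt, c
```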
Discussion
The origin and functional implications of variability at the network scale are ongoing questions of interest in neuroscience. There have been a number of earlier efforts to go beyond the mean-field approximation to address these questions. In the past few years, the idea of studying spiking neural circuits within the framework of statistical physics to investigate finite-size effects has become a concrete research program (see e.g. [43]). In the present study, we have proposed an alternative method to adequately describe the observable noise at the network level.
We derived a stochastic partial differential equation (SPDE) describing the dynamics of the refractory density q(t, r) (Eq 7). This quantity gives the density of neurons in refractory state r at a given time t, the refractory state being the time elapsed since the last spiking event. The population activity A(t) can be extracted, for instance, from the boundary condition q(t, 0). In the limit where the neuron number N goes to infinity, the standard PDE for the infinite-size refractory density is recovered.
An important point about our derivation is that we did not assume that the single-neuron spiking processes were described by an (inhomogeneous) Poisson process. In particular, as per renewal theory, the firing intensity (or hazard function) depended on the last spike, contrary to Poisson processes. Poisson random variables appeared in the course of the derivation because the number of neurons firing in a small time interval Δt is a random variable following a binomial distribution, which becomes a Poisson distribution in the limit Δt → 0. From there, the only assumption needed to arrive at Eq 7 was to approximate this Poisson distribution by a Gaussian distribution—the Gaussian approximation. This is in contrast with the work of Brunel and Hakim [18, 44], which starts with deterministic single-neuron membrane equations, but approximates the spiking process of neurons embedded in the network by a Poisson process. Nonetheless, the two approaches lead to a similar description, where the number of spikes produced during a small amount of time can be approximated by a Gaussian random variable.
Our SPDE cannot be solved analytically since the stochastic term involves a square-root nonlinearity. Nonetheless, a discretization along the characteristic curves of the SPDE offers a numerical scheme that gives a very satisfying approximation to its behavior. The advantage over numerical simulations of the full network is that it overcomes some of the restrictions imposed by computation time in simulations of many neurons. Indeed, the computational cost of simulating the SPDE is independent of the number of neurons. The stochastic-field density approach therefore appears to be a viable method for simulating large neural networks while keeping track of their intrinsic variability.
On the other hand, a system-size expansion restricted to first order in $1/\sqrt{N}$—the linear noise approximation—gives rise to two coupled linear SPDEs which are amenable to analytical investigation. In particular, we studied finite-size fluctuations in the asynchronous regime, for which the average network activity is constant and neurons fire asynchronously. This enabled an analytical expression for the power spectrum of these fluctuations to be obtained (Eq 18). Its structure is determined both by the characteristic function of the deterministic system (thermodynamic limit) and by the spiking properties of the network (e.g., the interspike interval density and the hazard function). The characteristic function appears in the stability analysis of the deterministic system: zeros of this function are eigenmodes of the deterministic dynamics. Similarly, via an integration along the characteristic curves, some salient aspects of the autocorrelation properties of the finite-size fluctuations in the asynchronous activity regime were computed (Eq 20), allowing comparison with the literature. We therefore note that our SPDE permits the analytical computation of both the spectral (Eq 18) and correlation (Eq 20) properties of the fluctuations.
We restricted our study to fully connected networks of non-adaptive (renewal) neurons. However, it should be noted that it is possible to take care of non-renewal aspects of neural dynamics using the approximation provided by quasi-renewal theory [37]. That theory suggests that one can replace the full adaptation potential (Eq 1) by the sum of a contribution from the last spike, $\eta_r(t - \hat{t})$, and an average potential caused by the other spikes over the whole past. This latter contribution can be formulated as a convolution of the population activity with a kernel. Thus, it can simply be included in an extended version of the SPDE formalism presented here.
The stochastic-field approach to finite-size effects in neural network dynamics presented herein uses a surrogate quantity q(t, r) to access the population activity. With this approach, finite-size noise explicitly appears in the main equation, Eq 7, taking the form of a Gaussian white noise in the variables t and r, divided by $\sqrt{N}$. This equation can be integrated numerically, providing information about the latent, refractory state of neurons. This information is typically not present in other approaches that deal with finite-size effects. Moreover, and most importantly, the firing activity can thus be directly simulated by using the SPDE; such direct simulation is typically not possible with other methods (but see below). On the other hand, the linear noise approximation gives analytical expressions for the power spectrum of finite-size activity fluctuations in the asynchronous state, a feat also possible by other approaches. For instance, the linear response theory applied to leaky integrate-and-fire networks readily generates this power spectrum [20]. Also, an integral equation has recently been proposed that directly involves the population activity [35]; linearizing this equation yields the power spectrum. Our preliminary investigations and Ref. [45] suggest that all these approaches lead to analogous results.
Moreover, Schwalger et al. recently used an error minimizing procedure to directly describe the activity of finite-size networks at a so-called mesoscopic level. This procedure de facto sacrifices probability conservation to obtain a nevertheless accurate activity equation in the time domain. Our approach rather involves both time and refractoriness, which is still considered the microscopic level by these authors. However, working directly with our SPDE provides some advantages. First, even though we performed a Gaussian approximation of the Poisson spiking statistics, the probability is still conserved, which is the standard expectation for biophysically interpreted neural mass models [14]. Second, as stated above, its linearization in combination with the use of the method of characteristics enabled us to obtain a stochastic differential equation—along the characteristics—and to compute spectral and correlation properties of the network activity. Finally, we note that our approach can also be extended to multiple populations, and we pointed the reader in the right direction for doing so without performing the full calculations.
Methods
Derivation of the stochastic partial differential equation
We consider a homogeneous neural population and we denote the density of neurons in refractory state r at time t by q(t, r). The number of neurons with their refractory state in [r − Δr, r) at time t, denoted m(t, r), is given by

$$m(t, r) = N \int_{r - \Delta r}^{r} q(t, s)\,ds. \qquad (21)$$

At time t, a given neuron is in refractory state r if its last spike occurred at time t − r and it survived without spiking until t. Accordingly, m(t, r) represents the number of neurons at time t whose last spike belongs to the interval (t − r, t − r + Δr]. By extension, m(t + Δt, r + Δr) represents the number of neurons whose last spike also occurred in (t − r, t − r + Δr], but that survived without spiking until t + Δt. Intuitively, m(t + Δt, r + Δr) is equal to m(t, r) minus the number of these neurons that did spike in [t, t + Δt). If Δt is small enough—so that the hazard function ρ(t, r) is nearly constant on this interval—then this number is a statistically independent Poisson random variable with mean and variance equal to ρ(t, r)m(t, r)Δt [25, 35]. Therefore,

$$m(t + \Delta t, r + \Delta r) = m(t, r) - \mathcal{P}(t, r), \qquad (22)$$

where $\mathcal{P}(t, r)$ is the aforementioned Poisson-distributed random number.
Assuming that the mean ρ(t, r)m(t, r)Δt is sufficiently large, the Poisson distribution from which samples are randomly drawn approaches a normal distribution. In this case, we can write

$$\mathcal{P}(t, r) \approx \rho(t, r)\,m(t, r)\,\Delta t + \sqrt{\rho(t, r)\,m(t, r)\,\Delta t}\;\xi(t, r), \qquad (23)$$

where ξ(t, r) is a standard normal random variable. On the discretized (t, r)-space implicitly defined above, there exists one such standard normal random variable for every point of that space; these random variables are mutually independent.
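This approximation is easy to check numerically: for a moderately large mean, Poisson samples and their Gaussian surrogates of Eq 23 have matching first two moments. A small sketch (the value μ = 40 is arbitrary):

```python
import numpy as np

# Gaussian approximation of Eq 23: Pois(mu) is replaced by
# mu + sqrt(mu) * xi with xi a standard normal variable.
rng = np.random.default_rng(1)
mu = 40.0                                       # rho * m * dt, say
poisson = rng.poisson(mu, size=100_000)
gauss = mu + np.sqrt(mu) * rng.standard_normal(100_000)
print(poisson.mean(), poisson.var())            # ~ (40, 40)
print(gauss.mean(), gauss.var())                # ~ (40, 40)
```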
Replacing Eq 21 in Eq 22 and using Eq 23 yields

$$N\!\int_{r + \Delta t - \Delta r}^{\,r + \Delta t}\! q(t + \Delta t, s)\,ds = N\!\int_{r - \Delta r}^{\,r}\! q(t, s)\,ds \;-\; \rho(t, r)\,m(t, r)\,\Delta t \;-\; \sqrt{\rho(t, r)\,m(t, r)\,\Delta t}\;\xi(t, r).$$
After the change of integration variable s → s + Δt, the left-hand side becomes $N\int_{r - \Delta r}^{r} q(t + \Delta t, s + \Delta t)\,ds \approx N\int_{r - \Delta r}^{r}\left[q(t, s) + \Delta t\left(\frac{\partial}{\partial t} + \frac{\partial}{\partial r}\right) q(t, s)\right] ds$, assuming that the partial derivatives of q exist. From the mean-value theorem, each of the integrals equals Δr times its integrand evaluated at some interior point x of [r − Δr, r). Note that Δr = Δt because the refractory state increases linearly with t when neurons are not firing. Dividing each side of the resulting equation by ΔtΔr and taking the limits Δt, Δr → 0 yields the stochastic partial differential equation (SPDE) appearing as Eq 7 in the main text:

$$\frac{\partial}{\partial t} q(t, r) + \frac{\partial}{\partial r} q(t, r) = -\rho(t, r)\,q(t, r) + \sqrt{\frac{\rho(t, r)\,q(t, r)}{N}}\;\eta(t, r).$$

Here, we identified the limit of $\xi(t, r)/\sqrt{\Delta t\,\Delta r}$ with a two-dimensional Gaussian white noise process η(t, r) obeying $\langle \eta(t, r)\rangle = 0$ and $\langle \eta(t, r)\,\eta(t', r')\rangle = \delta(t - t')\,\delta(r - r')$.
The boundary condition needed to solve the SPDE is obtained as follows. The total number of neurons firing in the interval [t, t + Δt), here denoted n(t), is given by

$$n(t) = N \int_t^{t + \Delta t} A(t')\,dt', \qquad (24)$$

where A(t′) is the population activity. But n(t) is also the number of neurons at time t + Δt with refractory state in [0, Δt). Hence,

$$N \int_t^{t + \Delta t} A(t')\,dt' = N \int_0^{\Delta t} q(t + \Delta t, s)\,ds. \qquad (25)$$

In the limit of small Δt we get

$$q(t, 0) = A(t), \qquad (26)$$

which is the required boundary condition.
Moreover, n(t) must be equal to the summation of all neurons that fire in [t, t + Δt), whatever their refractory state. Therefore, we must have, symbolically, $n(t) = \sum_r \mathcal{P}(t, r)$. Steps similar to those used above readily yield

$$A(t) = \int_0^\infty \rho(t, r)\,q(t, r)\,dr + \int_0^\infty \sqrt{\frac{\rho(t, r)\,q(t, r)}{N}}\;\eta(t, r)\,dr. \qquad (27)$$

Finally, a normalization condition for q(t, r) is obtained from the fact that, at time t, all neurons within the network have a given refractory state:

$$\int_0^\infty q(t, r)\,dr = 1. \qquad (28)$$

Eqs 26–28—and variants thereof—are used in the main text.
Let us emphasize that we do not assume that the single neurons possess Poisson statistics. The Poisson random variable arises from an application of the theory of point processes. According to this theory, a generic point process with hazard function $\rho(t\,|\,\mathcal{H}_t, \Theta)$—where $\mathcal{H}_t$ is the given history of events prior to t and Θ represents any covariate, e.g. an external stimulus—generates a spike in [t, t + Δt) with probability $\rho(t\,|\,\mathcal{H}_t, \Theta)\,\Delta t + o(\Delta t)$ [46]. A Poisson process is obtained when the hazard function does not depend on the history, in which case the hazard function identifies with the mean field itself (i.e. the firing rate). In our case, the hazard function ρ depends on the history and the external input through the potential u(t, r). Fundamentally, the number of neurons firing in [t, t + Δt) follows a binomial distribution, which becomes a Poisson distribution in the limit Δt → 0; the underlying Poisson random variable changes as a function of time and is correlated with other Poisson variables at other times (see appendix B in [35] for a mathematical explanation).
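The renewal relation behind this construction, namely that spikes drawn with probability ρΔt per step produce interspike intervals with density P(r) = ρ(r)S(r) (S being the survivor function), can be verified by direct sampling. A minimal sketch using a stationary hazard of the form assumed above (the rate of 0.5 kHz is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(3)
dt, tau, rho_bar = 0.01, 10.0, 0.5           # ms, ms, kHz

def hazard(r):
    return rho_bar * (1.0 - np.exp(-r / tau))

def sample_isi():
    r = 0.0
    while True:
        r += dt
        if rng.random() < 1.0 - np.exp(-hazard(r) * dt):  # spike in this bin?
            return r

isis = np.array([sample_isi() for _ in range(20_000)])

# Renewal prediction: P(r) = rho(r) * exp(-int_0^r rho(s) ds).
r = np.arange(dt, 60.0, dt)
P = hazard(r) * np.exp(-np.cumsum(hazard(r)) * dt)
print(isis.mean(), (r * P).sum() * dt)       # the two means should agree
```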
Linear noise approximation
We start with the system-size expansion given in Eqs 11 and 12, which we rewrite here for convenience:

$$q(t, r) = q_0(t, r) + \frac{1}{\sqrt{N}}\,q_\xi(t, r), \qquad (29)$$
$$A(t) = A_0(t) + \frac{1}{\sqrt{N}}\,A_\xi(t). \qquad (30)$$

Replacing these two equations in the SPDE, Eq 7, yields

$$\frac{d q_0}{dt} + \frac{1}{\sqrt{N}}\,\frac{d q_\xi}{dt} = -\rho\,q_0 - \frac{1}{\sqrt{N}}\,\rho\,q_\xi + \sqrt{\frac{\rho\,q_0}{N}}\;\eta,$$

omitting function arguments for brevity. Here, d/dt is the material derivative operator $\frac{d}{dt} = \frac{\partial}{\partial t} + \frac{\partial}{\partial r}$. Using Eqs 30 and 4, we can write the hazard function as

$$\rho(t, r) = \rho_0(t, r) + \frac{1}{\sqrt{N}}\,\rho_\xi(t, r), \qquad (31)$$

where $\rho_\xi(t, r) = \frac{d\rho}{du}\,h_\xi(t)$ with $h_\xi(t) = -J_s\,(\kappa * A_\xi)(t)$, and the derivative dρ/du is evaluated at u0. Then, matching terms of orders 1 and $1/\sqrt{N}$, and neglecting contributions from terms of order 1/N and lower, we get two equations involving q0 and qξ:

$$\frac{d q_0}{dt} = -\rho_0\,q_0, \qquad \frac{d q_\xi}{dt} = -\rho_0\,q_\xi - \rho_\xi\,q_0 + \sqrt{\rho_0\,q_0}\;\eta. \qquad (32)$$

The first of these equations is the usual mean-field model (Eq 13). The second equation is a SPDE whose coefficients, source and noise terms depend on the solution of the deterministic equation. Of course, the boundary condition on q(t, r) (cf. Eqs 8 or 26) implies

$$q_0(t, 0) = A_0(t), \qquad q_\xi(t, 0) = A_\xi(t). \qquad (33)$$

Furthermore, from Eq 9 and the LNA, we have, after algebraic manipulations:

$$A_0(t) = \int_0^\infty \rho_0(t, r)\,q_0(t, r)\,dr, \qquad A_\xi(t) = \int_0^\infty \left[\rho_0\,q_\xi + \rho_\xi\,q_0\right](t, r)\,dr + \int_0^\infty \sqrt{\rho_0(t, r)\,q_0(t, r)}\;\eta(t, r)\,dr. \qquad (34)$$

Together, Eqs 32–34 constitute a coupled system between the deterministic mean field (A0 and q0) and the first-order fluctuations in the LNA (Aξ and qξ).
Activity in the asynchronous state
Let us denote by q∞(r) and A∞ the refractory density and activity in the asynchronous steady state, respectively. We have to solve

$$\frac{d}{dr} q_\infty(r) = -\rho_\infty(r)\,q_\infty(r), \qquad (35)$$

where

$$\rho_\infty(r) = \lambda_0 \exp\!\left(\frac{h_\infty + \eta_r(r)}{\delta u}\right), \qquad h_\infty = I_{\mathrm{ext}} - J_s A_\infty. \qquad (36)$$

Eq 35 can be integrated to give

$$q_\infty(r) = A_\infty \exp\!\left(-\int_0^r \rho_\infty(s)\,ds\right), \qquad (37)$$

where we have used the boundary condition $q_\infty(0) = A_\infty$. Finally, the average asynchronous activity can be computed using the conservation property of the neural network, namely $\int_0^\infty q_\infty(r)\,dr = 1$, so that $A_\infty = \left[\int_0^\infty \exp\left(-\int_0^r \rho_\infty(s)\,ds\right) dr\right]^{-1}$. Note that the mean firing rate is implicitly given, since ρ∞ depends on A∞. Therefore, to compute the mean activity, this nonlinear equation must be solved numerically.
With our choices for ρ we can push the calculation further. We have

$$\rho_\infty(r) = \lambda_0\,e^{h_\infty}\left(1 - e^{-r/\tau}\right),$$

where δu has been set to its numerical value of 1 mV, hence

$$\int_0^r \rho_\infty(s)\,ds = \lambda_0\,e^{h_\infty}\left[r + \tau\left(e^{-r/\tau} - 1\right)\right]$$

and $h_\infty = I_{\mathrm{ext}} - J_s A_\infty$. The change of variable $x = a_\infty e^{-r/\tau}$ reduces the problem to finding the solution of the equation

$$\frac{1}{A_\infty} = \tau\,e^{a_\infty}\,a_\infty^{-a_\infty}\,\gamma(a_\infty, a_\infty), \qquad (38)$$

where the lower incomplete γ function is given by $\gamma(s, x) = \int_0^x y^{s - 1} e^{-y}\,dy$ and we defined $a_\infty = \lambda_0\,\tau\,e^{h_\infty}$. Eq 38 is then solved self-consistently for A∞.
Stability analysis and the characteristic equation
To study the stability of the asynchronous state, one needs the eigenvalues of the differential operator once a linearization around the steady state has been performed. We therefore consider a small perturbation and write the solution in the form

$$q_0(t, r) = q_\infty(r) + q_1(t, r), \qquad A_0(t) = A_\infty + A_1(t). \qquad (39)$$

Plugging these expressions into Eq 13—keeping the first-order terms only—yields the partial differential equation

$$\frac{\partial}{\partial t} q_1(t, r) + \frac{\partial}{\partial r} q_1(t, r) = -\rho_\infty(r)\,q_1(t, r) - \rho_1(t, r)\,q_\infty(r), \qquad (40)$$

with ρ∞(r) given by Eq 36 and $\rho_1(t, r) = \frac{d\rho}{du}\,h_1(t)$, where $h_1(t) = -J_s\,(\kappa * A_1)(t)$ and the derivative is evaluated at the steady state.
From Eq 40 (compare with Eqs 7 and 27), we have

$$A_1(t) = \int_0^\infty \left[\rho_\infty(r)\,q_1(t, r) + \rho_1(t, r)\,q_\infty(r)\right] dr. \qquad (41)$$

We express the perturbation in eigenmodes, $q_1(t, r) = e^{\lambda t}\,\tilde{q}(r)$ and $A_1(t) = e^{\lambda t}\,\tilde{A}$. After algebraic manipulations, we find that the perturbation obeys Eq 42, with the Laplace transform of κ given by

$$\hat\kappa(\lambda) = \int_0^\infty e^{-\lambda t}\,\kappa(t)\,dt = \frac{e^{-\lambda \Delta}}{1 + \lambda \tau_s}.$$

Integration of this equation yields Eq 43, where

$$S_\infty(r) = \exp\!\left(-\int_0^r \rho_\infty(s)\,ds\right) \qquad (44)$$

is the survivor function in the steady state. From Eq 37, we have $q_\infty(r) = A_\infty\,S_\infty(r)$; hence Eq 45 follows. Moreover, we get from Eq 41 an expression for the perturbed activity. Replacing Eq 45 into this latter equation gives, after cancellation of A1 throughout, the characteristic equation. Writing

$$P_\infty(r) = \rho_\infty(r)\,S_\infty(r) \qquad (46)$$

for the probability density of obtaining a refractory state r in the steady state and defining its Laplace transform

$$\hat{P}_\infty(\lambda) = \int_0^\infty e^{-\lambda r}\,P_\infty(r)\,dr, \qquad (47)$$

we obtain the characteristic equation in the form $\mathcal{C}(\lambda) = 0$ (Eq 16). The eigenvalues are thus given by the roots of $\mathcal{C}$. The time-independent solution A∞ will be stable if all the eigenvalues have negative real parts.
Computation of the power spectrum
We assume that the deterministic part of the activity has tended toward its equilibrium state—below the instability threshold. In other words, t is large enough that $q_0(t, r) \approx q_\infty(r)$ and $A_0(t) \approx A_\infty$. Thus discarding the transient dynamics, the SPDE involving the finite-size fluctuations in the LNA (see the second equation of Eq 32) becomes

$$\frac{\partial}{\partial t} q_\xi(t, r) + \frac{\partial}{\partial r} q_\xi(t, r) = -\rho_\infty(r)\,q_\xi(t, r) - \rho_\xi(t, r)\,q_\infty(r) + \sqrt{\rho_\infty(r)\,q_\infty(r)}\;\eta(t, r). \qquad (48)$$

On the other hand, the stochastic part of the activity now reads (see the second equation in Eq 34)

$$A_\xi(t) = \int_0^\infty \left[\rho_\infty(r)\,q_\xi(t, r) + \rho_\xi(t, r)\,q_\infty(r)\right] dr + \int_0^\infty \sqrt{\rho_\infty(r)\,q_\infty(r)}\;\eta(t, r)\,dr. \qquad (49)$$
The power spectrum is defined by $S_\xi(\omega) = \lim_{T\to\infty}\frac{1}{T}\left\langle |\tilde{A}_\xi(\omega)|^2\right\rangle$, where

$$\tilde{A}_\xi(\omega) = \int_0^T A_\xi(t)\,e^{-i\omega t}\,dt \qquad (50)$$

is the Fourier transform of the stochastic process Aξ restricted to 0 < t < T. Analogously, we define $\tilde{q}_\xi(\omega, r)$ and $\tilde{\eta}(\omega, r)$, and likewise for other stochastic processes depending on t and r. After taking the Fourier transform, Eq 48 becomes (arguments of functions are omitted when no confusion may arise) Note that this equation has exactly the same form as Eq 42 above, and thus can be solved accordingly: where we used Also, from Eq 49 we get (51) Replacing in this equation by the expression that has just been obtained for it yields Hence, with the characteristic function of P∞(r), we get To simplify the notation, we define Then, To compute the power spectrum of Aξ, we first note that since η(t, r) is a Gaussian white noise. Also, in the expression for , only the numerator contains stochastic terms. Therefore, we only have to compute which gives rise to four terms that we compute separately (we denote ).
Second term since for an arbitrary function g(s).
Fourth term The power spectrum is then With the definitions for f(s) and αω(s, r), the second term of the numerator becomes and for the third term, Hence, finally,
Autocorrelation function
In this section, we compute Aξ and (a part of) its autocorrelation function. According to the Wiener-Khintchin theorem, the autocorrelation function of Aξ is simply the Fourier transform of its power spectrum. But to allow comparison with the results in [35], we calculate a part of this autocorrelation directly. From Eq 49, we must determine qξ(t, r). This will be done by first solving Eq 48 using the method of characteristics.
Solving the first-order inhomogeneous PDE with the method of characteristics.
We have to solve Eq 52, where the ranges of r and t are [0, ∞) and (−∞, ∞), respectively, and the boundary condition is $q_\xi(t, 0) = A_\xi(t)$ (cf. Eq 33). With the method of characteristics, we first transform the PDE above into an ordinary differential equation: we seek a family of curves along which the PDE reduces to an ODE. Since dr/dt = 1 along the age trajectories, these curves are the straight lines r = t − t0 in the (t, r)-plane. Along these lines, the material derivative $\frac{d}{dt} = \frac{\partial}{\partial t} + \frac{\partial}{\partial r}$ reduces to an ordinary derivative.
Hence, we now have to solve the ODE on the characteristic curve, namely The solution is Replacing we get the solution for the whole (t, r) space: Replacing this expression into Eq 49 yields (53) The second term is The first part of this second term can be written with Therefore, we finally have (54) with and In the case where the hazard function is exponential, as in Eq 5, both a∞ and the kernels entering Q and Σ take explicit closed forms.
Autocorrelation function of Σ.
We compute 〈Σ(t)Σ(t + t′)〉: First, we have Second, we have Similarly, for the third term we obtain We can combine the last two expressions by writing For the fourth term, defining we have where in the last line we used When t′ > 0, we have where we used When t′ < 0, The autocorrelation function is then hence, finally
Numerical implementation
The SPDE of Eq 7 can be readily integrated using the following numerical scheme:

$$q(t + \Delta t, r + \Delta t) = q(t, r)\,e^{-\rho(t, r)\,\Delta t} + \sqrt{\frac{\rho(t, r)\,H\big(q(t, r)\big)\,q(t, r)\,\Delta t}{N\,\Delta r}}\;\xi(t, r), \qquad (55)$$

where H(x) is the Heaviside step function, ξ(t, r) is a standard normal random variable, and Δr = Δt along the characteristics. This numerical scheme is obtained by discretizing the time evolution of q(t, r) on the characteristic curve (cf. Methods) and noting that $e^{-\rho\,dt}$ is the probability that a spike is not fired during an interval dt [33]. The Heaviside function prevents negative values of q(t, r) from appearing under the square-root sign. Along the characteristic curve, the dynamics correspond to a Cox–Ingersoll–Ross stochastic differential equation [47]. The proposed numerical scheme is thus well defined [48] and produces results in excellent agreement with simulations of the full network (see Fig 4). One way to extract the population activity is to evolve q(t, r) according to the above numerical scheme for all r > 0 and compute q(t, 0)—and thus A(t)—by enforcing the conservation law (Eq 10).
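Putting the scheme together, the sketch below integrates Eq 55 with Δr = Δt, extracts A(t) from the conservation law as just described, and closes the feedback loop through the synaptic filter of Eq 6. Parameters follow Figs 2C and 3; the initial density and the Euler update of the filter are our choices.

```python
import numpy as np

rng = np.random.default_rng(2)

N, dt, T, r_max = 500, 0.1, 500.0, 200.0      # size, step (ms), duration, age span
lam0, delta_u = 1.0, 1.0
tau, tau_s, Delta = 10.0, 10.0, 3.0
Js, I_ext = 5.0, 7.0

dr = dt                                       # step along the characteristics
r = np.arange(0.0, r_max, dr)
q = np.exp(-r / 20.0); q /= q.sum() * dr      # any normalized initial density
syn, delay = 0.0, int(Delta / dt)
steps = int(T / dt)
A = np.zeros(steps)

for k in range(steps):
    h = I_ext - Js * syn                      # input potential
    rho = lam0 * np.exp(h / delta_u) * (1.0 - np.exp(-r / tau))
    noise = np.sqrt(rho * np.maximum(q, 0.0) * dt / (N * dr))   # Eq 55 noise
    q = q * np.exp(-rho * dt) + noise * rng.standard_normal(r.size)
    q = np.roll(q, 1)                         # advect: everyone ages by dr = dt
    q[0] = max(0.0, (1.0 - q[1:].sum() * dr) / dr)  # conservation law (Eq 10)
    A[k] = q[0]                               # boundary condition (Eq 8)
    A_delayed = A[k - delay] if k >= delay else 0.0
    syn += dt * (A_delayed - syn) / tau_s     # synaptic filter (Eq 6)
```

The np.roll step drops the oldest age bin off the end of the grid; r_max only needs to exceed the longest interspike intervals that occur with non-negligible probability.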
Acknowledgments
GD wishes to thank Prof. Georg Northoff for partial support during this project. AP wishes to thank Recherches Neuro-Hippocampe for partial support during this project. Finally, AL and AP wish to thank Brain Canada.
References
- 1. Faisal AA, Selen LP, Wolpert DM. Noise in the nervous system. Nature Reviews Neuroscience. 2008;9(4):292–303. pmid:18319728
- 2. Longtin A. Neuronal noise. Scholarpedia. 2013;8(9).
- 3. Destexhe A, Rudolph-Lilith M. Neuronal Noise. Springer, New York; 2012.
- 4. White JA, Rubinstein JT, Kay AR. Channel noise in neurons. Trends in Neurosciences. 2000;23:131–137.
- 5. Manwani A, Koch C. Detecting and estimating signals in noisy cable structures. I: Neuronal noise sources. Neural Computation. 1999;11:1797–1829.
- 6. Averbeck BB, Latham PE, Pouget A. Neural correlations, population coding and computation. Nature Reviews Neuroscience. 2006;7(5):358–366.
- 7. Cunningham JP, Byron MY. Dimensionality reduction for large-scale neural recordings. Nat Neurosci. 2014;17(11):1500–1509. pmid:25151264
- 8. Knight BW. The relationship between the firing rate of a single neuron and the level of activity in a population of neurons experimental evidence for resonant enhancement in the population response. The Journal of General Physiology. 1972;59(6):767–778. pmid:5025749
- 9. Silberberg G, Bethge M, Markram H, Pawelzik K, Tsodyks M. Dynamics of population rate codes in ensembles of neocortical neurons. Journal of Neurophysiology. 2004;91(2):704–709. pmid:14762148
- 10. Okun M, Yger P, Marguet SL, Gerard-Mercier F, Benucci A, Katzner S, et al. Population rate dynamics and multineuron firing patterns in sensory cortex. Journal of Neuroscience. 2012;32(48):17108–17119. pmid:23197704
- 11. Treves A. Mean-field analysis of neuronal spike dynamics. Network. 1993;4(3):259–284.
- 12. Renart A, Brunel N, Wang XJ. Mean-field theory of irregularly spiking neuronal populations and working memory in recurrent cortical networks. In: Feng J, editor. Computational Neuroscience: A Comprehensive Approach. CRC Press, Boca Raton; 2004.
- 13. Gerstner W. Population dynamics of spiking neurons: fast transients, asynchronous states, and locking. Neural Computation. 2000;12(1):43–89. pmid:10636933
- 14. Deco G, Jirsa VK, Robinson PA, Breakspear M, Friston K. The Dynamic Brain: From Spiking Neurons to Neural Masses and Cortical Fields. PLoS Computational Biology. 2008;4(8):1–35.
- 15. Eurich CW, Herrmann JM, Ernst UA. Finite-size effects of avalanche dynamics. Physical Review E. 2002;66(6):066137.
- 16. Doiron B, Rinzel J, Reyes A. Stochastic synchronization in finite size spiking networks. Physical Review E. 2006;74(3):030903.
- 17. Laing CR, Chow CC. Stationary bumps in networks of spiking neurons. Neural Computation. 2001;13(7):1473–1494. pmid:11440594
- 18. Brunel N, Hakim V. Fast global oscillations in networks of integrate-and-fire neurons with low firing rates. Neural Computation. 1999;11(7):1621–1671. pmid:10490941
- 19. Mattia M, Del Giudice P. Population dynamics of interacting spiking neurons. Physical Review E. 2002;66(5):051917.
- 20. Lindner B, Doiron B, Longtin A. Theory of oscillatory firing induced by spatially correlated noise and delayed inhibitory feedback. Physical Review E. 2005;72(6):061919.
- 21. Tetzlaff T, Helias M, Einevoll GT, Diesmann M. Decorrelation of Neural-Network Activity by Inhibitory Feedback. PLoS Computational Biology. 2012;8(8):e1002596. pmid:23133368
- 22. Trousdale J, Hu Y, Shea-Brown E, Josić K. Impact of Network Structure and Cellular Response on Spike Time Correlations. PLoS Computational Biology. 2012;8(3):e1002408. pmid:22457608
- 23. Hu Y, Zylberberg J, Shea-Brown E. The Sign Rule and Beyond: Boundary Effects, Flexibility, and Noise Correlations in Neural Population Codes. PLoS Computational Biology. 2014;10(2):e1003469. pmid:24586128
- 24. Van Kampen NG. Stochastic processes in physics and chemistry. Elsevier, Amsterdam; 1992.
- 25. Gillespie DT. Stochastic simulation of chemical kinetics. Annual Review Physical Chemistry. 2007;58:35–55.
- 26. Ginzburg I, Sompolinsky H. Theory of correlations in stochastic neural networks. Physical Review E. 1994;50(4):3171–3191.
- 27. Meyer C, van Vreeswijk C. Temporal correlations in stochastic networks of spiking neurons. Neural computation. 2002;14(2):369–404. pmid:11802917
- 28. El Boustani S, Destexhe A. A master equation formalism for macroscopic modeling of asynchronous irregular activity states. Neural computation. 2009;21(1):46–100. pmid:19210171
- 29. Bressloff PC. Stochastic neural field theory and the system-size expansion. SIAM Journal Appllied Mathematics. 2009;70(5):1488–1521.
- 30. Buice MA, Cowan JD, Chow CC. Systematic fluctuation expansion for neural network activity equations. Neural computation. 2010;22(2):377–426. pmid:19852585
- 31. Buice MA, Cowan JD. Field-theoretic approach to fluctuation effects in neural networks. Physical Review E. 2007;75(5):051919.
- 32. Buice MA, Chow CC. Dynamic finite size effects in spiking neural networks. PLoS Computational Biology. 2013;9(1):e1002872. pmid:23359258
- 33. Gerstner W, Kistler WM, Naud R, Paninski L. Neuronal dynamics: From single neurons to networks and models of cognition. Cambridge: Cambridge University Press; 2014.
- 34. Gerstner W, van Hemmen JL. Associative memory in a network of ‘spiking’ neurons. Network: Computation in Neural Systems. 1992;3(2):139–164.
- 35. Deger M, Schwalger T, Naud R, Gerstner W. Fluctuations and information filtering in coupled populations of spiking neurons with adaptation. Physical Review E. 2014;90(6):062704.
- 36. Schwalger T, Deger M, Gerstner W. Towards a theory of cortical columns: From spiking neurons to interacting neural populations of finite size. PLoS Computational Biology. 2017;13(4):e1005507. pmid:28422957
- 37. Naud R, Gerstner W. Coding and Decoding with Adapting Neurons: A Population Approach to the Peri-Stimulus Time Histogram. PLoS Computational Biology. 2012;8(10):1–14.
- 38. Perthame B. Transport Equations in Biology. Birkhäuser Verlag, Basel; 2007.
- 39. Britton N. Essential Mathematical Biology. Springer-Verlag, London; 2003.
- 40. Brunel N, Hansel D. How noise affects the synchronization properties of recurrent networks of inhibitory neurons. Neural Computation. 2006;18(5):1066–1110. pmid:16595058
- 41. Tiesinga P, Sejnowski TJ. Cortical enlightenment: are attentional gamma oscillations driven by ING or PING? Neuron. 2009;63(6):727–732. pmid:19778503
- 42. Renart A, de la Rocha J, Bartho P, Hollender L, Parga N, Reyes A, et al. The Asynchronous State in Cortical Circuits. Science. 2010;327(5965):587–590. pmid:20110507
- 43. Buice M, Chow C. Generalized activity equations for spiking neural network dynamics. Frontiers in Computational Neuroscience. 2013;7:162.
- 44. Brunel N. Dynamics of sparsely connected networks of excitatory and inhibitory spiking neurons. J Comput Neurosci. 2000;8(3):183–208. pmid:10809012
- 45. Grytskyy D, Tetzlaff T, Diesmann M, Helias M. A unified view on weakly correlated recurrent networks. Frontiers in Computational Neuroscience. 2013;7:131. pmid:24151463
- 46. Daley DJ, Vere-Jones D. An Introduction to the Theory of Point Processes. Springer-Verlag, New York; 1998.
- 47. Cox JC, Ingersoll JE Jr, Ross SA. A theory of the term structure of interest rates. Econometrica. 1985; p. 385–407.
- 48. Deelstra G, Delbaen F, et al. Convergence of discretized stochastic (interest rate) processes with stochastic drift term. Applied stochastic models and data analysis. 1998;14(1):77–84.