
The authors have declared that no competing interests exist.

Conceived and designed the experiments: MAB CCC. Contributed reagents/materials/analysis tools: MAB CCC. Wrote the paper: MAB CCC.

We investigate the dynamics of a deterministic finite-sized network of synaptically coupled spiking neurons and present a formalism for computing the network statistics in a perturbative expansion. The small parameter for the expansion is the inverse number of neurons in the network. The network dynamics are fully characterized by a neuron population density that obeys a conservation law analogous to the Klimontovich equation in the kinetic theory of plasmas. The Klimontovich equation does not possess well-behaved solutions but can be recast in terms of a coupled system of well-behaved moment equations, known as a moment hierarchy. The moment hierarchy is impossible to solve but in the mean field limit of an infinite number of neurons, it reduces to a single well-behaved conservation law for the mean neuron density. For a large but finite system, the moment hierarchy can be truncated perturbatively with the inverse system size as a small parameter, but the resulting set of reduced moment equations is still very difficult to solve. However, the entire moment hierarchy can also be re-expressed in terms of a functional probability distribution of the neuron density. The moments can then be computed perturbatively using methods from statistical field theory. Here we derive the complete mean field theory and the lowest order second moment corrections for physiologically relevant quantities. Although we focus on finite-size corrections, our method can be used to compute perturbative expansions in any parameter.

One avenue towards understanding how the brain functions is to create computational and mathematical models. However, a human brain has on the order of a hundred billion neurons with a quadrillion synaptic connections. Each neuron is a complex cell composed of multiple compartments hosting a myriad of ions, proteins and other molecules. Even if computing power continues to increase exponentially, directly simulating all the processes in the brain on a computer is not feasible in the foreseeable future, and even if this could be achieved, the resulting simulation might be no simpler to understand than the brain itself. Hence the need for more tractable models. Historically, systems with many interacting bodies are easier to understand in the two opposite limits of a small number or an infinite number of elements, and most of the theoretical effort in understanding neural networks has been devoted to these two limits. There has been relatively little effort directed at the very relevant but difficult regime of large but finite networks. In this paper, we introduce a new formalism that borrows from the methods of many-body statistical physics to analyze finite-size effects in spiking neural networks.

Realistic models of neural networks in the central nervous system are analytically and computationally intractable, presenting a challenge to our understanding of the highly complex spiking dynamics of neurons. Consequently, some degree of simplification is necessary for theoretical progress and there is a corresponding spectrum of models with a range of complexity. “Mean Field” models represent the highest degree of simplification and classically consider the evolution of an “activity” variable which is some average of the output of a population of neurons. Early examples of mean field models are those of Wilson-Cowan
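For concreteness, the classic Wilson-Cowan model evolves mean excitatory and inhibitory population activities through a sigmoidal gain. Below is a minimal forward-Euler sketch of this level of description; the parameter values are illustrative and are not taken from the text:

```python
import math

def sigmoid(x):
    """Sigmoidal population gain function."""
    return 1.0 / (1.0 + math.exp(-x))

def wilson_cowan(T=200.0, dt=0.01, wEE=16.0, wEI=12.0, wIE=15.0, wII=3.0,
                 hE=1.25, hI=0.0, tau=1.0):
    """Forward-Euler integration of the Wilson-Cowan rate equations.
    E and I are mean activities of the excitatory and inhibitory populations."""
    E, I = 0.1, 0.05
    for _ in range(int(T / dt)):
        dE = (-E + sigmoid(wEE * E - wEI * I + hE)) / tau
        dI = (-I + sigmoid(wIE * E - wII * I + hI)) / tau
        E += dt * dE
        I += dt * dI
    return E, I
```

Because the gain is bounded in (0, 1), the activities remain bounded; such "activity" variables are the averaged output of a population, with no reference to individual spikes.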

The next level of model complexity requires relating population level activity to single neuron dynamics. This is a question explored by Knight

Neuronal firing is inherently variable and the source of this variability has been subject to much study and debate

Here, we present a systematic expansion around the density mean field behavior that quantifies the finite-size fluctuations and correlations of a population of neurons in terms of the interactions in the network. The expansion utilizes a kinetic theory approach adapted from plasma physics

Our approach is thus in the spirit of Gibbs' view of statistical mechanics

We present a formalism to analyze finite-size effects in a network of

We will develop our theory for a general frequency function

Our goal is to derive the fluctuation and correlation effects beyond mean field theory for the system. For global coupling, these effects arise from the finite number of neurons

We adapt the methods of kinetic theory as applied to gas and plasma dynamics to create a probabilistic description of the network dynamics
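For orientation, the finite-size density in this picture is the empirical measure over the $N$ neurons' state variables; writing $\theta_i(t)$ for the state of neuron $i$ (the notation here is illustrative, not necessarily that of the equations elided above),

$$\eta(\theta,t) \;=\; \frac{1}{N}\sum_{i=1}^{N}\delta\bigl(\theta-\theta_i(t)\bigr).$$

Because $\eta$ is a sum of Dirac masses, it is a distribution rather than a smooth function, which is why the continuity equation it satisfies holds only in the weak sense.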

Denoting averages over initial conditions and neuron parameters (i.e. those over

Mean field theory is obtained by neglecting all correlations and higher order cumulants. Thus, setting

The second type of finite size effect is due to the coupling and is contained in

We have shown that one tractable approach for incorporating fluctuations and correlations is to truncate the BBGKY hierarchy. However, solving such truncated systems for any model of reasonable complexity quickly becomes unwieldy. For this reason, we adapt the density functional formalism developed for statistical field theory to obtain a formal expression for the probability density functional of the neuron density and synaptic drive

In this section we present only the final results; the complete derivation and description of the computational method can be found in the

In general, the action

The variables in the action can be compared to those in a stochastic differential equation. The original variables (without a tilde, e.g.

We first apply the formalism to the simple phase model defined by
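The defining equation did not survive extraction; purely as an illustration of this class of model, the sketch below simulates an assumed all-to-all pulse-coupled phase model, $\dot\theta_i = f + u$ with an exponentially decaying global drive $u$ that receives a kick of size $g/N$ per spike. All names and parameter values here are assumptions, not the paper's:

```python
import math
import random

def simulate_phase_network(N=100, f=1.0, g=0.5, beta=1.0, T=50.0, dt=0.001, seed=0):
    """Euler simulation of an assumed globally coupled phase model:
        dtheta_i/dt = f + u,   du/dt = -beta*u + (g/N) * (spike train).
    A neuron spikes when its phase crosses 2*pi; the phase then wraps.
    Returns the population firing rate (spikes per neuron per unit time)."""
    rng = random.Random(seed)
    theta = [rng.uniform(0.0, 2.0 * math.pi) for _ in range(N)]
    u = 0.0
    spikes = 0
    for _ in range(int(T / dt)):
        fired = 0
        for i in range(N):
            theta[i] += dt * (f + u)
            if theta[i] >= 2.0 * math.pi:   # threshold crossing = spike
                theta[i] -= 2.0 * math.pi
                fired += 1
        spikes += fired
        u += dt * (-beta * u) + (g / N) * fired  # decaying drive kicked by spikes
    return spikes / (N * T)
```

In such a simulation the global drive $u$ is the only channel through which neurons interact, which is the structure that makes the mean field limit tractable.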

The action for the phase model as derived in the

The mean field equations, which follow from a critical point of the action, are given by (8), which for parameters

Solving for

The steepest descent expansions of the path integrals will be expressed in terms of the propagators or linear response functions

If we assume a constant input

As described in

Performing the time integration gives

We now turn to the correlations in the density variable

The 2-neuron distribution function is given by

The correlation function between the global coupling and the density is given by (with

The firing rate of the population is given by

The second model we analyze is the quadratic integrate-and-fire model, whose single neuron dynamics are given by
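The single-neuron equation was elided above; for reference, the standard quadratic integrate-and-fire form is $\dot V = V^2 + I$ with reset from $+\infty$ to $-\infty$, and the textbook substitution $V=\tan(\theta/2)$ (the Ermentrout-Kopell reduction) maps it onto the theta model used in what follows:

$$\dot V = V^{2}+I,\qquad V=\tan(\theta/2)\;\Longrightarrow\;\dot\theta=(1-\cos\theta)+(1+\cos\theta)\,I,$$

with a spike registered when $\theta$ crosses $\pi$ (i.e. $V\to\infty$).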

Defining the neuron density in the same way as before

It is useful to examine the steady state of this model in some detail. For a constant drive

The linear response for the coupled theta model is given by the equations:

Consider the steady state and transform the angle variable for each

The linear response for the theta model is most easily expressed in terms of the Laplace variable

We also have

We can use these expressions to compute the tree level correlations with:

We can use the linear response formulas above to compute an analytic formula for the steady state. Changing coordinates and using the steady-state mean field values, we have

The two-neuron density function, by contrast, is different by virtue of the non-uniform nature of the steady state. In this case,

The firing rate fluctuations for the theta model are simpler than those of the phase model because the input for each neuron is the constant 2 at

The ensemble average is taken over

We have constructed a system size expansion for the density formulation of spiking neural networks and computed the fluctuations and correlations of network variables to lowest order. In particular, we explicitly calculate two-neuron and higher order moments in the network. We have demonstrated our method in globally coupled networks with two different neuron types. We note that all the fluctuations and correlations are “finite-size” effects, i.e. they do not exist in mean field theory. There will also be finite-size effects on the mean firing rate and synaptic drive, which could also be calculated using our methods. However, in the systems we studied, the finite-size corrections to the mean field density in the steady state are necessarily zero by neuron conservation. The steady state is uniform and the fluctuation effects will not (for these models) break the symmetry.

The method is based on the Klimontovich equation, which is an exact formal continuity equation for the finite-size neuron density. Solutions to the Klimontovich equation exist only in the weak or distributional sense because the neuron density is a collection of Dirac delta functions and is not differentiable. In the limit of infinite system size, it can be shown that under some conditions, the neuron density becomes a smooth function that obeys a strong continuity equation called the Vlasov equation
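In illustrative notation (with $F$ the single-neuron velocity field, which in general depends on the synaptic drive $u$; the symbols are assumptions, not necessarily the paper's), the two equations share the same continuity form and differ only in the regularity of the density:

$$\frac{\partial \eta}{\partial t}+\frac{\partial}{\partial\theta}\bigl[F(\theta,u)\,\eta\bigr]=0 \quad\text{(Klimontovich: } \eta \text{ a sum of delta functions)},$$

$$\frac{\partial \rho}{\partial t}+\frac{\partial}{\partial\theta}\bigl[F(\theta,u)\,\rho\bigr]=0 \quad\text{(Vlasov: } \rho=\langle\eta\rangle \text{ smooth in the } N\to\infty \text{ limit)}.$$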

Our approach is based on the traditional Gibbs picture of statistical mechanics, to wit: the variability in the dynamics (in the absence of externally supplied noise or explicitly probabilistic dynamics) is a reflection of the distribution of “microscopic dynamics” which are consistent with the “macroscopic dynamics”, population level variables such as the global coupling,

The Gibbs picture is realized by taking the ensemble average of the Klimontovich equation, which leads to a moment hierarchy where lower-order moments (or cumulants) of the neuron density depend on higher-order moments. The moment hierarchy is an exact ensemble-averaged description of the finite-size system. However, in general, solving the moment equations is as difficult as, if not more difficult than, integrating the original system directly. For systems with a well defined large

A truncated moment hierarchy is still unwieldy to solve. Our approach is to compute the moments directly by constructing a formal expression for the probability density functional of this distribution. This density functional is a “doubly” infinite dimensional object since its elements are infinite dimensional functions. Its formal construction hinges on the fact that it is proportional to a point mass (in an infinite dimensional functional space) located at a population density function that obeys the Klimontovich equation. Intuitively, this can be thought of as a Dirac delta functional with the Klimontovich operator as an argument. This expression is rendered computationally useful by noting that a Dirac delta functional in infinite dimensions has a Laplace transform representation where the integration is over a space of functions or fields and a set of imaginary response fields corresponding to the Laplace transform variables. The exponent of the integrand is called the action and fully specifies the distribution over the neuron density and synaptic drive.
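Schematically, with $\mathcal{K}[\eta]$ standing for the Klimontovich operator and $\tilde\eta$ for the imaginary response field (notation assumed for illustration), the construction described above reads

$$P[\eta]\;\propto\;\delta\bigl(\mathcal{K}[\eta]\bigr)\;=\;\int\!\mathcal{D}\tilde\eta\;\exp\!\Bigl(-\!\int\!\tilde\eta\,\mathcal{K}[\eta]\Bigr)\;\equiv\;\int\!\mathcal{D}\tilde\eta\;e^{-S[\tilde\eta,\eta]},$$

where the exponent $S$ is the action referred to in the text.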

Previously, we applied this strategy to the Kuramoto model, where oscillators are coupled directly through their phase differences. The corresponding action is a function of the population density together with the response field. The linear response satisfies the linearized Vlasov equation. The tree-level expression for the second moment of the population density, which captures the fluctuations due to finite-size effects, is identical to a solution of the truncated moment hierarchy known as the Lenard-Balescu solution in plasma physics

Finite size effects were considered by Brunel and Hakim

Our approach generates a natural explanation for Poisson-like firing rates in a population of neurons. Indeed, such statistics are a natural consequence of the neurons firing in a stable asynchronous state. The number of neurons firing is the number of neurons out of
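The intuition behind this can be checked directly: if each of $N$ neurons fires in a small window independently with probability $p=\lambda/N$, the spike count is Binomial$(N,\lambda/N)$, which converges to Poisson$(\lambda)$ as $N$ grows. The sketch below measures the total-variation distance between the two distributions (a generic illustration of the limit, not tied to the paper's specific models):

```python
import math

def binom_pmf(k, n, p):
    """PMF of Binomial(n, p); zero for k > n."""
    if k > n:
        return 0.0
    return math.comb(n, k) * p**k * (1.0 - p)**(n - k)

def poisson_pmf(k, lam):
    """PMF of Poisson(lam)."""
    return math.exp(-lam) * lam**k / math.factorial(k)

def tv_distance(n, lam, kmax=60):
    """Total-variation distance between Binomial(n, lam/n) and Poisson(lam),
    summed over counts up to kmax (the neglected tail mass is negligible)."""
    p = lam / n
    return 0.5 * sum(abs(binom_pmf(k, n, p) - poisson_pmf(k, lam))
                     for k in range(kmax + 1))
```

The distance shrinks as the population grows, which is the sense in which asynchronous firing in a large network looks Poisson.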

The mean field theory for our system is comparable to a differential equation form of the spike response theory

We considered the example of an all-to-all network. In this case, the

However, even in the case of a heterogeneous network without self-averaging, we can still apply our formalism if we consider the network to be comprised of local populations which exhibit exchangeability

There is always a tension in computational neuroscience between detailed realistic models versus simpler reduced models. The main purpose of this work is to build quantitative tools to bridge the gap between the two approaches. We have developed a principled method of coarse-graining a neural system that is relatable to experimentally accessible quantities. Even with the exponential increase in available data and computational power, detailed realistic modeling will still have limitations. For one, a large scale simulation of the brain may not necessarily be easier to understand than the brain itself. An exhaustive exploration of parameter space will be intractable even if Moore's law holds up for centuries. Thus, there will always be a role for theoretical analysis of simple models. However, one of the criticisms of reduced models is that they are

The population statistics of the network are encoded in a hierarchy of moment functions of the population density,

We wish to derive a probability density functional

The derivation is applicable to any dynamical system, so we derive it for a generic variable

The generating function for the moments of

Inserting (55) into (56) yields

For the population density

For the coupled system, we have both the synaptic variable

The initial values of the generating functions will be determined by the ensemble distribution of the initial state. In the simplest case, we will consider that the initial state of the synaptic drive variable

If the neurons are not prepared independently then the expressions for the connected

Just as the nonlinear terms in the cumulant generating function

In a more general case, the Doi-Peliti-Janssen transformation provides an elegant means of expanding around Poisson solutions. It is thus useful for models whose statistics should be near Poisson, such as population densities in networks, whose statistics are essentially coupled counting processes, though not simple ones. The moments of the variables

The neural models we describe have two different fields, one for the synaptic drive variable and one for the density variable (along with the response field counterparts). As above, the class of models we consider is given by

Graphs are constructed by connecting the vertices shown in

The graphs for the two point correlations are shown in

By row they are

In order to compute the linear response for the quadratic integrate-and-fire model, we use a reduction to a simple system of ODEs. We start with the propagators in steady state in the

We now derive equations for the

We start with