
Stochastic neural field model of stimulus-dependent variability in cortical neurons

Abstract

We use stochastic neural field theory to analyze the stimulus-dependent tuning of neural variability in ring attractor networks. We apply perturbation methods to show how the neural field equations can be reduced to a pair of stochastic nonlinear phase equations describing the stochastic wandering of spontaneously formed tuning curves or bump solutions. These equations are analyzed using a modified version of the bivariate von Mises distribution, which is well-known in the theory of circular statistics. We first consider a single ring network and derive a simple mathematical expression that accounts for the experimentally observed bimodal (or M-shaped) tuning of neural variability. We then explore the effects of inter-network coupling on stimulus-dependent variability in a pair of ring networks. These could represent populations of cells in two different layers of a cortical hypercolumn linked via vertical synaptic connections, or two different cortical hypercolumns linked by horizontal patchy connections within the same layer. We find that neural variability can be suppressed or facilitated, depending on whether the inter-network coupling is excitatory or inhibitory, and on the relative strengths and biases of the external stimuli to the two networks. These results are consistent with the general observation that increasing the mean firing rate via external stimuli or modulatory drives tends to reduce neural variability.

Author summary

A topic of considerable current interest concerns the neural mechanisms underlying the suppression of cortical variability following the onset of a stimulus. Since trial-by-trial variability and noise correlations are known to affect the information capacity of neurons, such suppression could improve the accuracy of population codes. One of the main candidate mechanisms is the suppression of noise-induced transitions between multiple attractors, as exemplified by ring attractor networks. The latter have been used to model experimentally measured stochastic tuning curves of directionally selective middle temporal (MT) neurons. In this paper we show how the stimulus-dependent tuning of neural variability in ring attractor networks can be analyzed in terms of the stochastic wandering of spontaneously formed tuning curves or bumps in a continuum neural field model. The advantage of neural fields is that one can derive explicit mathematical expressions for the second-order statistics of neural activity, and explore how these depend on important model parameters, such as the level of noise, the strength of recurrent connections, and the input contrast.

Introduction

A growing number of experimental studies have investigated neural variability across a variety of cortical areas, brain states and stimulus conditions [1–11]. Two common ways to measure neural variability are the Fano factor, which is the ratio of the variance to the mean of the neural spike counts over trials, and the trial-to-trial covariance of activity between two simultaneously recorded neurons. It is typically found that the presentation of a stimulus reduces neural variability [5, 9], as do attention and perceptual learning [6, 7, 12]. Another significant feature of the stimulus-dependent suppression of neural variability is that it can be tuned to different stimulus features. In particular, Ponce-Alvarez et al [10] examined the in vivo statistical responses of direction-selective neurons in the middle temporal (MT) area to moving gratings and plaid patterns. They determined the baseline levels and the evoked directional and contrast tuning of the variance of individual neurons and the noise correlations between pairs of neurons with similar direction preferences. The authors also computationally explored the effect of an applied stimulus on variability and correlations in a stochastic ring network model of direction selectivity. They found experimentally that both the trial-by-trial variability and the noise correlations among MT neurons were suppressed by an external stimulus and exhibited bimodal directional tuning. Moreover, these results could be reproduced in a stochastic ring model, provided that the latter operated close to or beyond the bifurcation point for the existence of spontaneous bump solutions.

From a theoretical perspective, a number of different dynamical mechanisms have been proposed to explain aspects of stimulus-dependent variability: (i) stimulus-induced suppression of noise-induced transitions between multiple attractors as exemplified by the stochastic ring model [10, 13–16]; (ii) stimulus-induced suppression of an otherwise chaotic state [17–19]; (iii) fluctuations about a single, stimulus-driven attractor in a stochastic stabilized supralinear network [20]. The pros and cons of the different mechanisms have been explored in some detail within the context of orientation selective cells in primary visual cortex (V1) [20]. We suspect that each of the three mechanisms may occur, depending on the particular operating conditions and the specific cortical area. However, we do not attempt to differentiate between these distinct mechanisms in this paper. Instead, we focus on the attractor-based mechanism considered by Ponce-Alvarez et al [10], in order to understand the stimulus-dependent variability of population tuning curves. Our main goal is to show how the tuning of neural variability can be analyzed in terms of the stochastic wandering of spontaneously formed tuning curves or bumps in a continuum neural field model. (For complementary work on the analysis of wandering bumps within the context of working memory see Refs. [21–23].) The advantage of using neural field theory is that one can derive explicit mathematical expressions for the second-order statistics of neural activity, and explore how these depend on important model parameters, such as the level of noise, the strength of recurrent connections, and the input contrast. In particular, our mathematical analysis provides a simple explanation for the bimodal tuning of the variance observed by Ponce-Alvarez et al [10].

After accounting for the qualitative statistical behavior of a single ring network, we then explore the effects of inter-network coupling on stimulus-dependent variability in a pair of ring networks, which has not been addressed in previous studies. The latter could represent populations of cells in two different layers of a cortical hypercolumn linked via vertical synaptic connections, or two different cortical hypercolumns linked by horizontal patchy connections within the same layer. We will refer to these two distinct architectures as model A and model B, respectively (see also Fig 1). In this paper, we use model A to show how vertical excitatory connections between two stochastic ring networks can reduce neural variability, consistent with a previous analysis of spatial working memory [22]. We also show that the degree of noise suppression can differ between layers, as previously found in an experimental study of orientation selective cells in V1 [24]. An experimental “center-surround” study of stimulus-dependent variability in V1 indicates that correlations in spontaneous activity at the center can be suppressed by stimulating outside the classical receptive field of the recorded neurons [25], that is, by evoking activity in the surround. In this paper, we show that the effect of a surround stimulus depends on at least two factors: (i) whether or not the horizontal connections effectively excite or inhibit the neurons in the center, and (ii) the relative directional bias of the surround stimulus. In particular, we find that at high contrasts (inhibitory regime), noise is facilitated in the center when the center and surround stimuli have the same directional bias, whereas it is suppressed when the center and surround stimuli have opposite directional biases. The converse holds at low contrasts (excitatory regime). These results are consistent with the general observation that increasing the mean firing rate via external stimuli or modulatory drives tends to reduce neural variability.

Fig 1. Coupled ring models.

(a) Model A consists of two ring networks that are located in two vertically separated cortical layers and interact via interlaminar connections. (b) Model B consists of two ring networks that are located in the same cortical layer and interact via intralaminar horizontal connections.

https://doi.org/10.1371/journal.pcbi.1006755.g001

In the remainder of the Introduction we introduce our stochastic neural field model of coupled ring networks and describe in more detail the structure of models A and B. In Materials and Methods we use perturbation theory to show how the neural field equations can be reduced to a pair of stochastic phase equations describing the stochastic wandering of bump solutions. These equations are analyzed in the Results, using a modified version of the bivariate von Mises distribution, which is well-known in the theory of circular statistics. This then allows us to determine the second-order statistics of a single ring network, providing a mathematical underpinning for the experimental and computational studies of Ponce-Alvarez et al [10], and to explore the effects of inter-network coupling on neural variability in models A and B.

One final point is in order. There are many different measures of neural variability in the literature, both in terms of the type of statistical quantity (variance, Fano factor, auto and cross correlations) and the relevant observables (single cell or population firing rates/binned spikes, voltages for in vivo patch recordings, and individual spikes). In this paper, we mainly focus on the mean and variance of population activity, which could be interpreted as the extracellular voltage or current associated with a population of cells. Hence, our results speak most directly to the experimental studies of Ref. [10]. The possible relationship to other measures of neural variability is considered in the Discussion.

Coupled ring model

Consider a pair of mutually coupled ring networks labeled j = 1, 2. Let uj(θ, t) denote the activity at time t of a local population of cells with stimulus preference θ ∈ [−π, π) in network j. Here θ could represent the direction preference of neurons in the middle temporal (MT) area [10], the orientation preference of V1 neurons, after rescaling θ → θ/2 [26, 27], or a coordinate in spatial working memory [22, 28, 29]. For concreteness, we will refer to θ as a direction preference. The variables uj evolve according to the neural field equations [21, 22, 30, 31] (1a) (1b) where ϵ is a constant scale factor (see below), Jj(θ − θ′) is the distribution of intra-network connections between cells with stimulus preferences θ′ and θ in network j, Kj(θ − θ′) is the corresponding distribution of inter-network connections to network j, and hj(θ) is an external stimulus. The firing rate function is assumed to be a sigmoid (2) with maximal firing rate f0, gain γ and threshold η. The final term on the right-hand side of each equation represents external additive noise, with Wj(θ, t) a θ-dependent Wiener process. In particular, (3) where δ(t) is the Dirac delta function and δi,j is the Kronecker delta function. For concreteness, we will take C(θ) = aδ(θ) + b cos(θ) for constants a, b. For b ≠ 0, the noise is colored in θ (which is necessary for the solution to be spatially continuous) and white in time. (One could also take the noise to be colored in time by introducing an additional Ornstein-Uhlenbeck process. For simplicity, we assume that the noise processes in the two networks are uncorrelated, which would be the case if the noise were predominantly intrinsic. Correlations would arise if some of the noise arose from shared fluctuating inputs. For a discussion of the effects of correlated noise in coupled ring networks see [22].) The external stimuli are taken to be weakly biased inputs of the form (4), where θ̂j denotes the location of the peak of the input to network j (stimulus bias) and cj its contrast. Finally, the time-scale is fixed by setting the time constant τ = 10 msec. The maximal firing rate f0 is taken to be 100 spikes/sec.
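As a concrete illustration of the noise model in Eq (3), the following minimal Python sketch generates discretized Wiener increments on the ring with correlation function C(θ) = aδ(θ) + b cos(θ) and checks the covariance empirically. The grid size, time step and scaling conventions are our own illustrative assumptions rather than anything specified in the paper.

```python
import numpy as np

# Minimal sketch: Wiener increments dW(theta_i) on a discretized ring with
# <dW(theta) dW(theta')> = C(theta - theta') dt, C(theta) = a*delta(theta) + b*cos(theta).
# Grid resolution, dt and parameter values are illustrative assumptions.

rng = np.random.default_rng(0)
N = 256                                   # grid points on [-pi, pi)
theta = np.linspace(-np.pi, np.pi, N, endpoint=False)
dtheta = 2 * np.pi / N
dt = 1e-3
a, b = 3.0, 0.5                           # correlation parameters (cf. Fig 2 caption)

def dW():
    # White-in-theta part: the delta correlation gives variance a*dt/dtheta per grid point.
    white = np.sqrt(a * dt / dtheta) * rng.standard_normal(N)
    # Colored part: b*cos(theta - theta') = b*(cos.cos' + sin.sin') has rank 2,
    # so two scalar Gaussians realize it exactly.
    xi1, xi2 = rng.standard_normal(2)
    colored = np.sqrt(b * dt) * (xi1 * np.cos(theta) + xi2 * np.sin(theta))
    return white + colored

# Empirical check: off-diagonal covariance should approach b*cos(theta_i - theta_j)*dt.
samples = np.array([dW() for _ in range(20000)])
cov = samples.T @ samples / len(samples)
print(cov[0, N // 8] / dt, b * np.cos(theta[N // 8] - theta[0]))
```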

The weight distributions are 2π-periodic and even functions of θ and thus have cosine series expansions. Following [21], we take the intra-network recurrent connections to be (5) which means that cells with similar stimulus preferences excite each other, whereas those with sufficiently different stimulus preferences inhibit each other. It remains to specify the nature of the inter-network connections. As we have already mentioned, we consider two different network configurations (see Fig 1): (A) a vertically connected two-layer or laminar model and (B) a horizontally connected single-layer model. In model A, the inter-network weight distribution is taken to have the form (6) which represents vertical coupling between the layers. We also assume that both layers receive the same stimulus bias, that is, θ̂1 = θ̂2 in Eq (4). In model B, the inter-network weight distribution represents patchy horizontal connections, which tend to link cells with similar stimulus preferences [32–35]. This is implemented by taking (7) Now the two networks can be driven by stimuli with different biases so that θ̂1 ≠ θ̂2.

Note that in order to develop the analytical methods of this paper, we scale the inter-network coupling, the noise terms and the external stimuli in Eq (1) by the constant factor ϵ. Taking 0 < ϵ ≪ 1 (weak noise, weak inputs and weak inter-network coupling) will allow us to use perturbation methods to derive explicit parameter-dependent expressions for neural variability. We do not claim that cortical networks necessarily operate in these regimes, but use the weakness assumption to obtain analytical insights and make predictions about the qualitative behavior of neural variability. In the case of weak inter-network connections, the validity of the assumption is likely to depend on the source of these connections. For example, in model B, they arise from patchy horizontal connections within superficial or deep layers of cortex, which are known to play a modulatory role [36]. On the other hand, vertical connections between layers are likely to be stronger than assumed in our modeling analysis, at least in the feedforward direction [37]. Finally, the weak stimulus assumption depends on a particular view of how cortical neurons are tuned to stimuli, which is based on the theory of ring attractor networks, see the Discussion.

Results

We present various analytical and numerical results concerning stimulus-dependent neural variability, under the assumption that the neural field Eq (1) supports stable stationary bump solutions uj(θ, t) = Uj(θ) = Aj cos(θ), j = 1, 2, in the absence of noise, external stimuli, and inter-network coupling (ϵ = 0). The amplitudes Aj are determined self-consistently from the equations (see Materials and methods) (8) One of the important properties of the uncoupled homogeneous neural field equations is that they are marginally stable with respect to uniform translations around the ring. That is, the location of the peak of the bump is arbitrary, which reflects the fact that the homogeneous neural field is symmetric with respect to uniform translations. Marginal stability has a number of important consequences. First, the presence of a weakly biased external stimulus can lock the bump to the stimulus. The output activity is said to amplify the input bias and provides a network-based encoding of the stimulus, which can be processed by upstream networks. (Since the bump may persist if the stimulus is removed, marginally stable neural fields have been proposed as one mechanism for implementing a form of spatial working memory [22, 28, 29, 38, 39].)
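To illustrate the self-consistency condition determining the amplitudes, the following sketch solves a one-network version of Eq (8) (equivalently Eq (50) of Materials and methods) by fixed-point iteration, using the sigmoid (2) with a unit-normalized maximal rate. The convolution normalization and the recurrent weight J0 are our assumptions; J0 ≈ 6.3 is chosen so that the stable amplitude comes out near the value A ≈ 1.85 quoted later in the text.

```python
import numpy as np

# Fixed-point solution of the amplitude equation obtained by substituting
# U(theta) = A*cos(theta) and J(theta) = J0*cos(theta) into the (assumed)
# convolution U = (1/2pi) Int J(theta - theta') f(U(theta')) dtheta':
#   A = (J0 / 2pi) * Int cos(theta) f(A*cos(theta)) dtheta.

gamma, eta, J0 = 4.0, 0.5, 6.3             # gain, threshold; J0 is illustrative

def f(u):                                  # sigmoid (2), maximal rate normalized to 1
    return 1.0 / (1.0 + np.exp(-gamma * (u - eta)))

N = 2000
theta = np.linspace(-np.pi, np.pi, N, endpoint=False)
dtheta = 2 * np.pi / N

def rhs(A):
    return (J0 / (2 * np.pi)) * np.sum(np.cos(theta) * f(A * np.cos(theta))) * dtheta

A = 1.0                                    # iterate from a positive initial guess
for _ in range(200):
    A = rhs(A)
print("bump amplitude A ~", round(A, 3))   # close to 1.85 for these parameters
```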

A second consequence of operating in a marginally stable regime is that the bump is not robust to the effects of external noise, which can elicit a stochastic wandering of the bump [20–22, 39–41]. One way to investigate the stochastic wandering of bumps in a neural field model is to use perturbation theory. The latter was originally applied to the analysis of traveling waves in one-dimensional neural fields [30, 31], and was subsequently extended to the case of wandering bumps in single-layer and multi-layer neural fields [21, 22, 42]. The basic idea is to treat longitudinal and transverse fluctuations of a bump (or traveling wave) separately in the presence of noise, in order to take proper account of marginal stability. This is implemented by decomposing the stochastic neural field into a deterministic bump profile, whose spatial location or phase has a slowly diffusing component, and a small error term. (There is always a non-zero probability of large deviations from the bump solution, but these are assumed to be negligible up to some exponentially long time.) Perturbation theory can then be used to derive an explicit stochastic differential equation (SDE) for the diffusive-like wandering of the bump in the weak noise regime. (A more rigorous mathematical treatment that provides bounds on the size of transverse fluctuations has also been developed [43, 44].)

Motivated by previous studies of wandering bumps in stochastic neural fields, we introduce the amplitude-phase decomposition [22, 30] (9) (As it stands, this decomposition is non-unique, unless an additional mathematical constraint is imposed that can define βj and vj uniquely. Within the context of formal perturbation methods, this is achieved by imposing a solvability condition that ensures that the error term can be identified with fast transverse fluctuations, which converge to zero exponentially in the absence of noise, see Materials and methods.) Substituting Eq (9) into the full stochastic neural field Eq (1) and using perturbation theory along the lines of [21, 22, 30, 42], one can derive the following SDEs for the phases βj(t), see Materials and methods: (10a) (10b) where Hj and Gj are 2π-periodic functions that depend on the form of the external stimuli and the inter-network connections, and wj(t) are independent Wiener processes: (11) The functions Hj, Gj and the diffusion coefficients D1, D2 are calculated in Materials and methods, see Eqs (73), (74) and (77).

Wandering bumps in a single stochastic ring network

Let us begin by considering stimulus-dependent neural variability in a single ring network evolving according to the stochastic neural field equation (12) where the various single-network terms are defined in Eq (13). A clear demonstration of the suppressive effects of an external stimulus can be seen from direct numerical simulations of Eq (12), see Fig 2. In the absence of an external stimulus, the center-of-mass (phase) of the bump diffuses on the ring, whereas it exhibits localized fluctuations when a weakly biased stimulus is present. Clearly, the main source of neural variation is due to the wandering of the bump, which motivates the amplitude-phase decomposition given by Eq (9).
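The simulations in Fig 2 can be reproduced qualitatively with a short Euler-Maruyama scheme (forward Euler in time, Riemann sum on the ring), as sketched below. The threshold, gain, correlation parameters and ϵ follow the Fig 2 caption; the recurrent weight J0, the stimulus form h(θ) = c cos(θ) and the precise ϵ-scalings of input and noise are our assumptions.

```python
import numpy as np

# Euler-Maruyama sketch of the single-ring field Eq (12). Setting c = 0 gives a
# freely wandering bump (Fig 2(a)); increasing c progressively localizes the
# phase near the stimulus bias at theta = 0 (Fig 2(b)). J0, the stimulus form
# and the epsilon-scalings below are illustrative assumptions.

rng = np.random.default_rng(1)
N, dt, T = 256, 1e-3, 10.0                  # time in units of tau (tau = 10 ms)
theta = np.linspace(-np.pi, np.pi, N, endpoint=False)
dtheta = 2 * np.pi / N
eta, gamma, J0 = 0.5, 4.0, 6.3
a, b, eps = 3.0, 0.5, 0.05                  # Fig 2 caption values
c = 1.0                                     # stimulus contrast; c = 0 for Fig 2(a)

f = lambda u: 1.0 / (1.0 + np.exp(-gamma * (u - eta)))
J = J0 * np.cos(theta[:, None] - theta[None, :]) * dtheta / (2 * np.pi)
h = c * np.cos(theta)                       # weakly biased input peaked at theta = 0

u = 1.85 * np.cos(theta)                    # start from the unperturbed bump
for _ in range(int(T / dt)):
    white = np.sqrt(a * dt / dtheta) * rng.standard_normal(N)
    xi = rng.standard_normal(2)
    colored = np.sqrt(b * dt) * (xi[0] * np.cos(theta) + xi[1] * np.sin(theta))
    u += dt * (-u + J @ f(u) + eps * h) + np.sqrt(eps) * (white + colored)

# Phase (center of mass) from the first circular Fourier mode: u ~ A cos(theta + beta).
beta = -np.angle(np.sum(np.exp(1j * theta) * u))
print("final bump phase beta ~", round(float(beta), 3))
```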

Fig 2. Stimulus-dependent wandering of a bump in a single stochastic ring network.

(a, b) Direction-time plots of a wandering bump with brightness indicating the amplitude. Overlaid lines represent the trajectory of the center-of-mass or phase of the bump, β(t). (a) In the absence of an external stimulus (c = 0), the center-of-mass of the bump executes diffusive-like motion on the ring. (b) The presence of a weakly biased external stimulus (c > 0) significantly suppresses fluctuations, localizing the bump to the stimulus direction θ̂. (c, d) Corresponding snapshots of bump profiles at different times (t = 100, 300, 600, 900). (c) For no external stimulus the bumps are distributed at different positions around the ring and vary in amplitude. (d) In the presence of a stimulus the bumps are localized around zero and have similar amplitudes. Parameters are threshold η = 0.5, gain γ = 4, fixed synaptic weight, correlation parameters a = 3, b = 0.5 and ϵ = 0.05.

https://doi.org/10.1371/journal.pcbi.1006755.g002

Applying the perturbation analysis of Materials and methods yields a one-network version of the phase Eq (10), which takes the form (14) with drift and diffusion coefficients determined by the stimulus and noise parameters, where A is the amplitude of the bump for ϵ = 0. Eq (14) is known as a von Mises process, which can be regarded as a circular analog of the Ornstein-Uhlenbeck process on a line, and generates distributions that frequently arise in circular or directional statistics [45]. The von Mises process has been used to model the trajectories of swimming organisms [46, 47], oscillators in physics [48], bioinformatics [49], and the data fitting of neural population tuning curves [50]. (Nonlinear stochastic phase equations analogous to (14) also arise in models of ring attractor networks with synaptic heterogeneities, which have applications to spatial working memory [23, 51, 52].)

Introduce the probability density p(β, t) of the phase. This satisfies the forward Fokker-Planck equation (dropping the explicit dependence on initial conditions) (15) for β ∈ [−π, π] with periodic boundary conditions p(−π, t) = p(π, t). It is straightforward to show that the steady-state solution of Eq (15) is the von Mises distribution (16) with κ given by Eq (17). Here I0(κ) is the modified Bessel function of the first kind and zeroth order (n = 0), where In(κ) = (1/2π)∫_{−π}^{π} e^{κ cos β} cos(nβ) dβ. Sample plots of the von Mises distribution are shown in Fig 3. One finds that M(β; β*, κ) → 1/2π as κ → 0; since κ is proportional to the stimulus contrast, this implies that in the absence of an external stimulus one recovers the uniform distribution of pure Brownian motion on the circle. On the other hand, the von Mises distribution becomes sharply peaked as κ → ∞. More specifically, for large positive κ, (18) We thus have an explicit example of the suppression of fluctuations by an external stimulus, since the width of the stationary distribution decreases as the contrast, and hence κ, increases. (We are assuming that the time for the distribution of the stochastic phase variable to reach steady-state is much shorter than the time for the amplitude-phase decomposition (9) to break down. This can be proven rigorously using variational methods for sufficiently small ϵ, since the time for a large transverse fluctuation becomes exponentially large [44].)
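The claim that the stationary phase density is the von Mises distribution (16) is easy to check numerically. The sketch below simulates a von Mises process of the generic form dβ = −ν sin β dt + √(2D) dw, for which the stationary density is proportional to exp(κ cos β) with κ = ν/D, and compares a long-run histogram with Eq (16); the particular values of ν and D are illustrative stand-ins for the coefficients derived in the text.

```python
import numpy as np

# Simulate d(beta) = -nu*sin(beta) dt + sqrt(2D) dw on the circle and compare the
# empirical long-run histogram with the von Mises density (16), kappa = nu/D.

rng = np.random.default_rng(2)
nu, D, dt, nsteps = 2.0, 0.5, 1e-3, 1_000_000
kappa = nu / D

beta, samples = 0.0, []
for i in range(nsteps):
    beta += -nu * np.sin(beta) * dt + np.sqrt(2 * D * dt) * rng.standard_normal()
    beta = (beta + np.pi) % (2 * np.pi) - np.pi        # wrap to [-pi, pi)
    if i % 20 == 0:
        samples.append(beta)

hist, edges = np.histogram(samples, bins=24, range=(-np.pi, np.pi), density=True)
centers = 0.5 * (edges[1:] + edges[:-1])
vonmises = np.exp(kappa * np.cos(centers)) / (2 * np.pi * np.i0(kappa))  # Eq (16)
print("max deviation:", np.max(np.abs(hist - vonmises)))  # small for long runs
```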

Fig 3. Sample plots of the von Mises distribution M(β; 0, κ) centered at zero for various values of κ.

Inset: Plot of first circular moment I1(κ)/I0(κ).

https://doi.org/10.1371/journal.pcbi.1006755.g003

Moments of the von Mises distribution are usually calculated in terms of the circular moments of the complex exponential x = e^{iβ} = cos β + i sin β. The nth circular moment is defined according to (19) In particular, (20) We can use these moments to explore stimulus-dependent variability in terms of the stochastic wandering of the bump or tuning curve. That is, consider the leading order approximation u(θ, t) ≈ A cos(θ + β(t)), with β(t) evolving according to the von Mises SDE (14). Trial-to-trial variability can be captured by averaging the solution with respect to the stationary von Mises density (16). First, (21) from Eq (20). Hence, the mean amplitude Ā(κ) is given by the first circular moment of the von Mises distribution, see inset of Fig 3. When κ = 0 (zero external stimulus), the amplitude vanishes due to the fact that the random position of the bump is uniformly distributed around the ring. As the stimulus contrast increases the wandering of the bump is more restricted and Ā(κ) monotonically increases.

Second, applying the same averaging procedure to U(θ)², it follows that the variance is (22) In Fig 4(a), we show example plots of the normalized variance var(U)/A² as a function of the parameter κ, which is a proxy for the input amplitude, since κ is proportional to the stimulus contrast. It can be seen that our theoretical analysis reproduces the various trends observed in [10]: (i) a global suppression of neural variability that increases with the stimulus contrast; (ii) a directional tuning of the variability that is bimodal; (iii) a peak in the suppression at the preferred direction. One difference between our theoretical results and those of [10] is that, in the latter case, the directional tuning of the variance is not purely sinusoidal. Part of this can be accounted for by noting that we consider the variance of the activity variable u rather than the firing rate f(u). Moreover, for analytical convenience, we take the synaptic weight functions to be first-order harmonics. In Fig 4(b) we show numerical plots of the variance in the firing rate, which exhibits the type of bimodal behavior found in [10] when the ring network operates in the marginal regime.
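For reference, the mean and variance tuning curves implied by Eqs (21) and (22) are easily tabulated from ratios of modified Bessel functions. The sketch below evaluates them under our leading-order ansatz U = A cos(θ + β) with β von Mises distributed; it reproduces the M-shaped curves of Fig 4(a), with minima at θ = 0, π and maxima at θ = ±π/2.

```python
import numpy as np
from scipy.special import iv                     # modified Bessel functions I_n

# Circular moments of the von Mises density give <cos(n*beta)> = I_n(kappa)/I_0(kappa),
# so for U = A*cos(theta + beta) (our leading-order ansatz):
#   <U>    = A*(I1/I0)*cos(theta)
#   var(U) = (A^2/2)*(1 + (I2/I0)*cos(2*theta)) - (A*(I1/I0)*cos(theta))**2.

A = 1.85
theta = np.linspace(-np.pi, np.pi, 9)            # coarse grid for printing
for kappa in [0.0, 0.5, 1.0, 2.0, 4.0]:
    r1 = iv(1, kappa) / iv(0, kappa)
    r2 = iv(2, kappa) / iv(0, kappa)
    var = 0.5 * A**2 * (1 + r2 * np.cos(2 * theta)) - (A * r1 * np.cos(theta))**2
    print(kappa, np.round(var / A**2, 3))        # kappa = 0: flat at 0.5; else M-shaped
```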

Fig 4.

(a) Plot of normalized variance var(U)/A² for U = A cos(θ) as a function of θ for a single ring network and various κ. In the spontaneous case (κ = 0) the variance is uniformly distributed around the ring (ignoring transients). The presence of a stimulus (κ > 0) suppresses the overall level of noise and the variance exhibits a bimodal tuning curve. (b) Plot of variance in firing rates var(f(U)) as a function of θ for a single ring network and various κ. f is given by the sigmoid function (2) with γ = 4 and η = 0.5. The corresponding amplitude A ≈ 1.85.

https://doi.org/10.1371/journal.pcbi.1006755.g004

Effects of inter-laminar coupling (model A)

We now turn to a pair of coupled ring networks that represent vertically connected layers as shown in Fig 1(a) (model A), with inter-network weight distribution (6). For analytical tractability, we impose symmetric network parameters, so that in particular A1 = A2 = A. However, we allow the contrasts of the external stimuli to differ, c1 ≠ c2. Also, without loss of generality, we set θ̂1 = θ̂2 = 0. Eq (10) then reduces to the form (see Materials and methods) (23a) (23b) with (24) Given our various simplifications, we can rewrite Eq (23) in the more compact form (25) where Φ is the potential function (26) Introduce the joint probability density p(β1, β2, t). This satisfies the two-dimensional forward Fokker-Planck equation (dropping the explicit dependence on initial conditions) (27) for βj ∈ [−π, π] and periodic boundary conditions p(−π, β2, t) = p(π, β2, t), p(β1, −π, t) = p(β1, π, t).

The existence of a potential function means that we can solve the time-independent FP equation. Setting time derivatives to zero, we have ∂J1/∂β1 + ∂J2/∂β2 = 0, where Jj is a probability current. In the stationary state the probability currents are constant, but generally non-zero. However, in the special case D1 = D2 = D, there exists a steady-state solution in which the currents vanish. This can be seen by rewriting the vanishing current conditions as ∂ ln p/∂βj = −(2/D)∂Φ/∂βj. This yields the steady-state probability density, which is a generalization of the von Mises distribution, (28) where the parameters κ1, κ2 and χ are determined by the stimulus contrasts and the inter-network coupling strength, and the normalization factor is given by (29) The distribution M2(β1, β2; κ1, κ2, χ) is an example of a bivariate von Mises distribution known as the cosine model [49]. The normalization factor can be calculated explicitly to give (30) The corresponding marginal distribution for β1 is (31) and an analogous result holds for the marginal density p(β2).
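A convenient feature of the cosine model is that each full conditional density is itself von Mises, which makes Gibbs sampling straightforward [49]. The following sketch draws samples from a density of the form (28), p(β1, β2) proportional to exp(κ1 cos β1 + κ2 cos β2 + χ cos(β1 − β2)); the parameter values are illustrative.

```python
import numpy as np

# Gibbs sampler for the cosine bivariate von Mises model (28). The conditional
# p(b1 | b2) is proportional to exp(R*cos(b1 - mu)), i.e. von Mises, with
# R*exp(i*mu) = k1 + chi*exp(i*b2) (similarly for b2), so each update is a
# single von Mises draw.

rng = np.random.default_rng(3)
k1, k2, chi = 2.0, 2.0, 5.0                       # illustrative parameters

def vm_draw(k_self, other):
    z = k_self + chi * np.exp(1j * other)         # stimulus term plus coupling term
    return rng.vonmises(np.angle(z), np.abs(z))

b1, b2, out = 0.0, 0.0, []
for _ in range(200_000):
    b1 = vm_draw(k1, b2)
    b2 = vm_draw(k2, b1)
    out.append((b1, b2))
b1s, b2s = np.array(out[1000:]).T                 # discard burn-in

print("Var(beta1) ~", round(float(np.var(b1s)), 4))          # reduced when chi > 0
print("corr(beta1, beta2) ~", round(float(np.corrcoef(b1s, b2s)[0, 1]), 3))
```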

We now summarize a few important properties of the cosine bivariate von Mises distribution [49]:

  1. The density M2(β1, β2; κ1, κ2, χ) is unimodal if χ ≥ −κ1κ2/(κ1 + κ2) and is bimodal if χ < −κ1κ2/(κ1 + κ2).
  2. When κ1 and κ2 are large, the random variables (β1, β2) are approximately bivariate normal distributed, that is, (β1, β2) ∼ N2(0, Σ) with (32)

We will assume that the vertical connections are maximal between neurons with the same stimulus preference, so that χ ≥ 0. It then follows that p(β1, β2) is unimodal. Moreover, from Eq (32) we have (33) For zero inter-network coupling (χ = 0), we obtain a diagonal matrix and we recover the variance of the single ring networks, that is, Var(βj) = 1/κj to leading order; there are no interlaminar correlations. On the other hand, for χ > 0 we find two major effects of the interlaminar connections. First, the vertical coupling reduces fluctuations in the phase variables within a layer. This is most easily seen by considering the symmetric case κ1 = κ2 = κ for which (34) Clearly, (35) (This result is consistent with a previous study of the effects of inter-network connections on neural variability, which focused on the case of zero stimuli and treated the bump positions as effectively evolving on the real line rather than a circle [22]. In this case, inter-network connections can reduce the variance in bump position, which evolves linearly with respect to the time t.) The second consequence of interlaminar connections is that they induce correlations between the phases β1(t) and β2(t).
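The two effects just described can be made explicit in the large-κ Gaussian regime. Expanding the exponent of Eq (28) to quadratic order about (0, 0) gives a precision matrix with diagonal entries κj + χ and off-diagonal entries −χ; this is our hedged reconstruction of Eq (32), but it reproduces both trends: the within-layer variance falls from 1/κ at χ = 0 towards 1/(2κ) as χ → ∞, while the interlaminar covariance grows.

```python
import numpy as np

# Gaussian (large-kappa) approximation of the cosine model: quadratic expansion
# of the exponent of Eq (28) about (0, 0) yields the precision matrix
#   P = [[k1 + chi, -chi], [-chi, k2 + chi]],  Sigma = P^{-1}.
# For k1 = k2 = kappa this gives Var(beta1) = (kappa + chi) / (kappa*(kappa + 2*chi)).

kappa = 10.0
for chi in [0.0, 1.0, 5.0, 50.0]:
    P = np.array([[kappa + chi, -chi], [-chi, kappa + chi]])
    Sigma = np.linalg.inv(P)
    print(chi, round(Sigma[0, 0], 4), round(Sigma[0, 1], 4))  # variance falls, covariance rises
```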

Having characterized the fluctuations in the phases β1(t) and β2(t), analogous statistical trends will apply to the trial-to-trial variability in the tuning curves. This follows from making the leading-order approximation uj(θ, t) ∼ A cos(θ + βj(t)), and then averaging the βj with respect to the bivariate von Mises density M2(β1, β2; κ1, κ2, χ). In the large κj regime, this could be further simplified by averaging with respect to the bivariate normal distribution under the approximations cos β ≈ 1 − β²/2 and sin β ≈ β. Both the mean and variance of the tuning curves are similar to the single ring network, see Eqs (21) and (22): (36) and (37) Their dependence on the coupling strength χ and input parameter κ1 = κ2 = κ is illustrated in Fig 5. Finally, averaging products of the two tuning curves shows that the inter-network covariance takes an analogous form; in particular, for θ = θ′ we have (38) The resulting correlation tuning curve behaves in a similar fashion to the variance, see Fig 5(c), where (39) (Note that our definition of the cross-correlation function differs from that used, for example, by Churchland et al [9]. These authors consider the covariance matrix of simultaneous recordings of spike counts obtained using a 96-electrode array. The matrix is then decomposed into a network covariance matrix and a diagonal matrix of private single neuron noise. Our definition involves pairwise correlations between the activity of two distinct populations. Nevertheless, consistent with the findings of Churchland et al. [9], we find that the cross-correlations decrease in the presence of a stimulus.)

Fig 5. Coupled ring network (model A).

(a) Amplitude of normalized mean tuning curve (36) as a function of the input parameter κ = κ1 = κ2 for various coupling strengths: χ = 0, 1, 5. (b) Corresponding maximum (θ = π/2) and minimum (θ = 0) normalized variances (37) as a function of the input parameter κ for coupling strengths χ = 0, 5. (c) Plot of correlation tuning curve (39) between cells with the same direction preference but located in different layers. Here κ = κ1 = κ2 and χ = 5.

https://doi.org/10.1371/journal.pcbi.1006755.g005

The above qualitative analysis can be confirmed by numerical simulations of the full neural field Eq (1), as illustrated in Fig 6(a)–6(d) for a pair of identical ring networks. In Fig 6(e)–6(h), we show corresponding results for the case where network 2 receives a weaker stimulus than network 1 (c2 < c1). In the absence of interlaminar connections, the phase of network 2 fluctuates much more than the phase of network 1. When interlaminar connections are included, fluctuations are reduced, but network 2 still exhibits greater variability than network 1. This latter result is consistent with an experimental study of neural variability in V1 [24], which found that neural correlations were more prominent in superficial and deep layers of cortex, but close to zero in input layer 4. One suggested explanation for these differences is that layer 4 receives direct feedforward input from the LGN. Thus we could interpret network 1 in model A as being located in layer 4, whereas network 2 is located in a superficial layer, say.

Fig 6. Effects of interlaminar connections on a pair of wandering bumps (model A).

Overlaid lines represent the trajectories of the center-of-mass or phase of the bumps, β1(t) and β2(t). (a, b) Plots of wandering bump in network 1 for zero and nonzero interlaminar connections, respectively. (c, d) Analogous plots for network 2. The two networks are taken to be identical, with the same parameters as Fig 2. (e-h) Same as (a-d), except that network 2 receives a weaker stimulus than network 1.

https://doi.org/10.1371/journal.pcbi.1006755.g006

Effects of intra-laminar coupling (model B)

Our final example concerns a pair of coupled ring networks that represent horizontally connected hypercolumns within the same superficial layer, say, as shown in Fig 1(b) (model B), with inter-network weight distribution (7). Again, for analytical tractability, we impose symmetric network parameters, so that in particular A1 = A2 = A. However, unlike model A, we take the contrasts to be the same, c1 = c2, but allow the biases of the two inputs to differ, θ̂1 ≠ θ̂2. Eq (10) becomes (see Materials and methods) (40a) (40b) with wj(t) given by Eq (24) and (41) The interaction term can be rewritten in terms of a potential ϕ(β), see Eq (42). Note that ϕ(−β) = ϕ(β) and thus ϕ′(−β) = −ϕ′(β). A sample plot of the potential ϕ(β) is shown in Fig 7(a), together with an approximate fit based on a von Mises distribution. For the given firing rate parameters η = 0.5 and γ = 4, the unperturbed bump amplitude is A ≈ 1.85.

Fig 7. Coupled ring network (model B) with inhibitory intralaminar connections.

(a) Plot of the potential function ϕ(β) for threshold η = 0.5 and gain γ = 4. The solid curve is an approximation based on a fitted von Mises distribution ϕ(β) ≈ 12M(β; 0, 0.6) − 0.9. (b) Plot of normalized mean 〈U〉/A of ring network 1 (center mean) as a function of the directional bias of the input to network 2 (surround bias) for various coupling parameters χ. (c) Corresponding plots of normalized variance var(U1)/A² of ring network 1 (center variance) as a function of the surround bias for various coupling parameters χ. Networks 1 and 2 are driven by stimuli with biases θ̂1 and θ̂2, respectively, and we take κ = 1.

https://doi.org/10.1371/journal.pcbi.1006755.g007

As in the case of model A, we can rewrite Eq (40) in the more compact form (43) where Ψ is the potential function (44) and we have absorbed the factor 2/(A|Γ|) into the coupling constant χ. The corresponding two-dimensional forward Fokker-Planck equation is (45) for βj ∈ [−π, π] and periodic boundary conditions p(−π, β2, t) = p(π, β2, t), p(β1, −π, t) = p(β1, π, t). Following the analysis of model A, if D1 = D2 = D then the stationary density takes the form (46) for an appropriately defined normalization factor.
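Since the βj live on a torus, a stationary density of the form (46) can simply be normalized numerically on a grid, from which marginals and circular variances follow. The sketch below does this for an illustrative potential assembled from a stimulus term for each network plus the fitted interaction potential ϕ(β) ≈ 12M(β; 0, 0.6) − 0.9 of Fig 7(a); the precise form and sign conventions of Ψ in Eq (44) are assumptions on our part.

```python
import numpy as np

# Numerically normalize p(b1, b2) prop. to exp(-2*Psi(b1, b2)/D) on a discretized
# torus and compute the marginal statistics of beta1 (the "center" network).
# The potential below is an illustrative stand-in for Eq (44).

N = 256
b = np.linspace(-np.pi, np.pi, N, endpoint=False)
db = 2 * np.pi / N
B1, B2 = np.meshgrid(b, b, indexing="ij")
kappa, chi, D, delta = 1.0, -0.5, 1.0, np.pi / 2   # delta = surround bias

def phi(x):                                         # fitted potential from Fig 7(a)
    return 12 * np.exp(0.6 * np.cos(x)) / (2 * np.pi * np.i0(0.6)) - 0.9

Psi = -kappa * np.cos(B1) - kappa * np.cos(B2 - delta) + chi * phi(B1 - B2)
p = np.exp(-2 * Psi / D)
p /= p.sum() * db**2                                # normalize on the torus

p1 = p.sum(axis=1) * db                             # marginal density of beta1
R1 = np.abs(np.sum(np.exp(1j * b) * p1) * db)       # mean resultant length
print("circular variance of beta1:", round(float(1 - R1), 4))
```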

Long-range horizontal connections within superficial layers of cortex are mediated by the axons of excitatory pyramidal neurons. However, they innervate both pyramidal neurons and feedforward interneurons, so that they can have a net excitatory or inhibitory effect, depending on stimulus conditions [36, 53, 54]. More specifically, they tend to be excitatory at low contrasts and inhibitory at high contrasts. Suppose that ring network 1 represents a hypercolumn driven by a center stimulus and network 2 represents a hypercolumn driven by a surround stimulus, see Fig 1(b). In Fig 7(b) and 7(c) we plot how the normalized maximal mean and variance of network 1 (at θ = ±π/2) vary with the directional bias of the input to network 2. We also show the baseline mean and variance in the absence of horizontal connections (χ = 0). It can be seen that the mean and variance covary in opposite directions. In particular, for inhibitory horizontal connections (χ < 0) the variance is facilitated relative to baseline when the two stimuli have similar biases (θ̂2 ≈ θ̂1) and is suppressed when they are sufficiently different (θ̂2 ≈ θ̂1 ± π). The converse holds for excitatory horizontal connections (χ > 0). In the Discussion, these results will be explored within the context of surround modulation.

Discussion

In this paper we used stochastic neural field theory to analyze stimulus-dependent neural variability in ring attractor networks. In addition to providing a mathematical underpinning of previous experimental observations regarding the bimodal tuning of variability in directionally specific MT neurons, we also made a number of predictions regarding the effects of inter-network connections on noise suppression:

  1. Excitatory vertical connections between cortical layers can suppress neural variability; different cortical layers can exhibit different degrees of variability according to the strength of afferents into the layers.
  2. At low stimulus contrasts, surround stimuli tend to suppress (facilitate) neural variability in the center when the center and surround stimuli have similar (different) biases.
  3. At high stimulus contrasts, surround stimuli tend to facilitate (suppress) neural variability in the center when the center and surround stimuli have similar (different) biases.

It is important to emphasize that previous related studies of variability in marginally stable ring networks have been based on computer simulations of spatially discrete models [10, 20]. That is, integrals with respect to the orientation or direction variable θ are replaced by discrete sums, so that the model dynamics is described by stochastic differential equations rather than stochastic neural fields. As we have demonstrated in this paper, the advantage of neural field theory is that it provides an analytical framework for studying neural variability in marginally stable ring attractor networks, see also [21]. In Ref. [20], the behavior of a marginally stable ring network is compared to a stabilized supralinear ring network. The latter operates in a completely different dynamical regime, consisting of a single stimulus-tuned attractor. This means that there does not exist a bump solution in the absence of a stimulus. One of the consequences of this is that weak (spontaneous) inputs increase variability, which is subsequently quenched by stronger inputs. The basic mechanism involves stimulus-dependent changes in the balance of two opposing effects [20]: feedforward interactions and recurrent excitation, which amplify variability and dominate for weak stimuli, and stabilizing inhibitory feedback, which suppresses variability and dominates in the case of stronger inputs. The authors also show that the orientation tuning of neural variability tends to be U-shaped, rather than M-shaped as found in [10], with a minimum at the preferred stimulus orientation. The stabilized supralinear ring network model was found to be more consistent with single neuron recordings from the V1 of awake primates, when compared to the marginally stable ring model.

However, some caution is required when interpreting the results of Ref. [20]. First, the precise mechanism underlying the role of feedforward and recurrent inputs in generating orientation tuning in V1 is still controversial, see below. Second, the marginally stable ring model can also produce U-shaped tuning of neural variability using an appropriate Fourier decomposition of the weights; the M-shape was a direct consequence of using the first harmonic cos θ. Third, it is far from clear that the same operating regime holds for all V1 neurons; the regime may also vary according to the specific stimulus feature, cortical layer and cortical area. (The latter might account for differences between MT direction selective cells and V1 neurons.) Finally, there could be differences between the trial-averaged statistics of single neuron recordings and the statistics of local neural populations, as represented by neural field variables. As a further comparison of the two model paradigms, it would be interesting to explore the effects of intralaminar and interlaminar coupling on noise variability in stabilized supralinear ring networks.

Weak stimulus assumption

In order to utilize perturbation methods, we assumed that the ring networks were driven by weakly biased stimuli. This assumption depends on a particular view of how cortical neurons are tuned to stimuli. Consider the most studied example, which involves orientation tuning of cells in V1. The degree to which recurrent processes contribute to the receptive field properties of V1 neurons has been quite controversial over the years [55–58]. The classical model of Hubel and Wiesel [59] proposed that the orientation preference and selectivity of a cortical neuron in input layer 4 arises primarily from the geometric alignment of the receptive fields of thalamic neurons in the lateral geniculate nucleus (LGN) projecting to it. (Orientation selectivity is then carried to other cortical layers through vertical projections.) This has been confirmed by a number of experiments [60–64]. However, there is also significant experimental evidence suggesting the importance of recurrent cortical interactions in orientation tuning [65–71]. One issue that is not disputed is that some form of inhibition is required to explain features such as contrast-invariant tuning curves and cross-orientation suppression [58]. The uncertainty in the degree to which intracortical connections contribute to orientation tuning of V1 neurons is also reflected in the variety of models. In ring attractor models [26, 27, 72, 73], the width of orientation tuning of V1 cells is determined by the lateral extent of intracortical connections. Recurrent excitatory connections amplify weakly biased feedforward inputs in a way that is sculpted by lateral inhibitory connections. Hence, the tuning width and other aspects of cortical responses are primarily determined by intracortical rather than thalamocortical interconnections. On the other hand, in push-pull models, cross-orientation inhibition arises from feedforward inhibition from interneurons [62, 74]. Finally, in normalization models, a large pool of orientation-selective cortical interneurons generates shunting inhibition proportional in strength to the stimulus contrast at all orientations [75]. In the end, it is quite possible that there are multiple circuit mechanisms for generating tuned cortical responses to stimuli, which could depend on the particular stimulus feature, location within a feature preference map, and cortical layer [58].

Surround modulation of neural variability

Surround modulation (SM) refers to the phenomenon in which stimuli in the surround of a neuron’s receptive field (RF) modulate the neuron’s response to stimuli simultaneously presented inside the RF. SM is a fundamental property of sensory neurons in many species and sensory modalities, and is thought to play an important role in contextual image processing. As with mechanisms of orientation tuning, there is considerable debate over whether feedforward or intracortical circuits generate SM, and whether this results from increased inhibition or reduced excitation [19, 36, 53, 54, 76–82]. SM has been characterized in many species, commonly using circular grating patches of increasing radius or grating patches confined to the RF surrounded by annular gratings, and systematically varying the grating parameters. Modulatory effects are typically quantified in terms of changes in the mean firing rates of single neurons recorded from the center. Some of the main features of SM in V1 are as follows (see [36] and references therein): (i) SM is spatially extensive. For example, in primates, modulatory effects from the surround (both facilitatory and suppressive) can be evoked at least 12.5 degrees away from a neuron’s RF center. (ii) SM is tuned to specific stimulus parameters. The strongest suppression is induced by stimuli in the RF and surround of the same orientation, spatial frequency, drift direction, and speed, and weaker suppression or facilitation is induced by stimuli of orthogonal parameters (e.g., orthogonally oriented stimuli or stimuli drifting in opposite directions). (iii) SM is contrast dependent. Surround stimulation evokes suppression when the center and surround stimuli are of high contrast, but can be facilitatory when they are of low contrast.

One way to interpret the results of model B is to treat networks 1 and 2 as hypercolumns driven by center and surround stimuli, respectively. SM is then mediated by the horizontal connections that can have a net excitatory or inhibitory effect, depending on stimulus conditions. Here, for simplicity, we impose the sign of the horizontal connections by hand. However, one could develop a more detailed model that implements the switch between excitation and inhibition using, for example, high threshold interneurons [54]. The major prediction of our analysis is that whenever the surround modulation suppresses (facilitates) the center firing rate, the corresponding variance is facilitated (suppressed).

Extensions of the neural field model

One of the main simplifications of our neural field model is that we do not explicitly distinguish between excitatory and inhibitory populations. This is a common approach to the analysis of neural fields, in which the combined effects of excitation and inhibition are incorporated using, for example, Mexican hat functions [83–85]. In the case of the ring network, the spontaneous formation of population orientation tuning curves or bumps is implemented using a cosine function, which represents short-range excitation and longer-range inhibition around the ring. We note, however, that the methods and results presented in this paper could be extended to the case of separate excitatory and inhibitory populations, as well as different classes of interneuron, as has been demonstrated elsewhere for deterministic neural fields [27, 54]. One major difference between scalar and E-I neural fields is that the latter can also exhibit time-periodic solutions, which would add an additional phase variable associated with shifts around the resulting limit cycle. The effects of noise on limit cycle oscillators can be analyzed in an analogous fashion to wandering bumps [86, 87]. We also note that neural variability in a two-population (E-I) stabilized supralinear network has been analyzed extensively using linear algebra [20].

Another possible extension of our work would be to consider higher-dimensional neural fields. For example, one could replace the ring attractor on S1 by a spherical attractor on S2. In the latter case, marginally stable modes would correspond to rotations of the sphere. (Mathematically speaking, this corresponds to the action of the Lie group SO(3) rather than SO(2) for the circle.) One could generalize the Fourier analysis of the ring network by using spherical harmonics, as previously shown for deterministic neural field models of orientation and spatial frequency tuning in V1 [88, 89]. One could also consider a planar neural field with Euclidean-symmetric weights, for which marginally stable modes would be generated by the Euclidean group of rigid body transformations of the plane (translations, rotations and reflections). However, this example is more difficult since the marginally stable manifold is non-compact, and one cannot carry out a low-dimensional harmonic reduction. In order to obtain analytical results, one has to use Heaviside rate functions [30, 90].

A third possible extension would be to develop a more detailed model of the laminar structure of cortex. Roughly speaking, cortical layers can be grouped into input layer 4, superficial layers 2/3 and deep layers 5/6 [37, 91–93]. They can be distinguished by the source of afferents into the layer and the targets of efferents leaving the layer, the nature and extent of intralaminar connections, the identity of interneurons within and between layers, and the degree of stimulus specificity of pyramidal cells. In previous work, we explored the role of cortical layers in the propagation of waves of orientation selectivity across V1 [94], under the assumption that deep layers are less tuned to orientation. This suggests considering coupled ring networks that differ in their tuning properties. Another modification would be to consider asymmetric coupling between layers, both in terms of the range of coupling and its strength. Interestingly, the properties of SM also differ across cortical layers, suggesting different circuits and mechanisms generating SM in different layers. More specifically, surround fields in input layer 4 are smaller than in other layers, and SM is weaker and untuned for orientation. Moreover, SM is stronger and more sharply orientation-tuned in superficial layers compared to deep layers [36]. Therefore, it would be interesting to consider coupled ring networks that combine models A and B.

Spiking versus rate-based models

One final comment is in order. Neural variability in experiments is typically specified in terms of the statistics of spike counts over some fixed time interval, and compared to an underlying inhomogeneous Poisson process. Often Fano factors greater than one are observed. In this paper, we worked with stochastic firing rate models rather than spiking models, so that there is some implicit population averaging involved. In particular, we focused on the statistics of the variables uj(θ, t), which represent the activity of local populations of cells rather than of individual neurons, with f(uj) the corresponding population firing rate [30]. This allowed us to develop an analytically tractable framework for investigating how neural variability depends on stimulus conditions within the attractor model paradigm. In order to fit a neural field model to single-neuron data, one could generate spike statistics by taking f(uj) to be the rate of an inhomogeneous Poisson process. Since f(uj) is itself stochastic, this would result in a doubly stochastic Poisson process, which is known to produce Fano factors greater than unity [95]. Moreover, the various phenomena identified in this paper regarding stimulus-dependent variability would carry over to a spiking model, at least qualitatively. However, one should not expect a mean-field reduction to capture everything in a spiking model. For example, multivariate doubly stochastic Poisson processes can have correlations between their spike times in addition to the correlations induced by shared rate fluctuations. Spiking network models typically do produce these spike timing correlations that are not captured by most mean-field reductions, even those that account for correlated firing rate fluctuations [13, 15, 96–98]. These correlations could, in turn, affect auto-correlations and firing rates in the network.
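The over-dispersion argument in the last paragraph is easy to demonstrate: if the rate passed to a Poisson spike generator itself fluctuates from trial to trial, as f(uj) does here, the resulting spike counts have a Fano factor above one. The sketch below uses an arbitrary rate distribution purely for illustration.

```python
import numpy as np

# Doubly stochastic (Cox) spike counts: Poisson conditioned on a random rate.
# Var(count) = E[rate*T] + Var(rate*T), so Fano = Var/mean exceeds 1 whenever
# the rate fluctuates across trials. Numbers below are illustrative only.

rng = np.random.default_rng(4)
T = 0.1                                     # counting window (sec)
f0, trials = 100.0, 50_000                  # maximal rate 100 spikes/sec (as in the text)

rates = f0 * np.clip(0.5 + 0.2 * rng.standard_normal(trials), 0.0, 1.0)
counts = rng.poisson(rates * T)

print("Fano factor ~", round(float(counts.var() / counts.mean()), 2))   # > 1
```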

Materials and methods

We present the details of the derivation of the stochastic phase Eq (10).

Stationary bumps in a single uncoupled ring

First, suppose that there are no external inputs, no inter-network coupling (J12 = J21 = 0), and no noise (ϵ = 0). Each network can then be described by a homogeneous ring model of the form (47) Dropping the network label j, consider the trial solution u(θ, t) = U(θ) with U(θ) an even, unimodal function of θ centered about θ = 0. This could represent a direction tuning curve in MT (in the marginal regime) or a stationary bump encoding a spatial working memory. It follows that U(θ) satisfies the integral equation (48) Substituting the cosine series expansion (49) into the integral equation yields the even solution U(θ) = A cos θ with the amplitude A satisfying the self-consistency condition (50) The amplitude Eq (50) can be solved explicitly in the large gain limit γ → ∞, for which f(u) → H(u − η), where H is the Heaviside function [21]. In this limit one obtains a pair of solutions, corresponding to a marginally stable large-amplitude wide bump and an unstable small-amplitude narrow bump, consistent with the original analysis of Amari [90]. On the other hand, at intermediate gains, there exists a single stable bump rather than an unstable/stable pair of bumps, see Fig 8.
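The gain dependence described above (and plotted in Fig 8) can be checked by counting the nonzero crossings of A = RHS(A) in Eq (50). Under the same illustrative normalization and weight J0 used earlier, intermediate gain yields a single nonzero solution, while high gain yields an unstable/stable pair in addition to the stable zero state.

```python
import numpy as np

# Count nonzero solutions of the amplitude equation A = RHS(A) (Eq (50)) as the
# gain varies, mirroring the graphical construction of Fig 8. The normalization
# and J0 = 6.3 are the same illustrative assumptions as before.

eta, J0 = 0.5, 6.3
N = 4000
theta = np.linspace(-np.pi, np.pi, N, endpoint=False)
dtheta = 2 * np.pi / N

def rhs(A, gamma):
    f = 1.0 / (1.0 + np.exp(-gamma * (A * np.cos(theta) - eta)))
    return (J0 / (2 * np.pi)) * np.sum(np.cos(theta) * f) * dtheta

for gamma in [4.0, 20.0]:
    A = np.linspace(0.01, 3.0, 3000)                 # exclude the A = 0 solution
    g = np.array([rhs(x, gamma) - x for x in A])
    crossings = int(np.sum(np.diff(np.sign(g)) != 0))
    print(f"gamma = {gamma}: {crossings} nonzero crossing(s)")   # 1 vs 2
```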

Fig 8. Graphical solution of the bump amplitude Eq (50) for fixed synaptic weight and η = 0.5.

At intermediate gains (γ = 4) the zero solution is unstable and there exists a single stable bump. In the high gain limit (γ = 20) the zero solution is stable, and coexists with a small amplitude unstable bump and a large amplitude stable bump.

https://doi.org/10.1371/journal.pcbi.1006755.g008

Linear stability of the stationary solution can be determined by considering weakly perturbed solutions of the form u(θ, t) = U(θ) + ψ(θ)e^{λt} for |ψ(θ)| ≪ 1. Substituting this expression into Eq (47), Taylor expanding to first order in ψ, and imposing the stationary condition (48) yields the infinite-dimensional eigenvalue problem [27] (51) This can be reduced to a finite-dimensional eigenvalue problem by applying the expansion (49): (52) where (53) Substituting Eqs (52) into (53) then gives the matrix equation [21] (54) where the bracketed expressions (55) define linear functionals of any periodic function v(θ). Integrating Eq (50) by parts yields an identity that holds for A ≠ 0; combining this with the linearity of the functionals in v, together with a further integration by parts and the fact that U(θ) is even, Eq (54) now reduces to (56) which yields the pair of solutions (57) The zero eigenvalue is a consequence of the fact that the bump solution is marginally stable with respect to uniform shifts around the ring; the generator of such shifts is the odd function sin θ. The other eigenvalue λe is associated with the generator, cos θ, of expanding or contracting perturbations of the bump. Thus linear stability of the bump reduces to the condition λe < 0. This can be used to determine the stability of the pair of bump solutions in the high-gain limit [21]. (Note that there also exist infinitely many eigenvalues that are equal to −1, which form the essential spectrum. However, since they lie in the left-half complex λ-plane, they do not affect stability.)

A variety of previous studies have shown how breaking the underlying translation invariance of a homogeneous neural field by introducing a nonzero external input stabilizes wave and bump solutions to translating perturbations [21, 99–102]. For the sake of illustration, suppose that a weak θ-dependent input with a peak at θ = 0 is included in the deterministic version of Eq (1). Extending the previous analysis, one finds a stationary bump solution centered at θ = 0, with A satisfying a modified version of the implicit amplitude equation.

Again, this can be used to determine both the width and amplitude of the bump in the high-gain limit. Furthermore, the above analysis can be extended to establish that, for weak inputs, the bump is stable (rather than marginally stable) with respect to translational shifts [21].

Perturbation analysis

The amplitude-phase decompositions (βj, vj) defined by Eq (9) are not unique, so additional mathematical constraints are needed, and this requires specifying the allowed class of functions vj (the appropriate Hilbert space). We will take vj ∈ L2(S1), that is, vj(θ) is a periodic function with a finite L2 norm. Substituting the decomposition into the stochastic neural field Eq (1), using Ito’s lemma [103], introducing appropriate series expansions of vj, Taylor expanding the nonlinear function F, imposing the stationary solution (48), and dropping all O(ϵ) terms gives [21, 30], after dropping the zero index on vj,0, (58a) (58b) where the linear operators are defined by (59) and (60)

It can be shown that the operator has a 1D null space spanned by U′j. The fact that U′j belongs to the null space follows immediately from differentiating Eq (48) with respect to θ. Moreover, U′j is the generator of uniform translations around the ring, so that the 1D null space reflects the marginal stability of the bump solution. (Marginal stability of the bump means that the linear operator has a simple zero eigenvalue while the remainder of the discrete spectrum lies in the left-half complex plane. The spectrum is discrete since S1 is a compact domain.) This then implies a pair of solvability conditions for the existence of bounded solutions of Eq (58a), namely, that dvj is orthogonal to all elements of the null space of the adjoint operator. The corresponding adjoint operator is (61) Let Zj span the 1D adjoint null space. Now taking the inner product of both sides of Eq (58a) with respect to Zj and using translational invariance then yields the following SDEs to leading order: (62a) (62b) where (63) for Hj(β + 2π) = Hj(β), (64) and (65) Here wj(t) are scalar independent Wiener processes, with (66)

Note that stochastic phase equations similar to (62) were previously derived in [21, 22], except that the functions Hj(β) and Gj(β) were linearized, resulting in a system of coupled Ornstein-Uhlenbeck (OU) processes: (67a) (67b) for constant coefficients ν1, ν2, r1, r2. Properties of one-dimensional OU processes were then used to explore how the variance in the position of bump solutions depended on inter-network connections and statistical noise correlations. However, it should be noted that the variables βj(t) are phases on a circle (rather than positions on the real line), so that the right-hand side of Eq (67) should involve 2π-periodic functions. Therefore, the linear approximation only remains accurate on sufficiently short time scales for which the probability of either of the phases winding around the circle is negligible. In order to illustrate this point, consider an uncoupled OU process. A standard analysis shows that the variance approaches a constant ϵD/2νj in the large t limit, with a corresponding Gaussian stationary density [103]. Although the linear approximation is sufficient if one is interested in estimating the diffusivity Dj, which was the focus of [21, 22], it does not yield the correct steady-state distribution on the ring in the limit t → ∞. Indeed, for νj → 0, the density of the OU process converges point-wise to zero, whereas ρ(β, t) → 1/2π on the ring. In our paper, we are interested in the full steady-state densities rather than just the diffusivities Dj.

Evaluation of the functions Hj and Gj

In order to determine the functions Hj and Gj we need to obtain explicit expressions for the adjoint null vectors Zj. Applying the expansion (49) to the adjoint equation, with the adjoint operator defined by Eq (61), we can write Zj as a finite Fourier series with coefficients Cj and Sj [21]. Substituting this expression into the equations for Cj and Sj then leads to a matrix equation of the form (56) with λ = 0. One finds that Cj = 0 so that, up to scalar multiplications, Zj is given by Eq (68). Now substituting Zj into Eq (63), we have Eq (69), with coefficients given by Eq (70). Here we have used the fact that f″(Uj(θ)) is an even function of θ, so that the coefficient of the cosine term vanishes. The constant Γj can be calculated from Eq (65), see Eq (71), and it follows that Hj takes the explicit form (72).

The calculation of Gj depends on whether we consider model A or model B, see Fig 1. From Eqs (6), (60) and (64), we have Eq (73a) for model A, where we have used the stationary condition (8), together with Eq (73b). Similarly, from Eqs (7), (60) and (64), we have Eqs (74a) and (74b) for model B.

Evaluation of diffusion coefficients

Finally, from Eq (66), the diffusion coefficients Dj become Eq (75). One finds that the diffusivities decrease as the spatial correlation lengths of the noise increase. For example, in the case of spatially homogeneous noise, Dj = 0, since f′(Uj(θ)) is an even function of θ. On the other hand, for spatially uncorrelated noise, we have Eq (76).
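As a quick parity check on the homogeneous-noise case: if the noise increment is independent of θ, its projection onto the adjoint null vector is proportional to the mean of Zj over the ring. Assuming, consistent with Eq (68) and the evenness of f′(Uj), that Zj is an odd function of θ, this mean vanishes:

\[
D_j \;\propto\; \left( \int_{-\pi}^{\pi} Z_j(\theta)\, d\theta \right)^{2} = 0 .
\]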

In Results we adopt a specific choice of the noise correlations, for which the diffusion coefficient reduces to the expression given in Eq (77).

Numerical methods

All numerical simulations were performed in Matlab. One-dimensional simulations used a forward Euler scheme in time and the trapezoidal rule for integration with respect to θ. The time step was Δt = 0.001 and the orientation step was Δθ = 0.01π.
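As a concrete illustration of this scheme, the following MATLAB sketch simulates a single ring network with an Euler-Maruyama update in time and trapezoidal quadrature in θ. The ring kernel, sigmoidal rate function, input, and noise discretization are all illustrative assumptions rather than the parameter values used to generate the figures in Results.

% Minimal sketch of the simulation scheme described above (forward Euler in
% time, trapezoidal rule in theta); kernel, rate function, input and noise
% discretization are assumed for illustration only.
dt = 0.001; dth = 0.01*pi;                 % step sizes quoted in the text
theta = (-pi:dth:pi)';                     % grid on S^1 (endpoints identified)
N = numel(theta);
wts = dth*ones(N,1); wts([1 N]) = dth/2;   % trapezoidal quadrature weights
W = 4*cos(theta - theta');                 % assumed ring kernel w(theta-theta')
F = @(u) 1./(1 + exp(-8*(u - 0.5)));       % assumed sigmoidal rate function
h = 0.1*(1 + cos(theta));                  % assumed weakly biased input
epsn = 0.005;                              % assumed noise strength
U = cos(theta);                            % bump-like initial condition
for n = 1:round(10/dt)                     % integrate up to t = 10
    rec = W*(F(U).*wts)/(2*pi);            % recurrent drive via trapezoidal rule
    xi = randn(N,1); xi(N) = xi(1);        % independent noise at each node,
                                           % with theta = -pi, pi identified
    U = U + dt*(-U + rec + h) + sqrt(epsn*dt)*xi;   % Euler-Maruyama update
end
plot(theta, U); xlabel('\theta'); ylabel('U(\theta)')  % shifted bump profile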

References

  1. Shadlen MN, Newsome WT. Motion perception: Seeing and deciding. Proc. Natl. Acad. Sci. USA 1996; 93:628–633. pmid:8570606
  2. Arieli A, Sterkin A, Grinvald A, Aertsen A. Dynamics of ongoing activity: Explanation of the large variability in evoked cortical responses. Science 1996; 273:1868–1871. pmid:8791593
  3. Kenet T, Bibitchkov D, Tsodyks M, Grinvald A, Arieli A. Spontaneously emerging cortical representations of visual attributes. Nature 2003; 425:954–956. pmid:14586468
  4. Fiser J, Chiu C, Weliky M. Small modulation of ongoing cortical dynamics by sensory input during natural vision. Nature 2004; 431:573–578. pmid:15457262
  5. Kohn A, Smith MA. Stimulus dependence of neuronal correlation in primary visual cortex of the macaque. J. Neurosci. 2005; 25:3661–3673. pmid:15814797
  6. Cohen MR, Maunsell JH. Attention improves performance primarily by reducing interneuronal correlations. Nat. Neurosci. 2009; 12:1594–1600. pmid:19915566
  7. Mitchell JF, Sundberg KA, Reynolds JH. Differential attention-dependent response modulation across cell classes in macaque visual area V4. Neuron 2007; 55:131–141. pmid:17610822
  8. Mitchell JF, Sundberg KA, Reynolds JH. Spatial attention decorrelates intrinsic activity fluctuations in macaque area V4. Neuron 2009; 63:879–888. pmid:19778515
  9. Churchland MM, Yu BM, Cunningham JP, Sugrue LP, Cohen MR, et al. Stimulus onset quenches neural variability: a widespread cortical phenomenon. Nat. Neurosci. 2010; 13:369–378. pmid:20173745
  10. Ponce-Alvarez A, Thiele A, Albright TD, Stoner GR, Deco G. Stimulus-dependent variability and noise correlations in cortical MT neurons. Proc. Natl. Acad. Sci. USA 2013; 110:13162–13167. pmid:23878209
  11. Kohn A, Coen-Cagli R, Kanitscheider I, Pouget A. Correlations and neuronal population information. Annu. Rev. Neurosci. 2016; 39:237–256. pmid:27145916
  12. Ni AM, Ruff DA, Alberts JJ, Symmonds J, Cohen MR. Learning and attention reveal a general relationship between population activity and behavior. Science 2018; 359:463–465. pmid:29371470
  13. Litwin-Kumar A, Doiron B. Slow dynamics and high variability in balanced cortical networks with clustered connections. Nat. Neurosci. 2012; 15:1498–1505. pmid:23001062
  14. Deco G, Hugues E. Neural network mechanisms underlying stimulus driven variability reduction. PLoS Comput. Biol. 2012; 8:e1002395. pmid:22479168
  15. Doiron B, Litwin-Kumar A. Balanced neural architecture and the idling brain. Front. Comput. Neurosci. 2014; 8:56. pmid:24904394
  16. Mochol G, Hermoso-Mendizabal A, Sakata S, Harris KD, de la Rocha J. Stochastic transitions into silence cause noise correlations in cortical circuits. Proc. Natl. Acad. Sci. USA 2015; 112:3529–3534. pmid:25739962
  17. Abbott LF, Rajan K, Sompolinsky H. Interactions between intrinsic and stimulus-dependent activity in recurrent neural networks. In: Ding M, Glanzman D, eds. The Dynamic Brain: An Exploration of Neuronal Variability and Its Functional Significance. New York: Oxford University Press; 2011:65–82.
  18. Rajan K, Abbott LF, Sompolinsky H. Stimulus-dependent suppression of chaos in recurrent neural networks. Phys. Rev. E 2010; 82:011903.
  19. Wolf F, Engelken R, Touzel MP, Weidinger JDF, Neef A. Dynamical models of cortical circuits. Curr. Opin. Neurobiol. 2014; 25:228–236. pmid:24658059
  20. Hennequin G, Ahmadian Y, Rubin DB, Lengyel M, Miller KD. The dynamical regime of sensory cortex: stable dynamics around a single stimulus-tuned attractor account for patterns of noise variability. Neuron 2018; 98:846–860. pmid:29772203
  21. Kilpatrick ZP, Ermentrout GB. Wandering bumps in stochastic neural fields. SIAM J. Appl. Dyn. Syst. 2013; 12:61–94.
  22. Kilpatrick ZP. Interareal coupling reduces encoding variability in multi-area models of spatial working memory. Front. Comput. Neurosci. 2013; 7:82. pmid:23898260
  23. Kilpatrick ZP. Synaptic mechanisms of interference in working memory. Scientific Reports 2018; 8:7879. pmid:29777113
  24. Smith MA, Jia X, Zandvakili A, Kohn A. Laminar dependence of neuronal correlations in visual cortex. J. Neurophysiol. 2013; 109:940–947.
  25. Snyder AC, Morais MJ, Kohn A, Smith MA. Correlations in V1 are reduced by stimulation outside the receptive field. J. Neurosci. 2014; 34:11222–11227. pmid:25143603
  26. Ben-Yishai R, Hansel D, Sompolinsky H. Traveling waves and the processing of weakly tuned inputs in a cortical network module. J. Comput. Neurosci. 1997; 4:57–77. pmid:9046452
  27. Bressloff PC, Cowan JD. An amplitude approach to contextual effects in primary visual cortex. Neural Comput. 2002; 14:493–525. pmid:11860680
  28. Camperi M, Wang X-J. A model of visuospatial short-term memory in prefrontal cortex: recurrent network and cellular bistability. J. Comput. Neurosci. 1998; 5:383–405.
  29. Laing CR, Troy WC, Gutkin B, Ermentrout GB. Multiple bumps in a neuronal model of working memory. SIAM J. Appl. Math. 2002; 63:62–97.
  30. Bressloff PC, Webber MA. Front propagation in stochastic neural fields. SIAM J. Appl. Dyn. Syst. 2012; 11:708–740.
  31. Webber MA, Bressloff PC. The effects of noise on binocular rivalry waves: a stochastic neural field model. J. Stat. Mech. 2013; 3:P03001.
  32. Malach R, Amir Y, Harel M, Grinvald A. Relationship between intrinsic connections and functional architecture revealed by optical imaging and in vivo targeted biocytin injections in primate striate cortex. Proc. Natl. Acad. Sci. USA 1993; 90:10469–10473.
  33. Yoshioka T, Blasdel GG, Levitt JB, Lund JS. Relation between patterns of intrinsic lateral connectivity, ocular dominance and cytochrome oxidase reactive regions in macaque monkey striate cortex. Cerebral Cortex 1996; 6:297–310. pmid:8670658
  34. Yabuta NH, Callaway EM. Cytochrome oxidase blobs and intrinsic horizontal connections of layer 2/3 pyramidal neurons in primate V1. Vis. Neurosci. 1998; 15:1007–1027. pmid:9839966
  35. Lund JS, Angelucci A, Bressloff PC. Anatomical substrates for functional columns in macaque monkey primary visual cortex. Cerebral Cortex 2003; 13:15–24.
  36. Angelucci A, Bijanzadeh M, Nurminen L, Federer F, Merlin S, Bressloff PC. Circuits and mechanisms for surround modulation in visual cortex. Annu. Rev. Neurosci. 2017; 40:425–451. pmid:28471714
  37. Callaway EM. Local circuits in primary visual cortex of the macaque monkey. Annu. Rev. Neurosci. 1998; 21:47–74. pmid:9530491
  38. Zhang K. Representation of spatial orientation by the intrinsic dynamics of the head-direction cell ensemble: a theory. J. Neurosci. 1996; 16:2112–2126. pmid:8604055
  39. Compte A, Brunel N, Goldman-Rakic PS, Wang X-J. Synaptic mechanisms and network dynamics underlying spatial working memory in a cortical network model. Cereb. Cortex 2000; 10:910–923.
  40. Laing CR, Chow CC. Stationary bumps in networks of spiking neurons. Neural Comput. 2001; 13:1473–1494. pmid:11440594
  41. Chow CC, Coombes S. Existence and wandering of bumps in a spiking neural network model. SIAM J. Appl. Dyn. Syst. 2006; 5:552–574.
  42. Bressloff PC, Kilpatrick ZP. Nonlinear Langevin equations for the wandering of fronts in stochastic neural fields. SIAM J. Appl. Dyn. Syst. 2015; 14:305–334.
  43. Inglis J, MacLaurin JN. A general framework for stochastic traveling waves and patterns, with application to neural field equations. SIAM J. Appl. Dyn. Syst. 2016; 15:195–234.
  44. Bressloff PC, MacLaurin JN. Wandering bumps and stimulus-dependent variability in a stochastic neural field: a variational approach. Preprint 2019.
  45. Mardia KV, Jupp PE. Directional statistics. Wiley Series in Probability and Statistics. Chichester: John Wiley and Sons; second edition 2000.
  46. Hill N, Hader DP. A biased random walk model for the trajectories of swimming micro-organisms. J. Theor. Biol. 1997; 186:503–526. pmid:11536821
  47. Codling E, Hill N. Calculating spatial statistics for velocity jump processes with experimentally observed reorientation parameters. J. Math. Biol. 2005; 51:527–556. pmid:15868200
  48. Frank TD. Nonlinear Fokker-Planck equations: Fundamentals and applications. Springer Series in Synergetics. Berlin: Springer-Verlag; 2005.
  49. Mardia KV, Taylor CC, Subramaniam GK. Protein bioinformatics and mixtures of bivariate von Mises distributions for angular data. Biometrics 2007; 63:505–512. pmid:17688502
  50. Arandia-Romero I, Tanabe S, Drugowitsch J, Kohn A, Moreno-Bote R. Multiplicative and additive modulation of neuronal tuning with population activity affects encoded information. Neuron 2016; 89:1–12.
  51. Renart A, Song P, Wang X-J. Robust spatial working memory through homeostatic synaptic scaling in heterogeneous cortical networks. Neuron 2003; 38:473–485. pmid:12741993
  52. Kilpatrick ZP, Ermentrout B, Doiron B. Optimizing working memory with heterogeneity of recurrent cortical excitation. J. Neurosci. 2013; 33:18999–19011. pmid:24285904
  53. Angelucci A, Levitt JB, Walton EJS, Hupe JM, Bullier J, Lund JS. Circuits for local and global signal integration in primary visual cortex. J. Neurosci. 2002; 22:8633–8646. pmid:12351737
  54. Schwabe L, Obermayer K, Angelucci A, Bressloff PC. The role of feedback in shaping the extra-classical receptive field of cortical neurons: a recurrent network model. J. Neurosci. 2006; 26:9117–9129. pmid:16957068
  55. Sompolinsky H, Shapley R. New perspectives on the mechanisms for orientation selectivity. Curr. Opin. Neurobiol. 1997; 7:514–522. pmid:9287203
  56. Ferster D, Miller KD. Neural mechanisms of orientation selectivity in the visual cortex. Annu. Rev. Neurosci. 2000; 23:441–471. pmid:10845071
  57. Vidyasagar TR, Eysel UT. Origins of feature selectivities and maps in the mammalian primary visual cortex. Trends Neurosci. 2015; 38:475–485. pmid:26209463
  58. Priebe NJ. Mechanisms of orientation selectivity in the primary visual cortex. Annu. Rev. Vis. Sci. 2016; 2:85–107. pmid:28532362
  59. Hubel DH, Wiesel TN. Receptive fields, binocular interaction and functional architecture in the cat's visual cortex. J. Physiol. 1962; 160:106–154. pmid:14449617
  60. Reid RC, Alonso JM. Specificity of monosynaptic connections from thalamus to visual cortex. Nature 1995; 378:281–284. pmid:7477347
  61. Ferster D, Chung S, Wheat H. Orientation selectivity of thalamic input to simple cells of cat visual cortex. Nature 1996; 380:249–252. pmid:8637573
  62. Troyer TW, Krukowski AE, Priebe NJ, Miller KD. Contrast-invariant orientation tuning in cat visual cortex: thalamocortical input tuning and correlation-based intracortical connectivity. J. Neurosci. 1998; 18:5908–5927. pmid:9671678
  63. Finn IM, Priebe NJ, Ferster D. The emergence of contrast-invariant orientation tuning in simple cells of cat visual cortex. Neuron 2007; 54:137–152. pmid:17408583
  64. Sadagopan S, Ferster D. Feedforward origins of response variability underlying contrast invariant orientation tuning in cat visual cortex. Neuron 2012; 74:911–923. pmid:22681694
  65. Sillito AM. The contribution of inhibitory mechanisms to the receptive field properties of neurones in the striate cortex of the cat. J. Physiol. 1975; 250:305–329. pmid:1177144
  66. Douglas RJ, Koch C, Mahowald M, Martin KA, Suarez HH. Recurrent excitation in neocortical circuits. Science 1995; 269:981–985. pmid:7638624
  67. Ringach DL, Hawken MJ, Shapley R. Dynamics of orientation tuning in macaque primary visual cortex. Nature 1997; 387:281–284. pmid:9153392
  68. Schummers J, Marino J, Sur M. Synaptic integration by V1 neurons depends on location within the orientation map. Neuron 2002; 36:969–978. pmid:12467599
  69. Nauhaus I, Benucci A, Carandini M, Ringach DL. Neuronal selectivity and local map structure in visual cortex. Neuron 2008; 57:673–679. pmid:18341988
  70. Stimberg M, Wimmer K, Martin R, Schwabe L, Marino J, et al. The operating regime of local computations in primary visual cortex. Cereb. Cortex 2009; 19:2166–2180. pmid:19221143
  71. Koch E, Jin J, Wang Y, Kremkow J, Alonso JM, Zaidi Q. Cross-orientation suppression and the topography of orientation preferences. J. Vis. 2015; 15(12):1000.
  72. Ben-Yishai R, Lev Bar-Or R, Sompolinsky H. Theory of orientation tuning in visual cortex. Proc. Natl. Acad. Sci. USA 1995; 92:3844–3848. pmid:7731993
  73. Somers DC, Nelson S, Sur M. An emergent model of orientation selectivity in cat visual cortical simple cells. J. Neurosci. 1995; 15:5448–5465. pmid:7643194
  74. Troyer TW, Krukowski AE, Miller KD. LGN input to simple cells and contrast-invariant orientation tuning: an analysis. J. Neurophysiol. 2002; 87:2741–2752. pmid:12037176
  75. Carandini M, Ringach DL. Predictions of a recurrent model of orientation selectivity. Vis. Res. 1997; 37:3061–3071. pmid:9425519
  76. Sceniak MP, Hawken MJ, Shapley RM. Visual spatial characterization of macaque V1 neurons. J. Neurophysiol. 2001; 85:1873–1887. pmid:11353004
  77. Cavanaugh JR, Bair W, Movshon JA. Nature and interaction of signals from the receptive field center and surround in macaque V1 neurons. J. Neurophysiol. 2002; 88:2530–2546. pmid:12424292
  78. Shushruth S, Mangapathy P, Ichida JM, Bressloff PC, Schwabe L, Angelucci A. Strong recurrent networks compute the orientation-tuning of surround modulation in primate primary visual cortex. J. Neurosci. 2012; 32:308–321. pmid:22219292
  79. Adesnik H, Bruns W, Taniguchi H, Huang ZJ, Scanziani M. A neural circuit for spatial summation in visual cortex. Nature 2012; 490:226–231. pmid:23060193
  80. Henry CA, Joshi S, Xing D, Shapley RM, Hawken MJ. Functional characterization of the extraclassical receptive field in macaque V1: contrast, orientation, and temporal dynamics. J. Neurosci. 2013; 33:6230–6242. pmid:23554504
  81. Self MW, Lorteije JAM, Vangeneugden J, van Beest EH, Grigore ME, Levelt CN, Heimel JA, Roelfsema PR. Orientation-tuned surround suppression in mouse visual cortex. J. Neurosci. 2014; 34:9290–9304.
  82. Miller KD. Canonical computations of the cerebral cortex. Curr. Opin. Neurobiol. 2016; 37:75–84. pmid:26868041
  83. Wilson HR, Cowan JD. A mathematical theory of the functional dynamics of cortical and thalamic nervous tissue. Kybernetik 1973; 13:55–80. pmid:4767470
  84. Ermentrout GB. Neural networks as spatio-temporal pattern-forming systems. Rep. Prog. Phys. 1998; 61:353–430.
  85. Bressloff PC. Spatiotemporal dynamics of continuum neural fields. Invited topical review. J. Phys. A 2012; 45:033001.
  86. Ermentrout GB. Noisy oscillators. In: Laing CR, Lord GJ, eds. Stochastic Methods in Neuroscience. Oxford: Oxford University Press; 2009.
  87. Bressloff PC, MacLaurin JN. A variational method for analyzing stochastic limit cycle oscillators. SIAM J. Appl. Dyn. Syst. 2018; 17:2205–2233.
  88. Bressloff PC, Cowan JD. SO(3) symmetry breaking mechanism for orientation and spatial frequency tuning in visual cortex. Phys. Rev. Lett. 2002; 88:078102. pmid:11863943
  89. Bressloff PC, Cowan JD. Spherical model of orientation and spatial frequency tuning in a cortical hypercolumn. Phil. Trans. Roy. Soc. Lond. B 2003; 358:1643–1667.
  90. Amari S. Dynamics of pattern formation in lateral inhibition type neural fields. Biol. Cybern. 1977; 27:77–87. pmid:911931
  91. Douglas RJ, Martin KAC. A functional microcircuit for cat visual cortex. J. Physiol. 1991; 440:735–769. pmid:1666655
  92. Hirsch JA, Martinez LM. Laminar processing in the visual cortical column. Curr. Opin. Neurobiol. 2006; 16:377–384. pmid:16842989
  93. Neske GT, Patrick SL, Connors BW. Contributions of diverse excitatory and inhibitory neurons to recurrent network activity in cerebral cortex. J. Neurosci. 2015; 35:1089–1105. pmid:25609625
  94. Bressloff PC, Carroll SR. Laminar neural field model of laterally propagating waves of orientation selectivity. PLoS Comput. Biol. 2015; 11(10):e1004545. pmid:26491877
  95. Cox DR. Some statistical methods connected with series of events. J. R. Stat. Soc. B 1955; 17:129–164.
  96. Kriener B, Helias M, Rotter S, Diesmann M, Einevoll GT. How pattern formation in ring networks of excitatory and inhibitory spiking neurons depends on the input current regime. BMC Neuroscience 2013; 14(1):P123.
  97. Keane A, Henderson JA, Gong P. Dynamical patterns underlying response properties of cortical circuits. J. Roy. Soc. Interface 2018; 15(140):20170960.
  98. Huang C, Ruff DA, Pyle R, Rosenbaum R, Cohen MR, Doiron B. Circuit models of low-dimensional shared variability in cortical networks. Neuron 2019; 101:1–12.
  99. Folias SE, Bressloff PC. Breathing pulses in an excitatory neural network. SIAM J. Appl. Dyn. Syst. 2004; 3:378–407.
  100. Folias SE, Bressloff PC. Stimulus-locked traveling pulses and breathers in an excitatory neural network. SIAM J. Appl. Math. 2005; 65:2067–2092.
  101. Ermentrout GB, Jalics JZ, Rubin JE. Stimulus-driven traveling solutions in continuum neuronal models with a general smooth firing rate function. SIAM J. Appl. Math. 2010; 70:3039–3064.
  102. Veltz R, Faugeras O. Local/global analysis of the stationary solutions of some neural field equations. SIAM J. Appl. Dyn. Syst. 2010; 9:954–998.
  103. Gardiner CW. Handbook of Stochastic Methods, 4th edition. Berlin: Springer; 2009.