Abstract
Recurrent network models are instrumental in investigating how behaviorally-relevant computations emerge from collective neural dynamics. A recently developed class of models based on low-rank connectivity provides an analytically tractable framework for understanding how connectivity structure determines the geometry of low-dimensional dynamics and the ensuing computations. Such models however lack some fundamental biological constraints, and in particular represent individual neurons in terms of abstract units that communicate through continuous firing rates rather than discrete action potentials. Here we examine how far the theoretical insights obtained from low-rank rate networks transfer to more biologically plausible networks of spiking neurons. Adding a low-rank structure on top of random excitatory-inhibitory connectivity, we systematically compare the geometry of activity in networks of integrate-and-fire neurons to rate networks with statistically equivalent low-rank connectivity. We show that the mean-field predictions of rate networks allow us to identify low-dimensional dynamics at constant population-average activity in spiking networks, as well as novel non-linear regimes of activity such as out-of-phase oscillations and slow manifolds. We finally exploit these results to directly build spiking networks that perform nonlinear computations.
Author summary
Behaviorally relevant information processing is believed to emerge from interactions among neurons forming networks in the brain, and computational modeling is an important approach for understanding this process. Models of neuronal networks have been developed at different levels of detail, with typically a trade-off between analytic tractability and biological realism. The relation between network connectivity, dynamics and computations is best understood in abstract models where individual neurons are represented as simplified units with continuous firing activity. Here we examine how far the results obtained in an analytically-tractable class of rate models extend to more biologically realistic spiking networks where neurons interact through discrete action potentials. Our results show that abstract rate models provide accurate predictions for the collective dynamics and the resulting computations in more biologically faithful spiking networks.
Citation: Cimeša L, Ciric L, Ostojic S (2023) Geometry of population activity in spiking networks with low-rank structure. PLoS Comput Biol 19(8): e1011315. https://doi.org/10.1371/journal.pcbi.1011315
Editor: Peter E. Latham, UCL, UNITED KINGDOM
Received: November 25, 2022; Accepted: June 27, 2023; Published: August 7, 2023
Copyright: © 2023 Cimeša et al. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Data Availability: Code available at: https://github.com/LjubicaCimesa/Spiking-low-rank-networks.
Funding: The project was supported by the CRCNS project PIND (ANR-19-NEUC-0001-01 to SO), the program “Ecoles Universitaires de Recherche” launched by the French Government and implemented by the ANR, with the reference ANR-17-EURE-0017 (to LC and SO). The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.
Competing interests: The authors have declared that no competing interests exist.
Introduction
Recurrent network models are an essential tool for understanding how the collective dynamics of activity in the brain give rise to computations that underlie behavior. Network models at different levels of biological detail are typically used to describe different phenomena [1, 2], but integrating findings across scales of abstraction remains challenging. Networks of excitatory and inhibitory spiking neurons [3] are a popular class of models which incorporate the key biological fact that neurons interact through discrete action potentials, a.k.a. spikes. Balanced excitatory-inhibitory spiking networks in particular naturally lead to asynchronous irregular activity [4–12] that captures some of the main features of the spontaneous neural firing in vivo [13–19]. Beyond spontaneous activity, how rich behavioral computations are implemented in spiking networks has been an open issue [20–22]. This question has so far been more easily tackled in more abstract models such as recurrent neural networks (RNNs) [23–25], where individual units are represented in terms of continuous firing rates rather than discrete spikes. A particularly fruitful approach has been to interpret the emerging computations in terms of the geometry of dynamics in the state space of joint activity of all neurons [26–29], as commonly done with experimental data [30–35]. In particular, in a large class of rate networks in which the connectivity contains a low-rank structure [36–48], the geometry of activity and the resulting computations can be analytically predicted from the structure of connectivity [49–52]. A comparable mechanistic picture has so far been missing in spiking networks.
A key question is therefore to which extent mechanistic insights from RNNs extend to more biologically plausible spiking models. In this regard, a central underlying issue is exactly how spiking models are related to abstract rate networks [53, 54]. This question has been addressed in various specific cases [55–61], but a systematic mathematical reduction of arbitrary spiking networks to rate models has been elusive. One common heuristic has been to interpret each unit in a rate network as an average over a sub-population of spiking neurons [4, 62–69]. A possible alternative is instead to approximate each individual spiking neuron by a Poisson rate unit [70], and therefore hypothesize that a full spiking network can be directly mapped onto a rate network with identical connectivity [71–76]. If this hypothesis is correct, the analytic predictions for the geometry of activity in rate networks should directly translate to spiking networks with a low-rank structure in connectivity. This implies that the geometry of activity and range of dynamics in spiking networks may be much broader than apparent on the level of population-averaged spike trains.
To test this hypothesis, we consider a classical spiking network model consisting of excitatory-inhibitory integrate-and-fire neurons [7], and add low-rank structure on top of the underlying random, sparse connectivity. Varying the statistics of the low-rank structure, we systematically compare the geometry of activity and dynamical regimes in the spiking model with predictions of networks of rate units with statistically identical low-rank connectivity. We find that rate networks predict well the structure of activity in the spiking network even outside the asynchronous irregular regime, as long as spike-times are averaged over timescales longer than the synaptic and membrane time constants to estimate instantaneous firing rates. In particular, the predictions of the rate model allow us to identify low-dimensional dynamics at constant population-average activity in spiking networks, as well as novel non-linear regimes of activity such as out-of-phase oscillations and slow manifolds. We finally show that these results can be exploited to directly build spiking networks that perform nonlinear computations based on principles identified in rate networks.
Results
Geometry of the activity in the state space
We consider recurrent networks of N neurons, modeled either as rate units or leaky integrate-and-fire (LIF) neurons (Fig 1A, see Methods for details). We quantify the activity of each neuron i in terms of its time-dependent firing rate ri(t). In rate networks, each unit is described by the dynamics of its activation xi(t), an abstract variable usually interpreted as the total input or membrane potential [77], that obeys
$$\tau \frac{dx_i(t)}{dt} = -x_i(t) + \sum_{j=1}^{N} P_{ij}\,\phi(x_j(t)) + I_i\, u(t). \tag{1}$$
A: Illustration of recurrent neural network architecture, consisting of inputs and recurrent connectivity. B: Representation of inputs and connectivity in terms of vectors. The input weights form an input vector I. In spiking networks, the recurrent connectivity is composed of a sparse excitatory-inhibitory part (zero entries in white, excitatory connections in red, inhibitory in blue) and a low-rank structure defined by pairs of connectivity vectors m and n. The illustration shows a unit-rank example (R = 1). C: Left: Spike times of three neurons in the spiking network. Right: dynamics of instantaneous firing rates computed from spikes using an exponential filter with timescale τf = 100ms. D: Three-dimensional illustration of low-dimensional dynamics in the activity state space where each axis represents the firing rate of one neuron. In a unit-rank network, the activity is expected to be confined to a two-dimensional plane spanned by the vectors m and I. We refer to the direction (1, 1, …, 1) as the global axis (orange). E: Projections of activity on two axes: (top) global axis corresponding to the population-averaged firing rate; (bottom) axis defined by the input vector I.
Here Pij is the recurrent connectivity weight from unit j to unit i, u(t) is the input amplitude shared by all units, Ii is the weight of the external input on unit i, and the firing rate is obtained as ri(t) = ϕ(xi(t)), where ϕ(x) is the single-unit current-to-rate function that we here choose to be a positive sigmoid ϕ(x) = 1 + tanh(x − xoff). In the LIF network, single-unit firing rates are instead estimated from a running average over spike times, computed using an exponential filter with timescale τf (Fig 1C, Methods Eq (18)).
Following a common approach for analyzing neural data [32, 78], we represent the collective activity at any time point as a vector r(t) = {ri}i = 1…N in the activity state space where each axis corresponds to the firing rate ri of one neuron (Fig 1D). We then examine the geometry of the dynamical trajectories by projecting at each time point the activity vector r(t) along different directions in that space. Each direction is specified by a vector w = {wi}i = 1…N in state space, so that projecting onto it is equivalent to assigning to every neuron a weight wi and computing a weighted average of the activity (Methods Eq (20)).
Analyses of experimental data and works on rate networks commonly examine how collective activity changes along arbitrary directions in state space, where the weight wi of each neuron is chosen independently. In contrast, studies of spiking networks have often focused on firing rates averaged over the whole network or over specific sub-populations [4, 63–68]. Taking a population average over the whole network is equivalent to projecting activity along the direction (1, 1, …, 1), which we call the global axis [79]. Similarly, vectors with unit entries on a specific subset of neurons and zeros elsewhere define directions in state space that represent firing rates averaged over specific sub-populations. The goal of the present study is to instead examine how inputs and connectivity in spiking networks shape activity along arbitrary directions of state space, and in particular directions orthogonal to the global axis, which correspond to changes that modify the pattern of collective activity but keep the population-averaged activity constant. To this end, we compare the geometry of activity in spiking networks with that of rate networks that share an identical low-rank connectivity component P.
We specifically focus on rate networks with a low-rank connectivity matrix parametrized as
$$P_{ij} = \frac{1}{N} \sum_{r=1}^{R} m_i^{(r)} n_j^{(r)}, \tag{2}$$
where m(r) = {mi(r)}i = 1…N and n(r) = {ni(r)}i = 1…N for r = 1, …, R are connectivity vectors (Fig 1B). Rate networks with such a connectivity are analytically tractable, in the sense that the geometry of dynamics in state space can be directly predicted from the arrangement of connectivity vectors and inputs, as summarized below.
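As a concrete illustration, a matrix of this form can be generated in a few lines of NumPy. The sketch below uses placeholder dimensions and independent standard-Gaussian entries; the means, variances and covariances of the connectivity vectors are free parameters of the model.

```python
import numpy as np

rng = np.random.default_rng(seed=0)
N, R = 1000, 2

# Connectivity vectors m^(r), n^(r), one column per rank-one term; here
# their entries are independent standard Gaussians (placeholder statistics).
m = rng.standard_normal((N, R))
n = rng.standard_normal((N, R))

# Eq (2): P_ij = (1/N) sum_r m_i^(r) n_j^(r)
P = (m @ n.T) / N

print(np.linalg.matrix_rank(P))   # prints R = 2
```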
Our key hypothesis is that, for a spiking network in the asynchronous irregular state, each neuron can be directly mapped onto a unit in a rate network with statistically identical connectivity. To test this hypothesis, we start from an LIF network with sparse excitatory-inhibitory connectivity JEI, and choose parameters in the inhibition-dominated regime that leads to asynchronous irregular activity [7] (Methods). We add to this random component a low-rank part P given by Eq (2). We then compare the geometry of the resulting spiking activity to the predictions of a rate model with low-rank connectivity P.
Geometry of responses to external inputs
We start by examining the geometry of transient dynamics in response to external inputs. We first summarize the predictions of low-rank rate models developed in previous studies [49]. We then examine whether these predictions hold in networks of integrate-and-fire models with statistically identical low-rank components.
Each neuron i receives a step input u(t) (Fig 2B) multiplied by a weight Ii. The set of feed-forward weights Ii over neurons forms an input vector I = {Ii}i = 1…N. This input vector, as well as the connectivity vectors m(r) and n(r) introduced in Eq (2), each define a specific direction in state space (Fig 2A). Previous work [49, 51] has shown that in rate networks with low-rank connectivity, the dynamics of the activations x(t) = {xi(t)} in response to an input are confined to a subspace of state space spanned by the input vector I and connectivity vectors m(r) for r = 1…R. Focusing on a unit rank network (R = 1), this implies that the activation xi(t) of unit i in the rate network can be expressed as
$$x_i(t) = \kappa(t)\, m_i + v(t)\, I_i, \tag{3}$$
where κ(t) and v(t) are two scalar variables (Methods). The variable v(t) represents feed-forward activity propagated along the direction I, while κ(t) quantifies activity that recurrent dynamics generate along the direction m. An input along the direction I will generate a non-zero recurrent response κ(t) only if I has a non-zero overlap with the vector n, i.e., if the scalar product nTI is non-zero [49].
A: Illustration of the geometry in the activity state-space. The input vector I, the connectivity vectors m and n (green), and the global axis (1, 1, …, 1) (orange) define a set of directions and a subspace within which the low-dimensional dynamics unfold. The overlaps of the vectors I and m with the global axis predict whether inputs give rise to a change in the population-averaged activity. The overlap of I with n instead determines whether an input engages recurrent activity along the direction m. The three columns display three different arrangements of the input vector (depicted in a different color in each column). Left: I aligned with the global axis; middle: I orthogonal to both n and the global axis; right: I aligned with n, but I and m orthogonal to the global axis. B: Input vector is multiplied by a scalar u(t) which is a step function from t = 1s. C: Individual firing rates ri(t) for a subset of 10 neurons in each network. D: Population firing rate, averaged over all neurons in the network. E: Projections of the firing rate trajectory r(t) onto the (I,m) plane. F: PCA analysis of the firing rate dynamics r(t). Variance explained by each of the first 8 PCs. Inserts: Projections of the first two PCs onto the global axis, the input vector I and the connectivity vector m. The connectivity vectors m and n have a zero mean and unit standard deviation, and are orthogonal to each other. Vectors n and I are orthogonal except in blue where the overlap is σnI = 1. Vectors I in gray and blue have a zero mean and unit standard deviation, while vector I in purple is along the global axis. Network parameters are given in Table 1.
Additional analyses show that the low-dimensional geometry described by Eq (3) at the level of activations x(t) is largely preserved when applying the non-linear function ϕ(x) to obtain rates (Methods). More specifically, the projection of the firing rates r(t) = {ri(t)} on an arbitrary axis w is determined by the projection of x(t) on that same axis if the response is weak and therefore approximately linear, or if the entries wi of w follow a Gaussian distribution (Eq (34)), which we assume throughout this study. In low-rank rate networks, the dynamics of r(t) in response to an input therefore dominantly lie in the subspace spanned by the input and connectivity vectors I and m (Fig 2E), with the non-linearity generating a potential additional component along the global axis (Methods Eq (34)). These theoretical predictions were confirmed by a PCA analysis of simulated trajectories of firing rates r(t) (Fig 2F).
Since the population-averaged firing rate is obtained by projecting r(t) on the global axis (1, 1, …, 1), the analysis of low-rank rate models predicts that a given input induces a strong change of population-averaged firing rates when the mean of input weights over neurons, 〈I〉, is non-zero (Fig 2D, left panel), or if both the average 〈m〉 of elements of m and the overlap nTI are non-zero. Conversely, if 〈I〉 = 0 and 〈m〉 = 0, inputs evoke changes in single-unit firing rate that essentially average-out on the population-average level (Fig 2D, middle and right panel), but instead explore the plane I − m that is orthogonal to the global axis in state space (Fig 2E, middle and right panel).
We compared these predictions of low-rank rate models to the geometry of activity in spiking networks where a low-rank structure P was added on top of sparse excitatory-inhibitory connectivity JEI (Fig 3). Note that, because of the 1/N scaling in Eq (2), the magnitude of elements of P was much smaller than the non-zero elements of JEI that were independent of N [7]. The excitatory-inhibitory part of the connectivity therefore controlled the firing regime of the network. Starting from a network in the inhibition-dominated asynchronous irregular state [7], as expected inputs evoked a change in population-averaged firing rates if the mean of the input vector 〈I〉 was non-zero (Fig 3D, left panel). Input vectors of zero mean instead elicited patterns of responses across neurons that did not modify the population-averaged firing rate (Fig 3D, middle and right panel), but explored directions in state space orthogonal to the global axis. These directions were accurately predicted by low-rank rate models: input vectors I orthogonal to the vector n led to responses only along the direction I (Fig 3E, left and middle panel), while inputs that overlapped with n led to responses in the I − m plane (Fig 3E, right panel). A PCA analysis confirmed that these low-dimensional projections explained the dominant part of variance in the full trajectories (Fig 3F).
A: Illustration of the geometry of input (varying color) and connectivity vectors (green) with respect to the global axis (orange). Left: input vector I along the global axis; middle: input vector I orthogonal to n; right: input vector I along the vector n. B: Input vector is multiplied by a scalar u(t) which is a step function from t = 1s. C: Raster plot showing action potentials for a subset of 30 neurons out of N = 12500 in each network. D: Population firing rate obtained by averaging instantaneous firing rates of all neurons. E: Projections of the firing rate trajectory r(t) onto the (I, m) plane. F: PCA analysis of firing rate dynamics r(t). Variance explained by each of the first 8 PCs. Inserts: Projections of the first 3 PCs onto the global axis (first row), and vectors I and m. The connectivity vectors m and n have a zero mean and unit standard deviation, and are orthogonal to each other. Vectors n and I are orthogonal except in blue where the overlap is nTI/N = 0.4 mV². Vectors I in gray and blue have a zero mean and unit standard deviation, while vector I in purple is along the global axis. All analyses were performed on instantaneous firing rates computed using a filter timescale of τf = 100ms. Network parameters are given in Table 2.
Because the low-rank structure P is generated using Gaussian connectivity vectors, the full connectivity matrix obtained by superposing it with the random component JEI does not obey Dale’s law and is not sparse. The entries of P are however much weaker (standard deviation σmσn/N) than the non-zero elements of JEI. Setting to zero the entries of P for which JEI is zero was therefore sufficient to enforce Dale’s law in the full connectivity matrix. The resulting sparsified matrix P is not low-rank anymore, but for rate networks the predictions of low-rank theory still hold up to high values of sparsity [80]. We verified that the results in the spiking network were unchanged when including sparsity in the low-rank term and Dale’s law in the full connectivity (Fig 4).
The connectivity consisted of the superposition of the random term JEI and a sparsified unit-rank part P, in which we set to zero the entries of P for which the corresponding entry of JEI is zero. The input vector I is along the vector n. A: Raster plot showing action potentials for a subset of 30 neurons. B: Population firing rate obtained by averaging instantaneous firing rates of all neurons. C: Projection of the firing rate trajectory r(t) onto the (I, m) plane. D: PCA analysis of firing rate dynamics r(t). Variance explained by each of the first 8 PCs. Network parameters are given in Table 2.
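The sparsification procedure itself is straightforward; the following sketch illustrates it with toy parameter values (not those of Table 2):

```python
import numpy as np

rng = np.random.default_rng(seed=1)
N, f_c, g, J = 1000, 0.1, 5.0, 0.1      # toy values
N_E = int(0.8 * N)                       # 80% excitatory neurons

# Sparse E-I matrix: connections present with probability f_c; excitatory
# presynaptic neurons contribute J, inhibitory ones -g*J.
mask = rng.random((N, N)) < f_c
strengths = np.where(np.arange(N) < N_E, J, -g * J)
J_EI = mask * strengths[None, :]

# Unit-rank structure; entries are O(1/N), much weaker than J and g*J
m, n = rng.standard_normal(N), rng.standard_normal(N)
P = np.outer(m, n) / N

# Zero the low-rank entries where no E-I synapse exists: the sum then
# keeps the sparsity of J_EI, and (since |P_ij| << J) its signs too.
P_sparse = np.where(J_EI != 0, P, 0.0)
J_full = J_EI + P_sparse
```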
Altogether, the predictions of low-rank rate models were fully borne out when treating each individual spiking neuron as a rate unit.
Responses in spiking networks outside of the asynchronous irregular regime
Our initial hypothesis was that low-rank rate networks predict well the geometry of responses of spiking networks in the asynchronous irregular regime, where individual neurons can be approximated as independent Poisson processes [7]. We next asked to which extent the predictions hold outside of this regime, when the spiking activity is either not asynchronous, i.e. exhibits some degree of synchronization and oscillations [7], or is regular rather than irregular. Following Brunel [7], we set the network to operate in a specific regime by varying the strength of the inhibition g in the random part of the connectivity, the external input μext and the synaptic delay τdel (Methods). We then examined how much the underlying regime influences the low-dimensional dynamics in response to external inputs in networks with a unit-rank structure. For this, we repeated the PCA analysis in spiking networks with zero-mean input and connectivity vectors, and I overlapping with n as in the right column of Fig 2.
We first considered a network of integrate-and-fire neurons that operates in the synchronous irregular (SI) regime [7] in which individual neurons fire irregularly (Fig 5A, top), but are sparsely synchronized, leading to oscillations in the population rate (Fig 5A, bottom). The frequency of these oscillations is set by the synaptic delay τdel, and is therefore high for physiologically realistic values of τdel [7]. These oscillations can therefore only be observed in the firing rates when the filter timescale τf used for averaging over spikes is comparable to the delays, i.e. of the order of milliseconds (Fig 5A, blue). Longer filter timescales instead totally average-out the oscillatory dynamics (Fig 5A, orange). We therefore found that the dimensionality and geometry of the responses in state space depend on the filter timescale used to determine single-unit firing rates (Fig 5B). Performing a PCA analysis on firing rate trajectories r(t) obtained with a filter timescale of 1ms indicated that the activity was high-dimensional. Indeed, the explained variance was distributed along many principal components (Fig 5B, top), with the first PC capturing population-level oscillations along the global axis (insert in Fig 5B, top panel), while strong fluctuations were present in other directions (Fig 5C). In contrast, for a filter timescale of 100ms the first PC explained a much larger fraction of variance (Fig 5B, bottom), and corresponds instead to activity along a combination of the connectivity vector m and the input vector I (insert in Fig 5B, bottom), as predicted by the rate model (insert in Fig 2F, blue). In between these two extremes, progressively increasing the filter timescale (Fig 5C, bottom) shows that for timescales below 10ms, the geometry of activity is dominated by fluctuations along the global axis, while for longer timescales the dynamics are lower-dimensional and reside dominantly in the (m, I) plane as expected from the rate network (Fig 2E and 2F, blue).
A-C: Synchronous irregular (SI) regime. A: Top: Raster plot showing action potentials for a subset of 30 neurons in the network. Bottom: population-averaged firing rate computed using filter time constants of 1ms (blue) and 100ms (orange). B: PCA analysis of trajectories of instantaneous firing rates computed from spike trains using two different filter time constants (top: 1ms, bottom: 100ms). Main panels: variance explained by each of the first 8 PCs; inserts: projections of the first 3 principal components on the global vector, I and m. C: Top: Projections of the firing rate trajectories on the plane defined by vectors m and I. Bottom: Projection of the first principal component on the global axis (black) and on the vector m (green) as a function of the filter time constant. D-F: Similar to A-C, for the network in asynchronous irregular regime shown in the right column of Fig 3. G-I: Similar to A-C, for a network without the background E-I connectivity. The firing regime was controlled by varying the inhibition strength in the random EI connectivity, the baseline input and synaptic delays (see Table 2). The unit-rank connectivity structure was identical to Fig 3 right column, with zero-mean input and connectivity vectors. At time t = 1s, a step input was given along the input vector I that was aligned with n. Network parameters are given in Table 2.
Given the strong influence of the filter timescale on the results, we repeated the same analysis in the asynchronous irregular (AI) regime, which in Fig 3 was investigated only using a long timescale of 100ms. We found that the results of the PCA were similar to the SI regime: fluctuations along the global axis dominated at timescales below 10ms, and low-dimensional dynamics predicted by the rate model emerged at longer timescales (Fig 5D–5F). The main difference between the AI and SI regimes is that the global fluctuations at short timescales are weaker in the AI regime (with an amplitude that decays as the network size is increased), and do not show the periodic structure found in the SI regime (Fig 5D).
We then examined the role of irregular activity, by turning to networks in which the connectivity consisted only of a low-rank structure without the random E-I part. As noted above, on the level of individual synapses this removed the dominant part of the connectivity, as the magnitude of non-zero terms in JEI was much larger than the magnitude of terms in P. In such networks, individual neurons fired almost periodically, in contrast to Poisson-like activity in the AI regime. The action potentials of the different neurons were however highly asynchronous (Fig 5G top), and the fluctuations in the population activity were weak even for filter timescales of 1ms (Fig 5G bottom). Similarly to the SI and AI regimes, the dynamics in this network became low-dimensional for long filter timescales (Fig 5H and 5I), but the projection along m was higher for all filter timescales, and saturated above 10ms (Fig 5I bottom).
In summary, our analyses indicate that the predictions of the rate networks for the geometry of responses hold in different activity regimes in the spiking network if the single neuron firing rates are determined by averaging action potentials on a timescale longer than the synaptic delays. At shorter timescales, the activity is dominated by spiking synchronization that leads to prominent fluctuations along the global axis which corresponds to the population-averaged firing rate.
Nonlinear autonomous activity in networks with unit-rank structure
In previous sections we studied the geometry of dynamics in response to external inputs. We next turned to autonomous dynamics generated by the recurrent connectivity in the absence of inputs. As before, our goal was to determine whether the dynamics in a spiking network with low-rank connectivity are well predicted by a rate network with an analogous low-rank structure in the connectivity. We first summarize the results for rate networks developed in earlier studies, and then compare dynamics in spiking networks with these predictions.
In a rate network with unit-rank structure, in the absence of time-varying external inputs, the low-dimensional dynamics in Eq (3) are described only by the recurrent variable κ(t). The temporal evolution of κ(t) obeys (Methods):
$$\tau \frac{d\kappa(t)}{dt} = -\kappa(t) + \frac{1}{N} \sum_{j=1}^{N} n_j\, \phi(x_j(t)). \tag{4}$$
The steady-state value of κ therefore satisfies:
$$\kappa = \frac{1}{N} \sum_{i=1}^{N} n_i\, \phi(x_i), \tag{5}$$
where xi = κmi is the steady-state value of xi(t).
Assuming as previously a Gaussian distribution of the entries (mi, ni) of the connectivity vectors, and using a mean-field analysis in the large N limit, Eq (5) can be further expressed as (Methods Eq (40))
$$\kappa = \langle n\rangle\, \langle\phi(\mu, \Delta)\rangle + \sigma_{mn}\, \kappa\, \langle\phi'(\mu, \Delta)\rangle, \tag{6}$$
where 〈m〉, 〈n〉 and σmn are the mean values and covariance of connectivity vectors m and n, while 〈ϕ(μ, Δ)〉 and 〈ϕ′(μ, Δ)〉 are the mean firing rate and mean gain obtained by averaging the transfer function and its derivative over a Gaussian distribution of mean μ = 〈m〉κ and variance Δ = σm²κ² (Methods Eqs (40) and (51)). Eq (6) provides a self-consistent equation for the steady state value of κ, which enters implicitly in the r.h.s. through μ and Δ. The two terms in the r.h.s. can therefore be interpreted as two different sources of feedback, a first one controlled by the mean values 〈m〉, 〈n〉, and a second one controlled by the covariance σmn between m and n. Previous works analyzed the bifurcations in networks with a symmetric transfer function [49], or a positive transfer function with non-zero 〈m〉 and 〈n〉 [49, 81]. The respective contributions of the two sources of feedback in networks with a positive transfer function have so far not been examined.
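In practice, Eq (6) can be solved numerically by evaluating the Gaussian averages of Eq (31) with Gauss-Hermite quadrature and scanning for self-consistent values of κ. The sketch below does this for the zero-mean case in which feedback is generated by the covariance σmn alone; the transfer-function offset and the covariance value are placeholders.

```python
import numpy as np
from numpy.polynomial.hermite import hermgauss

z, wq = hermgauss(101)                   # Gauss-Hermite nodes and weights
x_off = 1.0                              # placeholder offset of phi

phi = lambda x: 1.0 + np.tanh(x - x_off)
dphi = lambda x: 1.0 / np.cosh(x - x_off) ** 2

def gauss_avg(f, mu, delta):
    # <f(mu, Delta)>: average of f over a Gaussian N(mu, Delta), Eq (31)
    return np.sum(wq * f(mu + np.sqrt(2.0 * delta) * z)) / np.sqrt(np.pi)

def rhs(kappa, mean_m, mean_n, var_m, cov_mn):
    # right-hand side of the self-consistent Eq (6)
    mu, delta = mean_m * kappa, var_m * kappa ** 2
    return (mean_n * gauss_avg(phi, mu, delta)
            + cov_mn * kappa * gauss_avg(dphi, mu, delta))

# Zero-mean vectors, feedback through the covariance sigma_mn alone:
ks = np.linspace(-5.0, 5.0, 2000)        # grid avoiding kappa = 0 exactly
f = np.array([rhs(k, 0.0, 0.0, 1.0, 2.6) - k for k in ks])
roots = ks[:-1][np.sign(f[:-1]) != np.sign(f[1:])]
print(roots)   # crossings near kappa = 0 plus two symmetric fixed points
```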
To extend previous studies, we therefore analyzed the bifurcations obtained by separately increasing each source of feedback in Eq (6) in networks with a positive transfer function. For σmn = 0, the feedback is generated only by the first term, and we controlled it by changing 〈n〉 while keeping 〈m〉 fixed. As the non-linearity in that term is given by 〈ϕ〉(κ), which is a positive sigmoid (Fig 6A, insert), increasing 〈n〉 beyond a critical value leads to a bifurcation to two asymmetric states with low and high values of κ (Fig 6A). Since the mean 〈m〉 of the vector m is non-zero, these two values of κ correspond to two states with a low and a high population-averaged firing rate (Fig 6C), as usually found when positive feedback is high [49, 62, 81–83].
A-F: Rate networks. A-C: Connectivity vectors m and n with non-zero means 〈m〉, 〈n〉, and zero covariance σmn. A: Fixed points of the collective variable κ as a function of the overlap nTm/N, low (black) and high (red) activity state. Insert: RHS of the equation for dκ/dt (Eq (6)), κ (yellow) and 〈n〉〈ϕ〉(κ) (gray), shown for the overlap nTm/N = 10. Fixed points (red dots) correspond to the intersections of κ and 〈n〉〈ϕ〉(κ), which is a positive function. The bifurcation therefore leads to a low and a high state. B: Illustration of the single-unit firing rates in the two states when nTm/N = 10 (dashed line in A, green) for 100 units. Top: low activity state. Bottom: high activity state. C: Population-averaged firing rate as a function of nTm/N. D-F: same as A-C, for connectivity vectors m and n with zero means 〈m〉, 〈n〉, and non-zero covariance σmn. D: Fixed points of the collective variable κ as a function of the overlap nTm/N. Insert: RHS of the equation for dκ/dt (Eq (6)), κ (yellow) and κ〈ϕ′〉(κ) (gray), shown for the overlap nTm/N = 11.2. Fixed points (red dots) correspond to the intersections of κ and κ〈ϕ′〉(κ), both odd functions of κ. The bifurcation therefore leads to two symmetric states (red and blue) on top of the low activity state. E: Illustration of the single-unit firing rates in the two symmetric states. F: Population-averaged firing rate as a function of nTm/N. G-L: Simulations of the spiking network. G-I: connectivity vectors m and n with non-zero means 〈m〉, 〈n〉 and zero covariance σmn. G: bifurcation to low and high states as 〈n〉 is increased. H: raster plots of the spiking activity in the two states when nTm/N = 1.35mV (dashed line in G, green) for 20 neurons. Top: activity of 20 neurons in the high state. Bottom: activity of all (12500) neurons in the low state. The activity in the low state is highly sparse [7]. I: population-averaged firing rate in the two states. J-L: same as G-I, for connectivity vectors m and n with zero means 〈m〉, 〈n〉 and non-zero covariance σmn. J: bifurcation to two symmetric states as σmn is increased. K: raster plots of the spiking activity in the two states when nTm/N = 32mV (dashed line in J, green) for 20 neurons. L: population-averaged firing rate in the two states. Dots: simulations, lines: Monte Carlo integration predictions. Network parameters are shown in Tables 3 and 4.
In contrast, when 〈m〉 = 〈n〉 = 0 and σmn ≠ 0, the recurrent feedback is generated only by the second term in Eq (6), for which the non-linearity is given by κ〈ϕ′(0, Δ)〉. Independently of the precise form of ϕ, κ〈ϕ′(0, Δ)〉 is in general an odd function of κ (Fig 6D, insert). In consequence, increasing σmn beyond a critical value leads to the emergence of two symmetric fixed points for κ (Fig 6D), which correspond to two activity states with different patterns of activity (Fig 6E), but identical population-averaged firing rates (Fig 6F).
In summary, a mean-field analysis of rate networks with unit-rank connectivity predicts two qualitatively different types of bifurcations and bistable states depending on whether the connectivity vectors m and n have zero or non-zero mean. We therefore examined whether these two types of bifurcations appeared when increasing the overlap between n and m in spiking networks with unit-rank connectivity added on top of a random EI background. Increasing 〈n〉 with non-zero 〈m〉 and zero σmn is in fact equivalent to increasing the mean excitation in the underlying EI connectivity [81]. In agreement with previous studies [7], we found that this could lead to the emergence of an asymmetric bistability between a low and a high average activity state (Fig 6G–6I). Increasing σmn in networks with 〈m〉 = 〈n〉 = 0 instead gives rise to a bifurcation to two symmetric activity patterns with equal population-averaged firing rates (Fig 6J–6L). The predictions of the mean-field analysis in low-rank rate networks were therefore directly verified in spiking networks, and allowed us to identify a novel bifurcation to two symmetric states of activity. As for transient dynamics, these findings held also outside of the asynchronous irregular regime, when the neurons were sparsely synchronized or fired regularly.
Geometry of nonlinear autonomous activity in rank-two networks
Going beyond unit-rank connectivity, we next examined non-linear autonomous dynamics in networks with a rank-two structure. As before, we first summarize the analyses of rate networks performed in previous studies, and then compare the dynamics in spiking networks with those predictions.
A rank-two connectivity structure is defined by two pairs of vectors (m(1), n(1)) and (m(2), n(2)):
$$P_{ij} = \frac{1}{N} \left( m_i^{(1)} n_j^{(1)} + m_i^{(2)} n_j^{(2)} \right). \tag{7}$$
In the absence of external inputs, the activation dynamics x(t) are confined to the two-dimensional plane spanned by the vectors m(1) and m(2), so that, in analogy to Eq (3), the activation xi of unit i can be expressed as:
$$x_i(t) = \kappa_1(t)\, m_i^{(1)} + \kappa_2(t)\, m_i^{(2)}. \tag{8}$$
Here κ1(t) and κ2(t) are two collective variables that describe the projection of x on the connectivity vectors m(1) and m(2).
Previous works [50, 51] have shown that in low-rank rate networks with Gaussian connectivity vectors, non-linear dynamics are fully determined by the eigenspectrum of the connectivity matrix. A rank-R matrix defined as in Eq (2) has in general R non-zero eigenvalues, that coincide with the eigenvalues of the R × R overlap matrix Pov obtained from scalar products between pairs of connectivity patterns [50]:
$$P^{ov}_{rs} = \frac{1}{N}\, n^{(r)T} m^{(s)}, \qquad r, s = 1 \ldots R. \tag{9}$$
For rank-one networks, the overlap matrix reduces to a single parameter given in Eq (39), while for rank-two networks it is a 2 × 2 matrix. In the following, we focus on connectivity vectors with zero-mean entries, in which case for large N the overlap matrix converges to
$$P^{ov} = \begin{pmatrix} \sigma_{n_1 m_1} & \sigma_{n_1 m_2} \\ \sigma_{n_2 m_1} & \sigma_{n_2 m_2} \end{pmatrix}, \tag{10}$$
where σnrms is the covariance between the entries of vectors n(r) and m(s). A mean-field analysis then predicts that such networks have a fixed point at (0, 0), the stability of which is determined by the eigenvalues of ϕ′(0)Pov (Methods).
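The convergence of the empirical overlap matrix of Eq (9) to the covariance form of Eq (10) is easy to verify numerically. In the sketch below, zero-mean Gaussian vectors are mixed linearly so that their covariances realize the complex-eigenvalue structure introduced next (Eq (11)); the values of σ and σw are placeholders and the vector variances are not normalized.

```python
import numpy as np

rng = np.random.default_rng(seed=2)
N = 20000
sig, sig_w = 1.2, 1.0                  # placeholder covariance parameters

# Zero-mean Gaussian vectors whose covariances realize Eq (11):
# cov(n1, m1) = cov(n2, m2) = sig, cov(n1, m2) = -sig_w, cov(n2, m1) = sig_w
m1, m2 = rng.standard_normal(N), rng.standard_normal(N)
n1 = sig * m1 - sig_w * m2 + rng.standard_normal(N)
n2 = sig_w * m1 + sig * m2 + rng.standard_normal(N)

# Empirical overlap matrix, Eq (9): entries n^(r)T m^(s) / N
P_ov = np.array([[n1 @ m1, n1 @ m2],
                 [n2 @ m1, n2 @ m2]]) / N
print(np.linalg.eigvals(P_ov))         # approx sig +/- 1j*sig_w
```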
We specifically examined connectivity structures with two different forms of the overlap matrix, that lead to different configurations of eigenvalues and thereby generate qualitatively different types of nonlinear dynamics in rate networks [51].
We first consider rank-two networks with overlap matrices of the form:
$$P^{ov} = \begin{pmatrix} \sigma & -\sigma_w \\ \sigma_w & \sigma \end{pmatrix}. \tag{11}$$
Such matrices have two complex conjugate eigenvalues σ ± iσw. A mean-field analysis predicts spiral dynamics around the origin that decay to zero if ϕ′(0)σ < 1, or converge to a limit cycle in the m(1) − m(2) plane if ϕ′(0)σ > 1, with ϕ′(0)σw setting the frequency of the oscillations. Simulations of rate networks for ϕ′(0)σ > 1 show that the firing rates of individual units oscillate strongly (Fig 7A, top), but out of phase, so that oscillations are not visible at the level of the population average (Fig 7A, bottom). Projecting r(t) on the m(1) − m(2) plane however uncovers a clear limit cycle (Fig 7B) that corresponds to oscillations of κ1(t) and κ2(t) (Fig 7C).
A-F: Connectivity structure with two complex-conjugate eigenvalues. A-C: Rate networks. A: Top: Illustration of the single-unit firing rates for the first 100 neurons. Bottom: Population-averaged firing rate. B: Projections of the firing rates r(t) on the m(1) − m(2) plane. Insert: overlap matrix. C: Projections of the firing rates r(t) on vectors m(1) and m(2) as a function of time. D-F: Analogous to (A-C), spiking network. D: Top: raster plots of the spiking activity for the first 50 neurons. Bottom panel: population firing rate. E: Projections of the firing rates r(t) on the m(1) − m(2) plane. Insert: overlap matrix. F: Projections of the firing rates r(t) on vectors m(1) and m(2) as a function of time. (G-I) Rate network dynamics for an overlap matrix that has two real, degenerate eigenvalues. G: Top panel: illustration of the single-unit firing rates for the first 100 neurons. Bottom panel: Population firing rate. H: Projections of the firing rates r(t) on the m(1) − m(2) plane. Insert: overlap matrix. I: Projections of the firing rate r(t) on vectors m(1) and m(2) as a function of time. (J-L) Same analysis as in (G-I) for a spiking model. J: Top panel: raster plots of the spiking activity for the first 50 neurons. Bottom panel: population firing rate. K: Projections of the firing rates r(t) on the m(1) − m(2) plane. Insert: overlap matrix. L: Projections of the firing rate r(t) on vectors m(1) and m(2) as a function of time. Different colors in the middle column (B,E,H,K) correspond to network instances with different connectivity vectors but identical statistics. Network parameters are shown in Tables 5 and 6.
To check whether qualitatively similar dynamics occur in spiking networks, we added a rank-two structure with complex eigenvalues on top of random excitatory-inhibitory connectivity. As the two parts of the connectivity are independent, the spectrum of the full connectivity matrix consists of a continuous bulk corresponding to the random part and discrete outliers given by the eigenvalues of the rank-two structure [49, 84, 85]. For large values of σ, simulations of the resulting spiking network show that the firing rates of individual neurons oscillate strongly (Fig 7D, top), but out of phase with each other, so that oscillations on the population-averaged level are weak (Fig 7D, bottom). Projections of the firing rates r(t) on the plane m(1) − m(2) however identified clear limit cycles (Fig 7E and 7F).
We next turned to rank-two structures with overlap matrices of the form:
$$P^{ov} = \begin{pmatrix} \sigma & 0 \\ 0 & \sigma \end{pmatrix}. \tag{12}$$
The resulting connectivity matrices have two degenerate real eigenvalues σ, and mean-field analyses of rate networks have shown that in the limit N → ∞, as σ is increased this leads to a continuum of fixed points arranged on a ring in the m(1) − m(2) plane [43, 49, 51]. In finite-size networks, sampling fluctuations of random connectivity vectors breaks the exact degeneracy, so that only a small number of points on the ring attractor remain actual stable fixed points while the rest form a slow manifold: dynamics quickly converge to the ring, after which they slowly evolve on it until reaching a fixed point (Fig 7H and 7I).
We verified that analogous dynamics emerge in spiking networks with a degenerate rank-two structure added on top of the random excitatory-inhibitory connectivity matrix. As in rate networks, dynamics quickly converge to a ring in the m(1) − m(2) plane, after which they evolve along the ring towards stable fixed points (Fig 7K and 7L). Different instances of the rank-two structure generated with identical statistics lead to different fixed points that are all located on the same ring (Fig 7K). In spiking networks, the fluctuations in activity are stronger than in rate networks because of a combination of variability in spike times, random excitatory-inhibitory connectivity and fluctuations in low-rank connectivity, but the low-dimensional dynamics are qualitatively similar.
In summary, mean-field analyses of rate networks with low-rank connectivity allow us to identify analogous non-trivial dynamical regimes in networks of spiking neurons. These findings did not rely on the network being in the asynchronous irregular regime, as we observed similar dynamics when the neurons were sparsely synchronized (Fig 8C and 8E) or fired regularly (Fig 8D and 8F).
A,C,E: Network operating in the SI regime. A: nonlinear dynamics in rank-one network. Left, top: fixed points of the collective variable κ as a function of the overlap nTm/N, for connectivity vectors m and n with zero means 〈m〉, 〈n〉, and non-zero covariance σmn. Left, bottom: population-averaged firing rate as a function of nTm/N. Right: raster plots in the two symmetric states. C: non-linear dynamics in rank-two network, connectivity structure with two complex-conjugate eigenvalues. Left, top: raster plot of the spiking activity for the first 50 neurons. Left, bottom: population firing rate. Right: projections of the firing rates r(t) on the m(1) − m(2) plane. E: non-linear dynamics in rank-two network for an overlap matrix that has two real, degenerate eigenvalues. Left, top: raster plots of the spiking activity for the first 50 neurons. Left, bottom: population firing rate. Right: projections of the firing rates r(t) on the m(1) − m(2) plane. B,D,F: same as A,C,E for a network without the background E-I connectivity.
Perceptual decision-making task
Our results so far show that the geometry and firing regimes in networks of spiking neurons can be predicted from the statistics of low-rank connectivity by following the principles identified in rate networks. In a final step, here we illustrate how this finding can be exploited to directly implement computations in spiking networks. We consider the perceptual decision-making task [86] in which a network receives a noisy scalar stimulus along a random input vector I, and needs to report the sign of its temporal average along a random readout vector w.
Previous works have identified requirements on rank-one networks to perform this task [52]. They showed that a unit-rank network was sufficient to implement the task, with connectivity statistics requiring a strong overlap σnI to integrate inputs, and an overlap σmn ≈ 1 to generate a long integration timescale via positive feedback. We built a spiking network based on an analogous connectivity configuration.
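Concretely, the required statistics can be realized by linearly mixing independent Gaussian sources, as in the following sketch. The overlap values follow the text and the caption of Fig 9 (σnI = 0.26, σmn ≈ 1, σmw = 2.1), but the construction is illustrative and does not normalize the vector variances as in our simulations.

```python
import numpy as np

rng = np.random.default_rng(seed=3)
N = 12500
sigma_nI, sigma_mn, sigma_mw = 0.26, 1.0, 2.1   # illustrative overlaps

I = rng.standard_normal(N)         # input vector
m = rng.standard_normal(N)         # output connectivity vector

# n overlaps with I (to integrate the stimulus) and with m (positive
# feedback -> slow integration); the residual keeps the entries Gaussian.
n = sigma_nI * I + sigma_mn * m + rng.standard_normal(N)

# readout vector correlated with m, so that the output reflects kappa
w = sigma_mw * m + rng.standard_normal(N)

print(n @ I / N, n @ m / N)        # approx sigma_nI and sigma_mn
```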
Fig 9 illustrates the dynamics in the network in response to two inputs with positive and negative means. The two inputs lead to different patterns of activity with opposite readout values (Fig 9A and 9B), but similar population averaged firing rates (Fig 9C). As expected from the theory of low-rank networks, the dynamics evolve in a two-dimensional plane spanned by the input pattern I and the output connectivity pattern m (Fig 9D), as observed in experimental data [27]. The psychometric curve generated by the network strongly depends on the values of the connectivity overlaps (Fig 9E).
A: Top panel: two instances of the fluctuating input signal with a positive (orange) and a negative (blue) mean. Bottom panel: network readout of the activity generated by the two inputs. B: Raster plots for the first 50 neurons. C: Population firing rate. D: Dynamics projected onto the I − m plane. E: Psychometric function showing the fraction of positive outputs at different values of the overlap σnm. Orange corresponds to inputs with a positive mean, blue to inputs with a negative mean. Parameters: N = 12500, σu = 1, σnI = 0.26, σmw = 2.1, σm2 = 0.02, τf = 100ms.
Discussion
In this study, we set out to examine how far theoretical predictions for the relation between connectivity and dynamics in recurrent networks of rate units translate to networks of spiking neurons. To this end, we compared the population activity in rate networks with low-rank connectivity to networks of integrate-and-fire neurons in which a low-rank structure was added on top of random, sparse excitatory-inhibitory connectivity. Altogether, we found that the geometry of low-pass filtered activity in spiking networks is largely identical to that of rate networks when the low-rank structure in connectivity is statistically identical. In particular, this allowed us to identify novel regimes of linear and non-linear dynamics in spiking networks, and construct networks that implement specific computations.
A widespread experimental observation across cortical areas is that sensory inputs lead to both increases and decreases of activity in individual neurons, so that different stimuli are often indistinguishable at the population-average level even though they induce distinct patterns of responses [79, 87, 88]. Within the state-space picture, this implies that the responses take place primarily along directions orthogonal to the global axis [79], suggesting that behaviorally-relevant computations may rely on dynamics along these dimensions complementary to the population-average. So far, most studies on spiking networks have however focused on averaging spiking activity either over the whole network or over sub-populations. Here we instead show that, when a low-rank connectivity structure is included in the connectivity, spiking networks naturally lead to rich dynamics along dimensions orthogonal to the global axis. Our results therefore open the door to a closer match between spiking models and analyses of experimental data.
Our starting hypothesis was that spiking networks in the asynchronous irregular regime can be directly mapped onto rate networks with identical connectivity, by identifying each integrate-and-fire neuron with a rate unit. Here we tested a restricted version of this hypothesis by focusing exclusively on low-rank structure in the connectivity. We found that the population dynamics in spiking networks with a superposition of random and low-rank connectivity match well the predictions of rate networks with connectivity given by an identical low-rank part. To which extent these results extend to more general types of connectivity remains to be determined. A key feature of a low-rank connectivity structure is that it leads to discrete, isolated eigenvalues in the complex plane [47, 49, 80, 84, 85] (or singular values on the real line [89, 90]), while purely random connectivity generates a continuously distributed bulk of eigenvalues [91, 92]. We expect that our findings hold as long as dynamics rely on discrete outliers in the eigenspectrum (or singular value distribution), in which case the connectivity can be accurately approximated by a low-rank structure [81]. Networks performing specific computations typically rely on such outliers in the connectivity spectrum [21, 45, 93], so that our results may help explain in which cases functional spiking networks can be directly built from trained rate networks [74–76].
A surprising result of our analyses is that rate networks predict well the activity in spiking networks even outside of the asynchronous irregular regime, i.e. when neurons spike regularly, or with some degree of synchrony. Our original hypothesis that asynchronous irregular activity is required appears to have been too restrictive. Indeed, we found that our results hold as long as spike-trains are averaged over timescales longer than the synaptic or membrane time constants. When do spiking networks then qualitatively differ from their rate-based counterparts? Do spikes have a potential advantage over rate-based computations? One regime we have not explored here is ultra-sparse activity, in which each neuron emits only a handful of spikes in response to a stimulus. In this regime, information can be encoded in the precise timing of isolated spikes of individual neurons [94–96], and a comparison with state-space trajectories predicted by rate-based models may be less useful. The ultra-sparse firing regime provides a fruitful framework for energy-efficient neuromorphic computing [97], and suggests a potential computational role for spikes distinct from rate-based coding. An alternative possibility is that action potentials play mainly an implementational role, as a biological mechanism for transmitting information across long distances over myelinated axons, and therefore act as a discretization of a fundamentally continuous underlying signal. Ultimately, these computational and implementational interpretations of action potentials are not mutually exclusive, and it is possible that spikes may play different functional roles in different brain structures or species.
Materials and methods
Rate network model
We consider rate networks of N units. Each unit is described by its activation xi(t), with dynamics evolving according to [98]:
$$\tau \frac{dx_i(t)}{dt} = -x_i(t) + \sum_{j=1}^{N} P_{ij}\,\phi(x_j(t)) + I_i\, u(t). \tag{13}$$
Here u(t) is the input amplitude shared by all units, Ii is the weight of the external input on unit i, and ϕ(x) = 1 + tanh(x−xoff) is the firing rate transfer function. The firing rate of unit i is therefore ri = ϕ(xi).
The recurrent connectivity matrix P is a rank-R matrix, represented as a sum of R unit-rank terms, where the r-th term is given by the outer product of two vectors m(r), n(r):
$$P_{ij} = \frac{1}{N} \sum_{r=1}^{R} m_i^{(r)} n_j^{(r)}. \tag{14}$$
We refer to vectors m(r), n(r) as the right and left connectivity vectors, and to I = {Ii}i = 1…N as the input vector.
In this study, we focus on the case where the entries mi(r), ni(r), Ii of connectivity and input vectors are generated independently for each unit from a Gaussian distribution with means 〈mr〉, 〈nr〉, 〈I〉, standard deviations σmr, σnr, σI and covariances σxy (x, y ∈ {nr, mr, I}).
To simulate network activity, Eq (13) was discretised using Euler’s method with time step dt, for a total simulation time trun. Network parameters are shown in Tables 1, 3 and 5.
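For reference, a minimal Euler integration of Eq (13) for a unit-rank network reads as follows; all parameter values are illustrative rather than those of the tables.

```python
import numpy as np

rng = np.random.default_rng(seed=4)
N, tau, dt, t_run = 1000, 0.1, 1e-3, 3.0   # tau, dt, t_run in seconds
x_off = 1.0

m, n, I = (rng.standard_normal(N) for _ in range(3))
P = np.outer(m, n) / N
phi = lambda x: 1.0 + np.tanh(x - x_off)

n_steps = int(t_run / dt)
u = np.zeros(n_steps)
u[n_steps // 3:] = 1.0                     # step input at t = 1s, as in Fig 2B
x = np.zeros(N)
rates = np.empty((n_steps, N))
for t in range(n_steps):
    x += (dt / tau) * (-x + P @ phi(x) + I * u[t])   # Euler step of Eq (13)
    rates[t] = phi(x)
```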
Spiking network model
We consider networks of N leaky integrate-and-fire neurons [7], where the membrane potential of neuron i evolves according to:
$$\tau_m \frac{dV_i(t)}{dt} = -V_i(t) + \mu_0 + \sigma_0 \sqrt{\tau_m}\, \xi_i(t) + I_i^{rec}(t) + I_i\, u(t). \tag{15}$$
Here τm is the membrane time constant, μ0 a constant baseline input, ξi(t) a white noise independent for each neuron, σ0 the amplitude of the noise, Iirec(t) the total recurrent input defined below, and Ii, u(t) the weights and the amplitude of the external input.
An action potential, or “spike”, is generated when the membrane potential crosses the threshold Vthr. The membrane potential is then reset to the value of Vr, and maintained at that value during a refractory period tref.
The total recurrent input to neuron i is given by
$$I_i^{rec}(t) = \tau_m \sum_{j=1}^{N} J_{ij} \sum_{k} \delta(t - t_j^k - \tau_{del}), \tag{16}$$
where Jij is the strength of the synaptic connection from neuron j to neuron i, tjk is the time of the kth spike of the presynaptic neuron j, τdel is the synaptic delay and δ(t) is the delta function.
The connectivity matrix J consists of a sum of a full-rank excitatory-inhibitory part JEI and a rank-R matrix P:
$$J = J^{EI} + P. \tag{17}$$
The matrix P is identical to that of the rate model (Eq (14)), while JEI is a sparse, random excitatory-inhibitory matrix identical to [7]. Each neuron receives inputs from C neurons, C being much smaller than the total number of neurons N. The fraction of non-zero connections is fc = C/N = 0.1, where 80% of incoming connections are excitatory and the rest are inhibitory. All non-zero excitatory synapses have the same strength J, while non-zero inhibitory synapses have the strength −gJ.
The network was simulated using the Euler method implemented in the Brian2 package [99] with simulation step dt and simulation time trun.
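A minimal Brian2 sketch of this setup is shown below. It is an illustration rather than our actual simulation code: parameter values are toy ones (not those of Tables 2 and 8), the weight matrix is built densely for brevity, and the noise term follows Eq (15) as written above.

```python
import numpy as np
from brian2 import NeuronGroup, Synapses, SpikeMonitor, run, ms, mV, second

# Toy parameters, for illustration only
N, f_c, g, N_E = 1000, 0.1, 5.0, 800
tau_m = 20*ms; V_thr = 20*mV; V_r = 10*mV; t_ref = 2*ms
mu_0 = 24*mV; sigma_0 = 1*mV; tau_del = 1.5*ms

# Eq (15): leaky integration with white noise (xi is Brian2's noise term)
eqs = 'dV/dt = (-V + mu_0)/tau_m + sigma_0*xi/tau_m**0.5 : volt (unless refractory)'
neurons = NeuronGroup(N, eqs, threshold='V > V_thr', reset='V = V_r',
                      refractory=t_ref, method='euler')

# Full weight matrix, Eq (17): sparse E-I part plus a unit-rank structure
rng = np.random.default_rng(seed=5)
mask = rng.random((N, N)) < f_c
strengths = np.where(np.arange(N) < N_E, 0.1, -g * 0.1)   # J and -g*J, in mV
J = mask * strengths[None, :] + np.outer(rng.standard_normal(N),
                                         rng.standard_normal(N)) / N

# Delta synapses with transmission delay, Eq (16)
syn = Synapses(neurons, neurons, 'w : volt', on_pre='V_post += w', delay=tau_del)
syn.connect(condition='i != j')
syn.w = J[syn.j[:], syn.i[:]] * mV    # J[post, pre]: weight from neuron j to i

spikes = SpikeMonitor(neurons)
run(1 * second)
```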
Single-neuron firing rates were computed from spikes using an exponential filter with a time constant τf. The instantaneous rate of the i-th neuron at time t is given by
$$r_i(t) = \frac{1}{\tau_f} \int_{-\infty}^{t} dt'\; e^{-(t-t')/\tau_f} \sum_k \delta(t' - t_k), \tag{18}$$
where δ(t − tk) is the delta function centered at the spike time tk. In the case of multiple trials, rates ri are averaged over trials:
$$r_i(t) = \frac{1}{N_{tr}} \sum_{l=1}^{N_{tr}} r_i^{(l)}(t). \tag{19}$$
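In code, Eq (18) amounts to summing causal exponential kernels over the recorded spike times; a simple, non-optimized sketch:

```python
import numpy as np

def instantaneous_rate(spike_times, t_grid, tau_f):
    """Eq (18): each spike at t_k contributes exp(-(t - t_k)/tau_f)/tau_f
    to the rate at times t >= t_k."""
    lags = t_grid[:, None] - np.asarray(spike_times)[None, :]
    causal = lags >= 0
    kernels = np.exp(-np.where(causal, lags, 0.0) / tau_f) * causal
    return kernels.sum(axis=1) / tau_f

t_grid = np.arange(0.0, 2.0, 1e-3)                    # seconds
r = instantaneous_rate([0.50, 0.52, 1.10], t_grid, tau_f=0.1)
```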
If different values are used in a specific figure, these values are specified in a dedicated table (Tables 2, 4 and 6). The parameter notations for spiking models are summarized in Table 7. Parameters whose values do not change over different simulations/figures are given in Table 8.
Geometry of responses to external inputs
To characterize the geometry of activity in the high-dimensional state space, we examined the projections of the firing rate trajectories r(t) = {ri(t)}i = 1…N on an arbitrary direction w = {wi}i = 1…N, defined as:
$$r_w(t) = \frac{1}{N} \sum_{i=1}^{N} w_i\, r_i(t). \tag{20}$$
In particular, taking w to be the global axis where wi = 1 for all i, the projection gives the population-averaged firing rate:
$$\bar{r}(t) = \frac{1}{N} \sum_{i=1}^{N} r_i(t). \tag{21}$$
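Both projections are one-line operations on simulated trajectories; the sketch below assumes the rates are stored as a time × neurons array.

```python
import numpy as np

def project(rates, w):
    """Eq (20): projection of rate trajectories on a direction w.
    rates: array of shape (n_timesteps, N); w: array of shape (N,)."""
    return rates @ w / w.size

# Eq (21): the population-averaged rate is the projection on the
# global axis (1, 1, ..., 1), e.g.
# pop_rate = project(rates, np.ones(rates.shape[1]))
```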
Based on previous works [49–52], below we summarize the predictions of low-rank rate models for the geometry of activity, and then describe a method for verifying these predictions using principal components analysis.
Rate networks.
In rate networks with a low-rank connectivity matrix, the dynamics of the activations x(t) = {xi(t)}i = 1…N are explicitly confined to a low-dimensional subspace of state space [51, 52, 100], meaning that projections of x(t) are non-zero only on vectors w belonging to this subspace. Here we first reproduce the derivation of the geometry of the activations x(t) [51, 52]. We then explore the implications for the geometry of firing rates r(t) where ri(t) = ϕ(xi(t)).
For low-rank connectivity, the dynamics of xi(t) are given by
$$\tau \frac{dx_i(t)}{dt} = -x_i(t) + \sum_{j=1}^{N} P_{ij}\,\phi(x_j(t)) + \sum_{s=1}^{N_{in}} I_i^{(s)} u_s(t), \tag{22}$$
where for completeness we included Nin scalar inputs us(t) along input vectors I(s) with s = 1…Nin.
We start by assuming that at time 0, the initial state x(0) lies in the subspace spanned by m(r) and I(s), i.e. that 〈wTx(0)〉 ≠ 0 if and only if w is a linear combination of m(r) for r = 1…R and I(s) for s = 1…Nin. This assumption can be made without loss of generality. Indeed, if it is not fulfilled, the initial state x(0) can be included as an additional input with us(t) = δ(t) and I(s) = x(0) in Eq (22).
It is then straightforward to show by induction from Eq (22) that for any t, 〈wTx(t)〉 ≠ 0 if and only if w is a linear combination of m(r) and I(s). The activations x(t) therefore lie for any t in the subspace spanned by m(r) and I(s). Assuming for simplicity that these vectors form an orthogonal set, the activation of the i-th neuron xi can be written as:
$$x_i(t) = \sum_{r=1}^{R} \kappa_r(t)\, m_i^{(r)} + \sum_{s=1}^{N_{in}} v_s(t)\, I_i^{(s)}. \tag{23}$$
Here κr and vs are scalar latent variables that correspond to the coordinates of x(t) along the vectors m(r) and I(s), and can be computed by projecting x(t) on normalized directions m(r)/||m(r)||2 and I(s)/||I(s)||2:
$$\kappa_r(t) = \frac{m^{(r)T} x(t)}{||m^{(r)}||^2}, \tag{24}$$
$$v_s(t) = \frac{I^{(s)T} x(t)}{||I^{(s)}||^2}. \tag{25}$$
Projecting Eq (22) on the vector I(s)/||I(s)||2, we then obtain
$$\tau \frac{dv_s(t)}{dt} = -v_s(t) + u_s(t), \tag{26}$$
while the projection on m(r)/||m(r)||2 gives
$$\tau \frac{d\kappa_r(t)}{dt} = -\kappa_r(t) + \frac{1}{N} \sum_{j=1}^{N} n_j^{(r)}\, \phi(x_j(t)). \tag{27}$$
Inserting Eq (23) into Eq (27) then leads to the following dynamical system:
$$\tau \frac{dv_s(t)}{dt} = -v_s(t) + u_s(t), \tag{28}$$
$$\tau \frac{d\kappa_r(t)}{dt} = -\kappa_r(t) + \frac{1}{N} \sum_{j=1}^{N} n_j^{(r)}\, \phi\!\left( \sum_{r'=1}^{R} \kappa_{r'}(t)\, m_j^{(r')} + \sum_{s=1}^{N_{in}} v_s(t)\, I_j^{(s)} \right). \tag{29}$$
To simplify notations, from here on we consider unit-rank networks with a single input (R = 1 and Nin = 1, so we drop the indices r and s), where the entries of m, n and I are generated from a joint Gaussian distribution with means 〈m〉, 〈n〉, 〈I〉, standard deviations σm, σn, σI and covariances σxy for x, y ∈ {m, n, I}. In the limit N → ∞, the sum over j in Eq (29) can then be replaced by an integral over the joint Gaussian distribution, which can be computed using Stein’s Lemma for Gaussian integrals [50–52]. The dynamics for κ then become:
$$\tau \frac{d\kappa(t)}{dt} = -\kappa(t) + \langle n\rangle\, \langle\phi(\mu, \Delta)\rangle + \left( \sigma_{mn}\, \kappa(t) + \sigma_{nI}\, v(t) \right) \langle\phi'(\mu, \Delta)\rangle, \tag{30}$$
where the brackets denote the following Gaussian integral
$$\langle f(\mu, \Delta)\rangle = \int \frac{dz}{\sqrt{2\pi}}\, e^{-z^2/2}\, f\!\left(\mu + \sqrt{\Delta}\, z\right), \tag{31}$$
and
$$\mu = \langle m\rangle\, \kappa + \langle I\rangle\, v, \qquad \Delta = \sigma_m^2\, \kappa^2 + \sigma_I^2\, v^2 + 2\,\sigma_{mI}\, \kappa\, v. \tag{32}$$
We next turn to the geometry of firing rates r(t) where ri(t) = ϕ(xi(t)), and examine the projection of r(t) on an arbitrary direction w in the activity state space:
$$r_w(t) = \frac{1}{N} \sum_{i=1}^{N} w_i\, \phi(x_i(t)). \tag{33}$$
In linear networks (i.e. when ϕ(x) = x), firing rates are equivalent to activations x(t), and therefore their dynamics are confined to the subspace spanned by m and I. The projection of r(t) on any direction orthogonal to m and I is therefore zero. In particular, the projection on the global axis is non-zero only if 〈I〉 ≠ 0, or if 〈m〉 ≠ 0 and κ ≠ 0.
Here we focus on non-linear networks, and on directions w whose entries are jointly Gaussian with the entries of m and I, specified by a mean ⟨w⟩, variance σw², and covariances σwm and σwI. As for Eq (29), the r.h.s. of Eq (33) can be replaced by a Gaussian integral and, using Stein’s Lemma, be expressed as (see Methods):
$\langle w^T r(t)\rangle = \langle w\rangle\,\langle\phi(\mu, \Delta)\rangle + \sigma_{wm}\,\kappa\,\langle\phi'(\mu, \Delta)\rangle + \sigma_{wI}\,v\,\langle\phi'(\mu, \Delta)\rangle$ (34)
In Eq (34), the first term on the r.h.s. represents the population-averaged firing rate, i.e. the projection of r(t) on the global axis. Indeed, taking wi = 1/N for i = 1…N, only the first term is non-zero. Moreover, Eqs (31) and (32) show that changes in the population-averaged firing rate ⟨ϕ(μ, Δ)⟩ can be induced either through the mean input μ, by non-zero means ⟨I⟩ and ⟨m⟩, or through the input variance Δ, by non-zero values of σI and σm, the standard deviations of the input and connectivity vectors.
The last two terms in Eq (34) respectively represent the projection of firing rates on the zero-mean parts of m and I, i.e. changes in r(t) along directions orthogonal to the global axis. Altogether, Eq (34) therefore predicts that the projection of the firing rate vector r(t) is zero on any direction w orthogonal to the global axis, m and I. Interestingly, for Gaussian connectivity vectors considered here, the geometry of firing rate dynamics is therefore largely equivalent to linear networks (i.e. to the geometry of x(t)). The main difference is that in the non-linear case, the heterogeneity across neurons quantified by the input variance Δ can induce a non-zero component along the global axis even when 〈I〉 = 0 and 〈m〉 = 0.
These theoretical predictions are verified through simulations in Fig 2. The corresponding network parameters are given in Table 1.
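The prediction of Eq (34) can also be checked in a few lines. The toy check below is our own (zero-mean vectors, tanh nonlinearity): a random direction uncorrelated with m, I and the global axis carries no signal up to O(1/√N) finite-size fluctuations, while the projection on m is non-zero:

```python
import numpy as np

rng = np.random.default_rng(2)
N = 10_000
m, I = rng.standard_normal(N), rng.standard_normal(N)   # zero-mean vectors
kappa, v = 0.8, 0.5
r = np.tanh(kappa * m + v * I)        # rates for activations on the m-I subspace

w_rand = rng.standard_normal(N)       # direction uncorrelated with m and I
proj_rand = np.mean(w_rand * r)       # ~ 0, vanishes as N grows
proj_m = np.mean(m * r)               # ~ kappa <phi'(mu, Delta)>, non-zero
```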
Principal component analysis.
In order to extract the low-dimensional subspace of the population activity from simulations, we performed dimensionality reduction via standard Principal Component Analysis (PCA). First, we construct the matrix X in which every column corresponds to the time trace of the firing rate of one neuron, X[:, i] = ri(t). The matrix X is then centered by subtracting the mean of every column. We compute the principal components (PCs) as the normalized eigenvectors of the correlation matrix C = XTX, sorted in decreasing order of their eigenvalues λi. The activity matrix X is then projected on the orthonormal basis formed by the PC vectors, yielding X′ = XE, where E is the N × N matrix whose columns are the PCs. The variance explained by each component is the corresponding entry on the diagonal of the rotated correlation matrix C′ = X′TX′. For rate networks, we run PCA on individual trials, as we did not include noise in the dynamics. For spiking networks, we run PCA on firing rates averaged over Ntr trials (see Tables 4 and 6).
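A compact implementation of this procedure (a plain-numpy sketch; the published code in the repository may differ in details):

```python
import numpy as np

def pca_variance_explained(X):
    # columns of X are time traces of single-neuron firing rates
    Xc = X - X.mean(axis=0)               # subtract the mean in every column
    C = Xc.T @ Xc                         # correlation matrix C = X^T X
    lam, E = np.linalg.eigh(C)            # eigenvectors of C = principal components
    order = np.argsort(lam)[::-1]         # sort by decreasing eigenvalue
    lam, E = lam[order], E[:, order]
    Xp = Xc @ E                           # project activity on the PC basis
    var = np.diag(Xp.T @ Xp)              # diagonal of rotated correlation matrix C'
    return E, var / var.sum()             # PCs and fraction of variance explained
```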
Geometry of nonlinear autonomous activity in unit-rank networks
Rate network.
We now turn to the autonomous activity in unit-rank networks without external inputs. The autonomous dynamics of the collective variable κ are described by Eqs (28) and (29) with the external input set to zero:
$\tau \frac{d\kappa}{dt} = -\kappa + \frac{1}{N}\sum_{j=1}^{N} n_j\,\phi(\kappa\, m_j)$ (35)
Any fixed point κ0 obeys:
$\kappa_0 = F(\kappa_0)$ (36)
where
$F(\kappa) = \frac{1}{N}\sum_{j=1}^{N} n_j\,\phi(\kappa\, m_j)$ (37)
The stability of κ0 is determined by linearizing Eq (35), yielding:
$\tau \frac{d\,\delta\kappa}{dt} = \left(-1 + F'(\kappa_0)\right)\delta\kappa$ (38)
The stability of κ0 is therefore controlled by the gain-weighted overlap
$F'(\kappa_0) = \frac{1}{N}\sum_{j=1}^{N} m_j\, n_j\,\phi'(\kappa_0\, m_j)$ (39)
In the large N limit, replacing the sum with a Gaussian integral and applying Stein’s lemma, the r.h.s. in Eq (37) can be further expressed as
$F(\kappa) = \langle n\rangle\,\langle\phi(\mu, \Delta)\rangle + \sigma_{mn}\,\kappa\,\langle\phi'(\mu, \Delta)\rangle, \qquad \mu = \langle m\rangle\,\kappa,\; \Delta = \sigma_m^2\,\kappa^2$ (40)
To examine the effects of the two terms in F(κ), in the Results we vary the overlap either by setting σmn = 0 and changing ⟨n⟩, or by setting ⟨m⟩ = ⟨n⟩ = 0 and changing σmn. To compute F(κ), we approximate the Gaussian integrals ⟨ϕ′⟩ and ⟨ϕ⟩ in Eq (40) using the Monte-Carlo method. Specifically, we choose an array of values for κ, and for each element compute the corresponding F(κ) (Eq (37)) by averaging over 50 different realisations of the vectors m and n. We then determine the fixed points by solving κ = F(κ). The predicted population-averaged firing rate can then be computed as ⟨ϕ(μ, Δ)⟩ evaluated at the fixed point κ0.
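A sketch of this Monte-Carlo fixed-point search, with illustrative parameters (tanh for ϕ, an overlap set through ⟨m⟩⟨n⟩) and sign changes of F(κ) − κ located on a grid:

```python
import numpy as np

rng = np.random.default_rng(3)
N, n_real = 5000, 50
phi = np.tanh                              # illustrative transfer function

def F(kappa, mean_m=1.0, mean_n=2.0, sig_m=0.5, sig_n=0.5):
    # Monte-Carlo estimate of F(kappa), Eq (37), averaged over realisations of m, n
    est = 0.0
    for _ in range(n_real):
        m = mean_m + sig_m * rng.standard_normal(N)
        n = mean_n + sig_n * rng.standard_normal(N)
        est += np.mean(n * phi(kappa * m))
    return est / n_real

grid = np.linspace(-3.0, 3.0, 121)         # array of kappa values
resid = np.array([F(k) - k for k in grid])
# grid cells where F(kappa) - kappa changes sign bracket solutions of kappa = F(kappa)
fixed_points = grid[:-1][np.sign(resid[:-1]) != np.sign(resid[1:])]
```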
The corresponding results are shown in Fig 6. Network parameters are given in Table 3.
Spiking network.
In Fig 6 the overlap is varied as in the rate network, either through ⟨n⟩ or through the covariance σmn. We run the dynamics for Nnets different network instances, keeping the overlap n^T m/N fixed while resampling the connectivity vectors m and n from Gaussians with means ⟨m⟩, ⟨n⟩ and standard deviations σm, σn respectively. The dynamics of each network instance are run for Ntr trials. In every trial, we resample the initial membrane potentials V(0) from a Gaussian distribution. At a fixed overlap, for each network configuration and each trial, the collective variable κcurr is computed by projecting the firing rates on m, as in Eq (24). To obtain the plot in Fig 6, we first set a threshold κthr that forms the boundary between the zero state and the high state (Fig 6G) or the two symmetric states (Fig 6J). We then average all collective variables κcurr with |κcurr| < κthr to compute the low state. For |κcurr| > κthr, we average over all positive or over all negative values of κcurr to obtain the high state or the two symmetric states, respectively. The parameters used for simulating the spiking model in Fig 6 are presented in Table 4.
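The state-classification step reduces to a thresholding of the trial-wise collective variables; a sketch with a hypothetical helper (not from the repository):

```python
import numpy as np

def classify_states(kappas, k_thr):
    # Split trial-wise collective variables at the threshold k_thr and
    # average within each group, as described above.
    kappas = np.asarray(kappas)
    mean_or_nan = lambda x: x.mean() if x.size else np.nan
    low = mean_or_nan(kappas[np.abs(kappas) < k_thr])     # zero state
    high_pos = mean_or_nan(kappas[kappas >= k_thr])       # high / positive state
    high_neg = mean_or_nan(kappas[kappas <= -k_thr])      # negative symmetric state
    return low, high_pos, high_neg
```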
Rank-two networks
We finally consider rank-two networks (R = 2) without external inputs, for which Eq (29) reduces to two coupled latent variables:
$\tau \frac{d\kappa_r}{dt} = -\kappa_r + \frac{1}{N}\sum_{j=1}^{N} n_j^{(r)}\,\phi\big(\kappa_1\, m_j^{(1)} + \kappa_2\, m_j^{(2)}\big), \quad r = 1, 2$ (41)
Assuming zero-mean Gaussian connectivity vectors, replacing sums by integrals in the N → ∞ limit, and applying Stein’s lemma leads to:
$\tau \frac{d\kappa_r}{dt} = -\kappa_r + \big(\sigma_{n^{(r)}m^{(1)}}\,\kappa_1 + \sigma_{n^{(r)}m^{(2)}}\,\kappa_2\big)\,\langle\phi'(0, \Delta)\rangle$ (42)
where
$\Delta = \sigma_{m^{(1)}}^2\,\kappa_1^2 + \sigma_{m^{(2)}}^2\,\kappa_2^2$ (43)
For zero-mean connectivity vectors, (κ1, κ2) = (0, 0) is always a fixed point. A linear analysis shows that its stability is given by the eigenvalues of ϕ′(0)Pov [50, 51], where ϕ′(0) is the gain at zero, and Pov the overlap matrix:
$P_{ov} = \begin{pmatrix} \sigma_{n^{(1)}m^{(1)}} & \sigma_{n^{(1)}m^{(2)}} \\ \sigma_{n^{(2)}m^{(1)}} & \sigma_{n^{(2)}m^{(2)}} \end{pmatrix}$ (44)
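As a quick illustration of this criterion (our own toy example, with ϕ = tanh so that ϕ′(0) = 1): an overlap matrix with a strong antisymmetric part yields complex eigenvalues, and the fixed point destabilizes into oscillations once their real part exceeds 1:

```python
import numpy as np

gain = 1.0                                  # phi'(0) for phi = tanh
P_ov = np.array([[1.5,  1.5],
                 [-1.5, 1.5]])              # hypothetical overlaps, eigenvalues 1.5 +/- 1.5i
eig = np.linalg.eigvals(gain * P_ov)
# (0, 0) loses stability when Re(eigenvalue) > 1; a non-zero imaginary
# part then signals an oscillatory instability
unstable_oscillatory = np.any(eig.real > 1.0) and np.any(np.abs(eig.imag) > 0)
```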
For Fig 7, we ran simulations for Nnets network instances with Ntr trials for each instance, and plotted the projections without averaging over trials. The parameters used for simulating the rate and spiking models in Fig 7 are presented in Tables 5 and 6.
Perceptual decision-making task
We start from a network in the AI regime, as in Section Geometry of responses to external inputs, and add a unit-rank structure on top of the random part.
In each trial, the model was run for trun = 1020 ms: a fixation epoch of duration Tfix = 100 ms was followed by a stimulation epoch of Tstim = 800 ms, a delay epoch of Tdel = 100 ms, and a decision epoch of Tdec = 20 ms. The feed-forward input to neuron i on trial k was
$I_i^{\mathrm{ext},(k)}(t) = I_i\, u^{(k)}(t)$ (45)
where during the stimulation epoch u(k)(t) consisted of a constant, trial-dependent mean plus a fluctuating part ψ(k)(t), a zero-mean Gaussian white noise of standard deviation σu = 1. The connectivity vectors and the input vector were generated from a Gaussian distribution with zero mean. The variance of the vector m was σm² = 0.02, and the covariances between pairs of vectors were σmn = 0.016, σnI = 0.26, σmw = 2.1. During the decision epoch, the output of the network was evaluated through a single readout value:
$z(t) = \frac{1}{N}\sum_{i=1}^{N} w_i\, r_i(t)$ (46)
where w is a readout vector generated from a Gaussian with zero mean and standard deviation σw.
On trial k, if the mean of the readout during the decision epoch is above zero, we label the output as 1, and as 0 otherwise. At every value of the overlap, the psychometric curve is computed as the fraction of trials with output 1. The network was run for 30 trials at each overlap.
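The labeling and the psychometric-curve computation reduce to a few lines; a sketch with a hypothetical readout array (shape: trials × time points within the decision epoch):

```python
import numpy as np

rng = np.random.default_rng(4)
readout = rng.normal(0.2, 1.0, size=(30, 20))   # hypothetical z(t): 30 trials x 20 samples

labels = readout.mean(axis=1) > 0               # output 1 if mean readout above zero
fraction_output_1 = labels.mean()               # one point of the psychometric curve
```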
Mean-field theory and Gaussian integrals
Using mean-field theory, we derive in detail the projection in Eq (33) for the rank-one case, which can then be extended to higher ranks. The vectors m, I and w are generated as
$m_i = \sigma_m\, X_i$ (47)
$I_i = \sigma_I\, Y_i$ (48)
$w_i = \langle w\rangle + \frac{\sigma_{mw}}{\sigma_m}\, X_i + \frac{\sigma_{Iw}}{\sigma_I}\, Y_i + \sigma_w\, Z_i$ (49)
where X, Y and Z are independent vectors with entries drawn from a Gaussian distribution with zero mean and unit standard deviation, σm, σI, σw are the standard deviations of the vectors m, I, w, and σmw, σIw the overlaps of w with m and I respectively.
The projection in Eq (33) consists of a sum over the N units in the network. In the limit of large networks with fixed statistics, the sum over N elements converges to the average over the distribution of its elements. We can therefore replace the sum by an integral over the distribution P(m, I, w):
$\frac{1}{N}\sum_{i=1}^{N} w_i\,\phi(\kappa\, m_i + v\, I_i) \;\to\; \int dm\, dI\, dw\; P(m, I, w)\; w\,\phi(\kappa\, m + v\, I)$ (50)
We express the integral in Eq (50) in terms of the variables X, Y and Z, which are independent, so that the joint distribution factorizes as P(X, Y, Z) = P(X)P(Y)P(Z). Eq (50) then becomes:
$\int DX\, DY\, DZ\, \Big(\langle w\rangle + \frac{\sigma_{mw}}{\sigma_m} X + \frac{\sigma_{Iw}}{\sigma_I} Y + \sigma_w Z\Big)\,\phi(\kappa\sigma_m X + v\sigma_I Y) = \langle w\rangle\,\langle\phi(\mu, \Delta)\rangle + \sigma_{mw}\,\kappa\,\langle\phi'(\mu, \Delta)\rangle + \sigma_{Iw}\,v\,\langle\phi'(\mu, \Delta)\rangle$ (51)
where DX, DY, DZ denote the standard Gaussian measure, μr = κr σm, μs = vs σI and Δ = (κr σm)² + (vs σI)². In the last line we use the Gaussian integral notation:
$\langle f(\mu, \Delta)\rangle = \int \frac{dz}{\sqrt{2\pi}}\, e^{-z^2/2}\, f\!\left(\mu + \sqrt{\Delta}\, z\right)$ (52)
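In practice, these one-dimensional Gaussian averages can be evaluated either by Monte-Carlo sampling, as above, or deterministically. A sketch using Gauss–Hermite quadrature (the change of variables z = √2 t maps the standard-normal weight onto the Hermite weight e^(−t²)):

```python
import numpy as np

def gauss_avg(f, mu, delta, n_nodes=51):
    # <f(mu, Delta)> of Eq (52) via Gauss-Hermite quadrature
    t, wts = np.polynomial.hermite.hermgauss(n_nodes)
    return np.sum(wts * f(mu + np.sqrt(2.0 * delta) * t)) / np.sqrt(np.pi)

# example: <phi> and <phi'> for phi = tanh at mu = 0.3, Delta = 0.5
avg_phi = gauss_avg(np.tanh, 0.3, 0.5)
avg_dphi = gauss_avg(lambda x: 1.0 - np.tanh(x) ** 2, 0.3, 0.5)
```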
References
- 1. Herz AVM, Gollisch T, Machens CK, Jaeger D. Modeling Single-Neuron Dynamics and Computations: A Balance of Detail and Abstraction. Science. 2006;314(5796):80–85. pmid:17023649
- 2. Gerstner W, Naud R. How Good Are Neuron Models? Science. 2009;326(5951):379–380.
- 3. Gerstner W, Kistler WM, Naud R, Paninski L. Neuronal Dynamics: From Single Neurons to Networks and Models of Cognition. USA: Cambridge University Press; 2014.
- 4. Amit DJ, Brunel N. Model of global spontaneous activity and local structured activity during delay periods in the cerebral cortex. Cerebral Cortex. 1997;7(3):237–252. pmid:9143444
- 5. van Vreeswijk C, Sompolinsky H. Chaos in Neuronal Networks with Balanced Excitatory and Inhibitory Activity. Science. 1996;274(5293):1724–1726. pmid:8939866
- 6. Troyer TW, Miller KD. Physiological Gain Leads to High ISI Variability in a Simple Model of a Cortical Regular Spiking Cell. Neural Computation. 1997;9(5):971–983. pmid:9188190
- 7. Brunel N. Dynamics of Sparsely Connected Networks of Excitatory and Inhibitory Spiking Neurons. Journal of Computational Neuroscience. 2000;8(3):183–208. pmid:10809012
- 8. Renart A, de la Rocha J, Bartho P, Hollender L, Parga N, Reyes A, et al. The Asynchronous State in Cortical Circuits. Science. 2010;327(5965):587–590. pmid:20110507
- 9. Boerlin M, Machens CK, Denève S. Predictive Coding of Dynamical Variables in Balanced Spiking Networks. PLOS Computational Biology. 2013;9(11):1–16. pmid:24244113
- 10. Hennequin G, Agnes EJ, Vogels TP. Inhibitory Plasticity: Balance, Control, and Codependence. Annual Review of Neuroscience. 2017;40(1):557–579. pmid:28598717
- 11. Rosenbaum R, Doiron B. Balanced Networks of Spiking Neurons with Spatially Dependent Recurrent Connections. Phys Rev X. 2014;4:021039.
- 12. Sanzeni A, Histed MH, Brunel N. Emergence of Irregular Activity in Networks of Strongly Coupled Conductance-Based Neurons. Phys Rev X. 2022;12:011044. pmid:35923858
- 13. Shadlen MN, Newsome WT. Neural Basis of a Perceptual Decision in the Parietal Cortex (Area LIP) of the Rhesus Monkey. Journal of Neurophysiology. 2001;86(4):1916–1936. pmid:11600651
- 14. Destexhe A, Rudolph M, Fellous JM, Sejnowski TJ. Fluctuating synaptic conductances recreate in vivo-like activity in neocortical neurons. Neuroscience. 2001;107(1):13–24. pmid:11744242
- 15. Shu Y, Hasenstaub A, McCormick DA. Turning on and off recurrent balanced cortical activity. Nature. 2003;423(6937):288–293. pmid:12748642
- 16. Haider B, Duque A, Hasenstaub AR, McCormick DA. Neocortical Network Activity In Vivo Is Generated through a Dynamic Balance of Excitation and Inhibition. Journal of Neuroscience. 2006;26(17):4535–4545. pmid:16641233
- 17. London M, Roth A, Beeren L, Häusser M, Latham PE. Sensitivity to perturbations in vivo implies high noise and suggests rate coding in cortex. Nature. 2010;466(7302):123–127. pmid:20596024
- 18. Sanzeni A, Akitake B, Goldbach HC, Leedy CE, Brunel N, Histed MH. Inhibition stabilization is a widespread property of cortical networks. eLife. 2020;9:e54875. pmid:32598278
- 19. Ahmadian Y, Miller KD. What is the dynamical regime of cerebral cortex? Neuron. 2021;109(21):3373–3391. pmid:34464597
- 20. Abbott LF, DePasquale B, Memmesheimer RM. Building functional networks of spiking model neurons. Nature Neuroscience. 2016;19(3):350–355. pmid:26906501
- 21. Ingrosso A, Abbott LF. Training dynamically balanced excitatory-inhibitory networks. PLOS ONE. 2019;14(8):e0220547. pmid:31393909
- 22. Tavanaei A, Ghodrati M, Kheradpisheh SR, Masquelier T, Maida A. Deep learning in spiking neural networks. Neural Networks. 2019;111:47–63. pmid:30682710
- 23. Sussillo D. Neural circuits as computational dynamical systems. Current Opinion in Neurobiology. 2014;25:156–163. pmid:24509098
- 24. Barak O. Recurrent neural networks as versatile tools of neuroscience research. Current Opinion in Neurobiology. 2017;46:1–6. pmid:28668365
- 25. Yang GR, Wang XJ. Artificial Neural Networks for Neuroscientists: A Primer. Neuron. 2020;107(6):1048–1070. pmid:32970997
- 26. Sussillo D, Barak O. Opening the Black Box: Low-Dimensional Dynamics in High-Dimensional Recurrent Neural Networks. Neural Computation. 2013;25(3):626–649. pmid:23272922
- 27. Mante V, Sussillo D, Shenoy K, Newsome W. Context-dependent computation by recurrent dynamics in prefrontal cortex. Nature. 2013;503:78–84. pmid:24201281
- 28. Vyas S, Golub M, Sussillo D, Shenoy K. Computation Through Neural Population Dynamics. Annual Review of Neuroscience. 2020;43:249–275. pmid:32640928
- 29. Chung S, Abbott LF. Neural population geometry: An approach for understanding biological and artificial neural networks. Current Opinion in Neurobiology. 2021;70:137–144. pmid:34801787
- 30. Churchland MM, Shenoy KV. Temporal Complexity and Heterogeneity of Single-Neuron Activity in Premotor and Motor Cortex. Journal of Neurophysiology. 2007;97(6):4235–4257. pmid:17376854
- 31. Buonomano D, Maass W. State-dependent computations: Spatiotemporal processing in cortical networks. Nature reviews Neuroscience. 2009;10:113–25. pmid:19145235
- 32. Cunningham J, Yu B. Dimensionality reduction for large-scale neural recordings. Nature neuroscience. 2014;17. pmid:25151264
- 33. Gallego JA, Perich MG, Miller LE, Solla SA. Neural Manifolds for the Control of Movement. Neuron. 2017;94(5):978–984. pmid:28595054
- 34. Saxena S, Cunningham J. Towards the neural population doctrine. Current Opinion in Neurobiology. 2019;55:103–111. pmid:30877963
- 35. Jazayeri M, Ostojic S. Interpreting neural computations by examining intrinsic and embedding dimensionality of neural activity; 2021. Available from: https://arxiv.org/abs/2107.04084.
- 36. Hopfield JJ. Neural networks and physical systems with emergent collective computational abilities. Proceedings of the National Academy of Sciences. 1982;79(8):2554–2558. pmid:6953413
- 37. Eliasmith C, Anderson CH. Neural Engineering: Computation, Representation, and Dynamics in Neurobiological Systems. IEEE Transactions on Neural Networks. 2004;15:528–529.
- 38. Sussillo D, Abbott LF. Generating Coherent Patterns of Activity from Chaotic Neural Networks. Neuron. 2009;63(4):544–557. pmid:19709635
- 39. Boerlin M, Machens CK, Denève S. Predictive Coding of Dynamical Variables in Balanced Spiking Networks. PLOS Computational Biology. 2013;9(11):1–16. pmid:24244113
- 40. Ahmadian Y, Fumarola F, Miller KD. Properties of networks with partially structured and partially random connectivity. Phys Rev E. 2015;91:012820. pmid:25679669
- 41. Pereira U, Brunel N. Attractor Dynamics in Networks with Learning Rules Inferred from In Vivo Data. Neuron. 2018;99(1):227–238.e4. pmid:29909997
- 42. Landau ID, Sompolinsky H. Coherent chaos in a recurrent neural network with structured connectivity. PLOS Computational Biology. 2018;14(12):e1006309. pmid:30543634
- 43. Beiran M, Meirhaeghe N, Sohn H, Jazayeri M, Ostojic S. Parametric control of flexible timing through low-dimensional neural manifolds. bioRxiv. 2021.
- 44. Landau ID, Sompolinsky H. Macroscopic fluctuations emerge in balanced networks with incomplete recurrent alignment. Phys Rev Research. 2021;3:023171.
- 45. Schuessler F, Mastrogiuseppe F, Dubreuil A, Ostojic S, Barak O. The interplay between randomness and structure during learning in RNNs. In: Larochelle H, Ranzato M, Hadsell R, Balcan MF, Lin H, editors. Advances in Neural Information Processing Systems. vol. 33. Curran Associates, Inc.; 2020. p. 13352–13362. Available from: https://proceedings.neurips.cc/paper/2020/file/9ac1382fd8fc4b631594aa135d16ad75-Paper.pdf.
- 46. Kadmon J, Timcheck J, Ganguli S. Predictive coding in balanced neural networks with noise, chaos and delays. In: Larochelle H, Ranzato M, Hadsell R, Balcan MF, Lin H, editors. Advances in Neural Information Processing Systems. vol. 33. Curran Associates, Inc.; 2020. p. 16677–16688. Available from: https://proceedings.neurips.cc/paper/2020/file/c236337b043acf93c7df397fdb9082b3-Paper.pdf.
- 47. Logiaco L, Abbott LF, Escola S. Thalamic control of cortical dynamics in a model of flexible motor sequencing. Cell Reports. 2021;35(9):109090. pmid:34077721
- 48. Valente A, Ostojic S, Pillow JW. Probing the Relationship Between Latent Linear Dynamical Systems and Low-Rank Recurrent Neural Network Models. Neural Computation. 2022;34(9):1871–1892. pmid:35896161
- 49. Mastrogiuseppe F, Ostojic S. Linking Connectivity, Dynamics, and Computations in Low-Rank Recurrent Neural Networks. Neuron. 2018;99(3):609–623.e29. pmid:30057201
- 50. Schuessler F, Dubreuil A, Mastrogiuseppe F, Ostojic S, Barak O. Dynamics of random recurrent networks with correlated low-rank structure. Phys Rev Research. 2020;2:013111.
- 51. Beiran M, Dubreuil A, Valente A, Mastrogiuseppe F, Ostojic S. Shaping Dynamics With Multiple Populations in Low-Rank Recurrent Networks. Neural Computation. 2021;33(6):1572–1615. pmid:34496384
- 52. Dubreuil A, Valente A, Beiran M, Mastrogiuseppe F, Ostojic S. The role of population structure in computations through neural dynamics. Nature Neuroscience. 2022;25:1–12. pmid:35668174
- 53. Schaffer ES, Ostojic S, Abbott LF. A Complex-Valued Firing-Rate Model That Approximates the Dynamics of Spiking Networks. PLOS Computational Biology. 2013;9(10):1–11. pmid:24204236
- 54. DePasquale B, Sussillo D, Abbott LF, Churchland MM. The centrality of population-level factors to network computation is demonstrated by a versatile approach for training spiking networks. Neuron. 2023;111(5):631–649.e10. pmid:36630961
- 55. Ginzburg I, Sompolinsky H. Theory of correlations in stochastic neural networks. Phys Rev E. 1994;50:3171–3191. pmid:9962363
- 56. Pinto DJ, Brumberg JC, Simons DJ, Ermentrout GB, Traub R. A quantitative population model of whisker barrels: Re-examining the Wilson-Cowan equations. Journal of Computational Neuroscience. 1996;3(3):247–264. pmid:8872703
- 57. Buice MA, Cowan JD, Chow CC. Systematic Fluctuation Expansion for Neural Network Activity Equations. Neural Computation. 2010;22(2):377–426. pmid:19852585
- 58. Mattia M, Del Giudice P. Population dynamics of interacting spiking neurons. Phys Rev E. 2002;66:051917. pmid:12513533
- 59. Montbrió E, Pazó D, Roxin A. Macroscopic Description for Networks of Spiking Neurons. Phys Rev X. 2015;5:021028.
- 60. Trousdale J, Hu Y, Shea-Brown E, Josić K. Impact of Network Structure and Cellular Response on Spike Time Correlations. PLOS Computational Biology. 2012;8(3):1–15. pmid:22457608
- 61. Ocker GK. Dynamics of Stochastic Integrate-and-Fire Networks. Phys Rev X. 2022;12:041007.
- 62. Wilson HR, Cowan JD. Excitatory and Inhibitory Interactions in Localized Populations of Model Neurons. Biophysical Journal. 1972;12(1):1–24. pmid:4332108
- 63. Shriki O, Hansel D, Sompolinsky H. Rate Models for Conductance-Based Cortical Neuronal Networks. Neural Computation. 2003;15(8):1809–1841. pmid:14511514
- 64. Wong KF. A Recurrent Network Mechanism of Time Integration in Perceptual Decisions. Journal of Neuroscience. 2006;26(4):1314–1328. pmid:16436619
- 65. Litwin-Kumar A, Doiron B. Slow dynamics and high variability in balanced cortical networks with clustered connections. Nature Neuroscience. 2012;15(11):1498–1505. pmid:23001062
- 66. Hennequin G, Vogels TP, Gerstner W. Optimal Control of Transient Dynamics in Balanced Networks Supports Generation of Complex Movements. Neuron. 2014;82(6):1394–1406. pmid:24945778
- 67. Baker C, Zhu V, Rosenbaum R. Nonlinear stimulus representations in neural circuits with approximate excitatory-inhibitory balance. PLOS Computational Biology. 2020;16(9):e1008192. pmid:32946433
- 68. Timón LB, Ekelmans P, Konrad S, Nold A, Tchumatchenko T. Synaptic plasticity controls the emergence of population-wide invariant representations in balanced network models. Physical Review Research. 2022;4(1).
- 69. Zenke F, Agnes EJ, Gerstner W. Diverse synaptic plasticity mechanisms orchestrated to form and retrieve memories in spiking neural networks. Nature Communications. 2015;6(1). pmid:25897632
- 70. Ostojic S, Brunel N. From Spiking Neuron Models to Linear-Nonlinear Models. PLOS Computational Biology. 2011;7(1):1–16. pmid:21283777
- 71. Tetzlaff T, Helias M, Einevoll GT, Diesmann M. Decorrelation of Neural-Network Activity by Inhibitory Feedback. PLOS Computational Biology. 2012;8(8):1–29. pmid:23133368
- 72. Ostojic S. Two types of asynchronous activity in networks of excitatory and inhibitory spiking neurons. Nature neuroscience. 2014;17. pmid:24561997
- 73. Harish O, Hansel D. Asynchronous Rate Chaos in Spiking Neuronal Circuits. PLOS Computational Biology. 2015;11(7):1–38. pmid:26230679
- 74. Nicola W, Clopath C. Supervised learning in spiking neural networks with FORCE training. Nature Communications. 2017;8(1). pmid:29263361
- 75. Kim CM, Chow CC. Learning recurrent dynamics in spiking networks. eLife. 2018;7:e37124. pmid:30234488
- 76. Kim R, Li Y, Sejnowski TJ. Simple framework for constructing functional spiking recurrent neural networks. Proceedings of the National Academy of Sciences. 2019;116(45):22811–22820. pmid:31636215
- 77. Miller KD, Fumarola F. Mathematical Equivalence of Two Common Forms of Firing Rate Models of Neural Networks. Neural Computation. 2012;24(1):25–31. pmid:22023194
- 78. Harvey C, Coen P, Tank D. Choice-specific sequences in parietal cortex during a virtual-navigation decision task. Nature. 2012;484:62–8. pmid:22419153
- 79. Kobak D, Pardo-Vazquez JL, Valente M, Machens CK, Renart A. State-dependent geometry of population activity in rat auditory cortex. eLife. 2019;8:e44526. pmid:30969167
- 80. Herbert E, Ostojic S. The impact of sparsity in low-rank recurrent neural networks. PLOS Computational Biology. 2022;18(8):1–21. pmid:35944030
- 81. Shao Y, Ostojic S. Relating local connectivity and global dynamics in recurrent excitatory-inhibitory networks. bioRxiv. 2022.
- 82. Ashby WR. Principles of the Self-Organizing System. In: Foerster HV, Zopf GW Jr, editors. Principles of Self-Organization: Transactions of the University of Illinois Symposium. Pergamon Press; 1962. p. 255–278.
- 83. Lerchner A, Latham PE. A unifying framework for understanding state-dependent network dynamics in cortex; 2015. Available from: https://arxiv.org/abs/1511.00411.
- 84. Rajan K, Abbott LF. Eigenvalue Spectra of Random Matrices for Neural Networks. Phys Rev Lett. 2006;97:188104. pmid:17155583
- 85. Tao T. Outliers in the spectrum of iid matrices with bounded rank perturbations. Probability Theory and Related Fields. 2011;155(1-2):231–263.
- 86. Gold JI, Shadlen MN. The Neural Basis of Decision Making. Annual Review of Neuroscience. 2007;30(1):535–574. pmid:17600525
- 87. Wohrer A, Humphries MD, Machens CK. Population-wide distributions of neural activity during perceptual decision-making. Progress in Neurobiology. 2013;103:156–193. pmid:23123501
- 88. Bagur S, Averseng M, Elgueda D, David S, Fritz J, Yin P, et al. Go/No-Go task engagement enhances population representation of target stimuli in primary auditory cortex. Nature Communications. 2018;9(1). pmid:29955046
- 89. Bondanelli G, Ostojic S. Coding with transient trajectories in recurrent neural networks. PLOS Computational Biology. 2020;16(2):1–36. pmid:32053594
- 90. Benaych-Georges F, Nadakuditi RR. The singular values and vectors of low rank perturbations of large rectangular random matrices; 2011. Available from: https://arxiv.org/abs/1103.2221.
- 91. Sompolinsky H, Crisanti A, Sommers HJ. Chaos in Random Neural Networks. Phys Rev Lett. 1988;61:259–262. pmid:10039285
- 92. Sommers HJ, Crisanti A, Sompolinsky H, Stein Y. Spectrum of Large Random Asymmetric Matrices. Phys Rev Lett. 1988;60:1895–1898. pmid:10038170
- 93. Susman L, Mastrogiuseppe F, Brenner N, Barak O. Quality of internal representation shapes learning performance in feedback neural networks. Phys Rev Research. 2021;3:013176.
- 94. Masquelier T, Thorpe SJ. Unsupervised Learning of Visual Features through Spike Timing Dependent Plasticity. PLOS Computational Biology. 2007;3(2):1–11. pmid:17305422
- 95. Masquelier T, Guyonneau R, Thorpe SJ. Competitive STDP-Based Spike Pattern Learning. Neural Computation. 2009;21(5):1259–1276. pmid:19718815
- 96. Kheradpisheh SR, Ganjtabesh M, Thorpe SJ, Masquelier T. STDP-based spiking deep convolutional neural networks for object recognition. Neural Networks. 2018;99:56–67. pmid:29328958
- 97. Zenke F, Bohté SM, Clopath C, Comşa IM, Göltz J, Maass W, et al. Visualizing a joint future of neuroscience and neuromorphic engineering. Neuron. 2021;109(4):571–575. pmid:33600754
- 98. Sompolinsky H, Crisanti A, Sommers HJ. Chaos in Random Neural Networks. Phys Rev Lett. 1988;61:259–262. pmid:10039285
- 99. Stimberg M, Brette R, Goodman DF. Brian 2, an intuitive and efficient neural simulator. eLife. 2019;8:e47314. pmid:31429824
- 100. Valente A, Ostojic S, Pillow JW. Probing the Relationship Between Latent Linear Dynamical Systems and Low-Rank Recurrent Neural Network Models. Neural Computation. 2022;34(9):1871–1892. pmid:35896161