
Correlation-based model of artificially induced plasticity in motor cortex by a bidirectional brain-computer interface

  • Guillaume Lajoie ,

    Affiliation University of Washington Institute for Neuroengineering, University of Washington, Seattle, WA, USA

  • Nedialko I. Krouchev,

    Affiliation Montreal Neurological Institute, McGill University, Montreal, QC, Canada


  • John F. Kalaska,

    Affiliation Groupe de recherche sur le système nerveux central, Département de neurosciences, Université de Montreal, Montreal, QC, Canada

  • Adrienne L. Fairhall,

    Affiliations University of Washington Institute for Neuroengineering, University of Washington, Seattle, WA, USA, Dept. of Physiology and Biophysics, University of Washington, Seattle, WA, USA, Dept. of Physics, University of Washington, Seattle, WA, USA

  • Eberhard E. Fetz

    Affiliations University of Washington Institute for Neuroengineering, University of Washington, Seattle, WA, USA, Dept. of Physiology and Biophysics, University of Washington, Seattle, WA, USA



Experiments show that spike-triggered stimulation performed with Bidirectional Brain-Computer Interfaces (BBCI) can artificially strengthen connections between separate neural sites in motor cortex (MC). When spikes from a neuron recorded at one MC site trigger stimuli at a second target site after a fixed delay, the connections between sites eventually strengthen. It was also found that effective spike-stimulus delays are consistent with experimentally derived spike-timing-dependent plasticity (STDP) rules, suggesting that STDP is key to driving these changes. However, the impact of STDP at the level of circuits, and the mechanisms governing its modification with neural implants, remain poorly understood. The present work describes a recurrent neural network model with probabilistic spiking mechanisms and plastic synapses capable of capturing both neural and synaptic activity statistics relevant to BBCI conditioning protocols. Our model successfully reproduces key experimental results, both established and new, and offers mechanistic insights into spike-triggered conditioning. Using analytical calculations and numerical simulations, we derive optimal operational regimes for BBCIs, and formulate predictions concerning the efficacy of spike-triggered conditioning in different regimes of cortical activity.

Author summary

Recent developments in Bidirectional Brain-Computer Interfaces (BBCI) not only allow the reading out of neural activity from cortical neurons, but also the delivery of electrical signals. These drive neural dynamics and shape synaptic plasticity, thus opening the possibility of engineering novel neural circuits, with important applications for clinical treatments of spinal cord injuries and stroke. However, synaptic changes in recurrent networks of neurons are hard to predict: they involve complex dynamic mechanisms on multiple temporal and spatial scales. Based on experiments, we develop a computational network model with plastic synapses that serves as a predictive tool for BBCI protocols. We show how the efficacy of BBCIs is influenced by cortical activity statistics and we propose state-based stimulation strategies for driving artificially-induced synaptic plasticity.


The cerebral cortex contains interacting neurons whose functional connections are modified through repeated patterns of activation. For example, motor and somatosensory cortices are typically organized into somatotopic regions in which localized neural populations are associated with muscles or receptive fields and show varied levels of correlated activity (e.g. [1–5]). Functional relationships between such neural populations are known to change over time, reinforcing relevant pathways [6–9]. These changes are the result of plasticity mechanisms acting on myriad synaptic connections between cortical neurons. Most of these synapses are relatively weak but can potentiate under the right conditions. However, it is not always clear what such conditions might be, or how one can interact with them for experimental or clinical purposes. Unanswered questions include how local synaptic plasticity rules lead to stable, emergent functional connections, and what role neural activity, and its statistics, plays in shaping such connections. While recent and ongoing work elucidates various plasticity mechanisms at the level of individual synapses, it is still unknown how these combine to shape the recurrently connected circuits that support population-level neural computations.

Bidirectional brain-computer interfaces (BBCI) capable of closed-loop recording and stimulation have enabled targeted conditioning experiments that probe these issues. In a seminal experiment [10], a BBCI called the Neurochip recorded action potentials of a neuron at one site in motor cortex (MC) and delivered spike-triggered stimuli at another for prolonged periods in freely behaving macaque monkeys. This conditioning was able to increase functional connectivity from neurons in the recorded site to those in the stimulated site (cf. [11]), as measured by electromyogram (EMG) of muscle activation evoked by intracortical microstimulation (ICMS) in MC. Importantly, the relative strength of induced changes showed a strong dependence on the spike-stimulus delay, consistent with experimentally derived excitatory spike-timing-dependent plasticity (STDP) time windows [12–14]. The effects of this protocol were apparent after about a day of ongoing conditioning, and lasted for several days afterwards. Similar spike-triggered stimulation showed that corticospinal connections could increase or decrease, depending on whether the postsynaptic cells were stimulated after or before arrival of the presynaptic impulses [15]. This BBCI protocol has potential clinical uses for recovery after injuries and scientific utility for probing information processing and learning in neural circuits.

The observations outlined above suggest that STDP is involved in shaping neural connections by normal activity during free behavior, and is the central mechanism behind the success of spike-triggered conditioning. However, this could not be verified directly as current experiments only measure functional relationships between cortical sites. Furthermore, interactions between BBCI signals and neural activity in recurrent networks are still poorly understood, and it remains unclear how BBCI protocols can be scaled up, made more efficient, and optimized for different experimental paradigms. For example, during spike-triggered stimulation, the spikes from a single unit are used to trigger stimulation of a neural population. While STDP can explain how the directly targeted synapses may be affected (i.e. from the origin of the recorded spikes to the stimulated population), the observed functional changes must rely on a broader scope of plastic changes involving other neurons that are functionally related to the recorded ones. What are the relevant network mechanisms that govern these population-level changes? How can a BBCI make use of population activity to trigger optimal stimulation?

Here we advance a modeling platform capable of capturing the effects of BBCI on recurrent, plastic neuronal networks. Our goal is to use the simplest dynamical assumptions in a “bottom-up” approach, motivated by neuronal and synaptic physiology, that enable the reproduction of key experimental findings from [10] at the functional level in MC and related muscles. In turn, we seek to use this model to provide insights into plasticity mechanisms in recurrent MC circuits (and other cortical regions) that are not readily accessible experimentally, as well as establish a theoretical framework upon which future BBCI protocols can be developed.

We build on a well-established body of work that enables analytical estimates of synaptic changes based on network statistics [16–22] and compare theoretical results with experiments and numerical simulations of a probabilistic spiking network. In our model, every neuron is excitatory—the modulatory role of inhibition in MC is instead represented implicitly by non-homogeneous probabilistic activation rates. While inhibition likely plays an important role in cortical dynamics, we consider the results obtained with excitation alone to be a significant finding, suggesting that a few key mechanisms can account for a wide range of experimental results. Using data from previous work as well as from novel experiments, we calibrate STDP synaptic dynamics and activity correlation timescales to those typically found in MC neural populations. The result is a spiking model with multiplicative excitatory STDP and stable connectivity dynamics which can reproduce three key experimental findings:

  1. We capture the two different timescales of onset and dissipation of conditioning effects with an emergent property of our modelling framework: naturally distinct “learning” and “forgetting” speeds.
  2. With a simple filtered readout of our model’s output, we reproduce conditioning effects as measured by muscle EMG recordings evoked by ICMS.
  3. We reproduce the overall dependence of conditioning on spike-triggered stimulation delay.

Furthermore, we make the following novel findings:

  1. The distinct temporal statistics that characterize different regimes of cortical activity have an important impact on the efficacy of BBCI protocols.
  2. Multi-synaptic mechanisms can lead to changes in circuit connectivity that are not predicted by STDP of synapses directly targeted by conditioning protocols.
  3. When stimulating a subset of a given neural population, we find that the overall efficacy of conditioning depends supra-linearly on the relative size of this subset.

Together, these results provide quantifiable experimental predictions. They arise from a theoretical framework that is easily scalable and serves as a potential testbed for next-generation applications of BBCIs. We discuss ways to use this framework in state-dependent conditioning protocols.


Our model’s key assumptions are: (i) we only consider interactions between excitatory neurons (but revisit the role of inhibition in the discussion); (ii) neurons in MC are sparsely and randomly connected by synapses that are plastic and follow a single multiplicative STDP rule; (iii) the spiking activity of neurons is modelled with interacting probabilistic processes.

We use a standard probability-based network model in which the instantaneous spiking probability rate λi(t) of neuron i changes in time and depends linearly on an external command νi(t), as well as on the exponentially filtered spikes of other neurons in the network. Synaptic weights modulate the latter inputs and are stored in the connectivity matrix J(t), with the entry Jij(t) representing the synapse from neuron j to neuron i. This connectivity is chosen to be sparse and is initially drawn at random between the N neurons in the network. All non-zero synapses evolve in time according to a multiplicative STDP rule W(Δt, Jij), but no new synapses are allowed to emerge (see Methods). Here, Δt is the interval between a pre-synaptic spike from neuron j and a post-synaptic spike from neuron i. Both axonal and dendritic spike transmission delays are sampled randomly for each synapse from two respective distributions. For numerical simulations, each neuron is treated as an independent Poisson process with rate λi(t). We refer the reader to Methods for more details about the model and its implementation.
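The dynamics just described can be sketched in a few lines. This is a minimal, hypothetical implementation: the Euler discretization, parameter values, and the filter time constant `tau_s` are our own assumptions, not taken from Methods.

```python
import numpy as np

rng = np.random.default_rng(0)
N, dt = 60, 1e-3            # network size and Euler time step (s); illustrative
tau_s = 5e-3                # synaptic filter time constant (s); assumed value
J = np.where(rng.random((N, N)) < 0.3, 0.05, 0.0)  # sparse random initial weights
np.fill_diagonal(J, 0.0)    # no self-connections
s = np.zeros(N)             # exponentially filtered spike trains

def step(nu_ext, s, J):
    """One simulation step: the rate is linear in the external command
    plus synaptically weighted, filtered spikes of the other neurons."""
    lam = np.maximum(nu_ext + J @ s, 0.0)   # instantaneous spiking probability rate (Hz)
    spikes = rng.random(N) < lam * dt       # independent Poisson (Bernoulli) draws
    s = s * np.exp(-dt / tau_s) + spikes / tau_s  # update exponential filter
    return spikes, s
```

Each call to `step` advances the network by one time step; STDP updates of J via the rule W(Δt, Jij) would be applied between steps.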

To model the BBCI conditioning experiments—marked by the symbol “†” throughout the paper—we divide our network of MC neurons into three groups of equal sizes as illustrated in Fig 1A. Neurons in each group α = a, b, c receive external inputs that have a group-specific modulating rate να(t) but are otherwise independent from neuron to neuron (except for synaptic interactions). We drive the network with idealized input rates in all numerical simulations. These are composed of truncated sinusoidal functions offset with an equal phase difference between each group (Fig 1B), and produce mean firing rates on the order of 10 Hz for most connectivity matrices. We later introduce more biologically relevant statistics for activation rates to match simple model attributes to experimental data. To mimic the spike-triggered stimulation experiments in [10], we assign group a to be the “Recording” site, group b to be the “Stimulation” site and c to be the “Control” site. Neuron i = 1 from group a is the “recorded” neuron, whose spikes trigger stimulation of every neuron in group b after a stimulation delay d. Stimulation of subsets of b is also considered.
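The spike-triggered stimulation loop itself amounts to a small delay queue. Below is a hedged sketch: the function name, the stimulus magnitude, and the representation of the stimulus as a transient rate boost to group b are illustrative assumptions.

```python
import numpy as np
from collections import deque

d_steps = 10          # stimulation delay d, in time steps (e.g. 10 ms at 1 ms resolution)
pending = deque()     # scheduled stimulation times (in steps)

def bbci_update(t, spikes, group_b, stim_rate_boost):
    """Spike-triggered stimulation: each spike of the recorded neuron
    (index 0, in group a) schedules a strong transient input to all of
    group b, delivered d_steps later."""
    if spikes[0]:
        pending.append(t + d_steps)
    boost = 0.0
    while pending and pending[0] == t:   # deliver any stimulation due now
        pending.popleft()
        boost = stim_rate_boost
    nu_extra = np.zeros(len(spikes))
    nu_extra[group_b] = boost
    return nu_extra
```

The returned rate increment would simply be added to the external command of group b at the current step.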

Fig 1.

(A) Diagram of the model network with three functional groups (a, b, c). In the stimulation protocol, the BBCI records spikes from a single neuron in group a and delivers a large stimulus to group b after a delay d. (B) Top: external rates νa(t) (solid black), νb(t) (dotted red) and νc(t) (dashed blue) delivered to each neural group. Middle: sample spike raster plot from a simulation subject to the rates plotted above, with sparse, homogeneous connectivity J(0) (N = 60). Bottom: same as middle but with spike-triggered stimulation turned “on”. Spikes from neuron number 1 (circled in red) trigger the stimulation of all of population b (neurons 20 to 40) after a delay of d = 10 ms. (C) Top: snapshot of connectivity matrix J(t) for the network in B after a spiking simulation of 5000 seconds (grayscale: white = Jmin, black = Jmax). Bottom: example time traces of individual synapses from the same simulation.

The relationship between spiking and synaptic activity in similar models of plastic networks has been extensively studied and predictions of synaptic dynamics under a variety of contexts were analytically derived (see e.g. [16–31]). We build on these results and contribute novel findings that enable representations of artificial connections from BBCIs in generalized activity regimes. Our approach largely follows that of [18] but contains key innovations which are described in detail in Methods.

For illustration, Fig 1C shows a snapshot of a connectivity matrix for a network of size N = 60 driven by the rates shown in Fig 1B and with spike-triggered stimulation turned “off”, as well as traces of synaptic weights Jij(t) evolving in time. It is evident that some structure emerges due to external commands, and that individual synapses show ongoing fluctuations around that structure. Below, we often make use of averaged synaptic matrices: 3 × 3 matrices that contain group-averaged synaptic weights. In Methods, we derive analytical expressions for them, allowing us to forego lengthy spiking simulations. These expressions make use of group-averaged cross-correlation functions of quantities that we use throughout: the external command rates, and the network spiking activity without (C(u)) and with (C†(u)) spike-triggered stimulation.
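The group averaging behind the 3 × 3 matrices can be written down directly. This sketch assumes three contiguous, equal-sized groups and averages only over existing (non-zero) synapses, matching the model's fixed sparse topology.

```python
import numpy as np

def group_averaged(J, group_size):
    """Average an N x N weight matrix over 3 equal groups -> 3 x 3 matrix.
    Entry (i, j) is the mean strength of existing synapses from group j
    to group i; absent (zero) synapses are excluded from the average."""
    Jbar = np.zeros((3, 3))
    for i in range(3):
        for j in range(3):
            block = J[i * group_size:(i + 1) * group_size,
                      j * group_size:(j + 1) * group_size]
            nz = block[block != 0]
            Jbar[i, j] = nz.mean() if nz.size else 0.0
    return Jbar
```

Applied to the simulated J(t) of Fig 1C, this yields the averaged matrices tracked throughout the paper.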

Our modelling approach makes important simplifying assumptions, and clarifying remarks are in order before we present our main results. First, we acknowledge that the absence of inhibitory activity is at odds with known dynamic mechanisms in cortex, where a balance between excitation and inhibition modulates activity and prevents run-away excitation (see e.g. [32–34]). Our goal, however, is to extract key mechanisms related to excitatory STDP in a setting that is as simple as possible. As is shown below, excitatory STDP dynamics are only influenced by activity statistics. We choose to model this activity using a simplified probabilistic model, as opposed to a complex balanced-state mechanism, for the sake of analytical tractability and to better understand the extent to which excitatory STDP alone can account for BBCI conditioning results. Activity modulation is built into probabilistic firing rates, and bounds on synaptic strengths prevent run-away activity. We revisit the implications of this modelling choice in the Discussion. Second, we note that many parameter values used throughout this study were chosen in the absence of experimental evidence and can have important impacts on specific dynamic signatures of our model. Particularly influential parameters include axonal and dendritic transmission delays (see e.g. [18, 19, 30, 31]), which we choose to lie in a physiologically reasonable range and to produce the desired dynamical regimes. However, our model can easily be adapted to make precise quantitative matches with future experimental data, and we expect the qualitative mechanisms reported below to persist in a wide range of regimes.

Emergent synaptic structure and impact of spike-triggered stimulation

When neurons in the network’s three groups a, b, c are subject to external commands ν(t) = (νa(t), νb(t), νc(t)) with stationary statistics, their averaged connectivity evolves toward an equilibrium that reflects these inputs’ correlations (cf. [24, 25]), although individual synapses may continue to fluctuate. This has been observed in a number of theoretical studies (see e.g. [18, 23]) and is consistent with the formation and dissociation of muscle assemblies in MC due to complex movements that are regularly performed [8]. The mean synaptic equilibrium strongly depends on the correlation structure of the external inputs ν(t) (see Fig 2A). Indeed, a narrow peak near the origin for correlations within groups, as is the case for the periodic external rates shown in Fig 1B, along with the absence of such peaks for cross-group correlations, contributes to strengthening synapses within groups and weakening those across groups. Under such conditions, what will be the impact of spike-triggered stimulation?

Fig 2.

(A) Mean network correlations for synaptic equilibria under normal activity (C(u), thick green line) and under spike-triggered stimulation (C†(u), dashed red line). Mean correlations for normal network activity with the synaptic equilibrium obtained from stimulation are shown in dashed blue. External correlations are also shown (thin black line). (B) Evolution of synaptic weight averages over time, computed with analytical estimates, color-coded as in C. External rates as in Fig 1B. Initiated at equilibrium (see Methods), spike-triggered stimulation (†) is switched “on” for the indicated period. Inset shows the evolution of 10 randomly chosen synapses from group a to group b, from the corresponding spiking network simulation. (C) Top: mean equilibrium matrices in normal activity and under spike-triggered stimulation (left/right resp.). Bottom: snapshot of the full simulated matrix J(t) once equilibria are reached (N = 60). (D) Relative synaptic changes between equilibria, for all α, β ∈ {a, b, c}.

Fig 2B shows the evolution of synaptic averages, analytically computed (see Methods) for a system initiated at the synaptic equilibrium associated with the external rates ν(t) from Fig 1B. The inset of Fig 2B shows the evolution of individual synapses from group a to group b from full network simulations. At 15 hours, the spike-triggered stimulation protocol is turned “on”, with a set delay d = 20 milliseconds, and synapses start changing. In ∼10 hours they reach a new equilibrium which differs from the initial one in a few striking ways, as seen in Fig 2C and 2D, where normalized differences are plotted for all pre- and post-group combinations. First, as expected and in accordance with experiments [10], the mean strength of synapses from group a (recorded site) to group b (stimulated site) is considerably increased (by about 80%). As described in more detail below, this massive potentiation relies on two ingredients: correlated activity within group a and an appropriate choice of stimulation delay d.

Perhaps more surprising are collateral changes in other synapses, although they are of lesser magnitude. While this was previously unreported, it is consistent with unpublished data from the original spike-triggered stimulation experiment [10]. It is unclear how many of these changes are due to the particular external rate statistics and other model parameters; we return to this question below when realistic activity statistics are considered.

We also show that spike-triggered stimulation induces novel correlation structures due to synaptic changes, as illustrated in Fig 2A, which plots the correlation functions of the external rates and of the network activity without (C(u)) and with (C†(u)) stimulation, along with the correlations one observes under baseline activity (i.e. without ongoing stimulation) but with the equilibrium connectivity obtained at the end of prolonged spike-triggered stimulation. It is clear that every interaction involving group b is considerably changed, most strongly that with group a, which includes the neuron used to trigger stimulation. More surprising is the increased cross-correlation of group b with itself, even though connectivity within group b is not explicitly potentiated by conditioning. In fact, it is slightly depressed (Fig 2D). This happens because connections from group a to group b are considerably enhanced, which causes the mean firing rate of group b to grow and its correlations to increase. Later, we explore similar collateral changes that occur because of multi-synaptic interactions. We see in the next section how these correlations translate into functional changes in the network.

Finally, Fig 2B shows a crucial feature of our model: the timescale of the convergence from the normal to the artificial equilibrium differs from that of the decay back to the normal equilibrium after the end of spike-triggered stimulation. In the conditioning experiments from [10, 15], the effect of spike-triggered stimulation was seen after about 5–24 hours of conditioning, while the changes decayed over 1 to 7 days. With the simplified external drives producing a reasonable mean firing rate of about 10 Hz for individual cells, an STDP learning rate of η = 10−8 was adequate to capture the two timescales of synaptic changes.

Thus, the simple excitatory STDP mechanisms in our model give rise to distinct timescales for increases and decay of synaptic connectivity strength produced by spike-triggered conditioning, in agreement with experimental observations. The emergence of distinct timescales was previously reported and studied in related modelling contexts [30, 31]. It should be noted that a number of parameters have been shown to affect the magnitude of timescale separation, such as the type of synaptic delays, the weight dependence of the STDP rule, and the firing rate of ongoing baseline activity. We reiterate that many of these parameters are not well resolved experimentally for macaque MC, and that we have made simplifying choices (see Methods) that can be adapted to new experimental data. Nevertheless, we expect our model to robustly produce distinct synaptic timescales that can be fitted by a single time-scaling parameter. This does not contradict the findings from human psychophysical studies that feedback error-driven motor adaptation may involve two or more different and independent parallel processes with different temporal dynamics for learning and decay of the motor skill [35]. Moreover, the cellular mechanisms in our model may have some relation to the different timescales proposed to underlie motor adaptation at the system level [35]. Such relationships could be further investigated by direct experimentation and appropriate simulations.

In summary, our model satisfies the first experimental observation from [10] we set out to reproduce (point 1 in the Introduction). Indeed, we find that two distinct timescales of synaptic changes (during and after conditioning) are an emergent property of our model, and tuning a single parameter is sufficient to fit the rates observed in experiments.

Simulating ICMS protocols: Recovering functional changes

Changes in correlations due to spike-triggered conditioning indicate that there is an activity-relevant effect of induced synaptic changes, which is measurable from spiking statistics (see Fig 2A). We now show how this is directly observable in evoked activity patterns that are consistent with intra-cortical microstimulation (ICMS) protocols employed in experiments. In [10], connectivity changes were inferred using ICMS and electromyogram (EMG) recordings of the monkey’s wrist muscles, as well as evoked isometric torques. To summarize, a train of ICMS stimuli lasting 50 ms was delivered to each MC site; simultaneously, EMG activity in three target muscles was recorded. The average EMG responses over repeated trials were documented for each of the three MC sites (i.e. groups a, b and c) before and after spike-triggered conditioning.

The experiment showed that prior to conditioning, ICMS stimulation of any MC site elicited well-resolved average EMG responses, largest in one muscle but not the other two. After conditioning, ICMS stimulation of the recording site (group a) not only elicited an EMG response in its associated muscle, but also in that of the stimulated site (group b). While it was conjectured that synaptic changes in MC were responsible for these effects, this could not be verified directly. Our model suggests that synaptic changes can indeed occur in MC-like networks, but it remains unclear if such changes can lead to the experimentally observed motor output changes. We address this by simulating EMG responses of our model, before and after spike-triggered conditioning (†).

Fig 3 shows a simulated ICMS protocol before (panel A) and after (panel B) spike-triggered stimulation conditioning. For each case, synaptic matrices are chosen from full network simulations and fixed (STDP is turned off), as shown in the top row of Fig 3 (reproduced from Fig 2C). To mimic the ICMS stimulus, we add a square-pulse input rate of 100 Hz lasting 50 ms to the external rate να(t) of a target group. An example of the spiking output of our network for α = a is shown in the top row of Fig 3, where the solid black bar below the graph shows the stimulus duration. Next, we filter the spike output of all neurons within a group using the synaptic filter ε(t) described in Methods, and add them to obtain population activity time-courses. Finally, we take these summed profiles and pass them through a sigmoidal non-linearity, (1 + exp(−a(x − b)))−1 where x is the filtered activity, meant to represent the transformation of neural activity to EMG signals. Here, we assume that the hypothetical motoneurons whose target muscle EMG is recorded receive inputs only from a single neural group and that network interactions are responsible for cross-group muscle activation. We label our modelled motor output EMG measurements by EMGα, α ∈ {a, b, c}.
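The readout stage is easy to make concrete. A minimal sketch of the sigmoidal transformation, using the parameter values a = 2.5 and b = 5 adopted in the text (the function name is ours):

```python
import numpy as np

def emg_readout(filtered_group_activity, a=2.5, b=5.0):
    """Sigmoidal map from summed, filtered group activity x to a simulated
    EMG signal: 1 / (1 + exp(-a * (x - b)))."""
    x = np.asarray(filtered_group_activity, dtype=float)
    return 1.0 / (1.0 + np.exp(-a * (x - b)))
```

At x = b the readout is exactly 0.5, while activity well below b produces a negligible simulated EMG, mimicking threshold-like muscle recruitment.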

Fig 3.

(A) ICMS with baseline connections. (B) ICMS after spike-triggered stimulation. Top left panels: connectivity matrix J. Top right panels: raster plot of the entire network as in Fig 1; the black bar below shows a 50 ms stimulation of all neurons in group a at 100 Hz. Bottom: filtered responses of group projections. Columns designate the neural group in MC being stimulated (see e.g. A for stimulation of group a) and rows designate the output EMG evoked from each group, respectively. Circles indicate the biggest change induced by spike-triggered conditioning.

We choose the nonlinearity parameters a = 2.5 and b = 5 to qualitatively reproduce the EMG responses seen in the experiment before spike-triggered conditioning: namely, well-resolved EMG responses are observed only when the relevant MC group α is stimulated. The bottom row of Fig 3 shows responses of each group, averaged over 15 trials each, when ICMS stimulation is delivered to a single group at a time. Panel A shows little cross-group response to ICMS stimulation before spike-triggered stimulation conditioning. However, after conditioning, stimulation of group a evokes an emergent response in the muscle activated from group b (see circles in Fig 3), as well as a small increase for group c.

These features were both present in the original experiments (see Figure 2 in [10]) and are consistent with the synaptic strengths across groups before and after conditioning. In addition to EMG, the authors of [10] also measured the effects of ICMS using a manipulandum that recorded isometric torques produced by evoked wrist motion. In our model, the newly evoked EMG responses, after conditioning, agree with the observation that torques evoked from the recorded site (group a) typically changed toward those previously evoked from the stimulated site (group b). As such, from now on we equate an increase in mean synaptic strength between groups to an increase in functional connectivity.

We conclude that our model satisfies the second experimental observation from [10] we set out to reproduce (point 2 in the Introduction). That is, a simple interpretation of evoked network activity—a filtered output of distinct neural group spiking activity—is consistent with the functional changes in muscle activation in ICMS protocols observed before and after conditioning.

Conditioning effects modulated by realistic activation statistics

Up to now, we used toy activation profiles ν(t) in the form of truncated sinusoidal bumps to drive neural activity (Fig 1B). In this section, we modify our simple model to incorporate experimentally observed cross-correlation functions, whenever possible, in an effort to eliminate artificial activation commands and capture more realistic regimes. As a result, we no longer rely on numerical simulations of spiking activity, but rather on analytically derived averaged quantities to explore a wide range of conditioning regimes. There is no longer a need to explicitly define the activation functions ν(t); instead, we rely solely on the cross-correlation functions of the external rates and of the network activity C(u). Below, we aim to construct versions of these functions that are as close to experimental data as possible.
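In the analytical framework cited above ([16–22]), the central quantity is an STDP-weighted integral of a cross-correlation function: to first order, the mean drift of a synapse is proportional to η ∫ W(u) C(u) du. A numerical sketch, with an assumed double-exponential STDP window and a Gaussian correlation peak (all parameter values are illustrative, not fitted):

```python
import numpy as np

u = np.linspace(-0.2, 0.2, 4001)                 # time lag (s)
du = u[1] - u[0]
tau_p = tau_m = 20e-3                            # STDP window time constants (assumed)
A_p, A_m = 1.0, 0.5                              # potentiation/depression amplitudes (assumed)
W = np.where(u > 0, A_p * np.exp(-u / tau_p),    # pre-before-post: potentiation
             -A_m * np.exp(u / tau_m))           # post-before-pre: depression

sigma = 17e-3                                    # correlation width (cf. best fit in Fig 4E)
C = np.exp(-u**2 / (2 * sigma**2))               # Gaussian correlation peak at zero lag

drift = float(np.sum(W * C) * du)                # ∝ mean synaptic drift (up to eta and rates)
```

With these parameters, a correlation peak centered at zero lag yields net potentiation because A_p > A_m; shifting or widening the peak changes the magnitude and even the sign of the drift, which is the mechanism behind the delay and width dependencies explored in this section.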

Before discussing spiking statistics, we note an important advantage of only considering mean synaptic strengths. For the spiking simulations shown above, we used networks of N = 60 neurons with a connection probability of p = 0.3, both of which are far from realistic numbers. Nevertheless, the important quantity for mean synaptic dynamics is the average summed strength of the synaptic inputs that a neuron receives from any given group: the product of the connection probability p, the group size N/3, and the mean synaptic weight. Notice that many choices of p and N lead to the same quantity, therefore creating a scaling equivalence. Moreover, additional scaling of Jmax can further accommodate different network sizes. So far, we assumed that every neuron receives an average of 6 synapses from each group. If each of these synapses were at the maximal value Jmax = 0.1, then simultaneous spiking from a pre-synaptic group would increase the post-synaptic neuron’s spiking probability by 60%, a number we consider reasonable.
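The scaling equivalence can be checked directly with the numbers given in the text:

```python
# Numbers from the text: p = 0.3, N = 60 (three groups of 20), Jmax = 0.1.
p, N, Jmax = 0.3, 60, 0.1
synapses_per_group = p * (N / 3)               # expected synapses received from one group (= 6)
max_summed_input = synapses_per_group * Jmax   # maximal summed input from that group (= 0.6)

# Any (p, N) pair with the same p * (N/3) product is equivalent at the level
# of group-averaged synaptic dynamics; rescaling Jmax extends the equivalence.
```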

Realistic activation statistics.

The changing dynamics of MC neurons in behaving macaques (e.g., [36–39]) yield cross-correlations between cell pairs whose preferred features can change depending on activity context [40–42]. We do not attempt to capture these subtleties with our simplified model. Rather, we create a family of cross-correlation functions leading to network activity statistics that roughly match averaged quantities observed experimentally, in an attempt to extract qualitative mechanisms.

To calibrate our model, we use estimates of two quantities from MC recordings: the mean cross-correlation between two neurons in the same group and the mean cross-correlation between neurons in different groups. We assume, as was the case thus far, that mean cross-correlations are identical for all within-group neuron pairs and, respectively, for all cross-group pairs. We use statistics of two types of recorded neuron pairs in MC: pairs for which there is a functional relationship, but that are not necessarily colocated, and pairs that are close to one another, but do not necessarily have a strong functional relationship. Data on the former type come from [41], and data on the latter come from novel experiments (see also notes in [40, 41]).

In [41], spike time cross-correlation functions between functionally related macaque MC neurons were documented. The recordings were gathered during a stereotyped motor task and 84 pairs out of 215 showed correlation features. The majority of these pairs showed Gaussian-shaped cross-correlations, with a mean width of the central peak (taken at half height) of 22 ms. However, the distances between neurons in these pairs were variable. In a novel experiment, recordings were obtained from a monkey implanted with a Utah array in primary MC and performing a 2D target tracking task while seated in a chair. See Methods for details. In this data set, there were only a few pairs of neurons recorded from the same electrode. Nevertheless, such well-resolved pairs typically showed Gaussian-shaped cross-correlations roughly centered at the origin. An important observation is that the width of the Gaussian peaks varied greatly. Fig 4A top and middle show both extremes where the cross-correlograms were fitted to Gaussian functions with a standard deviation σ ranging from ∼10 to ∼90 milliseconds. In contrast, neuron pairs from different electrodes showed more subtle structure in their cross-correlograms, and a wide variety of feature shapes (e.g. asymmetrical relationships, anti-correlations, etc.). Most of them showed very mild or no correlation features, as depicted by the typical flat cross-correlogram in the bottom of Fig 4A.

Fig 4.

(A) Experimentally obtained cross-correlograms of MC neurons in macaque monkey during a tracking task (blue) and Gaussian fits (red). Top, Middle: neuron pairs recorded by the same electrode (resolved by spike-sorting); Top shows the narrowest correlation peak (σ = 9.8 ms) and Middle the widest (σ = 89.3 ms). Bottom: two neurons recorded by distinct electrodes. (B) Illustration of a Gaussian-shaped external cross-correlation (thin black line) and the resulting network cross-correlation C(u) with width σ (thick blue line). (C) Relative differences for all group-averaged synapses as a function of stimulation delay d, for the model network fitted to the two extremal values of correlation width. Top: σ = 10 ms. Bottom: σ = 90 ms. (D) Relative differences of averaged synaptic strengths from group a to group b as a function of stimulation delay d and fitted correlation peak width σ. The black dotted line corresponds to the best fit plotted in E. (E) Superposition of the relative difference of a-to-b synaptic strengths and the normalized mean torque change from spike-triggered conditioning experiments on macaque monkeys. Experimental data from Figure 4 of [10]; error bars show the standard error of the mean. The best fit between model and experimental curves is obtained for σ ≃ 17 ms (black dotted line in D).

Both our novel experiment and [41] provide evidence about mean correlation structure, which is the defining quantity guiding synaptic dynamics in our model. The effects of the many cross-correlations among different neuron pairs on synaptic change are additive; consequently, many distributions of cross-correlations can lead to comparable synaptic changes, as long as they share the same mean width. Our goal for now is to better understand the impact of this mean correlation timescale on spike-triggered conditioning.

In light of these observations, we make the following simplifications concerning the model's idealized cross-correlations c̃(u): cross-correlations of neurons within a group are Gaussian while those of neurons across groups are flat: c̃(u) = B + L exp(−u²/(2σ̃²)) within a group, and c̃(u) = B across groups, (1) where B is a baseline, L is the height of the peak and σ̃ modulates the peak's width (i.e., its standard deviation). This choice of c̃(u) leads to a network equilibrium cross-correlation function C(u) with Gaussian shape. Fig 4B illustrates our idealized cross-correlation function and the resulting C(u). Our goal is to choose the parameters of c̃(u) such that C(u) matches experimental data as closely as possible.

From the analysis in Methods, it can be shown that the parameters B and L, together with the learning rate η, only influence the speed at which synaptic changes occur (see Eq (24)). We therefore set B and L to obtain a mean firing rate of about 10 Hz, as for the previously assumed activation rates ν(t), and η to obtain convergence timescales similar to those shown in Fig 2B. We henceforth set B = 0.17, L = 2.13 and η = 10⁻⁸. Since experiments show that the correlation width can take a range of values, we leave it as a free parameter in our regime exploration below. In Methods, we derive a function Φ(σ) such that setting the external correlation width σ̃ = Φ(σ) leads to a C(u) for which the width of average cross-correlations between neurons of the same group is σ. Note from Fig 4B that the resulting cross-correlation between neurons of distinct groups does not remain exactly flat, but is considerably less pronounced, as observed in experimental data.
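The correlation family of Eq (1) is simple enough to write down directly. The sketch below uses the quoted values B = 0.17 and L = 2.13 and treats the width as a free input; the mapping between external and network correlation widths derived in Methods is not reproduced here:

```python
import numpy as np

B, L = 0.17, 2.13  # baseline and peak height quoted in the text

def c_within(u, width):
    """Within-group external cross-correlation: baseline plus Gaussian peak."""
    return B + L * np.exp(-np.asarray(u)**2 / (2.0 * width**2))

def c_across(u):
    """Across-group external cross-correlation: flat baseline."""
    return B + 0.0 * np.asarray(u)

u = np.linspace(-200.0, 200.0, 401)             # lags in ms
assert np.isclose(c_within(0.0, 17.0), B + L)   # peak value at zero lag
assert np.allclose(c_across(u), B)              # flat across groups
```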

In summary, thanks to the idealized cross-correlations of Eq (1) and the function Φ (see Methods), we are now equipped to control the cross-correlation function C(u) of our network at synaptic equilibrium, and to study its influence on the efficacy of spike-triggered conditioning.

Cross-correlation width influences optimal stimulation delay.

We compute the relative change in mean synaptic strengths before and after spike-triggered stimulation, for a range of cross-correlation widths σ and stimulation delays d. Fig 4C shows these differences for all pre- and post-synaptic group combinations, for σ = 10 ms and σ = 90 ms. Notice that the correlation width has significant effects on the d-dependence of BBCI conditioning for all synapses. For now, we concentrate on the synapses directly targeted by the stimulation procedure, namely those from group a to group b, which naturally show the greatest changes. We discuss the changes incurred by other synapses below.

For both values of σ used in Fig 4C, the changes incurred by a-to-b synapses depend non-monotonically on the triggering delay d, admitting either a maximum or a plateau. However, the maximizing d value (or range) changes with σ, as does the sensitivity to the choice of d, i.e., how rapidly the curve reaches its maximum as d is varied. Fig 4D illustrates this dependence comprehensively, showing the relative change for a range of both σ and d. The main features of this relationship are: (i) optimal delays (d at peak) get larger as σ grows; (ii) sensitivity is mitigated by larger σ, i.e., large correlation timescales lead to optimal delays whose conditioning outcomes are less sensitive to perturbations.

The relative simplicity of our model's equations enables a mechanistic interpretation of these findings. Eq (23) shows that it is the integral of the network's correlations multiplied by the STDP rule that dictates synaptic dynamics. When artificial stimulation is delivered, components of this integral are shifted by d. Wider correlations imply that small changes in the shift d have weaker effects on the integral, leading to both points (i) and (ii) above.
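This integral picture is easy to reproduce numerically. The sketch below is our own simplified illustration, using a unit-amplitude double-exponential STDP rule with τ = 20 ms and an unnormalized Gaussian correlation (these are not the fitted parameters from Methods); it recovers points (i) and (ii):

```python
import numpy as np

def stdp(u, tau=20.0):
    # Unit-amplitude rule: potentiation at positive lags, depression at negative.
    return np.where(u > 0, np.exp(-u / tau), -np.exp(u / tau))

u = np.linspace(-400.0, 400.0, 8001)   # lag axis (ms), du = 0.1 ms
du = u[1] - u[0]

def drift(d, sigma):
    """Synaptic drift: correlation shifted by delay d, integrated against STDP."""
    corr = np.exp(-(u - d)**2 / (2.0 * sigma**2))
    return np.sum(stdp(u) * corr) * du

delays = np.arange(0.0, 151.0)
narrow = np.array([drift(d, 10.0) for d in delays])
wide = np.array([drift(d, 90.0) for d in delays])

assert delays[narrow.argmax()] < delays[wide.argmax()]   # (i) optimal delay moves right
assert (narrow >= 0.9 * narrow.max()).sum() < (wide >= 0.9 * wide.max()).sum()  # (ii) flatter peak
```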

When the stimulation of group b is triggered on spikes from the recorded neuron in group a, the plasticity incurred by synapses from other neurons in group a to any neuron in group b depends on how likely it was that a-neurons fired within a short interval of the recorded neuron's spikes. Correlation width measures the synchrony of spiking activity. For narrow correlations, neurons in group a fire closely together, and the delay required to induce the maximal plastic change depends directly on the combination of axonal delay and the offset of the STDP rule's potentiation peak (see Methods). Essentially, the peak of the cross-correlation needs to be shifted so that it aligns with the peak of STDP potentiation. For wider correlation functions, synaptic changes depend less on synchrony and more on interactions at larger time lags. Indeed, optimal delays are those that shift the cross-correlation's peak past the STDP peak, so that the latter is aligned with the right-hand tails of the cross-correlations. This relationship leads to more robust potentiation at larger stimulation delays.

This phenomenon constitutes our first novel finding (point 1. from Introduction): neuronal activity statistics have an important impact on the value of optimal spike-triggered stimulation delays (d), and can lead to synaptic potentiation that is more robust to deviations from that optimal value.

Qualitative reproduction of stimulation/plasticity dependence

It remains unclear whether the mechanisms described above are consistent with the experimentally observed relationship between stimulation delay d and the efficacy of spike-triggered conditioning in macaque MC. We investigated this by comparing efficacy, as measured by the change in torque direction evoked by ICMS before and after conditioning [10], to relative synaptic strength changes in our model. This is motivated by the demonstration above that synaptic strengths are well correlated with the amplitude of evoked muscle activations in an ICMS experiment (see Fig 3 and point b. in Introduction). Nevertheless, the following comparison between model and experiment is qualitative, and meant to establish a correspondence of (delay) timescales only.

We use the data originally presented in Figure 4 of [10], which describe the mean change in evoked wrist-torque direction by ICMS of the recorded site (group a) as a function of stimulation delay d. We plot the same data in Fig 4E, with the maximal change (in degrees) normalized to one. On the same graph, we plot the model's relative synaptic change as a function of d for the value of σ that offers the best fit (in L1-norm). This amounts to finding the best "σ-slice" of the graph in Fig 4D to fit the experimental data. We found that σ ≃ 17 ms gives the best correspondence.
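The fitting step itself is a one-dimensional search: pick the σ-slice of the model surface minimizing the L1 distance to the normalized experimental curve. The sketch below uses synthetic placeholder data and a hypothetical stand-in for the model surface, not the values from Figure 4 of [10]:

```python
import numpy as np

delays = np.array([10.0, 20.0, 30.0, 50.0])    # stimulation delays (ms)
experimental = np.array([0.4, 1.0, 0.8, 0.3])  # synthetic normalized torque changes

def model_curve(sigma, d):
    # Hypothetical stand-in for the model's relative synaptic change surface.
    return np.exp(-(d - 1.5 * sigma)**2 / (2.0 * sigma**2))

sigmas = np.arange(5.0, 50.0)
errors = [np.abs(model_curve(s, delays) - experimental).sum() for s in sigmas]
best_sigma = sigmas[int(np.argmin(errors))]
# With this synthetic data the best slice lands in the tens of milliseconds.
```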

We reiterate that this comparison is qualitative. Nevertheless, the correspondence between the d-dependence of experimentally observed functional changes and that of modelled synaptic ones is clear. As our model's spiking activity and STDP rule are calibrated with experimentally observed parameters (see Methods), this evidence suggests that our simplified framework is consistent with experiments. Importantly, σ = 17 ms is comparable to correlation timescales between functionally related MC neurons in macaque, as reported in [41] (see also [5, 40]) and discussed earlier. Such neurons were shown to have Gaussian-like correlation functions with a mean peak width at half height on the order of 22 ms, corresponding roughly to σ = 10 ms. While this is slightly lower than our estimate, we note that task-specific motion is known to introduce sharp correlations, whereas free behavior, rest and sleep states induce longer-range statistics [42]. The cross-correlation functions reported above were recorded during a stereotyped center-out reach task, in contrast to the spike-triggered conditioning experiment, which was conducted over a wide range of states, including sleep, which may lead to longer mean cross-correlation timescales [10]. A prediction of our model is that spike-triggered conditioning restricted to periods of long-timescale correlations in MC, such as during sleep [42], could lead to a more robust dependence of conditioning on stimulation delays (see Discussion).

This finding implies that our model successfully reproduces the third and last experimental observation from [10] (point c. from Introduction): using simplified cross-correlation functions of MC neural populations calibrated from experimental measurements, our model reproduces the relationship between the magnitude of plastic changes and the stimulation delay in a spike-triggered conditioning protocol.

Multi-synaptic and collateral effects

We now explore the effects of spike-triggered stimulation on collateral synaptic strengths, i.e., synapses other than the targeted a-to-b pathway. For a wide range of parameters, there is little change outside the a-to-b synapses. Indeed, when the cross-correlation width σ is moderate to large, spike-triggered stimulation has little effect on collateral connections for any stimulation delay. This is consistent with the robustness of a-to-b changes discussed in the previous section (see Fig 4D). Nevertheless, some localized features arise when σ is small. Fig 5A shows color plots of these changes as a function of d and σ, for the nine combinations of pre- and post-synaptic groups. We now review the mechanisms responsible for these indirect synaptic alterations.

Fig 5.

(A) Relative differences (i.e. before versus after spike-triggered stimulation) for mean synaptic strengths as a function of stimulation delay d and correlation peak width σ (as in Fig 4D). Subplots show outcome for all combinations of pre- and post-synaptic groups a, b, c, for 2.5 ≤ d ≤ 12.5 (ms) and 1 ≤ σ ≤ 50 (ms). Star indicates parameter choice for panel B. (B) Bar plots of mean synaptic strength relative difference for all combinations of pre- and post-synaptic groups, with d = 5 ms and σ = 17 ms. Top: normal network (values marked by a star in panel A). Middle: network where b-to-c synapses are inactivated (Jcb ≡ 0). Bottom: network where a-to-c synapses are inactivated (Jca ≡ 0). Red crosses indicate the artificially inactivated synapses and red arrows show the most drastic change due to inactivation, in comparison to the normal network (top).

First, b-to-b synapses become depressed, regardless of stimulation delay, for short correlation timescales. This is due to the synchronized population spiking produced by artificial stimulation which, because of our choice of dendritic and axonal delays (see Methods), promotes depression. Such synchronized spikes induce sharp δ-peaks in the network's cross-correlation (see Methods), and the combination of transmission delays shifts this peak toward the depression side of the STDP rule. When cross-correlations are narrow (i.e., small σ), their interaction with the STDP rule, which manifests as the integral in Eq (13), is more sensitive to the addition of such δ-peaks, resulting in overall depression. In contrast, when cross-correlations are wider, the addition of δ-peaks has a smaller effect, since a wider range of correlations contributes to the integrated STDP changes.

Second, the b-to-a synapses become potentiated for short delays d when σ is small enough. This happens because of a combination of factors. When the recorded neuron in group a spikes, the population-wide spike artificially elicited in b quickly follows and travels to the b-to-a synapses. This means that the spike of a single neuron in a effectively reaches all neurons in a, with an effect amplified by the strength of many synapses, shortly after the neuron originally fired. When cross-correlations among a-neurons are wide, the effect of this mechanism is diluted, similarly to the b-to-b synapses discussed above. However, when neurons in a are highly synchronous, this short-latency feedback produces synaptic potentiation of the b-to-a synapses.

Third, the synapses from both groups a and b onto the control group c are also potentiated when σ and d are small enough. This can be explained in two parts and involves di- and tri-synaptic mechanisms. When the recorded neuron in a fires a spike, a population-wide spike is artificially evoked in b shortly after, which travels down to the b-to-c synapses and elicits a response from neurons in c. Narrow cross-correlations imply that many spikes in a fall within a favorable potentiation window of spikes in c, thereby contributing to the potentiation of a-to-c synapses. To test this mechanism, we compute the relative synaptic changes due to spike-triggered stimulation in an altered network, where b-to-c synapses are inactivated (Jcb ≡ 0). Fig 5B shows the synaptic changes for the normal network (top) and this altered one (middle) for fixed parameters d = 5 ms and σ = 17 ms (the same σ that best fitted experiments, see Fig 4E). We can clearly see that without b-to-c synapses, the a-to-c synapses do not potentiate under spike-triggered stimulation. In turn, the strengthening of a-to-c synapses implies that spikes in a are more likely to directly elicit spikes in c, thereby repeating the same process in a different order for the b-to-c synapses. Note that without a-to-c synapses, the b-to-c synapses would not potentiate. Indeed, as was the case for b-to-b synapses, the combination of transmission delays does not conspire to promote direct potentiation following a population-wide synchronous spike. This is tested by inactivating the a-to-c synapses, which prevents the potentiation of b-to-c synapses, as shown in the bottom panel of Fig 5B.
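The inactivation tests described above amount to zeroing one block of the connectivity matrix. A minimal sketch follows; the contiguous-thirds group layout and the random weights are our assumptions for illustration:

```python
import numpy as np

N = 60
rng = np.random.default_rng(1)
J = (rng.random((N, N)) < 0.3) * 0.05          # sparse excitatory weights
groups = {"a": slice(0, 20), "b": slice(20, 40), "c": slice(40, 60)}

def inactivate(J, pre, post):
    """Return a copy of J with synapses from group `pre` to group `post` zeroed."""
    J2 = J.copy()
    J2[groups[post], groups[pre]] = 0.0        # row = post-synaptic, col = pre-synaptic
    return J2

J_lesioned = inactivate(J, "b", "c")           # the Jcb = 0 manipulation
assert np.all(J_lesioned[groups["c"], groups["b"]] == 0)   # b-to-c silenced
assert np.any(J_lesioned[groups["b"], groups["a"]] != 0)   # other pathways intact
```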

Finally, there is a moderate increase of the c-to-b synapses for stimulation delays d from about 5 to 20 ms. This arises because a-to-c synapses promote spikes in c that precede the stimulation of group b. The mechanism only works if neurons in a are tightly correlated, i.e., for small σ. Following the same procedure as above, we tested this mechanism by inactivating the a-to-c synapses, which prevents the potentiation of the c-to-b synapses.

We reiterate that our specific choice of synaptic transmission delays may influence the magnitude of the changes described above, because many of the outlined mechanisms rely on well-timed series of events. See e.g. [18, 19, 30, 31] for more details on the impact of delays. We expect that tuning our model to experimentally measured delay distributions, as they become available, will help validate and improve our framework's predictive power.

Together, these mechanisms form our second novel finding (point 2. from the Introduction): multi-synaptic mechanisms give rise to significant changes in collateral synapses during conditioning with short delays; these changes cannot be attributed to the STDP mechanisms directly targeted by the BBCI, but instead emerge from network activity. We note that the correlation timescale that best fits experiments (σ = 17 ms, see Fig 4E) falls within the parameter range where these effects occur.

Stimulating a subgroup of neurons

We have assumed that spike-triggered stimulation elicits population-wide synchronous spiking of all neurons in group b (Nstim in [10]). This is valid if the neural group b represents all the neurons activated by the stimulating electrode of a BBCI, but it is not necessarily representative of the larger population of neurons that share external activation statistics due to a common input νb(t). Indeed, some neurons that activate in conjunction with those close to the electrode may be far enough from it that they do not spike in direct response to a stimulating pulse. Alternatively, selective activation of neurons within a group can be achieved in a much more targeted fashion via optogenetic stimulation [43]. We now consider the situation in which only a certain proportion of neurons from group b is activated by the spike-triggered stimulus.

We denote the stimulated subgroup by bˢ and the unstimulated subgroup by bᵘ. All neurons in group b receive the same external rates νb(t) as before, but only a few (solid red dots in Fig 6A) are stimulated by the BBCI. Let Nb = N/3 be the number of neurons in group b, and let the parameter ρ, with 0 ≤ ρ ≤ 1, represent the proportion of stimulated neurons in b. The sizes of subgroups bˢ and bᵘ are then ρNb and (1 − ρ)Nb, respectively. We now adapt our analytical averaged model (23) to explore the effect of stimulation on subdivided synaptic equilibria. We verified that the analytical derivations used below match full spiking-network simulations, as before.

Fig 6.

(A) Network with group b subdivided into two subgroups: stimulated neurons (bˢ, red) and unstimulated neurons (bᵘ, white). (B) Top: equilibrium synaptic strengths between group a and both subgroups of b under normal activity and under spike-triggered stimulation (left/right, respectively). The proportion of stimulated neurons is set to ρ = 0.5. Bottom: relative synaptic changes between the two equilibria. Control group c not shown. (C) Relative synaptic changes as a function of ρ, the proportion of neurons in group b receiving stimulation. Plotted are subgroup-averaged synaptic changes for synapses from a to bˢ, from a to bᵘ, and from a to the entire group b (d = 30 ms).

Both subgroups of b receive the external rate νb(t), but only one receives spike-triggered stimulation. These changes are captured in the averaged analytical derivations by tracking the number of neurons in each subgroup in the averaging steps leading to Eqs (22) and (23), replacing N/3 by ρNb or (1 − ρ)Nb where necessary. This way, we obtain subgroup-specific synaptic averages (e.g., the mean synaptic strength from group a to the stimulated subgroup).

Fig 6B shows the group-averaged connectivity strength between group a (the recording site) and both subgroups of b, before and after spike-triggered stimulation. External cross-correlations are as in Fig 4B with σ = 20 ms, and the stimulation delay is set at d = 30 ms. The proportion of stimulated neurons in b is set to ρ = 0.5. The bottom of the same panel shows the normalized changes of mean synaptic strengths due to spike-triggered stimulation. As established for the original network (see Fig 2B), the biggest change occurs for synapses from a to the directly stimulated subgroup bˢ. However, the unstimulated subgroup bᵘ also shows a noticeable change in its incoming synapses from group a, in contrast to synapses of the other unstimulated group (c), which do not appreciably change. This means that sharing activation statistics with stimulated neurons is enough to transfer the plasticity-inducing effect of conditioning to a secondary neural population.

Next, we investigate how this phenomenon is affected by ρ, the proportion of neurons in b that receive stimulation. Fig 6C shows the subgroup-averaged normalized changes of synapses from group a to the stimulated and unstimulated subgroups of b, as ρ varies between 0 and 1. When more neurons are stimulated, the transferred effect on the unstimulated subgroup is amplified. As a result, the combined outcome over the entirety of group b grows even faster, i.e., supralinearly, with ρ, as shown in Fig 6C, where the combined b-averaged change in synaptic strength is plotted as a function of ρ.
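The supralinearity can be seen from the size-weighted average alone: the combined change over group b is ρ times the stimulated-subgroup change plus (1 − ρ) times the unstimulated-subgroup change, and both subgroup changes themselves grow with ρ. A toy sketch with assumed linear subgroup responses (not the model's actual curves):

```python
def combined_change(rho, ds=1.0, du=0.5):
    """Size-weighted average of subgroup changes; both grow with rho (assumed linear)."""
    stim_change = ds * (1.0 + rho)   # stimulated subgroup effect grows with rho
    unstim_change = du * rho         # transferred effect on unstimulated subgroup
    return rho * stim_change + (1.0 - rho) * unstim_change

# Supralinear scaling: doubling the stimulated fraction more than doubles
# the combined population effect.
assert combined_change(0.5) > 2 * combined_change(0.25)
```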

In summary, this phenomenon represents our third and final finding (point 3. in Introduction). Our model shows that neurons not directly stimulated during spike-triggered conditioning can be entrained into artificially induced plasticity changes by a subgroup of stimulated cells, and that the combined population-averaged effect grows supra-linearly with the size of the stimulated subgroup.



Discussion

In this study, we used a probabilistic model of spiking neurons with plastic synapses obeying a simple STDP rule to investigate the effect of a BBCI on the connectivity of recurrent cortical-like networks. Here, the BBCI records from a single neuron within one population and delivers spike-triggered stimuli to a different population after a set delay. We developed a reduced dynamical system for the average synaptic strengths between neural populations; these dynamics admit stable fixed points corresponding to synaptic equilibria that depend solely on the correlations of network activity and on the spike-triggered stimulation parameters (see Methods). In this framework, individual synapses may fluctuate with ongoing network activity, but their population average remains stable in time. We validated our findings with detailed numerical simulations of a spiking network and calibrated our results using experiments in macaque MC. To our knowledge, this is the first plastic spiking network model that includes recurrent network interactions to capture the effects of a BBCI.

We successfully reproduce key experimental results from [10] that describe synaptic changes due to spike-triggered conditioning. Specifically, we recover the two emergent timescales with which these changes occur (hours to days), we show that filtered evoked activity from our network mimics muscle EMG patterns produced by ICMS protocols, and we show that maximal changes in mean synaptic strength from the recording site to the stimulated site occur with stimulation delays in the 20–50 ms range, as was the case for experiments. Furthermore, we formulate three novel findings using our model. We outline the relationship between temporal statistics within neural populations and optimal stimulation parameters, we uncover multi-synaptic mechanisms that emerge from spike-triggered conditioning that are not directly predicted by a single-synapse STDP rationale, and we find that the stimulation of a subset of neurons within a population can lead to supra-linear scaling of effects.

Based on this, we formulate two main experimental predictions:

  1. Spiking statistics in a network greatly influence the effects of stimulation: a wider range of spike-stimulation delays leads to near-optimal evoked plasticity when correlations have longer timescales. We predict that restricting spike-triggered stimulation to states with such statistics, such as sleep, or potentially driving such correlations artificially, may provide more robust outcomes.
  2. For very short delays, stimulation may produce collateral effects in other synaptic connections due to polysynaptic interactions. We propose that, under special conditions, it may be possible to influence collateral synaptic pathways that are not directly targeted by a spike-triggered stimulation protocol.

Cortical activity: Driven vs emergent

In our model, common external inputs are needed to endow neural groups with the desired statistics. In cortical networks, it is unclear how much of observed activity is driven by external sources and how much is due to intra-network interactions. We argue that the use of prescribed external activations is appropriate, since we show that plasticity works to "learn" activation correlations (see also [18–22]), thereby guiding synaptic connections to promote the same spontaneous network statistics as the driven ones. Therefore, while external input rates are necessary in our model, the resulting network correlations C(u) reflect both emergent and external dynamics, and serve as a proxy for MC activity.

State-dependent stimulation: Designing adaptive stimulation strategies

An interesting outcome of our findings is that spike-triggered conditioning is strongly influenced by the cross- and auto-correlation structure C(u) of the source and target sites. Throughout the paper, we assumed that these statistics were stationary over the entire simulated period. However, it is well known that cortical circuits can show a wide range of activity regimes depending on the state of the animal. The statistics of MC neurons may be very different when the animal is awake and behaving freely, performing a precise motor task, or asleep [42]. While such states have limited durations, their timescales (minutes to hours) may be long enough to define locally stationary statistics that BBCI protocols could leverage to optimize desired effects. For example, stimulation during sleep, which is known to produce oscillation-rich activity with longer-range correlations [42, 44], could have more robust but slower effects, while stimulation during a specific task could have more targeted outcomes.

Scalable framework: From experimental to clinical applications

Our model can easily be scaled up to include multiple recording and stimulation sites, and different stimulation protocols, including, e.g. EMG-triggering [45] or paired-pulse stimulation [46]. It is also easily adaptable to optically-based stimulation which can target specific neurons within functional groups (see e.g. [43, 47, 48]). As it does not require costly simulations—only theoretical estimates—it is straightforward to apply optimization algorithms to find the best stimulation protocol to achieve a desired connectivity between cortical sites. Furthermore, it can easily incorporate closed-loop signals such as changes in recorded statistics in real-time. This framework offers a flexible testbed to help design experiments and clinical treatments.

Neural implants such as BCIs and BBCIs are under active development as they have significant potential for clinical use [49]. Among other outcomes, they are promising avenues for treatment of motor disabilities. Indeed, a BBCI capable of inducing plastic changes in cortical circuits could be used to promote novel synaptic pathways in order to restore functional connectivity after a stroke or injury [50, 51].

For such bidirectional neural implants to be successful, a number of real-time computational issues need to be resolved. Our modelling framework presents a step toward the development of rapidly computable guides for controllers, based on anatomical organization, measurable network properties and known physiological mechanisms such as STDP.

Model limitations and future steps

While it captures important features, our model omits some important physiological aspects of MC circuits. Nevertheless, we believe the simplicity of our model is a strength: it captures complex network-level plasticity changes with only excitatory activity and STDP. This suggests that excitatory mechanisms are central to plasticity artificially induced by a BBCI. However, this simplicity will likely be insufficient to reproduce more complex protocols that also recruit inhibitory populations. An example is paired-pulse conditioning, in which the BBCI stimulates several neural sites with different time delays. Additional biological realism will be key to expanding our bottom-up theoretical framework.

A number of steps can be taken to add physiological realism, each adding complications. For inhibition, there are technical issues with probabilistic spiking (e.g., unchecked inhibition can induce "negative" spiking probabilities), and the STDP mechanisms of inhibitory synapses are not well understood, although this is changing rapidly (see e.g. [52, 53]). Implementing our framework in a dynamical model setting, building on theoretical models of inhibitory plasticity (following e.g. [54, 55]), is a natural next step. However, with added realism comes additional complexity, and it is not clear whether analytically tractable results can still be derived. Nevertheless, preliminary numerical simulations of our model with added inhibition show that, although novel mechanisms emerge from inhibitory dynamics, the qualitative phenomena described in this article persist.

In addition, the inclusion of spatial structure and heterogeneous delay distributions based on anatomical data [9] is necessary to expand the framework to multiple cortical sites and the spinal cord. There is also evidence that more complex excitatory synaptic rules involving spike timing, such as the "triplet rule" [56] and history-dependent rules [57], may play important roles in cortex. Such synaptic mechanisms involve more complex statistical interactions between neurons, and even between a neuron and its own spiking history. Mathematical methods that extract the higher-order statistics needed to derive self-consistency equations for synaptic equilibria are currently being developed (e.g. [58]), and their application to BBCI modelling frameworks similar to ours is a promising avenue for future work. Finally, the inclusion of modulatory mechanisms, whether activity- or chemically-dependent, is crucial to capture phenomena such as synaptic consolidation and adaptation.

In summary, our reduced dynamical system approach is a promising basis upon which to build and ultimately to predict the effects of finer-grained cell-type specific and temporally structured activation patterns afforded by next-generation neural implants using electrical or optical stimulation.


Methods

This section contains details of our model, numerical simulations and analytical derivations of averaged synaptic dynamics. We closely follow prior literature [18], and develop key innovations to better suit our model's BBCI components.


Spiking activity.

We consider a network of N model neurons whose underlying connections are given by an N × N connectivity matrix J(t) with synaptic weights that evolve according to a plasticity rule (as described below). The presence of a directed synapse Jij from neuron j to neuron i is randomly and independently determined with probability p. Throughout this paper, we consider sparse networks with p = 0.3. Once an initial connectivity matrix J(0) is drawn, existing synapses are allowed to change but no new synapses can be created. All synapses are excitatory so that Jij ≥ 0.

The spike train of neuron i is a collection of spike times and can be written as a sum of Dirac-δ functions: Si(t) = Σk δ(t − ti,k), where ti,k is the kth spike time of neuron i. We are interested in the probability of observing a certain spike train Si(t) given a certain connectivity matrix J(t), the other neurons’ spike trains and the external driving signal νi(t). We assume that this spiking probability can be expressed as a time-dependent density λi(t), the parameters of which are governed by the ensemble average 〈Si(t)〉 and must be consistent across the network. For simulations, we assume λi(t) is a Poisson rate, but most of the derivations we make below are generalizable to other probabilistic or dynamical models of network spiking activity. For example, an integrate-and-fire spiking mechanism could be used, with a stochastic transfer function derived from Fokker-Planck equations using linear response theory, as is done in [59, 60].

The instantaneous firing rate at time t for the ith neuron in the network, subject to an external driving rate νi(t), is given by (2) λi(t) = νi(t) + Σj Jij (ε ∗ Sj)(t − aij), where ε(t) = H(t) τ−1 e−t/τ is a synaptic filter, with H denoting the Heaviside function and τ a synaptic time constant which we set to τ = 5 ms. This filter is normalized (∫ε(t)dt = 1) and causal (ε(t) = 0 for t < 0) so that only past spikes influence spiking probabilities at t. Finally, we introduce axonal delays aij, independently sampled for each synapse from a uniform distribution with mean ā ms and half-width σa = 1 ms, a range consistent with experimental studies (see e.g. [9, 14]). We simulate Eq (2) numerically by discretizing time in small bins of size δt and independently drawing spikes for neuron i in bin [t, t + δt] according to the probability λi(t)δt (see Simulation details below).

Plastic synapses.

For each pair of pre- and post-synaptic spike times tpre (from neuron j) and tpost (from neuron i), the synapse Jij, if present, will be changed according to a STDP rule W. This rule defines increments that are added to existing synaptic weights, which are updated every time a new spike is fired. We describe this update procedure below (see Eq (6)). For an interspike interval Δt = tpre − tpost at a synapse with weight Jij, this increment is given by: (3) W(Δt, Jij) = f+(Jij) W+(Δt) if Δt ≤ 0, and W(Δt, Jij) = −f−(Jij) W−(Δt) if Δt > 0. This is a multiplicative STDP rule since it is a product of a weight-dependent term f±(Jij) and a spike-time-dependent term W±(Δt). Fig 7A shows a plot of W(Δt, Jij) as a function of Δt for various values of Jij.

Fig 7.

(A) STDP function W(Δt, Jij). Dashed black line indicates the W+ and W− functions. Coloured lines indicate the full, weight-dependent rule W for different synaptic weight values Jij shown in the color bar. (B) Top: Periodic external rates delivered to each neural group (as in Fig 1). Middle: sample spike raster plot from a simulation subject to rates plotted above with sparse, homogeneous connectivity J(0) (N = 60). (C) Simulated synaptic evolution J(t) with Jmin = 0, Jmax = 0.1 and η = 10−8 under baseline conditions (stimulation “off”) as in B. Left: initial connectivity matrix with connectivity probability p = 0.3 and Jij(0) = Jmax/4 (grayscale: white = Jmin, black = Jmax). Middle: 10 evolving weights J(t) randomly selected from each of 9 possible pre- and post-synaptic group combinations. Color indicates whether a synapse connects neurons within the same group (orange) or across different groups (purple). Right: synaptic matrix J(t) at the end of the simulation, once group-averaged weights have reached steady state.

The functions W+(Δt) and W−(Δt) describe the potentiation and depression of a synapse, respectively, as a function of spike timing. We assume Hebbian plasticity and require that these functions be both positive and decay to zero as |Δt| grows, so that a pre-synaptic spike preceding a post-synaptic one leads to potentiation while the opposite leads to depression, in accordance with experimental estimates for excitatory STDP [12–14]. Many functions W± may lead to similar results; here we choose (4) W±(Δt) = A± exp(−|Δt|/τ±), as in [16, 18] and others, where A+ and A− indicate maximal increments and τ+ and τ− are decay time constants. Experimental evidence suggests that W+ and W− are not identical, so that STDP is not symmetric: the maximum potentiation increment is often larger than its depression counterpart (A+ > A−), while the decay lengths follow the opposite trend (τ− > τ+) [12]. As in [16, 18], respecting generally accepted ranges from various animal models [12, 13, 61, 62], we set: A+ = 30, A− = 20, τ+ = 8.5 ms, τ− = 17 ms.

The terms f+(Jij) and f−(Jij) control the magnitude of plastic increments as a function of synaptic strength and decay to zero as Jij approaches set bounds Jmax and Jmin respectively. Thus, synaptic weights can only asymptotically approach these bounds under Eq (3). Here, we choose these functions to be (5) f+(Jij) = (Jmax − Jij)^γ and f−(Jij) = (Jij − Jmin)^γ, where the exponent γ controls the strength of the weight dependence. We set γ = 0.1, Jmin = 0, Jmax = 0.1 for our simulations and discuss maximal synaptic strengths in the Theory section below. This weight-dependent mechanism reflects the fact that synapses cannot be infinitely strong and that, as a synapse potentiates, it becomes harder to potentiate further as the resources to do so deplete (e.g. availability of neurotransmitters and vesicles). This is consistent with experimental findings [61, 62]. Rather than introducing unrealistic rate-dependent plasticity terms and/or imposing periodic rescaling of synapses, as is often done in the modelling literature, the weight-dependent terms in Eq (3) enable stable connectivity equilibria, akin to homeostasis, while remaining biologically plausible.
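As a concrete illustration, the full rule of Eqs (3)–(5) can be sketched in a few lines of Python, using the parameter values quoted above (the function name is ours, and assigning exact coincidences Δt = 0 to the potentiation branch is our assumption, not stated in the text):

```python
import numpy as np

# Parameter values quoted in the text (Eqs (3)-(5))
A_PLUS, A_MINUS = 30.0, 20.0        # maximal increments
TAU_PLUS, TAU_MINUS = 8.5, 17.0     # decay time constants (ms)
GAMMA, J_MIN, J_MAX = 0.1, 0.0, 0.1

def stdp_increment(dt, J):
    """Multiplicative STDP rule W(dt, J) with dt = t_pre - t_post (ms).

    dt <= 0 (pre before post): potentiation, gated by f+(J) = (Jmax - J)^gamma.
    dt > 0 (post before pre): depression, gated by f-(J) = (J - Jmin)^gamma.
    """
    if dt <= 0:
        return (J_MAX - J) ** GAMMA * A_PLUS * np.exp(-abs(dt) / TAU_PLUS)
    return -((J - J_MIN) ** GAMMA) * A_MINUS * np.exp(-abs(dt) / TAU_MINUS)
```

Note that the weight-dependent gates make the increment vanish smoothly as a weight approaches either bound, which is what produces the soft saturation discussed above.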

In addition to the axonal delays introduced earlier, we add dendritic delays dij, independently sampled from a uniform distribution with mean d̄ ms and half-width σd = 1 ms as an estimate of physiological values (see e.g. [14]). These account for the time taken by a spike in neuron i to travel back up the dendrites and trigger STDP effects at its synapses. Therefore, for every pair of spike times (tpre, tpost), the synapse Jij will be updated at time tpost + dij: (6) Jij → Jij + η W(Δt, Jij), where η ≪ 1 scales the overall STDP learning rate and is set to η = 10−8 for simulations. It is chosen by directly comparing our model’s predicted timescale for synaptic changes with the BBCI experiment we aim to capture [10] (see also Results).

Functional groups and spike-triggered stimulation.

As illustrated in Fig 1A and outlined in Results, we separate our network into three equally sized groups (a, b and c) in which neurons receive the same external drive να(t) (α = a, b or c). These groups are indexed in order so that neurons i = 1, …, N/3 are in group a, etc. For exposition, Fig 7C shows time-traces of the synaptic weights J(t), with J(0) randomly initialized as a sparse matrix with homogeneous weights Jinit = Jmax/4. We can already see some connectivity structure emerge from network activity, with synapses that connect neurons within the same group (in orange) gradually increasing and synapses connecting neurons of different groups (in purple) decreasing.

To mimic the spike-triggered stimulation experiments in [10], we assign group a to be the “Recording” site, group b to be the “Stimulation” site and c to be the “Control” site. Neuron i = 1 from group a is the “recorded” neuron, whose spikes trigger stimulation of every neuron in group b after a stimulation delay d. Stimulation of subsets of b is also considered in Results. Thus, we impose the following interactions: (7) Si†(t) = Si(t) + S1(t − d) for i ∈ b, and Si†(t) = Si(t) otherwise, where “†” indicates the presence of stimulation. The bottom panel of Fig 1B shows the simulated spiking activity of a network with this stimulation protocol turned “on” and a delay d = 10 ms. Further effects of spike-triggered stimulation conditioning are described in Results.

Simulation details.

Spiking network simulations were implemented using the Python and MATLAB programming languages with the Mersenne Twister algorithm [63] for pseudo-random number generation. A simulation of system (2) is performed by discretizing time in bins of width δt = 0.1 ms and pseudo-randomly drawing spikes for each neuron i according to the probability λi(t)δt. For each new spike drawn, all affected synapses are updated by summing rule Eq (6) over all temporally-filtered preceding spikes.

For system (7), in the presence of spike-triggered stimulation, every time a spike from neuron 1 is drawn, the spiking probability of every neuron in group b is artificially set to 1, after a delay d. Unless otherwise noted, the parameters used for all spiking simulations are listed in Table 1.
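The discretized update of systems (2) and (7) can be sketched as follows. This is a deliberately simplified version with names of our own choosing: constant external rates, no axonal or dendritic delays, and frozen synapses (no plasticity), meant only to show the event loop and the spike-triggered forcing of group b:

```python
import numpy as np

def simulate(J, nu, dt=0.1, T_ms=500.0, d_ms=10.0, stim_on=True, seed=3):
    """Minimal time-stepped simulation of Eq (2) with the spike-triggered
    stimulation of Eq (7). Simplifications (ours, not the text's): delays
    are omitted, nu is a constant rate vector (spikes/ms), J is frozen."""
    rng = np.random.default_rng(seed)
    N = J.shape[0]
    tau = 5.0                                # synaptic time constant (ms)
    n_steps = int(T_ms / dt)
    stim_lag = int(d_ms / dt)                # stimulation delay d, in bins
    b = slice(N // 3, 2 * N // 3)            # "Stimulation" group
    trace = np.zeros(N)                      # spikes filtered by epsilon
    spikes = np.zeros((n_steps, N), dtype=bool)
    for k in range(n_steps):
        lam = nu + J @ trace                 # instantaneous rate, Eq (2)
        s = rng.random(N) < np.clip(lam * dt, 0.0, 1.0)
        if stim_on and k >= stim_lag and spikes[k - stim_lag, 0]:
            s[b] = True                      # neuron 1 spike forces group b
        spikes[k] = s
        trace = trace * (1.0 - dt / tau) + s / tau   # exponential filter
    return spikes
```

With stimulation on, every spike of neuron 1 is followed exactly d ms later by a spike in every neuron of group b, mirroring the protocol of Eq (7).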


Our goal is to derive an analytical expression for the dynamics of the synaptic matrix J(t), to allow us to predict the timescale of plastic changes and relative equilibria. Eqs (2) and (6) show that changes of the synapses stored in J(t), modulated by the spiking activity of the network, depend on the external rates ν(t) and the synaptic matrix J(t) itself. This leads to a self-consistent relationship between J(t) dynamics and network spiking activity. Earlier work [18–22] describes a framework to effectively decouple these interactions, which we follow and adapt to our needs.

The key idea, originally proposed in [16], is to use a separation of timescales, assuming that synapses are locally constant over time intervals [t1, t2] on which the network has stable and stationary spiking statistics. It follows that the accumulated increment for the synapse Jij over that interval, denoted ΔJij = Jij(t2) − Jij(t1), is given by the sum of all discrete plastic “steps” due to spike-time pairs arriving at the ij-synapse (ignoring delays for clarity): (8) ΔJij = η Σ W(tpre − tpost, Jij), where the sum runs over all spike pairs with tpre, tpost ∈ [t1, t2]. An equivalent formulation of Eq (8) uses the density of interspike intervals over the interval [t1, t2]. To this end, let Cij(u) be the count of spike pairs separated by u occurring over the interval [t1, t2], which is the (non-normalized) cross-correlation between neuron j and neuron i. Then, (9) ΔJij = η ∫ W(u, Jij) Cij(u) du. Eq (9) outlines the basic mechanism governing the evolution of the synaptic matrix J(t) over consecutive time intervals. The challenge is to express Cij(u), itself nontrivially dependent on J(t) and the external rates ν(t), in closed form, so that Eq (9) can be iterated by updating J at each step.
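Numerically, once a cross-correlation histogram Cij(u) has been compiled on a discrete lag grid, the integral of Eq (9) reduces to a weighted sum. A sketch, with our own function name and the STDP parameters quoted earlier in the text:

```python
import numpy as np

def delta_J(C, u_grid, J, eta=1e-8):
    """Accumulated synaptic change over an epoch (Eq (9)):
    Delta J_ij = eta * integral of W(u, J_ij) C_ij(u) du,
    approximated on the lag grid u_grid (ms). C holds the (non-normalized)
    cross-correlation counts at those lags."""
    A_p, A_m, tau_p, tau_m = 30.0, 20.0, 8.5, 17.0
    gamma, J_min, J_max = 0.1, 0.0, 0.1
    # STDP rule of Eqs (3)-(5) evaluated on the whole lag grid
    W = np.where(
        u_grid <= 0,
        (J_max - J) ** gamma * A_p * np.exp(-np.abs(u_grid) / tau_p),
        -((J - J_min) ** gamma) * A_m * np.exp(-np.abs(u_grid) / tau_m),
    )
    du = u_grid[1] - u_grid[0]
    return eta * np.sum(W * C) * du
```

With the asymmetric parameters above, an uncorrelated (flat) histogram produces net depression, while a peak at negative lags (pre leading post) produces net potentiation, which is the driving intuition behind spike-triggered conditioning.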

The authors of [18] present detailed descriptions of this iteration process, and of ways to estimate the cross-correlations Cij(u). However, a number of limitations of this original derivation prevent us from using it directly: (i) The assumption that external rates νi(t) and νj(t) are δ-correlated; we require arbitrary cross-correlation functions to fit the model to experiments. (ii) Artificial spike-triggered stimulation induced by the BBCI introduces non-trivial statistical dependencies between groups and single neurons.

In the following, we adapt the approach of [18] to address these issues. First, we describe parametric constraints that ensure that spiking activity remains stable and does not run away in a self-excitation cascade. Second, we formally describe the timescale separation argument outlined above, followed by estimates of network cross-correlations, both for normal activity and in the presence of spike-triggered stimulation. Finally, we formulate and analyze a dynamical system for averaged synaptic dynamics.

Network stability.

Here, we find constraints on synaptic weight bounds to ensure that spiking activity does not grow unbounded due to run-away self-excitation.

To see how this happens, assume that J is fixed and that external rates are constant in time: ν(t) ≡ ν0. It follows that the mean network rates can be approximated by the following expansion: (10) 〈λ〉 = (I + J + J² + J³ + …) ν0. If the terms in this expansion do not decay fast enough, the average activity rates quickly diverge. However, it is easy to show that this can be prevented if we require the eigenvalues of J to remain bounded. For more details about precise stability conditions, see [18] and references therein. For the remainder of this paper, we assume the sufficient, but not necessary, condition that all eigenvalues of J(t) remain within the complex unit circle at all times, and adjust the bounds Jmin and Jmax accordingly. This means that Jn → 0 as n → ∞, and that for any external rates ν(t) with stationary statistics, mean rates are well defined.
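This condition is straightforward to check numerically. A sketch (names are ours), which also sums the expansion of Eq (10) in closed form using the identity (I + J + J² + …) = (I − J)⁻¹ for stable J:

```python
import numpy as np

def is_stable(J):
    """Sufficient stability condition used in the text: all eigenvalues of
    J lie strictly inside the complex unit circle, so that J^n -> 0 and
    the expansion of Eq (10) converges."""
    return np.max(np.abs(np.linalg.eigvals(J))) < 1.0

def mean_rates(J, nu0):
    """Mean network rates of Eq (10): (I + J + J^2 + ...) nu0, summed in
    closed form as (I - J)^{-1} nu0 when J is stable."""
    return np.linalg.solve(np.eye(J.shape[0]) - J, nu0)
```

In practice this is how the bounds Jmin and Jmax can be vetted: scale Jmax until the spectral radius of the worst-case connectivity stays below one.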

Separation of timescales.

We now present mathematical arguments in support of the simplifying assumption that synaptic matrices remain constant on short time intervals, since synaptic changes occur at slower timescales than spiking activity. In turn, this leads to the formulation of well-defined cross-correlation quantities for network activity. The arguments presented here closely follow those originally presented in [16] and used throughout [18].

The timescale on which meaningful plastic changes occur is much longer than that of spiking activity, a separation imposed by η ≪ 1, which implies that only tiny plastic increments can occur for each pair of pre- and post-synaptic spikes. This allows one to effectively separate the spiking dynamics from the synaptic dynamics by assuming that over a reasonably long time period of length T (on the order of a few seconds), synaptic weights Jij(t) are approximately constant (≈ Jij). As a result, it is possible to derive dynamic equations for the synaptic connectivity matrix J on the timescale given by T. We set T = 2 seconds for the remainder of this paper.

As defined in Eq (2), λi(t) gives the instantaneous spiking probability of neuron i at time t. This probability depends both on the external rate νi(t) and on past spikes from other neurons in the network, which are themselves random processes governed by λj(s) for s < t. In what follows, we consider the ensemble average over all these processes to get the expected rate 〈λi(t)〉. For the sake of convenience, we drop the brackets and write (11) λi(t) = νi(t) + Σj Jij (λj ∗ εa)(t), where we define the delayed convolution as (λj ∗ εa)(t) = ∫0∞ ε(s) λj(t − s − aij) ds. Eq (11) is defined over plasticity epochs [t − T, t] where J is kept constant. At the end of this epoch, we compute the changes in J.

The density count of spike pairs separated by u over the epoch [t − T, t] is given by the (non-normalized) cross-correlation between neurons i and j: Cij(u; t) = ∫t−Tt λi(s + u) λj(s) ds. In matrix form, we get (12) C(u; t) = ∫t−Tt λ(s + u) λ(s)⊤ ds, with λ(t) = (λ1(t), …, λN(t)), where ⊤ denotes matrix transposition. For “post/pre” neurons (i, j), the expected number of spike pairs separated by Δt ∈ [u1, u2] in epoch [t − T, t] is given by ∫u1u2 Cij(u; t) du. It follows that the expected incremental change for the synapse Jij, after the epoch [t − T, t], is given by (13) 〈ΔJij〉 = η ∫−∞∞ W(u, Jij) Cij(u; t) du. Expression (13) illustrates that all relevant information needed to predict expected synaptic changes is contained in the network’s cross-correlation C(u; t).

Cross-correlations for normal network activity.

We now derive estimates for cross-correlations of network spiking activity C(u) in terms of J and the statistics of the external driving rates ν(t), so that network statistics can be expressed without implicit references to spiking probabilities λ(t). While a similar quantity is used in [18], we introduce a new expansion-based approach that generalizes to arbitrary external statistics.

Consider the (non-normalized) cross-correlation between external rates, Cν(u; t) = ∫t−Tt ν(s + u) ν(s)⊤ ds, defined as in Eq (12). We assume that such “external” cross-correlations are stationary in time and therefore omit the dependence on t: Cν(u; t) ≡ Cν(u). For our example rates ν(t) from Fig 1C, Cν(u) is plotted in Fig 8 (thin black line). How will the network react to external drives with these statistics?

Fig 8. Correlation functions averaged over all pairs of connected cells in the respective group combinations: external rate correlations Cν(u) (solid black); estimates of network correlations from full spiking network simulations (orange line, with shaded green area showing the standard deviation of 5-second batched estimates); and analytical estimates of order 4 (dashed green).

(A) For network under normal activity. (B) For network under spike-triggered stimulation (†). Boxes in the middle show zooms from C(†)(u) from A and B.

We start by replacing every delay in the network by its mean ā. As long as the delay distributions are not too wide, this gives accurate results. This enables us to write Eq (11) in matrix form: (14) λ(t) = ν(t) + J (λ ∗ εa)(t), where εa(t) is shorthand for the mean-delayed kernel ε(t − ā). Ideally, one would like to isolate λ from Eq (14), but this is difficult because of the convolution with the synaptic kernel ε. In [18–22], the authors circumvent this problem by performing calculations in Laplace space, thus transforming the convolution into a product and enabling a self-consistent equation for λ(t). However, this required important assumptions about the shape of external cross-correlations so that inverse transforms remain tractable. In our case, we need to remain as general as possible, which makes solving a self-consistent equation exactly challenging.

To circumvent this problem, we rely on the stability condition that J must have eigenvalues within the unit circle, and use this to produce expansions that we truncate at higher powers of J. We start by substituting Eq (14) into the expression (12) for C(u): (15) C(u; t) = ∫t−Tt [ν + J(λ ∗ εa)](s + u) [ν + J(λ ∗ εa)](s)⊤ ds. To simplify further, we replace the cross-correlation terms involving convolved rates with delayed correlations, i.e. (λ ∗ εa)(s) ≈ λ(s − ā), which is justified for T large enough and ν(t) fluctuating on longer timescales than τ (ε’s time-constant). In turn, we can further expand λ in powers of J to obtain: (16) λ(t) ≈ Σk=0n Jk ν(t − kā), with the remainder vanishing as n → ∞. A similar expression can be derived for 〈ν(s)(λ ∗ εa)(s + u)〉. Combining Eqs (15) and (16), we obtain the following truncated, nth-order expansion formula: (17) C(n)(u; t) = Σk,l=0n Jk Cν(u − (k − l)ā) (J⊤)l, which provides an algorithm to estimate self-consistent network cross-correlations. Fig 8A shows C(u) averaged over functional groups, both estimated from spiking simulations and obtained from Eq (17) truncated to fourth order; they agree nicely.
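A possible implementation of this truncated expansion, with names of our own choosing; the mean delay value ā = 2 ms is a placeholder for illustration, not a value taken from the text:

```python
import numpy as np

def network_corr(J, C_nu, u_grid, a_bar=2.0, order=4):
    """Truncated nth-order expansion of the network cross-correlation:
    C(u) ~ sum_{k,l<=order} J^k C_nu(u - (k - l)*a_bar) (J^T)^l.
    C_nu has shape (len(u_grid), N, N): the external cross-correlation
    sampled on the lag grid u_grid (ms). a_bar is the mean axonal delay
    (placeholder value)."""
    du = u_grid[1] - u_grid[0]
    powers = [np.linalg.matrix_power(J, k) for k in range(order + 1)]
    C = np.zeros_like(C_nu)
    for k in range(order + 1):
        for l in range(order + 1):
            shift = int(round((k - l) * a_bar / du))
            shifted = np.roll(C_nu, shift, axis=0)   # delay-shift in u
            C += powers[k] @ shifted @ powers[l].T
    return C
```

Setting J = 0 (or truncating at order 0) recovers the external correlations exactly, while higher orders add the echoes produced by recurrent synaptic routing.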

Cross-correlations for spike-triggered stimulated networks.

We next adapt our expansion-based approach to derive an expression for C†(u): the network’s cross-correlation under spike-triggered stimulation. Novel challenges arise since complex statistical dependencies emerge from triggering artificial stimulation on single neuron spikes, and these are beyond the scope of the original framework described in [18].

Recall that after a delay d, the entirety of neural group b is artificially stimulated and forced to spike whenever neuron 1 from group a fires a spike (see Eq (7)). This is easily implementable in numerical simulations but raises a conceptual difficulty when attempting to transform it into a consistent expression for the densities λ(t). This is because a particular realization of spike times from neuron 1 must be fixed across all neurons i in group b. This is akin to a Hawkes process [64] and demands special corrections to the C†(u) estimates. We revisit these corrections below; for now, we follow the same logic as for Eq (14) and write (18) λ†(t) = ν(t) + J (λ† ∗ εa)(t) + A λ†(t − d), where A is the matrix with entries Aj1 = 1 for j ∈ b = {N/3 + 1, …, 2N/3} and zero otherwise. The matrix A adds a copy of the spiking probability from neuron 1 to all neurons in group b, after a delay d.

We replace the synaptic filter convolution with the delayed rate, as done in Eq (16), directly in Eq (18), while keeping in mind that these terms will later be combined into cross-correlation terms. We get (19) λ†(t) ≈ Σk=0n Σr+l=k Π(Ar, Jl) ν(t − lā − rd), where Π(Ar, Jl) denotes the sum of all possible ordered permutations of r times A and l times J (e.g. Π(A1, J2) = AJ2 + J2A + JAJ). In turn, this leads to the following truncated estimate: (20) C†(n)(u; t) = Σk,m=0n Σr+l=k Σr′+l′=m Π(Ar, Jl) Cν(u − (lā + rd) + (l′ā + r′d)) Π(Ar′, Jl′)⊤ + C̃(u), where C̃(u) is a correction term defined below. As delays accumulate and since the matrices A and J do not necessarily commute, there is no simple way to concisely express Eq (20). Nevertheless, its simple combinatorial nature enables a relatively straightforward algorithmic implementation. Moreover, when comparing estimates to numerical simulations with current parameters, we find very good agreement with third- or fourth-order estimates.
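For the small orders used here, the permutation sums Π(Ar, Jl) can be implemented by brute-force enumeration; a sketch with our own function name:

```python
from itertools import permutations
import numpy as np

def pi_sum(A, J, r, l):
    """Pi(A^r, J^l): the sum of all distinct ordered matrix products
    containing A exactly r times and J exactly l times, e.g.
    Pi(A^1, J^2) = AJJ + JAJ + JJA as in Eq (19)."""
    words = set(permutations('A' * r + 'J' * l))   # distinct orderings
    total = np.zeros_like(A)
    for word in words:
        prod = np.eye(A.shape[0])
        for sym in word:
            prod = prod @ (A if sym == 'A' else J)
        total += prod
    return total
```

The number of distinct orderings is the binomial coefficient C(r + l, r), so for truncation orders of three or four this enumeration is cheap.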

We still need to account for the fact that a single spike train is copied into group b, rather than independent realizations sharing the same rate λ†1(t). Since the rates λ(t) are treated as ensemble averages (〈λ(t)〉) and assuming independence between disjoint time intervals (as is the case for Poisson processes), it follows that 〈λi(t)λj(t + u)〉 = 〈λi(t)〉〈λj(t + u)〉 if u ≠ 0. To account for the fact that all neurons in group b are driven by a single spike copy, the following modification is made via the correction term C̃(u) in Eq (20): (21) C̃ij(u) = T λ̄1 δ(u), where i, j ∈ b and λ̄1 is defined as in Eq (10) with ν̄ (the mean rate over the epoch), using the same expansion truncation order n used in the cross-correlation estimate. Fig 8B shows agreement between the group-averaged C†(u) derived from Eqs (19) and (21) and that estimated from spiking numerical simulations.

Dynamical system for averaged synaptic strengths.

Using the estimates for network cross-correlations with or without BBCI stimulation, C(†)(u), we derive a dynamical system for synaptic weights averaged over neural groups. This dynamical system has a temporal resolution of T, as it describes the evolution of synapses subject to STDP in steps of length T. Stability arguments are also presented, describing conditions under which one can expect stable mean synaptic equilibria. Here, we closely follow [18], although the stability arguments differ slightly due to the differences in our model presented above.

In order to use the correlation estimates to predict synaptic changes, we begin by restricting ourselves to group-averaged quantities. This greatly simplifies calculations and gives remarkably precise estimates for networks where N is large enough. Consider the 3 × 3 matrix M̄(t) whose entries M̄αβ represent the mean strength of non-zero synapses from a neuron in group β to a neuron in group α. Recall that p is the probability that any two cells are connected, which means that (N/3) p M̄αβ is the mean strength of total synaptic inputs from neurons in group β to a single neuron in group α.

We follow derivations outlined in the previous section to obtain estimates for the group-averaged network cross-correlations. For tractability, we explicitly index these estimates by the (averaged) synaptic matrix and external cross-correlation used in their derivation: C(u; M̄, Cν) and C†(u; M̄, Cν). These are now u-dependent 3 × 3 matrices obtained by using M̄ in Eqs (17) and (20). Furthermore, following this group average, the correction term in Eq (20) is replaced by its 3 × 3 averaged counterpart C̃(u), which is zero everywhere except: (22) C̃bb(u) = T λ̄1 δ(u) and C̃ba(u) = (3/N) T λ̄1 δ(u − d). Note that for large N, the modification for interactions between groups a and b vanishes, but interactions within group b remain independent of N, since each cell in b shares the same artificially introduced spikes (see Fig 8B).

Having a closed expression for C(†)(u; M̄, Cν), we define the functional (23) F(M) = η ∫−∞∞ W(u, M) ∘ C(†)(u; M, Cν) du, where “∘” designates an element-wise (Hadamard) product and C(†)(u; M, Cν) refers to the estimated network correlations using mean matrix M, without and with spike-triggered stimulation (†), respectively. From Eq (13), F gives the expected average plastic increments for synaptic matrix M (also 3 × 3), over the epoch [t − T, t]. Fig 9A shows F evaluated at the group average of a random sparse homogeneous matrix as in Fig 7, with Cν as in Fig 8. It follows that F defines a discrete, state-dependent dynamical system on the space of 3 × 3 matrices describing the evolution of our network’s synapses from one activity epoch to the next: (24) M̄(t) = M̄(t − T) + F(M̄(t − T)). For any initial matrix M̄(0), iterating Eq (24) gives the synaptic evolution of M̄(t) at a temporal resolution T.
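Iterating Eq (24) is then a one-line update. The sketch below (names are ours) takes F as a user-supplied callable, since constructing F from the correlation estimates is described above:

```python
import numpy as np

def iterate_mean_synapses(F, M0, n_epochs=500):
    """Iterate the reduced dynamical system of Eq (24):
    M_{k+1} = M_k + F(M_k), where F returns the expected group-averaged
    plastic increment over one epoch of length T."""
    M = M0.copy()
    traj = [M0.copy()]
    for _ in range(n_epochs):
        M = np.clip(M + F(M), 0.0, None)   # weights remain non-negative
        traj.append(M.copy())
    return M, np.array(traj)
```

As a quick sanity check, any contracting increment map (a toy stand-in for the true F) drives the iterates to its zero, mirroring the equilibrium behavior analyzed below.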

Fig 9.

(A) Plot of Fαβ (with α, β ∈ {a, b, c}) evaluated with J(0) sparse and homogeneous (see C) and Cν as in Fig 8. Colors represent the pair of pre- and post-synaptic groups αβ as in the upper left key. (B) Plot of Fba as M̄(t) evolves over time, indicated by the color scheme. Curved arrows illustrate the iteration scheme and culminate at the equilibrium M*ba. (C) Middle: evolution of synaptic weights over time, color-coded as in A. Thick solid lines show analytical predictions of order 4, thick dotted lines show averages from simulations and thin lines show examples of individual synapses Jij(t). On either side: plots of M̄(t) and J(t) used for simulations at t = 0 (left) and t = 80 hours (right). (D) Scatter plot of M̄(t) against corresponding averages of simulated J(t) over all time points plotted in C. For both C and D, N = 60, p = 0.3, Jmin = 0, Jmax = 0.1, η = 10−8 and Jij(0) = Jmax/4.

For stationary external statistics (i.e. Cν does not depend on t), a fixed point of the system described by Eq (24) is a matrix M̄* such that F(M̄*) = 0, and it defines an averaged synaptic equilibrium. Furthermore, an equilibrium is said to be stable if nearby initial states (M̄(0) close to M̄*) converge toward it. We are interested in the existence and stability of equilibria as they represent homeostatic steady states. However, classical tools of dynamical systems theory such as linearization prove to be inadequate because F is an integro-functional. Nevertheless, some basic geometric observations can provide valuable insights.

First, notice that C(u) ≥ 0, since it is defined as the non-normalized cross-correlation. This means that for Mij close to the bounds Jmin and Jmax (where W(Δt, Mij) is positive and negative, respectively, for any Δt), Fij must be positive and negative, respectively, for any matrix M̄ and any Cν. Moreover, from the definition of f±(Mij) in Eq (5), it is straightforward to see that all entries of F are monotonically decreasing with respect to each Mij over the interval [Jmin, Jmax]. Since F is clearly continuous in M, it follows from the intermediate value theorem that for any M̄ there exists a unique M*ij ∈ [Jmin, Jmax] such that (25) Fij(M̄) = 0 when M̄ij = M*ij. This is illustrated in Fig 9A, where each curve intersects the zero line exactly once. It follows that a matrix M̄ is a fixed point of system (24) if and only if it is also a fixed point of M*, i.e., M̄ = M*(M̄) in Eq (25). It is not clear that there exists a general condition guaranteeing that M* admits a fixed point for finite values of η. Nevertheless, we argue that in general one can expect the dynamics of Eq (24) to converge toward a very small region of matrix space and remain there, as illustrated in Fig 9B.

To see why this should be the case, let us assume without loss of generality that at some time t, M̄ij(t) > M*ij. By construction, this means that Fij < 0, so that M̄ij decreases. Assuming very small steps and by the continuity of F, we can expect M* to also show small changes as M̄ evolves. Thus, the only way the distance between M̄ij and M*ij can grow is if M*ij also decreases, but this cannot go on forever since M* is bounded. It follows that beyond a certain time, the distance |M̄ij − M*ij| shrinks, which implies that M̄ij converges toward M*ij.

This contraction argument would be sufficient to show the existence of a fixed point if the function M*ij depended only on the ij component of M̄, but by construction of the functional F this is not the case. Indeed, interactions throughout the network influence the cross-correlation between any two groups α and β, which in turn influences expected synaptic changes between these groups. However, if synapses converge toward their respective fixed points at comparable rates, then their contributions to M* all attenuate at similar rates, leading to stable steady states. This is what we observe when we numerically iterate system (24).

Fig 9B illustrates this stability property by showing the “ba” coordinate of F throughout the iteration process of Eq (24), along with a cartoon representation of the iterates M̄(t). We can clearly see that not only does each iterate push M̄ba(t) toward a value for which F = 0, but F also changes in a monotonic manner, so that for small enough steps modulated by η, M̄ba is bound to converge to a stable M*ba. The same is true for each coordinate, leading to a global equilibrium J*.

Fig 9C shows both spiking network simulations and averaged synaptic dynamics for a system with normal dynamics (i.e. no stimulation) starting from a sparse homogeneous matrix. Equilibria for averaged synaptic strengths across all functional groups are well captured by the reduced system. Note that while mean synaptic strengths are very stable once an equilibrium is reached, individual synapses are not; they continue to fluctuate as a result of irregular spiking activity and inhomogeneous connectivity.

Moreover, the reduced system (24) also captures the timescale at which group-averaged synapses converge to their respective equilibria, since it estimates the expected synaptic changes over plasticity epochs, modulated by the learning rate η. To better appreciate this point, Fig 9D shows a scatter plot of M̄(t) from system (24) against spiking simulations for all time points plotted in panel C of the same figure: all points lie close to the diagonal, indicating a tight agreement throughout the convergence process. We will see in the Results section that capturing synaptic convergence timescales with our reduced model enables us to calibrate the learning rate η to fit experimental observations.

Finally, note that to obtain estimates for equilibria alone, a complete iteration procedure is not required. Using the contraction argument presented above, it is sufficient to evaluate M* from Eq (25) for any initial matrix M̄(0), and repeat the process a few times (i.e. M*(M*(⋯M*(M̄(0))⋯))). We find that two or three iterations of this process are enough to converge to the equilibrium within machine precision.

In summary, we showed that it is possible to obtain estimates for the network’s cross-correlation at a temporal resolution defined by plasticity epochs [t − T, t], for both the normal condition (C(u; t)) and the spike-triggered condition (C†(u; t)). These estimates can be truncated at any order n (i.e. ignoring terms with higher powers of J) and depend only on the cross-correlation of external inputs over the same plasticity epoch (Cν(u)). We find through comparison with numerical simulations that truncating at third or fourth order is sufficient to capture network statistics almost perfectly, as shown in Fig 8. These estimates can then be used to create a dynamical system capable of estimating both the stable synaptic equilibria that emerge from spiking dynamics and the convergence timescales leading to them, for both the normal and spike-triggered stimulation paradigms.

Gaussian cross-correlations fit.

Suppose we use external inputs with a cross-correlation function Cν(u) that follows a Gaussian shape for pairs of neurons in the same group, and a flat shape otherwise, as illustrated in Fig 4B. It is easy to see that the cross-correlation function C(u) for our network, once it has reached a synaptic equilibrium J* (see previous section), will have a shape similar to Cν(u) (i.e. Gaussian within groups and almost flat otherwise). However, the height, baseline and width of the Gaussian components of Cαα(u) may differ from those of Cναα(u), α ∈ {a, b, c}. If σν and σ represent the widths of these Gaussian peaks (i.e. standard deviations), respectively, as in Eq (1), our goal is to derive a relationship between the two. See Fig 10A for an illustrative example pair of Cναα(u) and Cαα(u) (see also Fig 4B).

Fig 10.

(A) Cross-correlation functions for external commands Cναα(u) (thin black line) and for network activity Cαα(u) (thick blue line) for an average pair of neurons in the same group (α ∈ {a, b, c}), with network peak width σ = 55.76 ms. See also Fig 4B. (B) Numerically derived linear relationship σν = Φ(σ). Blue dots are obtained by setting external widths σν and deriving network synaptic equilibria. Red line shows the linear regression. Dotted black line shows the identity.

Let Φ(σ) return the width σν of the external cross-correlations that will produce network cross-correlations Cαα(u) with width σ. We numerically interpolate the function Φ(σ) by repeatedly deriving network cross-correlations at equilibrium, using Eqs (16) and (24). We find a linear relationship between σν and σ, as plotted in Fig 10B, and define Φ(σ) to be the regression line, shown in red in the same figure.
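The regression defining Φ is an ordinary least-squares line fit. A sketch with our own names, taking the (σ, σν) pairs as given inputs (in the text they come from repeatedly solving for the synaptic equilibrium):

```python
import numpy as np

def fit_phi(sigma_net, sigma_ext):
    """Fit the linear map Phi relating the network cross-correlation peak
    width sigma to the external width sigma^nu (Fig 10B). Returns the
    slope and intercept of the regression line sigma^nu = Phi(sigma)."""
    slope, intercept = np.polyfit(sigma_net, sigma_ext, deg=1)
    return slope, intercept
```

Given a target network width σ, the desired external width is then read off as Φ(σ) = slope · σ + intercept.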

Experimental procedure

In Results, we use experimentally obtained cross-correlation functions from spiking activity recorded in a monkey implanted with a Utah array in primary MC while performing an isometric 2D target-tracking task. Spikes from multiple electrodes of the array were recorded during task performance and sorted, and cross-correlograms were compiled with a resolution of 2 ms. Recorded activity was modulated during task performance. We assume that neurons whose spikes were recorded from the same electrode (using spike sorting) were close enough to be part of the same neural group, in the context of our model. In contrast, neurons recorded from different electrodes, separated by at least 400 μm, are considered part of different groups.


Acknowledgments

The authors thank Alain Vinet for insightful discussions, and Andrew Bogaard and Andrew Jackson for experimental data.

Author Contributions

  1. Conceptualization: GL NIK JFK ALF EEF.
  2. Data curation: GL EEF.
  3. Formal analysis: GL NIK.
  4. Funding acquisition: GL JFK EEF ALF.
  5. Investigation: GL NIK JFK ALF EEF.
  6. Methodology: GL NIK JFK ALF EEF.
  7. Software: GL NIK.
  8. Validation: GL NIK.
  9. Visualization: GL.
  10. Writing – original draft: GL.
  11. Writing – review & editing: GL NIK JFK ALF EEF.


  1. Cheney PD, Fetz EE. Postspike facilitation of forelimb muscle activity by primate corticomotoneuronal cells. Journal of Neurophysiology. 1980;44(4):751–772. pmid:6253604
  2. Fetz EE. Functional organization of motor and sensory cortex symmetries and parallels. In: Dynamic Aspects of Neocortical Function. J. Wiley; 1984. p. 453–473.
  3. Schieber MH. Constraints on Somatotopic Organization in the Primary Motor Cortex. Journal of Neurophysiology. 2001;86(5):2125–2143. pmid:11698506
  4. Mountcastle VB, Powell TP. Neural mechanisms subserving cutaneous sensibility, with special reference to the role of afferent inhibition in sensory perception and discrimination. vol. 32. Bulletin of the Johns Hopkins Hospital; 1959.
  5. Jackson A, Gee VJ, Baker SN, Lemon RN. Synchrony between neurons with similar muscle fields in monkey motor cortex. Neuron. 2003;38(1):115–125. pmid:12691669
  6. Monfils MH, Plautz EJ, Kleim JA. In search of the motor engram: motor map plasticity as a mechanism for encoding motor experience. The Neuroscientist: a review journal bringing neurobiology, neurology and psychiatry. 2005;11(5):471–483. pmid:16151047
  7. Sanes JN, Donoghue JP. Plasticity and primary motor cortex. Annual review of neuroscience. 2000;23(1):393–415. pmid:10845069
  8. Buonomano DV, Merzenich MM. Cortical plasticity: from synapses to maps. Annual review of neuroscience. 1998;21(1):149–186. pmid:9530495
  9. Markram H, Muller E, Ramaswamy S, Reimann MW, Abdellah M, Sanchez CA, et al. Reconstruction and Simulation of Neocortical Microcircuitry. Cell. 2015;163(2):456–492. pmid:26451489
  10. Jackson A, Mavoori J, Fetz EE. Long-term motor cortex plasticity induced by an electronic neural implant. Nature. 2006;444(7115):56–60. pmid:17057705
  11. Rebesco JM, Stevenson IH, Koerding K, Solla SA, Miller LE. Rewiring Neural Interactions by Micro-Stimulation. Frontiers in Systems Neuroscience. 2010;4. pmid:20838477
  12. Bi G, Poo M. Synaptic modification by correlated activity: Hebb's postulate revisited. Annual review of neuroscience. 2001;24(1):139–166. pmid:11283308
  13. Caporale N, Dan Y. Spike timing-dependent plasticity: a Hebbian learning rule. Annual Review of Neuroscience. 2008;31(1):25–46. pmid:18275283
  14. Markram H, Lübke J, Frotscher M, Roth A. Physiology and anatomy of synaptic connections between thick tufted pyramidal neurones in the developing rat neocortex. The Journal of Physiology. 1997;500(2):409–440. pmid:9147328
  15. Nishimura Y, Perlmutter SI, Eaton RW, Fetz EE. Spike-Timing-Dependent Plasticity in Primate Corticospinal Connections Induced during Free Behavior. Neuron. 2013;80(5):1301–1309. pmid:24210907
  16. Kempter R, Gerstner W, van Hemmen JL. Hebbian learning and spiking neurons. Physical Review E. 1999;59(4):4498–4514.
  17. Burkitt AN, Gilson M, van Hemmen JL. Spike-timing-dependent plasticity for neurons with recurrent connections. Biological Cybernetics. 2007;96(5):533–546. pmid:17415586
  18. Gilson M, Burkitt AN, Grayden DB, Thomas DA, van Hemmen JL. Emergence of network structure due to spike-timing-dependent plasticity in recurrent neuronal networks V: self-organization schemes and weight dependence. Biological Cybernetics. 2010;103(5):365–386. pmid:20882297
  19. Gilson M, Burkitt AN, Grayden DB, Thomas DA, van Hemmen JL. Emergence of network structure due to spike-timing-dependent plasticity in recurrent neuronal networks IV. Biological Cybernetics. 2009;101(5–6):427–444. pmid:19937070
  20. Gilson M, Burkitt AN, Grayden DB, Thomas DA, van Hemmen JL. Emergence of network structure due to spike-timing-dependent plasticity in recurrent neuronal networks III: Partially connected neurons driven by spontaneous activity. Biological Cybernetics. 2009;101(5–6):411–426. pmid:19937071
  21. Gilson M, Burkitt AN, Grayden DB, Thomas DA, van Hemmen JL. Emergence of network structure due to spike-timing-dependent plasticity in recurrent neuronal networks. II. Input selectivity—symmetry breaking. Biological Cybernetics. 2009;101(2):103–114. pmid:19536559
  22. Gilson M, Burkitt AN, Grayden DB, Thomas DA, van Hemmen JL. Emergence of network structure due to spike-timing-dependent plasticity in recurrent neuronal networks. I. Input selectivity—strengthening correlated input pathways. Biological Cybernetics. 2009;101(2):81–102. pmid:19536560
  23. Morrison A, Aertsen A, Diesmann M. Spike-timing-dependent plasticity in balanced random networks. Neural Computation. 2007;19(6):1437–1467. pmid:17444756
  24. Gütig R, Aharonov R, Rotter S, Sompolinsky H. Learning input correlations through nonlinear temporally asymmetric Hebbian plasticity. The Journal of neuroscience: the official journal of the Society for Neuroscience. 2003;23(9):3697–3714. pmid:12736341
  25. Meffin H, Besson J, Burkitt AN, Grayden DB. Learning the structure of correlated synaptic subgroups using stable and competitive spike-timing-dependent plasticity. Physical Review E. 2006;73(4):041911. pmid:16711840
  26. Song S, Miller KD, Abbott LF. Competitive Hebbian learning through spike-timing-dependent synaptic plasticity. Nature Neuroscience. 2000;3:919–926. pmid:10966623
  27. Song S, Abbott LF. Cortical development and remapping through spike timing-dependent plasticity. Neuron. 2001;32(2):339–350. pmid:11684002
  28. van Rossum M, Bi GQ. Stable Hebbian learning from spike timing-dependent plasticity. The Journal of Neuroscience. 2000;20:8812–8821. pmid:11102489
  29. Rubin JE, Gerkin RC, Bi GQ, Chow CC. Calcium Time Course as a Signal for Spike-Timing-Dependent Plasticity. Journal of Neurophysiology. 2005;93(5):2600–2613. pmid:15625097
  30. Gilson M, Fukai T. Stability versus neuronal specialization for STDP: long-tail weight distributions solve the dilemma. PLoS ONE. 2011;6(10):e25339. pmid:22003389
  31. Billings G, van Rossum MCW. Memory Retention and Spike-Timing-Dependent Plasticity. Journal of Neurophysiology. 2009;101(6):2775–2788. pmid:19297513
  32. Renart A, de la Rocha J, Bartho P, Hollender L, Parga N, Reyes A, et al. The Asynchronous State in Cortical Circuits. Science (New York, NY). 2010;327(5965):587–590.
  33. Shadlen MN, Newsome WT. The variable discharge of cortical neurons: implications for connectivity, computation, and information coding. The Journal of neuroscience: the official journal of the Society for Neuroscience. 1998;18:3870–3896. pmid:9570816
  34. van Vreeswijk C, Sompolinsky H. Chaos in neuronal networks with balanced excitatory and inhibitory activity. Science (New York, NY). 1996;274(5293):1724. pmid:8939866
  35. Smith MA, Ghazizadeh A, Shadmehr R. Interacting Adaptive Processes with Different Timescales Underlie Short-Term Motor Learning. PLoS Biology. 2006;4(6):e179. pmid:16700627
  36. Truccolo W, Hochberg LR, Donoghue JP. Collective dynamics in human and monkey sensorimotor cortex: predicting single neuron spikes. Nature Neuroscience. 2010;13(1):105–111. pmid:19966837
  37. Carmena JM, Lebedev MA, Crist RE, O'Doherty JE, Santucci DM, Dimitrov DF, et al. Learning to Control a Brain–Machine Interface for Reaching and Grasping by Primates. PLoS Biology. 2003;1(2):e42. pmid:14624244
  38. Paninski L, Fellows MR, Hatsopoulos NG, Donoghue JP. Spatiotemporal Tuning of Motor Cortical Neurons for Hand Position and Velocity. Journal of Neurophysiology. 2004;91(1):515–532. pmid:13679402
  39. Churchland MM, Shenoy KV. Temporal Complexity and Heterogeneity of Single-Neuron Activity in Premotor and Motor Cortex. Journal of Neurophysiology. 2007;97(6):4235–4257. pmid:17376854
  40. Smith WS, Fetz EE. Synaptic Linkages Between Corticomotoneuronal Cells Affecting Forelimb Muscles in Behaving Primates. Journal of Neurophysiology. 2009;102(2):1040–1048. pmid:19515946
  41. Smith WS, Fetz EE. Synaptic interactions between forelimb-related motor cortex neurons in behaving primates. Journal of Neurophysiology. 2009;102(2):1026–1039. pmid:19439672
  42. Jackson A, Mavoori J, Fetz EE. Correlations Between the Same Motor Cortex Cells and Arm Muscles During a Trained Task, Free Behavior, and Natural Sleep in the Macaque Monkey. Journal of Neurophysiology. 2007;97(1):360–374. pmid:17021028
  43. Ledochowitsch P, Yazdan-Shahmorad A, Bouchard KE, Diaz-Botia C, Hanson TL, He JW, et al. Strategies for optical control and simultaneous electrical readout of extended cortical circuits. Journal of neuroscience methods. 2015;256(C):220–231. pmid:26296286
  44. Bazhenov M, Timofeev I, Steriade M, Sejnowski TJ. Model of Thalamocortical Slow-Wave Sleep Oscillations and Transitions to Activated States. The Journal of neuroscience: the official journal of the Society for Neuroscience. 2002;22(19):8691–8704. pmid:12351744
  45. Lucas T, Fetz EE. Myo-cortical crossed feedback reorganizes primate motor cortex output. The Journal of Neuroscience. 2013;33(12):5261–5274.
  46. Seeman S, Mogen BJ, Fetz EE, Perlmutter S. Paired stimulation induces spike-timing dependent plasticity of neural connections in primate sensorimotor cortex. In: Society for Neuroscience Annual Meeting proceedings; 2015.
  47. Silversmith DB, Yazdan-Shahmorad A, Sabes PN. Targeted cortical reorganization using optogenetics in non-human primates. In: Society for Neuroscience Annual Meeting proceedings; 2016.
  48. Carrillo-Reid L, Yang W, Bando Y, Peterka DS, Yuste R. Imprinting and recalling cortical ensembles. Science (New York, NY). 2016;353(6300):691–694. pmid:27516599
  49. Potter SM, El Hady A, Fetz EE. Closed-loop neuroscience and neuroengineering. Frontiers in Neural Circuits. 2014;8.
  50. Edwardson MA, Lucas TH, Carey JR, Fetz EE. New modalities of brain stimulation for stroke rehabilitation. Experimental Brain Research. 2013;224(3):335–358. pmid:23192336
  51. Guggenmos DJ, Azin M, Barbay S, Mahnken JD, Dunham C, Mohseni P, et al. Restoration of function after brain damage using a neural prosthesis. Proceedings of the National Academy of Sciences of the United States of America. 2013;110(52):21177–21182. pmid:24324155
  52. Lamsa KP, Kullmann DM. Spike-timing dependent plasticity in inhibitory circuits. Frontiers in Synaptic Neuroscience. 2010;2. pmid:21423494
  53. Krishnan GP, Bazhenov M. Ionic Dynamics Mediate Spontaneous Termination of Seizures and Postictal Depression State. The Journal of neuroscience: the official journal of the Society for Neuroscience. 2011;31(24):8870–8882. pmid:21677171
  54. Vogels TP, Sprekeler H, Zenke F, Clopath C, Gerstner W. Inhibitory plasticity balances excitation and inhibition in sensory pathways and memory networks. Science (New York, NY). 2011;334(6062):1569–1573. pmid:22075724
  55. Litwin-Kumar A, Doiron B. Formation and maintenance of neuronal assemblies through synaptic plasticity. Nature Communications. 2014;5:5319. pmid:25395015
  56. Pfister JP, Gerstner W. Triplets of Spikes in a Model of Spike Timing-Dependent Plasticity. The Journal of neuroscience: the official journal of the Society for Neuroscience. 2006;26(38):9673–9682. pmid:16988038
  57. Froemke R, Dan Y. Spike-timing-dependent synaptic modification induced by natural spike trains. Nature. 2002. pmid:11919633
  58. Ocker GK, Shea-Brown E, Buice MA. Linking structure and activity in nonlinear spiking networks. arXiv.org. 2016.
  59. Lindner B, Doiron B, Longtin A. Theory of oscillatory firing induced by spatially correlated noise and delayed inhibitory feedback. Physical Review E. 2005;72(6):061919. pmid:16485986
  60. Ocker GK, Litwin-Kumar A, Doiron B. Self-Organization of Microcircuits in Networks of Spiking Neurons with Plastic Synapses. PLoS Computational Biology. 2015;11(8):e1004458. pmid:26291697
  61. Wang HX, Gerkin RC, Nauen DW, Bi GQ. Coactivation and timing-dependent integration of synaptic potentiation and depression. Nature Neuroscience. 2005;8(2):187–193. pmid:15657596
  62. Bi GQ, Poo MM. Synaptic modifications in cultured hippocampal neurons: dependence on spike timing, synaptic strength, and postsynaptic cell type. The Journal of neuroscience: the official journal of the Society for Neuroscience. 1998;18(24):10464–10472. pmid:9852584
  63. Matsumoto N, Okada M. Impact of deviation from precise balance of spike-timing-dependent plasticity. Neural Networks. 2004;17(7):917–924. pmid:15312835
  64. Hawkes AG. Point Spectra of Some Mutually Exciting Point Processes. Journal of the Royal Statistical Society Series B (Methodological). 1971;33(3):438–443.