Abstract
The modular and hierarchical organization of the brain is believed to support the coexistence of segregated (specialization) and integrated (binding) information processes. An open question, however, is how such architecture naturally emerges and is sustained over time, given the plastic nature of the brain's wiring. Following evidence that the sensory cortices organize into assemblies under selective stimuli, it has been shown that stable neuronal assemblies can emerge due to targeted stimulation, embedding various forms of synaptic plasticity in the presence of homeostatic and/or control mechanisms. Here, we show that simple spike-timing-dependent plasticity (STDP) rules, based only on pre- and post-synaptic spike times, can also lead to the stable encoding of memories in the absence of any control mechanism. We develop a model of spiking neurons, trained by stimuli targeting different sub-populations. The model satisfies some biologically plausible features: (i) it contains excitatory and inhibitory neurons with Hebbian and anti-Hebbian STDP; (ii) neither the neuronal activity nor the synaptic weights are frozen after the learning phase. Instead, the neurons are allowed to fire spontaneously while synaptic plasticity remains active. We find that only the combination of two inhibitory STDP sub-populations allows for the formation of stable modules in the network, with each sub-population playing a distinctive role. The Hebbian sub-population controls the firing activity, while the anti-Hebbian neurons promote pattern selectivity. After the learning phase, the network settles into an asynchronous irregular resting state. This post-learning activity is associated with spontaneous memory recalls, which turn out to be fundamental for the long-term consolidation of the learned memories. Due to its simplicity, the introduced model can represent a test-bed for further investigations of the role played by STDP in memory storage and maintenance.
Author summary
One of the most remarkable qualities of the brain is its capacity to learn and adapt. How the learning process imprints and maintains memories, by shaping the architecture of connectivity among neurons in a constantly changing and dynamic environment, is a major question of neuroscience. Here, we explore the idea that the segregation of inputs received by a neural network, with inputs targeting distinct populations, is a key factor for shaping the architecture of the network. We find that the presence of inhibitory neurons is necessary for the emergence and the long-term maintenance of modularity in spiking neural networks with plasticity. In particular, we show that two different inhibitory sub-populations, one subject to Hebbian and the other to anti-Hebbian plasticity, are required to promote the formation of feedback and feed-forward inhibition circuits controlling memory consolidation. On the one hand, these inhibitory circuits favour long-term memory consolidation by inducing spontaneous memory recalls in the asynchronous irregular resting phase. On the other hand, the number of inhibitory neurons controls the maximal memory capacity of the considered model.
Citation: Bergoin R, Torcini A, Deco G, Quoy M, Zamora-López G (2025) Emergence and maintenance of modularity in neural networks with Hebbian and anti-Hebbian inhibitory STDP. PLoS Comput Biol 21(4): e1012973. https://doi.org/10.1371/journal.pcbi.1012973
Editor: Alex Roxin, CRM: Centre de Recerca Matematica, SPAIN
Received: July 10, 2024; Accepted: March 19, 2025; Published: April 22, 2025
Copyright: © 2025 Bergoin et al. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Data Availability: All code written in support of this publication is publicly available at https://github.com/rbergoin/QIF-neurons-with-3-synaptic-plasticities.
Funding: This work was supported (RB, GD and GZL) by the European Union’s Horizon 2020 Framework Programme for Research and Innovation under the Specific [Grant Agreement No. 945539 (Human Brain Project SGA3)] and by an EUTOPIA funding [EUTOPIA-PhD-2020-0000000066 - NEUROAI]. AT received financial support by the Labex MME-DII [Grant No. ANR-11-LBX-0023-01] (together with MQ), by the ANR Project ERMUNDY [Grant No. ANR-18-CE37-0014] and by CY Generations (Grant No. ANR-21-EXES-0008) all part of the French program “Investissements d’Avenir.” GD is supported by Grant PID2022-136216NB-I00 funded by MICIU/AEI/10.13039/501100011033 and by “ERDF A way of making Europe,” ERDF. MQ is also partially supported by CNRS through the IPAL lab in Singapore. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.
Competing interests: The authors have declared that no competing interests exist.
Introduction
The brain's connectivity follows a modular and hierarchical organization at different spatial and functional scales [1–6]. From an operative point of view, this type of architecture is suggested to facilitate the coexistence of segregation and integration of information [7–9]: neuronal circuits or brain regions associated with a specific function are densely connected with each other [10–12], while long-range connections and network hubs allow for the integration (or binding) of different information [9,13,14]. A crucial open question in brain connectivity is to understand how such a modular and hierarchical organization naturally emerges as a consequence of the functional needs of the nervous system.
This question has been addressed from various standpoints. On the one hand, it has been shown that spontaneous neural activity might be sufficient to shape network structure [15–18]. The underlying idea is that neural dynamics split brain connectivity into different populations via Hebbian-like adaptation mechanisms that reinforce links between partially synchronised neurons. Such mechanisms may take place, for example, during early development when cortical areas become organized in the absence of sensory stimuli [19]. The resulting complex connectivity—in turn—supports a combination of time-scales and this facilitates the onset of self-sustained metastable neural activity [15,20], e.g., the spontaneous switching of activity across neural populations [21,22].
On the other hand, the relation between learning and network organization is inherently associated with the notions of semantic memory and input selectivity. Working memory is characterised by synaptic changes leading groups of neurons (assemblies or engrams) to sustain higher firing frequencies for a few seconds after stimulus presentation [23,24]. These changes also facilitate recognition if the stimulus is presented again shortly after. Sensory cortices contain neurons selectively firing for different features of the inputs, forming differentiated groups of neurons (assemblies) related to receptive fields [25]. Following these observations, computational models of spiking neurons have been proposed to investigate how memories could be imprinted into the neuronal architecture [17,21,26–38]. Starting from a random connectivity, their goal is to reproduce the formation of neuronal assemblies in response to various external stimuli (or memories), mediated by synaptic plasticity.
This process comes with several challenges and questions:
- (i) the correct formation of neural assemblies (modules) in the network architecture that are consistent with the learned stimuli,
- (ii) the long-term stability of these structures when the entrainment is finished, and
- (iii) the stability of the resulting network dynamics.
The mechanism for long-term memory maintenance and consolidation has been related to spontaneous memory recalls (or retrievals). During wakefulness, short and random events of partial synchronous activation of neural sub-networks are related both to the spontaneous recalls [39] and to the consolidation [40–42] of learned memories. Another important aspect is then to clarify how the typical cortical dynamics, which is asynchronous and irregular, can coexist with these spontaneous recalls [43,44].
A crucial aspect for the success of the proposed neural networks is the choice of a model for the plasticity. The exact mechanisms underlying synaptic plasticity have been long debated, but the pioneering experiments by Bi & Poo [45,46] on the hippocampus revealed that the plastic changes are driven by a temporally asymmetric form of Hebbian learning, induced by tight temporal correlations between the spikes of pre- and post-synaptic neurons. This phenomenon has been termed spike-timing-dependent plasticity (STDP) [47–49]. However, previous attempts to model the storage of memory items via stimulation of selected neurons in random balanced networks have led to unstable behaviours impeding the formation of stimulus-induced structures [29]. To avoid these pathological behaviours, other models have been proposed which, instead, combine different types of voltage- and rate-dependent STDP, often in the presence of homeostatic or other control mechanisms to prevent instabilities. In particular, in [31,38] neuronal assemblies were successfully formed which reflected the stored memory items, and their long-term maintenance was mediated by spontaneous recalls of the stored patterns. The main difference between these two studies is how the STDP rules were implemented. In Litwin-Kumar and Doiron [31], a voltage-based STDP rule [30] for the excitatory-excitatory synapses was employed to achieve a stable evolution, together with additional homeostatic mechanisms based on inhibitory plasticity and on a non-local renormalization of the excitatory-excitatory synaptic weights. In Yang & La Camera [38], plastic synapses were considered only among excitatory neurons, with no further homeostatic mechanisms. The STDP rule is similar to the one in Ref. [31], where a voltage-sensitive variable—depending on the post-synaptic membrane potential—is compared to a threshold in order to determine whether the synapse enters a long-term potentiation or depression state. The main difference is that the thresholds in Yang & La Camera [38] evolve dynamically as in the BCM rule [50], adapting to the single-neuron post-synaptic activity in order to avoid instabilities in the dynamics.
In the present paper, we consider simpler STDP mechanisms based only on the local information associated with pre- and post-synaptic spike times, analogous to the rule discovered by Bi and Poo [45,46]. Contrary to the results in Morrison et al. [29], our network model successfully encodes and sustains stimulus-driven assemblies in the absence of any additional control mechanism. This is achieved by accounting for inhibitory STDP mechanisms alongside the excitatory one. Recent empirical studies have underlined the variety of functional roles played by different classes of interneurons, with, for example, parvalbumin-expressing (PV) interneurons in the mouse being subject to symmetric Hebbian STDP and somatostatin-expressing (SOM) ones following asymmetric Hebbian STDP [42,51,52]. Accordingly, our model considers two inhibitory neuronal sub-populations which self-organise during the training phase into two distinct functional roles: one sub-population subjected to Hebbian STDP (which controls the firing activity) and another following anti-Hebbian STDP (which mediates memory selectivity).
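The pair-based rules discussed above can be summarized in a few lines. The following sketch is only an illustration of the asymmetric exponential STDP window and of its anti-Hebbian counterpart taken as the sign-flipped rule; the amplitudes and time constants are assumed placeholder values, not those given in the Methods of the paper.

```python
import math

def hebbian_stdp(dt, a_plus=0.01, a_minus=0.01, tau=20e-3):
    """Pair-based asymmetric Hebbian window: potentiation when the
    pre-synaptic spike precedes the post-synaptic one
    (dt = t_post - t_pre > 0), depression otherwise."""
    if dt > 0:
        return a_plus * math.exp(-dt / tau)
    return -a_minus * math.exp(dt / tau)

def anti_hebbian_stdp(dt, **kwargs):
    """Anti-Hebbian window: the sign-flipped Hebbian rule, depressing the
    synapse when pre precedes post and potentiating it otherwise."""
    return -hebbian_stdp(dt, **kwargs)
```

A weight update would then be `w += hebbian_stdp(t_post - t_pre)` for each spike pair, with the result clipped to the allowed weight interval.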
After the learning phase, the model settles into an asynchronous irregular state, as typically observed during in-vivo recordings of brain activity at rest [53]. This post-learning activity is characterized by the occurrence of transient events of partial synchrony associated with the learned items, as in [31–33,38]. These spontaneous memory recalls are crucial for the long-term consolidation of the stored memories, as they promote the reinforcement of the underlying connectivity. Given that inhibitory plasticity takes an active role in the model, the memory capacity of the network is controlled by the number of inhibitory neurons. Finally, we also demonstrate the emergence of hub neurons displaying mixed selectivity by training the network with overlapping memory items [17,34,36–38]. These excitatory hub neurons represent the seeds for hierarchical organization and integration.
Results
We consider networks of excitatory and inhibitory Quadratic Integrate-and-Fire (QIF) [54] neurons, pulse coupled via exponentially decaying post-synaptic potentials and in the presence of STDP. The connections involving pre-synaptic excitatory neurons are subject to Hebbian STDP, while those involving pre-synaptic inhibitory neurons can be subject to two types of STDP: either Hebbian or anti-Hebbian.
Firstly, we will investigate the necessary conditions for the emergence of modular assemblies induced by learning selective stimuli. For this purpose, we will analyze the role played by Hebbian and anti-Hebbian plasticity applied to inhibitory neurons in a network subject to two external stimuli. Secondly, we will investigate the role of spontaneous recalls in consolidating and maintaining both the learned memory items and the underlying modular connectivity. To clarify this aspect, we will perturb the synaptic connectivity matrix and examine to what extent spontaneous recalls are able to regenerate the original structure induced by the training. The robustness of this scenario will be validated by considering larger system sizes and random networks. Finally, we will generalize the model to account for an increasing number of stored memories and for overlapping assemblies, thus showing that more complex architectures can develop and be maintained over time.
Emergence of modular connectivity driven by learning selective stimuli
We begin by investigating the necessary conditions for the emergence of modular structure as induced by training with external stimuli. As a general set-up, we consider heterogeneous globally coupled networks composed of N = 100 QIF neurons, each neuron receiving weak and independent Gaussian noise. The neurons with labels in the interval i ∈ [0 : 79] are excitatory and those labelled i ∈ [80 : 99] are inhibitory. We consider the neurons to be non-identical, with their excitabilities normally distributed, leading to spontaneous firing frequencies in the range [0, 8] Hz. Synaptic weights wij from the j-th pre-synaptic neuron to the i-th post-synaptic one are subject to STDP, with weights bounded in the interval [0, 1] for excitatory pre-synaptic neurons and in the interval [−1, 0] for inhibitory ones. See Methods for more details.
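As an orientation for this set-up, a minimal Euler-integration sketch of a noisy, heterogeneous QIF network with bounded weights might look as follows. This is not the published implementation (available at the repository cited above); all parameter values here are illustrative assumptions, and plasticity itself is omitted except for the final clipping of the weights into their allowed intervals.

```python
import numpy as np

rng = np.random.default_rng(0)
N, N_exc = 100, 80                   # 80% excitatory, 20% inhibitory neurons
dt, T = 1e-3, 2.0                    # integration step and duration (illustrative units)
eta = rng.uniform(2.0, 10.0, N)      # heterogeneous excitabilities (assumed distribution)
v = rng.uniform(-1.0, 1.0, N)        # membrane potentials
v_peak, v_reset = 100.0, -100.0      # spike threshold and reset value

# synaptic weights w[i, j]: pre-synaptic neuron j -> post-synaptic neuron i,
# bounded in [0, 1] for excitatory and [-1, 0] for inhibitory pre-synaptic neurons
w = np.zeros((N, N))
w[:, :N_exc] = rng.uniform(0.0, 0.1, (N, N_exc))
w[:, N_exc:] = rng.uniform(-0.1, 0.0, (N, N - N_exc))

tau_s = 5e-3                         # decay time of the post-synaptic potentials
syn = np.zeros(N)                    # exponentially decaying synaptic traces
spikes = []
for step in range(int(T / dt)):
    drive = w @ syn                  # recurrent input to each neuron
    v += dt * (v**2 + eta + drive) + 0.1 * np.sqrt(dt) * rng.standard_normal(N)
    syn -= dt * syn / tau_s
    fired = v >= v_peak
    v[fired] = v_reset               # QIF reset after a spike
    syn[fired] += 1.0 / tau_s        # pulse coupling
    spikes.extend((step * dt, int(i)) for i in np.flatnonzero(fired))

# after any plastic update, the weights are clipped back into their bounds
w[:, :N_exc] = np.clip(w[:, :N_exc], 0.0, 1.0)
w[:, N_exc:] = np.clip(w[:, N_exc:], -1.0, 0.0)
```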
(A) The experimental protocol consists of the stimulation of two non-overlapping neuronal populations of QIF neurons with plastic synapses. Networks are made of 80% excitatory and 20% inhibitory neurons. Stimuli are presented in temporal alternation. Results of the performed numerical experiments are reported for a network with all anti-Hebbian inhibitory neurons (B); with all Hebbian inhibitory neurons (C); and with 50% anti-Hebbian and 50% Hebbian inhibitory neurons (D). Raster plots display the firing times of excitatory (red dots) and inhibitory (blue dots) neurons during the simulations. Matrices represent the temporal evolution of the connection weights at different times: t = 0s (random initialization of the weights), t = 20s (middle of the learning phase) and t = 40s (end of the learning phase). The color denotes whether a connection is excitatory (red), inhibitory (blue) or absent (white), and the color gradation denotes the strength of the synaptic weight. The final configuration of the connection weights is shown schematically on the right in each case.
The stimulation protocol consists of three stages, as illustrated in Fig 1A : an initial resting phase followed by a learning period and a final consolidation phase. The simulations are performed by ensuring the model respects some biologically realistic conditions: (i) the networks are allowed to continue their spontaneous activity after the learning phase; (ii) the adaptation of the synaptic weights is always active throughout the whole simulation, i.e., before, during and after the learning.
Initially, despite the network being globally coupled, the weights are randomly assigned with small positive (negative) values for excitatory (inhibitory) synapses. The system is left to relax for five seconds in the absence of external stimuli. During this stage the network stabilizes into an asynchronous state with the neurons firing irregularly at low frequencies, as shown in the raster plots corresponding to the time interval [0 : 5] seconds in Fig 1. The emergence of the asynchronous irregular dynamics in our globally coupled model is due to the heterogeneous distribution of the excitabilities and of the weights, and to the external noise. However, similar dynamics can be observed in networks of identical neurons (same excitabilities) with random connectivity, as shown in S6 Text and in [31,38,44].
During the learning phase two independent stimuli are applied, each targeting a different (non-overlapping) neuronal population, as shown in Fig 1A. This is done in order to mimic the segregation of (sensory) information being projected to nearby but separate neuronal populations, and to study the role of this segregation in the emergence of modular neuronal architectures. In particular, one stimulus targets population P1, consisting of the first halves of the excitatory and inhibitory neurons, labelled E1 and I1. The second stimulus targets population P2, consisting of the second halves of the excitatory and inhibitory neurons, E2 and I2.
The two stimuli are applied for a duration of one second each, randomly alternating between the two populations P1 and P2. During each stimulation period, a constant external positive current is applied to the selected population for 800 ms. The stimulus triggers the target neurons to fire at about 50 Hz. After 800 ms the external current is turned off and the network is left to relax for another 200 ms in order to prevent temporal correlations when alternating between stimulated populations. This protocol is repeated 35 times for a total of 35 seconds. Once the training phase is finished, the network is allowed to evolve freely for 20 seconds in the absence of stimuli. During this phase, however, the synaptic adaptation remains active, thus affecting the stabilization of the learned patterns.
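The bookkeeping of this alternating protocol can be sketched as follows (times in milliseconds; the function name and tuple layout are ours, introduced only for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)

def training_schedule(n_epochs=35, stim_ms=800, relax_ms=200, t_start_ms=5000):
    """List the stimulation epochs of the learning phase: each 1 s epoch
    applies a constant current to P1 or P2 (chosen at random) for 800 ms,
    followed by 200 ms of relaxation with the current switched off."""
    schedule = []
    t = t_start_ms                                  # learning starts after 5 s of rest
    for _ in range(n_epochs):
        target = rng.choice(["P1", "P2"])
        schedule.append((t, t + stim_ms, target))   # (current ON, current OFF, population)
        t += stim_ms + relax_ms
    return schedule

sched = training_schedule()
```

With these defaults the learning phase spans t = 5 s to t = 40 s, matching the protocol described above.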
Role of Hebbian and anti-Hebbian learning for the emergence of modular connectivity.
To explore the role played by the inhibitory neurons in the learning process, we study three different scenarios: (a) all inhibitory neurons follow anti-Hebbian STDP, (b) all inhibitory neurons follow Hebbian learning, and (c) a mixed situation where 50% are Hebbian and 50% anti-Hebbian.
- (a) Anti-Hebbian inhibition. In the case with only anti-Hebbian inhibitory plasticity, the network develops a winner-takes-all architecture promoting the competition of the two sub-populations P1 and P2, as shown in Fig 1B. To describe the mechanism leading to this architecture, imagine that population P1 is stimulated; consequently, all its neurons become active and fire at high frequency. Therefore all the connections E1 → P1 are reinforced, due to the Hebbian nature of the pre-synaptic excitatory neurons. At the same time, all the inhibitory synapses I1 → P1 are weakened because they are anti-Hebbian. During the stimulation of P1, the neurons in populations P1 and P2 are far from being mutually synchronized, therefore the excitatory connections across populations are weakened while the inhibitory ones across populations are reinforced. The random alternation of the stimuli to populations P1 and P2 induces the gradual emergence of a modular structure, as visible in the connectivity (weight) matrices at times t = 20 sec and t = 40 sec in Fig 1B.

The resulting architecture promotes the competition between the two excitatory populations E1 and E2 and the alternating prevalence of one of them. For example, stimulation of E2 would activate the companion inhibitory neurons through the feed-forward connections (E2 → I2) which, in turn, would shut down all neurons in P1 via the strong I2 → P1 connections. This is visible in the raster plot in Fig 1B: as the training progresses (t = 5 − 40 sec), the neurons of the stimulated population fire at high frequency while the neurons of the non-stimulated one are silenced.
- (b) Hebbian inhibition. In the presence of only Hebbian inhibitory neurons the network also develops a modular organization; however, in this case the two populations become disconnected from each other, Fig 1C. Analogously to the previous case, the stimulation of one population (e.g., P1) results in the strengthening of the intra-population excitatory connections (increase of the E1 → P1 synaptic weights) and the weakening of the excitatory connections across populations (decrease of the E1 → P2 weights). However, the Hebbian inhibitory plasticity now induces the strengthening of the internal inhibitory connections I1 → P1 and the weakening of the inhibitory synapses across populations. The disconnection of the two populations happens gradually during the learning phase. As shown in the raster plot in Fig 1C, during the initial stimulation epochs the stimulated population shuts down the activity of the non-stimulated one. But as the training proceeds, the two populations detach from each other and the non-stimulated population begins to display a low-firing resting-state activity.
- (c) Mixed inhibition. In the last case, with mixed anti-Hebbian and Hebbian inhibitory neurons, the network also develops two modules but, as shown in Fig 1D, the resulting connectivity is a combination of the two configurations observed in the previous cases. The Hebbian neurons form self E–I loops within each population (i.e., E1 – I1 and E2 – I2 loops involving the Hebbian inhibitory neurons). This internal feedback inhibition prevents the excitatory neurons from firing at too high rates. Meanwhile, the anti-Hebbian inhibitory neurons form lateral, feed-forward connections which shut down the firing of the other population; in other terms, the anti-Hebbian sub-population of P1 (P2) inhibits all neurons in P2 (P1).
Resting-state network dynamics after learning.
So far, we have shown that selective stimulation of distinct populations consistently gives rise to modular networks and that the resulting configuration depends on the type of plasticity affecting the inhibitory neurons. However, the relevance or plausibility of a learning model also requires that the network exhibits a biologically meaningful dynamical behaviour after training [27,28,31–34,38].
- (a) Anti-Hebbian inhibition. In the simulations with anti-Hebbian inhibitory STDP, the neuronal activity in the post-learning stage (t > 40 sec) is dominated by one of the two populations. In Fig 1B, P2 is active and P1 is silent, but the winner changes randomly over realizations. In general, once the training stage is finished, all neurons start to fire spontaneously, driven by the background Gaussian noise. The excitatory sub-population that attains a larger level of internal activity earlier is the one that wins. Due to the lack of internal feedback inhibition (I1 → E1 or I2 → E2), the winning excitatory sub-population fires rapidly and its companion inhibitory neurons (strongly stimulated) suppress the activity of the other population: e.g., if the neurons in E2 fire faster than those in E1, the sub-population I2 inhibits both E1 and I1.
- (b) Hebbian inhibition. In the case with Hebbian inhibitory neurons, we found that the two populations P1 and P2 become independent, each forming a feedback E–I loop. The presence of the internal feedback inhibition allows the two populations to settle, after training, into a resting-state behaviour characterized by a low firing frequency of approximately 1.0 Hz, as visible in the raster plot of Fig 1C for t > 40 sec. Interestingly, brief events of internal partial synchrony of the two populations are also observed. While in the case with only anti-Hebbian inhibition such a synchronized event would trigger the excitatory neurons to permanently increase their firing, here the presence of the internal feedback inhibition (I1 → E1 and I2 → E2) prevents the constant synchronization of the excitatory neurons, while keeping their activity at low frequency.
- (c) Mixed inhibition. The post-learning behaviour in the mixed scenario is very similar to the Hebbian case, as shown in Fig 1D. However, in this case populations P1 and P2 are not independent but they inhibit each other. As a consequence, an event of partial synchrony occurring in one of the two modules temporarily shuts down the other module, avoiding their mutual synchrony. Given that the plasticity remains active during this post-learning spontaneous activity, the connection weights continue to be updated. As we will discuss in the following, the occurrence of these spontaneous synchronization events plays a relevant role in the maintenance of the learned memories.
In summary, although modular organization of the connectivity was found to emerge in all three scenarios, only the combination of anti-Hebbian and Hebbian inhibitory neurons resulted in a network satisfying all the desired biologically plausible properties. Anti-Hebbian inhibition alone led to a network with unrealistic post-learning dynamics. Hebbian inhibition alone gives rise to a biologically meaningful resting-state behaviour, but at the cost of splitting the network into two disconnected populations. Therefore, in the following we will limit ourselves to the mixed scenario, with both anti-Hebbian and Hebbian inhibition.
The numerical experiments presented so far were repeated for a variety of special cases in order to validate the robustness of the results and the model: namely, (i) some of the neurons are not trained by stimuli (see S1 Text), (ii) at every stimulation epoch random subsets of neurons are targeted (see S2 Text), (iii) the intensity of the stimulation current is randomly fluctuating (see S3 Text), (iv) the network is composed of a large number of neurons, and (v) the neurons are randomly connected with an Erdős–Rényi distribution of the directed links (see S6 Text).
Spontaneous recalls support consolidation and long-term maintenance of memories
Given that in our model plasticity is not frozen after training—but it is left active afterwards—we now investigate the potential role of the spontaneous events observed during the post-learning resting-state for the consolidation and maintenance of the memories. We will refer to these events as memory recalls. First, we will show that the recalls facilitate the completion of imperfectly learned memories and, second, we will study their role for the regeneration of memories that have been partially lost.
(A) Temporal evolution of the connectivity matrix during spontaneous activity in the absence of stimulation. The initial connectivity with two unfinished modules at t = 0 is reinforced over time. The excitatory (inhibitory) connections are marked as in Fig 1. (B) Evolution of the distribution of link weights (probability density functions, PDFs, in linear-logarithmic scale) in the connectivity matrices at t = 0s (light green), t = 400s (cyan) and t = 4000s (magenta). (C-J) Evolution of the network activity and various metrics in the absence of stimulation for a sample of 30 seconds. Some spontaneous recalls are highlighted, for population P1 (green shade) and P2 (orange shade). The pink shadow marks an epoch of asynchronous irregular firing activity without recalls. (C) Raster plot with excitatory (inhibitory) neurons marked in red (blue). (D) PDFs of the coefficients of variation for all the neurons during the entire simulation (grey) and for a homogeneous Poisson process (yellow). (E) Instantaneous Kuramoto order parameter R for the whole network (gray) and for the neurons in populations P1 (green) and P2 (orange), and (F) their corresponding PDFs over the entire simulation. (G) Temporal evolution of the mean firing rates of populations P1 and P2, and (H) their corresponding frequency distributions (in linear-logarithmic scale), showing a peak at 2 Hz and a long tail. (I) Instantaneous change rates of the synaptic weights in populations P1 and P2 over time, and (J) their PDFs (in linear-logarithmic scale) over the entire simulation time, showing the prevalence of positive weight changes (reinforcement) over negative ones (depression).
Memory consolidation.
In order to mimic a hypothetical scenario in which the training stage stops before completion, we prepared an initial weight matrix representing an imperfectly learned connectivity structure. An example of this weight matrix is shown at t = 0 in Fig 2A. Specifically, the intra-modular synaptic weights are set to a fixed positive value for excitatory connections and to a fixed negative value for inhibitory ones. The inter-modular excitatory (inhibitory) connections are chosen randomly from a Gaussian distribution with mean 0 and standard deviation 0.15, restricted to positive (negative) values.
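A sketch of how such an "unfinished" weight matrix can be built. The intra-modular values ±1.0 below are placeholders for the fixed values used in the paper (which are not reproduced here), and the module membership follows the P1/P2 split described earlier:

```python
import numpy as np

rng = np.random.default_rng(2)
N, N_exc = 100, 80
half_e, half_i = N_exc // 2, (N - N_exc) // 2

# module membership: first halves of the excitatory and inhibitory
# neurons belong to P1 (label 0), second halves to P2 (label 1)
module = np.zeros(N, dtype=int)
module[half_e:N_exc] = 1
module[N_exc + half_i:] = 1
exc_pre = np.arange(N) < N_exc      # is pre-synaptic neuron j excitatory?

w = np.empty((N, N))
for i in range(N):
    for j in range(N):
        same = module[i] == module[j]
        if exc_pre[j]:
            # intra-modular excitatory weights fixed (placeholder 1.0),
            # inter-modular ones drawn from |N(0, 0.15)|
            w[i, j] = 1.0 if same else abs(rng.normal(0.0, 0.15))
        else:
            # inhibitory weights: mirrored signs
            w[i, j] = -1.0 if same else -abs(rng.normal(0.0, 0.15))
np.fill_diagonal(w, 0.0)            # no self-connections
```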
Once the initial weight matrix is fixed, the network is left to evolve spontaneously, driven only by the background Gaussian noise. We observe that the modular organization of the network is reinforced: the intra-modular synaptic weights of the two populations are strengthened and the inter-modular synapses are weakened, as shown by the weight matrices displayed in Fig 2A and the corresponding weight distributions in Fig 2B at t = 400 sec and t = 4000 sec.
The mechanism allowing for the completion of the connectivity pattern is summarized in Figs 2C-J. Panel C shows a 30-second sample of the network activity. This is characterized by an asynchronous irregular evolution, with low-firing spontaneous activity joined by occasional spontaneous recalls occurring at random times. This observation is confirmed by the fact that the background activity is characterized by a distribution of the coefficients of variation CVi of the neurons in the interval [0.8 : 1.0] (panel D, grey distribution), lying close to that of a Poisson process (in yellow), thus resembling the irregular activity observed in the cortex in vivo [55]. The synchronization order parameter R of the network fluctuates in time around 0.2 (panel F, gray distribution), meaning that the network is far from being synchronized. As a matter of fact, for an asynchronous network of N = 100 neurons, due to finite-size fluctuations the expected value of the order parameter is not far from the one measured here, namely R ≈ 1/√N = 0.1. Finally, the distribution of firing rates displays a main peak around 2 Hz and an exponential tail reaching at most 20 Hz (panels G and H).
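The two diagnostics used in panels D-F can be computed along these lines (a minimal sketch; for spiking data, the phases entering the Kuramoto order parameter are typically obtained by interpolating linearly between consecutive spikes of each neuron, which is an assumption here since the exact procedure is given in Methods):

```python
import numpy as np

def coefficient_of_variation(spike_times):
    """CV of the inter-spike intervals: ~1 for Poisson-like firing,
    0 for a perfectly periodic spike train."""
    isi = np.diff(np.sort(np.asarray(spike_times, dtype=float)))
    return isi.std() / isi.mean()

def kuramoto_order_parameter(phases):
    """Modulus R of the mean phase vector: R ~ 1 for full synchrony,
    R ~ 1/sqrt(N) for N independent phases."""
    return np.abs(np.mean(np.exp(1j * np.asarray(phases, dtype=float))))
```

For N = 100 independent phases this yields the finite-size baseline R ≈ 0.1 quoted in the text.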
During spontaneous recalls, the transient increase in synchrony and firing rates of a subset of the neurons (in one of the two clusters) triggers the activation of their synaptic adaptation (panel I), reshaping the synaptic weights and completing the formation of the modular patterns. The specific neurons participating in a recall change from event to event, but they tend to involve a majority of neurons of either population P1 or P2; see, for example, the events highlighted by the green and orange shadows. The recalls coincide with transient peaks of the order parameter and of the mean firing rates of the populations involved (panels E and G, green lines). Consequently, P1 and P2 are internally more synchronized than the network average, with their order parameters fluctuating around 0.4 (panel F), and broad distributions of firing rates (panel H). The similarity between the PDFs for populations P1 and P2 evidences that recalls occur—on average—with the same probability in both populations. Over time, the net effect of the recalls is that the intra-modular synapses between neurons of the same population become reinforced while the inter-modular connections are weakened.
At this point, we recall that our model incorporates a natural, slow tendency to forget the value of the synaptic weights. Therefore, the question arises of how often spontaneous recalls should happen—and how strong they should be—in order to avoid forgetting the learned memories. In the example considered here, the weights between the neurons of a population undergo a natural depression during epochs of resting activity without recalls (pink shadow in Fig 2C). For the parameters used, a forgetting term of f = 0.1 and an average spontaneous firing rate of 2 Hz, we estimate that a period of ≃ 11 seconds of asynchronous firing would be required to forget the contribution of a single recall event (for more details on the calculation see S9 Text). The dominance of reinforcement during recalls over depression during the asynchronous irregular epochs is confirmed by the asymmetry of the distribution of weight change rates towards positive values, Fig 2J.
In summary, we can conclude that, in our model, spontaneous recalls happening during resting-state activity allow for the consolidation of imperfectly learned memories as well as for their long-term maintenance against natural forgetting [31,38].
Regeneration of damaged excitatory and inhibitory connectivity.
Recovery experiments starting from randomized excitatory (A), inhibitory (B), or inhibitory plus excitatory-to-inhibitory (C) synaptic connections. In all three cases, we study the partial recovery of the original modular organization of the synaptic weights over time. This recovery is mediated in every case by the emergence of spontaneous recalls: transient events of partial synchronization between neurons associated with one of the two originally stored memories, highlighted in green for cluster 1 and orange for cluster 2.
Empirical observations have shown that excitatory synapses are more volatile than inhibitory ones [56,57], leading to the hypothesis that reorganization of excitatory connections might be associated with short-term plasticity while inhibitory adaptation might support the long-term maintenance of those memories [58–61]. We now test the plausibility of this idea in our model.
We consider a first case where the weights of the excitatory synapses (E → E and E → I) are initially randomized between 0 and 1, while the weights of the inhibitory synapses keep the values obtained after the training stage; see the weight matrix for t = 0 in Fig 3A. The network is then allowed to evolve spontaneously for 24 hours, driven only by the background Gaussian noise. After one hour of simulated time, the excitatory connections have partially recovered the original modular pattern (see the weight matrix for t = 1h in Fig 3A). This recovery is clearly mediated by the spontaneous recalls, as illustrated in the raster plot in Fig 3A. Whenever a sufficiently large group of excitatory neurons (associated with one of the previously learned memories) fires within a short time window, this event activates the inhibitory neurons associated with the corresponding memory item and triggers a partial recall via the feedback and feed-forward mechanism previously explained. Consequently, the excitatory synapses involved in the recalled memory are partially reinforced while all other connections are weakened. The structure of the inhibitory weights, however, deteriorates due to the more disordered activity observed in the network. As a result, in the long term (t = 12h and t = 24h), the modular connectivity structure encoding the stimuli is completely lost. Therefore, the maintenance of the inhibitory weight structure is sufficient for the (partial) recovery of the memory items over an intermediate time scale only.
When the experiment was repeated randomizing only the inhibitory weights (I → E and I → I; the corresponding weight matrix is shown at t = 0 in Fig 3B), the memorized pattern was fully recovered and maintained at all times. The main reason for this difference is that, when the excitatory synapses are preserved, the network displays sparser activity than with initially randomized excitatory connections. In the latter case, there are more interactions among excitatory neurons, giving rise to a background activity involving most of the excitatory neurons. One therefore ends up with a weight matrix of more disordered structure (see the weight matrix at t = 1h in Fig 3A), which becomes unstable after several hours of spontaneous activity. Another reason for this difference is the asymmetric 80–20% E ∕ I ratio, leading to notably different spontaneous firing frequencies, and hence recall probabilities, when either the excitatory or the inhibitory synapses are randomized.
Finally, the experiment is repeated by randomizing all weights except the excitatory-to-excitatory connections (E → E), as shown in Fig 3C. In the initial moments of the simulation, we again observe spontaneous recalls of excitatory neurons; however, due to the random initialization of the E → I connections, the inhibitory neurons are now randomly associated with one of the two clusters. This leads to a slightly different organization of the connectivity matrix than before, but with a similar architecture.
These results show that the conservation of the excitatory-to-excitatory (E → E) connections is indeed sufficient to maintain and reconsolidate the structure. Furthermore, our analysis suggests that memory items can be encoded in different types of connections, with different effects on intermediate- or long-term storage and on the efficiency of recovery, which may echo the different types of memories present in the brain.
Generalization of the model to more complex situations
So far, we have illustrated the results for networks trained with two stimuli applied to separate neuronal sub-populations. We now show that the model can be generalized. Firstly, the model is trained with an increasing number of memories in order to estimate its memory capacity. Secondly, the stability of the model is validated for larger network sizes and more stimuli. Lastly, we demonstrate the emergence of hub neurons by training the network with overlapping memory items.
Memory capacity of the model.
(A) Training the network with M = 4 non-overlapping stimuli. The green, orange, pink and cyan brackets represent clusters 1, 2, 3 and 4. Results are qualitatively the same as for the example with two memory items. (B) Stability diagram of the model for a network of N = 100 neurons. The red line separating the stability (orange) and instability (purple) regions represents the minimal number of inhibitory neurons (N_I = 2M) needed to maintain M independent memory items. The number of excitatory neurons in each realization is N_E = N − N_I, delimiting the non-accessible areas (white region) of the diagram. The star symbol marks the maximal memory capacity of the network, corresponding to M = 33 items with N_I = 66 inhibitory neurons (cyan lines, 33 anti-Hebbian and 33 Hebbian). Magenta and light-green lines highlight the ranges of numbers of inhibitory neurons (and therefore of E ∕ I ratios) that allow the stabilization of M = 20 and M = 10 memory items, respectively. (C) Table comparing the number of neurons and E ∕ I ratios of different parts of the human [62–74] and mouse [75–77] brain and their hypothetical memory capacity obtained by extrapolating the results of our model.
We begin by repeating the protocol in Fig 1D but now training the network with M = 4 non-overlapping stimuli, as shown in Fig 4A. We observe the same qualitative results as in the case with 2 stimuli; the only difference is that now four modules emerge in the connectivity matrix. In this situation too, the four neuronal assemblies display spontaneous recalls during the post-learning resting-state phase. A minor difference is that the 20 inhibitory neurons now split into 8 sub-populations, each made of 2–3 anti-Hebbian or Hebbian neurons. This raises the question of the memory capacity of the network, which seems limited by the number of inhibitory neurons.
In order to estimate the memory capacity of our model, we examine the ability of the network to store and recall an increasing number of memory items while varying the ratio of excitatory to inhibitory neurons. Specifically, connectivity matrices were initialized containing an arbitrary number M of memory items (modules), from M = 0 to M = 100, while varying the number of inhibitory neurons N_I over its whole range. Since the size of the network is kept constant at N = 100, the number of excitatory neurons varies accordingly as N_E = N − N_I.
. A network storing M memory items is considered stable if (i) it displays asynchronous irregular firing and (ii) all the memory modules exhibit independent spontaneous recalls. A network with M memories is considered unstable if at least one of the modules does not exhibit recalls and therefore disappears in the long-term.
The results are summarized in the stability diagram in Fig 4B. The red line separating the stable and the unstable regimes shows that at least 2M inhibitory neurons are needed to hold M memories. This result implies that the network can learn and stabilize a maximum of M = N ∕ 3 memories (marked by a cyan star in Fig 4B). In particular, this maximal capacity corresponds to the case with N_I = 66 inhibitory neurons, 33 of them anti-Hebbian and 33 Hebbian, and N_E = 34 excitatory neurons. A simulation for this limit case is shown in S7 Text. We therefore conclude that, in our model, the minimal condition for a memory to be robustly encoded and recalled is that the memory is associated with a triplet made of at least one excitatory neuron, one Hebbian and one anti-Hebbian inhibitory neuron. For example, to store M = 20 memories (magenta dashed line in Fig 4B) the network needs to contain at least 20 excitatory neurons and 40 inhibitory neurons (20 Hebbian and 20 anti-Hebbian).
In sub-optimal scenarios, we find that, for a given number of memory items M, different E ∕ I ratios can guarantee the long-term maintenance of the memories (connectivity patterns). All the E ∕ I ratios respecting these constraints give rise to configurations in which the M = 20 memories are robustly stored. This amounts to an excitatory-to-inhibitory ratio ranging from 20 ∕ 80 = 0.25 up to 60 ∕ 40 = 1.5, with the total number of neurons kept equal to N = 100. For a case with M = 10 memories (green dashed line), the interval of acceptable ratios widens, from 10 ∕ 90 ≈ 0.11 up to 80 ∕ 20 = 4. In summary, given that each memory requires a triplet of neurons, the memory capacity of our model is limited by the size of the smallest of the required populations, i.e. M_max = min(N_E, N_I ∕ 2). For a network with the typical E-to-I ratio of 80 ∕ 20, this implies that its memory capacity is controlled by the number of inhibitory neurons rather than by the number of excitatory neurons.
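The triplet rule above can be condensed into a back-of-the-envelope capacity formula. This is a sketch of the counting argument only (assuming the inhibitory population splits evenly into Hebbian and anti-Hebbian halves), not a simulation of the network:

```python
def memory_capacity(n_exc: int, n_inh: int) -> int:
    """Maximal number of memories under the triplet rule: each memory needs
    at least one excitatory, one Hebbian inhibitory and one anti-Hebbian
    inhibitory neuron (inhibitory neurons assumed split evenly)."""
    return min(n_exc, n_inh // 2)

# Theoretical optimum for N = 100: 34 excitatory + 66 inhibitory -> M = 33
print(memory_capacity(34, 66))   # 33
# Standard cortical 80/20 E/I ratio: limited by inhibition, M = 0.1 N
print(memory_capacity(80, 20))   # 10
# M = 20 is reachable anywhere between 20/80 and 60/40 E/I splits
print(memory_capacity(20, 80), memory_capacity(60, 40))  # 20 20
```

The same function reproduces the endpoints of the magenta and light-green ranges in Fig 4B: capacity drops below the target M as soon as either the excitatory pool or half the inhibitory pool is smaller than M.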
The analysis of a few test cases reported in S4 Text for M = 4 memory items confirms the validity of the upper limit M ≤ N_I ∕ 2. In particular, in S4 Text we studied the following situations: (i) each memory pattern is associated with one Hebbian and one anti-Hebbian inhibitory neuron, (ii) one memory pattern is missing an anti-Hebbian inhibitory neuron, and (iii) one memory item is missing one Hebbian inhibitory neuron.
The flexible relation between memory capacity and E ∕ I ratios found in our model invites the question of why nature would favour the specific E ∕ I ratios observed in mammalian brains, and whether these ratios might be related to their memory storage. To explore this question, we computed the hypothetical memory capacities of the human and mouse brains by extrapolating the predictions of our model. Moreover, given that different parts of the brain are made of distinct numbers of neurons and E ∕ I ratios, we extended the calculations to distinguish the cortex, the cerebellum and a few subcortical regions. The results are shown in Fig 4C. The 80 ∕ 20 excitatory-to-inhibitory ratio, which is most representative of the cerebral and cerebellar cortices, implies a hypothetical memory capacity of M = 0.1N, well below the theoretical maximal capacity of M = N ∕ 3. Still, given that the cortex and the cerebellum contain most of the neurons in the brain, in absolute terms this implies very large memory capacities for the human cortex and cerebellum, and likewise for their mouse counterparts (see the table in Fig 4C for the estimated values). Rather surprisingly, we find the lowest memory capacity in the hippocampus, with M = 0.050N and M = 0.025N in humans and mice respectively, due to a reduced fraction of inhibitory neurons. The amygdala lies among the regions with the largest memory capacity, M = 0.15N and M = 0.2N for humans and mice. This particularity of the amygdala could be related to the fact that this area is involved in the formation of emotional memories [78]. Nevertheless, the amygdala contains few neurons compared to other regions, hence its absolute capacity would be considerably smaller.
Robustness of the model for larger network sizes trained with multiple stimuli.
(A) Post-learning raster plots of the neuronal activity at times t = 0h, t = 2h and t = 4h. In each plot, the spike count, normalized to the total number of neurons N and estimated over time bins of 0.2 sec, is displayed below the time axis. Selected spontaneous recalls are highlighted by colored shadows while a peak of global activity is highlighted by a grey shadow. Some of these events are zoomed on the right side of the panels. (B) Post-learning weight matrices at times t = 0h, t = 2h and t = 4h. (C) Post-learning evolution of mean intra-cluster (solid lines) and inter-cluster (dashed lines) weights for excitatory-to-excitatory (E-E, dark red), excitatory-to-inhibitory (E-I, red), anti-Hebbian inhibitory-to-excitatory (A-E, dark blue), anti-Hebbian inhibitory-to-inhibitory (A-I, cyan), Hebbian inhibitory-to-excitatory (H-E, blue) and Hebbian inhibitory-to-inhibitory (H-I, steel blue) connections.
The stability of a neuronal network can be compromised by the network size and the number of memories stored. Hence, to validate the robustness of our model we considered a network of 1000 neurons trained with 10 stimuli, Fig 5A. After training, the network was left to evolve spontaneously for a period of 4 hours. In Fig 5B, we observe the formation of 10 modules among the excitatory connections, each one associated with one group of Hebbian and one group of anti-Hebbian inhibitory neurons. The weight matrices remain practically unaltered over time up to 4 hours. This is additionally evidenced by the stability of the mean inter- and intra-cluster weights, see Fig 5C.
The stability of the learned patterns depends on a few ingredients. On the one hand, the modules remain stable without merging or drifting [17]. This is possible because the anti-Hebbian inhibitory neurons ensure that recalls occur only one module at a time, depressing the other modules. If two modules were to undergo a recall simultaneously, their mutual synapses would become reinforced, causing their merging over repeated simultaneous events. Indeed, the spike counts remain below 5% of the network size, meaning that multiple memories are never fully recalled at the same time and that the network is far from being fully synchronized. This holds despite the occurrence of sporadic peaks of activity spread over the whole network, which nevertheless remain quite sparse (see the zoom of the grey shadowed interval in Fig 5A). A closer look at various recall events (see the enlargements of the coloured intervals in Fig 5A) shows that, usually, only the neurons of a given memory fire within a time window of 0.2 seconds.
On the other hand, for the modules to be maintained, a sufficient frequency of recalls is necessary. As just mentioned, only one module at a time can display a recall. Thus, the more memories are stored, the less frequently each module is recalled. In consequence, the long-term maintenance of the modules is possible as long as the forgetting term f is sufficiently small, scaling inversely with the number of modules M (see S9 Text for a more detailed discussion). This experiment has also been repeated for different network sizes and numbers of stimuli, see S8 Text.
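The scaling argument can be made concrete with a hypothetical bookkeeping condition: if the network as a whole produces recalls at some fixed rate and only one module recalls at a time, each module is refreshed at a rate inversely proportional to M, so the tolerable forgetting rate must shrink as 1 ∕ M. The rates and the boost size below are illustrative numbers, not values fitted to the model:

```python
def module_is_stable(n_modules: int, total_recall_rate_hz: float,
                     boost_per_recall: float, forget_rate: float) -> bool:
    """Hypothetical stability check: a module survives if its share of the
    recall-driven reinforcement outpaces the forgetting rate."""
    per_module_rate = total_recall_rate_hz / n_modules  # recalls shared among M modules
    return per_module_rate * boost_per_recall > forget_rate

# Doubling the number of stored modules halves each module's recall rate,
# so a forgetting rate that was tolerable for M = 10 fails for M = 20.
print(module_is_stable(10, 0.5, 0.1, 0.004))  # True
print(module_is_stable(20, 0.5, 0.1, 0.004))  # False
```

Under this caricature the maximal tolerable forgetting rate is `total_recall_rate_hz * boost_per_recall / n_modules`, recovering the 1 ∕ M scaling stated above.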
Formation of hubs by training to overlapping memories.
(A) Experimental protocol for a network of N = 100 neurons trained with M = 2 stimuli that share 8 excitatory neurons. (B) Simulation and learning results. Connectivity matrices show the evolution of the synaptic weights leading to the emergence of two modules which overlap over 8 hub neurons. The raster plot shows the simulated neuronal activity of the network during the initial resting phase, the learning stage and the post-learning period. The activity during the post-learning phase is characterized by a variety of spontaneous recall events involving P1 neurons and the hubs (green shadow), P2 neurons without the hubs (orange shadow) and the hubs alone (pink shadow). (C) Schematic diagrams representing the formation of the synaptic connectivity with two stable populations of neurons and overlapping hubs. Each population x (x = 1, 2) is composed of at least a population of excitatory neurons Ex and a population of hub neurons H (in red), together with one population of Hebbian inhibitory neurons and one population of anti-Hebbian inhibitory neurons (in blue). Dashed circles identify groups of neurons admitting synchronization events (memory recalls).
We conclude the investigation of the model by considering the case in which two stimuli engage an overlapping set of excitatory neurons, exploring the possibility that these neurons could encode more than one stimulus [17,34,36–38]. This corresponds to a “mixed selectivity” scenario, as often found in neurons of the prefrontal cortex [79,80]. During the training stage, populations P1 and P2 are now allowed to share eight excitatory neurons, see Fig 6A. Furthermore, stimulation of P1 and P2 is strictly alternated in order to facilitate the formation of the connections, instead of selecting randomly between P1 and P2 as done in the previous simulations.
The results in Fig 6B are similar to the previous ones, except that besides the formation of two clusters, a set of structural hub neurons now emerges in the connectivity matrix. These excitatory hubs are strongly connected to both modules. They are weakly affected by the feed-forward anti-Hebbian inhibition (since they belong to both populations), but are notably affected by the feedback Hebbian inhibition of the two populations. Fig 6C shows a schematic diagram of the initial and of the resulting connectivity.
Regarding the post-learning resting phase, see raster plot of Fig 6B, the network displays again a stable low-frequency asynchronous firing activity but with richer spatio-temporal patterns than in the previous examples. The spontaneous recalls are more varied now with events displaying: (i) synchronous spiking of one population including the hubs (event shaded in green), (ii) synchronous spiking of the population without the hubs (event shaded in orange), and (iii) synchronous spiking of the hub neurons alone (event shaded in pink). S5 Text shows similar results for a network trained with M = 4 overlapping stimuli.
In conclusion, the introduced model can be generalized to account for neurons that admit persistent mixed selectivity. This shows that, besides modular organization, the model can also incorporate centralized hierarchical organization, which is a necessary ingredient for integration [1,3,8,9]. In a previous study based on phase oscillator models coupled via gap junctions and subject to phase-difference-dependent plasticity (PDDP), mixed selectivity was only a transient phenomenon: the PDDP in Ref. [36] tends to merge the hubs into one of the shared memories in the long term. In the spiking network with STDP considered here, by contrast, the hubs are stable features of the connectivity structure.
Summary and discussion
The architecture of the brain's connectivity, the dynamics of neural populations and the learning capacities of neural networks are three fundamental topics of computational neuroscience. However, these topics are too often investigated separately from one another. In line with the research presented in [31–33,38], our aim here was to develop a learning model that exhibits the emergence of relevant architectural features but remains alive in all phases, meaning that neither the dynamical activity nor the plasticity evolution is frozen once the training is finalized. This is also referred to as continuous learning. The model displays realistic firing patterns after the learning phase while the synaptic plasticity remains active, as is the case in the brain in vivo.
To achieve these objectives, we have introduced a network of excitatory and inhibitory QIF neurons with plastic synapses following simple STDP rules based only on local information, i.e. the pre- and post-synaptic spike times [45]. By targeting stimuli to distinct sub-populations, mimicking the segregated projections of different features into early sensory layers, the network developed a stable modular connectivity. Moreover, by allowing the stimuli to also target neurons in overlapping sub-populations, we observed the emergence of hub-like neurons displaying mixed selectivity. Furthermore, the learned connectivity patterns were robust against catastrophic forgetting over time, even though plasticity remained active after the learning phase, and in the absence of any homeostatic or control mechanisms, in contrast to previous analyses [31–33,38]. In fact, we found that the key factor guaranteeing the reinforcement and the long-term maintenance of the memories was the spontaneous occurrence of transient memory recalls in the post-learning neural dynamics. These events occurred at random times on top of an otherwise asynchronous and irregular background neuronal activity. In particular, the recalls acted as short, punctuated boosts to the synaptic adaptation which allowed the memory patterns to persist against natural forgetting.
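For reference, the kind of local pair-based STDP rule the model builds on can be sketched with the classic exponential window. This is a generic textbook form with assumed parameter values, not the paper's exact rule for QIF neurons; the anti-Hebbian variant is shown simply as the sign-flipped rule:

```python
import math

# Classic pair-based STDP window (generic textbook form): potentiation when
# the presynaptic spike precedes the postsynaptic one, depression otherwise.
A_PLUS, A_MINUS = 0.01, 0.012     # learning amplitudes (assumed values)
TAU_PLUS, TAU_MINUS = 20.0, 20.0  # time constants in ms (assumed values)

def stdp_dw(t_pre: float, t_post: float) -> float:
    """Weight change for a single pre/post spike pair (times in ms)."""
    dt = t_post - t_pre
    if dt > 0:    # pre before post: Hebbian potentiation
        return A_PLUS * math.exp(-dt / TAU_PLUS)
    elif dt < 0:  # post before pre: depression
        return -A_MINUS * math.exp(dt / TAU_MINUS)
    return 0.0

def anti_hebbian_dw(t_pre: float, t_post: float) -> float:
    """Anti-Hebbian neurons apply the sign-flipped rule."""
    return -stdp_dw(t_pre, t_post)

print(stdp_dw(0.0, 10.0))          # positive: causal pair is potentiated
print(stdp_dw(10.0, 0.0))          # negative: anti-causal pair is depressed
print(anti_hebbian_dw(0.0, 10.0))  # negative: the anti-Hebbian rule flips the sign
```

Note how the rule is purely local: the update depends only on the two spike times of the connected pair, which is the property emphasized in the text.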
The role of feedback and feed-forward inhibition
Inhibition has often been considered to play a mere control function, avoiding pathological situations characterized by extremely high firing activity [81,82]. In our model, instead, inhibition also plays an active role in the formation and stabilization of the memories. Consequently, the memory capacity of the model depends not only on the number of excitatory neurons, but also on the number of inhibitory ones. Although inhibitory neurons account for only around 20% of neural cells in the human brain, GABAergic interneurons represent the majority of the neuronal sub-classes present in the brain [83]. There is substantial evidence that inhibitory cells are regulated by different types of synaptic plasticity [84,85] promoting, for example, the creation of feed-forward [51,86] and feedback inhibitory circuits [52,86]. However, whether a direct relation exists between the type of plasticity and the various functionalities of inhibitory neurons largely remains to be clarified [87]. Accordingly, we opted for exploring the behaviour of the model under both Hebbian and anti-Hebbian inhibition.
During the training, the Hebbian inhibitory neurons formed internal feedback loops (with the excitatory neurons targeted by the same stimulus, Figs 1C and 1D) while the anti-Hebbian neurons formed lateral feed-forward connections (across the populations targeted by different stimuli, Figs 1B and 1D). As a consequence, the Hebbian feedback inhibition became responsible for controlling the firing rate of the populations, which is crucial for the stabilization of the network activity into realistic asynchronous irregular dynamics [40,88] and for preventing abnormal behaviour, e.g., the pathological high-frequency firing observed in Fig 1B. Meanwhile, the anti-Hebbian feed-forward inhibition mediates the competition and selective activation across populations by, e.g., allowing the stimulated population (the stored memory item) to silence the activity of the other populations, which represent other memory items.
These results resonate with recent empirical and modeling observations. Lagzi et al. [42] showed via in-vitro experiments that parvalbumin-expressing (PV) and somatostatin-expressing (SOM) interneurons of the mouse frontal cortex follow symmetric and asymmetric Hebbian STDP, respectively. Subsequent modeling suggested that PV neurons mediate homeostasis of the excitatory activity, while SOM neurons build lateral inhibitory connections providing competition between excitatory assemblies. Guy et al. [87] performed in-vivo measurements of the responses of PV, SOM and vasoactive intestinal peptide-expressing (VIP) interneurons in the mouse barrel cortex and found a differentiated angular-selectivity function for each neuron type. In particular, whisker stimulation evoked direction-selective inhibition in a majority of SOM interneurons.
Previous models have compared the function of Hebbian and anti-Hebbian plasticities, but these were investigated either separately [40,89–91] or assuming that a single inhibitory neuron can be susceptible to both plasticity types [32]. Here, we considered two populations of inhibitory neurons, one subject to Hebbian plasticity and the other to anti-Hebbian, and thus studied their joint impact on the adaptation and the dynamical behaviour of a neuronal network, allowing us to naturally associate each type of plasticity with a given function. In our model, the feedback and feed-forward inhibitory circuits emerged spontaneously, through adaptation, and were not imposed a priori. In the light of our results, we conclude that the coexistence of Hebbian and anti-Hebbian inhibition is decisive for the formation of stable memory-related structural modules (assemblies) and for the onset of spontaneous memory recalls in the neuronal activity.
The relevance of spontaneous recalls
The consolidation and long-term preservation of memories involve neurochemical changes that are modulated by the dynamical activity of the neurons and their neighbours [92–94]. Cortical and hippocampal activity during REM and non-REM sleep is characterized by low- and high-frequency firing events [95,96] that are relevant for the consolidation of daytime experiences [97–103]. The cortical activity during awake rest is typically characterised by asynchronous irregular dynamics, facilitating the integration of various sensory inputs and the formation of new associations between different information sources [104,105]. Additionally, short and random events of partial synchronous activation, occurring on top of the irregular firing background, are associated with spontaneous memory recalls (or retrieval) [39] and with the consolidation [40–42] of learned memories.
Once the learning phase has finalized, the dynamical behaviour of our model resembles that of the awake resting-state in vivo (Figs 2C-J). It shows both a persistent asynchronous irregular activity with low firing rates (≈ 0–5 Hz) and punctuated events of transient synchrony at higher rates (≈ 8–15 Hz). The STDP rules are effective models of the relation between the dynamical activity of neurons and the underlying biochemical processes leading to synaptic plasticity [47]. STDP stresses that these biochemical changes, whether for depression or potentiation, take effect primarily when neurons fire concurrently within short time windows. In consequence, if the activity of a group of neurons is uncorrelated over time, the memories they form tend to be slowly forgotten. In our model, this slow forgetting is compensated by the occurrence of spontaneous (partial or complete) recalls of the previously learned memories, happening at random times. These recalls act as short boosts reinforcing the patterns of synaptic connectivity formed during the learning, and thus constitute an autonomous mechanism for memory consolidation during rest. These results emphasize the importance of considering dynamical models of learning which remain alive after training [31–33,38], instead of freezing their activity and plasticity, in order to understand the processes leading to memory consolidation and long-term maintenance.
It should also be stressed that the coexistence of asynchronous irregular firing and memory recalls is an emergent dynamical property of our model, rather than a behaviour enforced by the addition of internal mechanisms for frequency adaptation or by the application of external modulation. The asynchronous irregular activity emerges in our globally coupled model thanks to the following three ingredients: the presence of non-identical neurons mimicking different neural properties, distributed synaptic weights, and external noise. However, analogous results can be obtained in randomly connected deterministic networks due to internally self-generated fluctuations, as shown in S6 Text, as well as in previous analyses [31,32,38]. In a previous study with phase oscillators [36], the post-learning activity at rest showed a partially synchronous state similar to a limit cycle. Here, the network remains stable in its asynchronous regime while the recalls spontaneously pop up at random times and in random order, allowing more numerous and complex information to be encoded and maintained. This behaviour naturally results from the interplay between the underlying structural organization, shaped by the learning process, and the background activity eliciting the transitions, similar to the noise-driven state switching found in neural activity [106]. These results evidence the benefit of studying connectivity, dynamics and learning simultaneously, as they represent different faces of interdependent phenomena.
Memory capacity
Empirical and theoretical studies have associated inhibitory plasticity with the long-term storage of memories [58–61]. From these indications, one could deduce that the (long-term) memory capacity of the brain might be related to the quantity of inhibitory neurons. We have shown that the memory capacity of our model depends on the number of both excitatory and inhibitory neurons. The maximal storage capacity of our model is M = N ∕ 3 memory items (Fig 4B), where N is the total number of neurons. This represents more than twice the memory capacity of the classical Hopfield model (≃ 0.14N [107,108]), which does not respect Dale's principle: it allows the connections to take either positive or negative values, thus ignoring the separation of neurons into excitatory and inhibitory classes. It should be stressed that the value M = N ∕ 3 should be considered a theoretical limit. It arises from the fact that, in our model, each memory needs to be associated with at least three neurons: a triplet formed by one excitatory, one Hebbian inhibitory and one anti-Hebbian inhibitory neuron. While at this limit the stability of all the memories is guaranteed, achieving this capacity might be rather implausible for biological neural networks. For example, a real brain operating at this limit would be very fragile, since the lesion of a single neuron would be sufficient to erase a memory. Also, this limit requires the E ∕ I ratio to be 33 ∕ 66, while in the human cortex this ratio is known to be 80 ∕ 20. For this ratio, the maximal capacity of our model is indeed 0.1N.
An overview of the known variability of E ∕ I ratios across different parts of the human brain (table in Fig 4C) reveals that smaller regions such as the amygdala, thalamus, striatum and globus pallidus tend to have a higher proportion of inhibitory neurons than the cortex or the cerebellum. Extrapolating the memory-capacity results from our model allows us to speculate that regions with fewer resources for memory storage (in terms of total number of neurons) may compensate with a more favourable E ∕ I ratio. Similarly, the mouse admits a more favourable ratio than humans in some areas such as the basal ganglia, striatum, amygdala and hippocampus. While this interpretation is at the moment largely hypothetical, it may reflect a biological strategy for resource optimisation. Furthermore, it should be noted that memory capacity is further affected by the complexity of the memories: simple ones require fewer neurons to be stored, while complex memories each require more neurons. Hence, the empirically observed E ∕ I ratios may also reflect a compromise between the number of memories and the complexity of the information that needs to be stored.
Real neurons generally respond to, and encode for, multiple inputs and memories [109]. This mixed selectivity plays a crucial role in complex cognitive tasks, allowing the brain to simultaneously represent and integrate multiple sources of information [79,110]. In addition to retaining segregated memories, our model also exhibits the possibility of learning and recalling complex memories admitting mixed selectivity, see Fig 6. The neurons supporting mixed selectivity can be considered as hub neurons [111] connecting the stored clusters and facilitating the ability to transmit and integrate information [1,112]. The presence of hub neurons does not increase the memory capacity of a network; instead, they allow for more complex information to be encoded by establishing associations between memories.
Limitations and outlook
Although the model was developed to respect several biological constraints and to satisfy some realistic behavioural aspects, other choices and limitations could be addressed in future refinements. We omitted possible evolutionary aspects and assumed that synaptic plasticity alone is responsible for shaping the structure of the connectivity. Sensory systems organize information spatially, with neighbouring neurons representing similar response properties and forming, e.g., retinotopic, tonotopic or somatotopic maps [113–115]. However, in the present model the neurons form a graph with no spatial embedding. It would thus be a natural step to extend the model in the future with spatially distributed neurons, following biologically representative spatial constraints.
The long-range white matter connections between distant cortical regions are mainly formed by axonal bundles of excitatory pyramidal neurons [116,117], although some GABAergic cells have also been found to project across different brain regions [111,118]. When our model was trained with stimuli targeting separate neuronal populations, excitatory neurons formed local connections and only the anti-Hebbian inhibitory neurons developed “long-range” projections to other clusters. While this architecture is consistent with the lateral inhibition found in circuits for selectivity and decision-making [87,119,120], it is not representative of the excitatory long-range white matter fibers in the cortex. However, in the model, cross-modular excitatory connections appeared when the stimuli acted on overlapping populations, leading to the formation of excitatory hubs. While the reasons for the organization of excitatory and inhibitory cross-modular connectivities require further investigation, extensions of the model presented here might be relevant to explore those mechanisms. In fact, both the organization of micro-/macroscopic connectivities and the development of short-/long-term memories might be governed by distinct rules, which could be separately implemented in the model.
Methods
This section describes the spiking neuronal network model governing the dynamics of the neurons and the learning rules used for the adaptation of synaptic weights, as well as the microscopic and macroscopic indicators employed to characterize the network states and dynamics in the paper.
Spiking neuronal network model
We consider a network of QIF neurons, pulse coupled via exponentially decaying post-synaptic potentials and in the presence of STDP. Unless otherwise specified, the network will be composed of 80% (20%) excitatory (inhibitory) neurons, as usually observed in the human cortex [121]. According to the plasticity rules controlling the synaptic strengths of the connections, three neural sub-populations can be identified depending on the nature of the pre-synaptic neurons:
- excitatory neurons subject to asymmetric Hebbian STDP;
- inhibitory neurons subject to symmetric Hebbian STDP;
- inhibitory neurons subject to symmetric anti-Hebbian STDP.
The evolution of the membrane potential $v_i$ of each neuron ($i = 1,\dots,N$) is described by the following ordinary differential equation:

$$\tau_{\rm m}\,\dot{v}_i(t) = v_i^2(t) + \eta_i + I_i(t) + \tau_{\rm m}\left[I^{(e)}_i(t) + I^{(hi)}_i(t) + I^{(ai)}_i(t)\right] + \xi_i(t), \qquad (1)$$

where $\tau_{\rm m}$ is the membrane time constant (in ms, see Table 1); $\eta_i$ are the neuronal excitabilities, chosen to have an average neuronal firing rate at rest of around 1 Hz; $I_i$ are the external DC currents, leading the neurons to fire at around 50 Hz whenever stimulated; and $\xi_i(t)$ is a Gaussian white noise term tuned to induce a firing variability of ≃ 4 Hz. Whenever the membrane potential $v_i$ reaches $+\infty$, a spike is emitted and $v_i$ is reset to $-\infty$. In the absence of synaptic coupling, external DC current and noise, the QIF model displays excitable dynamics for $\eta_i < 0$, while for positive $\eta_i$ it behaves as an oscillator with period $T = \pi\tau_{\rm m}/\sqrt{\eta_i}$ [54].
The synaptic dynamics is mediated by the global synaptic strengths $g^{(e)}$, $g^{(hi)}$ and $g^{(ai)}$ for the excitatory, the Hebbian inhibitory and the anti-Hebbian inhibitory neurons, respectively. Finally, the evolution of the excitatory, Hebbian inhibitory and anti-Hebbian inhibitory synaptic currents $I^{(e)}_i$, $I^{(hi)}_i$ and $I^{(ai)}_i$ is given by

$$\tau^{(q)}_{\rm d}\,\dot{I}^{(q)}_i(t) = -I^{(q)}_i(t) + \frac{g^{(q)}}{N}\sum_{j\in q} w_{ij}(t)\sum_{n}\delta\big(t - t^{(n)}_j\big), \qquad q\in\{e,hi,ai\}, \qquad (2\text{–}4)$$

where $\tau^{(e)}_{\rm d}$ ($\tau^{(hi)}_{\rm d} = \tau^{(ai)}_{\rm d}$) are the exponential decay times for the excitatory (inhibitory) post-synaptic potentials (in ms, see Table 1); $w_{ij}$ are the plastic coupling weights from neuron j towards neuron i, whose dynamics is described in the following; $t^{(n)}_j$ is the n-th spike time of the j-th pre-synaptic neuron; and δ(t) is the Dirac delta function.

We consider a network of N all-to-all connected neurons without self-connections, with $N = N^{(e)} + N^{(hi)} + N^{(ai)}$, where $N^{(e)}$, $N^{(hi)}$ and $N^{(ai)}$ are the numbers of excitatory, Hebbian inhibitory and anti-Hebbian inhibitory neurons, respectively.
We integrate Eqs. (1), (2), (3), and (4) via a stochastic Euler scheme [122] with time step dt (see Table 1 for its value). Whenever $v_i$ overcomes a threshold value $v_{\rm th}$, we approximate the n-th firing time of neuron i as $t^{(n)}_i = t + \tau_{\rm m}/v_{\rm th}$, where $\tau_{\rm m}/v_{\rm th}$ corresponds to the time needed to reach $+\infty$ from the value $v_{\rm th}$. Furthermore, the neuron is reset to $-v_{\rm th}$ and held at this value for a time interval of $2\tau_{\rm m}/v_{\rm th}$ seconds, corresponding to the time needed for the neuron to reach $+\infty$ from $v_{\rm th}$ and to return to $-v_{\rm th}$ after being reset to $-\infty$ [82].
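A minimal NumPy sketch of one such integration step is given below. The network size, coupling matrix, noise amplitude and time constants are placeholders chosen for illustration (the actual values are listed in Table 1), and the three synaptic currents are condensed into a single current for brevity; this is not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(0)

# toy sizes and placeholder parameters (the paper's values are in Table 1)
N, tau_m, v_th, dt = 50, 0.02, 100.0, 1e-4
tau_d = 0.002                    # synaptic decay time (placeholder)
eta = -0.1 * np.ones(N)          # excitable regime at rest
noise_amp = 0.5
w = rng.uniform(0.0, 1.0, (N, N))
np.fill_diagonal(w, 0.0)         # all-to-all, no autapses

v = np.zeros(N)
I_syn = np.zeros(N)
refractory = np.zeros(N)         # remaining hold time after a spike

def euler_step(v, I_syn, refractory):
    """One stochastic Euler step with threshold crossing and reset."""
    active = refractory <= 0.0
    xi = noise_amp * np.sqrt(dt) * rng.standard_normal(N)
    v = np.where(active,
                 v + dt * (v**2 + eta + tau_m * I_syn) / tau_m + xi,
                 v)
    spiking = v >= v_th
    # spikes feed the exponentially decaying synaptic current
    I_syn = I_syn - dt * I_syn / tau_d + (w @ spiking.astype(float)) / (N * tau_d)
    # reset to -v_th and hold for the time needed to traverse +/- infinity
    v = np.where(spiking, -v_th, v)
    refractory = np.where(spiking, 2.0 * tau_m / v_th, refractory - dt)
    return v, I_syn, refractory

for _ in range(1000):
    v, I_syn, refractory = euler_step(v, I_syn, refractory)
```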
The choice of parameters sets the neurons into a low firing state with a relatively large variability ranging from 0.0 to 8 Hz, analogous to those observed in the cortex at rest. The external drives turn the neuronal activity into high frequency oscillations in the γ range of approximately 50 Hz, typical of high-order perceptual activity [123,124].
STDP rules
The time evolution of the synaptic weights $w_{ij}$ is controlled by STDP rules which depend on the time difference $\Delta t = t_i - t_j$ between the last spikes of the post-synaptic neuron i and the pre-synaptic neuron j [45,47–49]. The potentiation of the synapses is controlled by the plasticity function

$$\Lambda_P(\Delta t) = \max\{\Lambda(\Delta t),\,0\}, \qquad (5)$$

while their depression is controlled by

$$\Lambda_D(\Delta t) = \min\{\Lambda(\Delta t),\,0\}, \qquad (6)$$

where the function Λ(Δt) entering in Eqs. (5) and (6) depends on the nature of the pre-synaptic neuron.
Fig 7. Plasticity functions. (A) Hebbian asymmetric STDP (Eq. (7)); (B) Hebbian symmetric STDP (Eq. (8)); (C) anti-Hebbian symmetric STDP (Eq. (9)). Soft bound functions for excitatory (D) and inhibitory (E) neurons. The bounds for potentiation are shown in red and those for depression in blue. The displayed functions refer to M = 2 encoded stimuli, i.e. to f = 0.1.
For the pre-synaptic excitatory neurons we use an asymmetric Hebbian STDP function [45]. This function affects the evolution of the weights in a causal way: when the pre-synaptic neuron j emits a spike before (after) the post-synaptic neuron i, the synaptic weight $w_{ij}$ increases (decreases) [125]. The function $\Lambda^{(e)}(\Delta t)$ is depicted in Fig 7A, and its explicit expression reads as

$$\Lambda^{(e)}(\Delta t) = \begin{cases} A_P\, e^{-\Delta t/\tau_P} - f & \text{if } \Delta t \ge 0,\\ -A_D\, e^{\Delta t/\tau_D} - f & \text{if } \Delta t < 0, \end{cases} \qquad (7)$$

with the time constants $\tau_P$ and $\tau_D$ (in sec) and the amplitudes $A_P$ and $A_D$ reported in Table 1.

The term f models a natural slow forgetting of the memories [126,127] by causing a small but constant depression of the weights. We assume that f is inversely proportional to the number M of encoded items, namely $f = 0.2/M$, so that f = 0.1 for M = 2. This is in order to allow all the stored memory items to be randomly recalled during the post-learning resting phase (for more details, see S9 Text). Apart from the term f, all the other parameters are kept fixed in the simulations.
For pre-synaptic Hebbian inhibitory neurons, we use a symmetric Hebbian STDP function [40,51,86,90,91,128], which takes the form of a Ricker wavelet (or Mexican hat), potentiating (depressing) the weights of neurons spiking in a correlated (uncorrelated) way. The function $\Lambda^{(hi)}(\Delta t)$ is shown in Fig 7B and it reads as

$$\Lambda^{(hi)}(\Delta t) = A\left[1 - \left(\frac{\Delta t}{\tau}\right)^2\right] e^{-\Delta t^2/(2\tau^2)} - f, \qquad (8)$$

with the time constant τ = 0.1 sec, the amplitude A = 3 and the forgetting term f = 0.1.
For pre-synaptic anti-Hebbian inhibitory neurons we use a symmetric anti-Hebbian STDP function [52,85,86,89,91,129], which corresponds to a reversed Ricker wavelet (or reversed Mexican hat), potentiating (depressing) the weights of neurons spiking in an uncorrelated (correlated) way. The function $\Lambda^{(ai)}(\Delta t)$ is shown in Fig 7C and it takes the following expression:

$$\Lambda^{(ai)}(\Delta t) = -A\left[1 - \left(\frac{\Delta t}{\tau}\right)^2\right] e^{-\Delta t^2/(2\tau^2)} + f, \qquad (9)$$

with the time constant τ = 0.1 sec and the amplitude A = 3. In this case, the forgetting term f = 0.1 yields a constant small potentiation of the weights (the rule being anti-Hebbian) whatever the spike timing difference.
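The three plasticity kernels can be written compactly as follows; the excitatory time constants and amplitudes used here are placeholders, since their actual values are reported in Table 1.

```python
import numpy as np

def ricker(dt_spk, tau=0.1, A=3.0):
    """Ricker ("Mexican hat") wavelet underlying the symmetric STDP rules."""
    x = dt_spk / tau
    return A * (1.0 - x**2) * np.exp(-x**2 / 2.0)

def lambda_hebb_inh(dt_spk, f=0.1):
    """Symmetric Hebbian inhibitory STDP (Eq. (8))."""
    return ricker(dt_spk) - f

def lambda_anti_hebb_inh(dt_spk, f=0.1):
    """Symmetric anti-Hebbian inhibitory STDP (Eq. (9)): reversed Ricker."""
    return -ricker(dt_spk) + f

def lambda_exc(dt_spk, tau_p=0.02, tau_d=0.02, A_p=1.0, A_d=1.0, f=0.1):
    """Asymmetric Hebbian excitatory STDP (Eq. (7)); the time constants and
    amplitudes here are placeholders -- the paper's values are in Table 1."""
    return np.where(dt_spk >= 0.0,
                    A_p * np.exp(-dt_spk / tau_p),
                    -A_d * np.exp(dt_spk / tau_d)) - f
```

Correlated spiking (Δt ≈ 0) potentiates the Hebbian inhibitory kernel and depresses the anti-Hebbian one, while uncorrelated spiking (large |Δt|) leaves only the forgetting term.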
Adaptation of the synaptic weights
In this sub-section we explain in detail the evolution of the synaptic weights. In particular, when the pre-synaptic neuron j or the post-synaptic neuron i spikes at time t, the weight $w_{ij}$ is updated according to the following equations:

$$w_{ij} \to w_{ij} + \epsilon\,\sigma(q)\left[\Lambda^{(q)}_{x(q)}(\Delta t)\,B\big(w^{(q)}_{\max} - w_{ij}\big) + \Lambda^{(q)}_{y(q)}(\Delta t)\,B\big(w_{ij} - w^{(q)}_{\min}\big)\right], \qquad (10)$$

with the soft bound function

$$B(u) = \tanh(\lambda u), \qquad (11)$$

where q denotes whether the pre-synaptic neuron is excitatory (q = e) or Hebbian (anti-Hebbian) inhibitory, q = hi (q = ai). For excitatory (inhibitory) neurons we set $w^{(q)}_{\min} = 0$ and $w^{(q)}_{\max} = 1$ ($w^{(q)}_{\min} = -1$ and $w^{(q)}_{\max} = 0$), thus ensuring that the excitatory (inhibitory) couplings are defined within the interval [0, 1] ([−1, 0]). Moreover, $\Lambda^{(q)}_{+} \equiv \Lambda^{(q)}_P$ and $\Lambda^{(q)}_{-} \equiv \Lambda^{(q)}_D$, while x(e) = y(hi) = y(ai) = + and y(e) = x(hi) = x(ai) = −; thus, for inhibitory synapses, the plasticity functions $\Lambda_P$ and $\Lambda_D$ are exchanged and multiplied by −1 (σ(e) = +1, σ(hi) = σ(ai) = −1), since potentiation (depression) of the inhibitory weights makes them converge towards −1 (0). Furthermore, $\epsilon$ is the dimensionless learning rate for the adaptation (see Table 1), while the parameter λ = 100 controls the slope of the soft bound function tanh(λx).

These non-linear functions, depicted in Figs 7D and 7E, model the saturation of the synaptic weights between 0 and 1 [130]. The saturation is obtained by allowing the synaptic depression (potentiation) to dominate the potentiation (depression) term for large (small) values of the weights [47,131]. The steep slope of these functions (controlled by the parameter λ) guarantees a dynamical evolution of the synaptic weights even for large values of $w_{ij}$ [132]. In this sense, the functions used here are at the limit between a soft and a hard bound.
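As an illustration of the sign bookkeeping, the bounded update can be sketched as below. This is one possible concrete reading of the rule described in this sub-section (the sign factor and the exchange of the potentiation/depression terms for inhibitory synapses are made explicit), not the authors' implementation.

```python
import numpy as np

LAM = 100.0  # slope of the soft bound function tanh(LAM * u)

def soft_bound_update(w, lam_pot, lam_dep, excitatory, eps=1e-3):
    """Bounded weight update: potentiation is damped near the upper bound and
    depression near the lower bound, confining excitatory weights to [0, 1]
    and inhibitory ones to [-1, 0].  lam_pot >= 0 and lam_dep <= 0 are the
    potentiation/depression plasticity values for the current spike pair."""
    if excitatory:
        w_min, w_max, sign = 0.0, 1.0, 1.0
    else:
        w_min, w_max, sign = -1.0, 0.0, -1.0
        # for inhibitory synapses the roles of potentiation and depression
        # are exchanged and the whole update is multiplied by -1
        lam_pot, lam_dep = lam_dep, lam_pot
    dw = eps * sign * (lam_pot * np.tanh(LAM * (w_max - w))
                       + lam_dep * np.tanh(LAM * (w - w_min)))
    return w + dw
```

With this bookkeeping, potentiation drives excitatory weights towards 1 and inhibitory weights towards −1, while the updates vanish exactly at the respective bounds.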
We note that the firing activity of the neurons directly impacts the adaptation rate and that the synaptic weights are always subject to adaptation, unlike in conventional artificial neural networks. This differs from the previous study with phase oscillators and PDDP [36], where two different learning timescales were needed to allow for the storing and maintenance of the memory items.
All network parameters employed throughout this study are summarized in Table 1.
Microscopic and macroscopic indicators
In this sub-section, we define the indicators employed to characterize the network dynamics at a microscopic and macroscopic level.
Microscopic Indicators.
The behaviour of single neurons was quantified by their instantaneous firing rates, defined as

$$\nu_j(t) = \frac{S_j(t, t+T)}{T}, \qquad (12)$$

where $S_j(t, t+T)$ is the number of spikes emitted by neuron j (i.e. the spike count) in the time interval [t : t + T], with T = 0.05 sec.
For a population of neurons, the instantaneous population activity can be characterized by the average firing rate

$$\bar{\nu}(t) = \frac{1}{N}\sum_{j=1}^{N}\nu_j(t). \qquad (13)$$
The firing rate variability has been measured in terms of the coefficient of variation

$$CV_j = \frac{\sigma^{\rm ISI}_j}{\langle {\rm ISI}_j\rangle}, \qquad (14)$$

where $\langle {\rm ISI}_j\rangle$ is the mean interspike interval (ISI) of neuron j and $\sigma^{\rm ISI}_j$ its standard deviation. For a perfectly periodic firing $CV_j = 0$, while for a Poissonian process $CV_j = 1$.
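Both microscopic indicators can be computed directly from the spike times, as the following sketch illustrates (with toy spike trains, not simulation data):

```python
import numpy as np

def instantaneous_rate(spike_times, t, T=0.05):
    """Spike count of one neuron in the window [t, t+T) divided by T."""
    s = np.asarray(spike_times)
    return np.count_nonzero((s >= t) & (s < t + T)) / T

def coefficient_of_variation(spike_times):
    """CV of the interspike intervals: std(ISI) / mean(ISI)."""
    isi = np.diff(np.asarray(spike_times))
    return isi.std() / isi.mean()

# perfectly periodic 4 Hz train -> rate 4 Hz, CV = 0
periodic = np.arange(0.0, 10.0, 0.25)
# Poissonian train -> CV close to 1
rng = np.random.default_rng(42)
poissonian = np.cumsum(rng.exponential(0.2, size=20000))
```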
Macroscopic indicators.
The degree of synchronization in the network was quantified by the complex Kuramoto order parameter [133]

$$R(t)\,e^{i\Phi(t)} = \frac{1}{N}\sum_{j=1}^{N} e^{i\theta_j(t)}, \qquad (15)$$

where R(t) (Φ(t)) represents the modulus (phase) of the macroscopic indicator. The modulus R is employed to characterize the level of phase synchronization in the network: R > 0 (R = 1) for a partially (fully) synchronized network, while $R \simeq 1/\sqrt{N}$ for an asynchronous dynamics due to finite size effects.

To associate a continuous phase to the spiking activity of neuron j, we proceed in the following way:

$$\theta_j(t) = 2\pi\,\frac{t - t^{(n)}_j}{t^{(n+1)}_j - t^{(n)}_j}, \qquad t^{(n)}_j \le t < t^{(n+1)}_j, \qquad (16)$$

with $t^{(n)}_j$ the n-th firing time of neuron j.
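A minimal sketch of the linear phase interpolation described above and of the resulting order parameter (with toy spike trains) reads:

```python
import numpy as np

def interpolated_phase(spike_train, t):
    """Phase of a neuron at time t, growing linearly from 0 to 2*pi
    between consecutive spikes."""
    st = np.asarray(spike_train)
    n = np.searchsorted(st, t, side="right") - 1  # index of last spike <= t
    return 2.0 * np.pi * (t - st[n]) / (st[n + 1] - st[n])

def kuramoto_modulus(phases):
    """Modulus R of the complex Kuramoto order parameter."""
    return np.abs(np.mean(np.exp(1j * np.asarray(phases))))

# two identical periodic trains: fully synchronized, R = 1
train = np.arange(0.0, 2.0, 0.25)
phases_sync = [interpolated_phase(train, 0.6), interpolated_phase(train, 0.6)]
# phases spread uniformly over the circle: R vanishes
phases_async = np.linspace(0.0, 2.0 * np.pi, 1000, endpoint=False)
```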
The mean rate of variation of the synaptic weights was estimated as

$$K(t) = \frac{1}{N(N-1)}\sum_{i \ne j}\frac{w_{ij}(t + \Delta t) - w_{ij}(t)}{\Delta t}, \qquad (17)$$

where $w_{ij}(t)$ and $w_{ij}(t + \Delta t)$ are the synaptic coupling weights from neuron j to i at times t and t + Δt respectively, with Δt = 0.1 sec. The normalization term is N(N−1) since we consider an all-to-all connected network without autapses. The indicator K(t) takes positive (negative) values for an overall increase (decrease) in the weight connectivity.
Supporting information
S1 Text. Learning of 2 stimuli with an untrained population.
https://doi.org/10.1371/journal.pcbi.1012973.s001
(PDF)
S2 Text. Learning of 2 stimuli with randomly stimulated neurons.
https://doi.org/10.1371/journal.pcbi.1012973.s002
(PDF)
S3 Text. Learning of 2 stimuli with random amplitude frequencies.
https://doi.org/10.1371/journal.pcbi.1012973.s003
(PDF)
S4 Text. Evolution of a network initially made of four structural modules in absence of any stimulation.
https://doi.org/10.1371/journal.pcbi.1012973.s004
(PDF)
S6 Text. A simulation for a network composed of N = 20000 neurons and a simulation for a network composed of N = 1000 neurons and sparse connections.
https://doi.org/10.1371/journal.pcbi.1012973.s006
(PDF)
S7 Text. Learning of 33 stimuli in a network composed of N = 100 neurons.
https://doi.org/10.1371/journal.pcbi.1012973.s007
(PDF)
S8 Text. Post-learning network connections stability for networks of size N = 2000, N = 1000, N = 500 and N = 200 neurons, trained to M = 20, M = 10, M = 5 and M = 2 memories.
https://doi.org/10.1371/journal.pcbi.1012973.s008
(PDF)
S9 Text. Estimation of the time needed on average to forget the reinforcement of the synaptic weights induced by a single recall event.
https://doi.org/10.1371/journal.pcbi.1012973.s009
(PDF)
References
- 1. Zamora-López G, Zhou C, Kurths J. Cortical hubs form a module for multisensory integration on top of the hierarchy of cortical networks. Front Neuroinform 2010; 4:1. pmid:20428515
- 2. Varshney LA, Chen BL, Paniagua E, Hall DH, Chklovskii DB. Structural properties of the Caenorhabditis elegans neuronal network. PLoS Comput Biol 2011;7(2):e1001066. pmid:21304930
- 3. Sporns O. Network attributes for segregation and integration in the human brain. Curr Opin Neurobiol 2013;23(2):162–71. pmid:23294553
- 4. Hilgetag CC, Goulas A. ’Hierarchy’ in the organization of brain networks. Philos Trans R Soc Lond B Biol Sci. 2020;375(1796):20190319. pmid:32089116
- 5. Suarez LE, Yovel Y, van den Heuvel M, Sporns O, Assaf Y, Lajoie G, et al. A connectomics-based taxonomy of mammals. Elife 2022; 11:e78635. pmid:36342363
- 6. Lin A, Yang R, Dorkenwald S, Matsliah A, Sterling AR, Schlegel P, et al. Network statistics of the whole-brain connectome of Drosophila. Nature 2024; 634:153–65. pmid:39358527
- 7. Sporns O, Tononi GM. Classes of network connectivity and dynamics. Complexity 2001;7(1):28–38.
- 8. Shanahan M. Broadcast and the network. In: Embodiment and the inner life: Cognition and consciousness in the space of possible minds. Oxford University Press; 2010.
- 9. Zamora-López G, Zhou C, Kurths J. Exploring brain function from anatomical connectivity. Front Neurosci 2011; 5:83. pmid:21734863
- 10. Scannell JW, Blakemore C, Young MP. Analysis of connectivity in the cat cerebral cortex. J Neurosci 1995;15(2):1463–83. pmid:7869111
- 11. Hilgetag CC, Burns GA, O’Neill MA, Scannell JW, Young MP. Anatomical connectivity defines the organization of clusters of cortical areas in the macaque and the cat. Philos Trans R Soc Lond B Biol Sci 2000;355(1393):91–110. pmid:10703046
- 12. Meunier D, Lambiotte R, Bullmore ET. Modular and hierarchically modular organization of brain networks. Front Neurosci 2010; 4:200. pmid:21151783
- 13. Senden M, Deco G, De Reus MA, Goebel R, Van Den Heuvel MP. Rich club organization supports a diverse set of functional network configurations. Neuroimage 2014; 96:174–82. pmid:24699017
- 14. Bertolero MA, Yeo BTT, D’Esposito M. The modular and integrative functional architecture of the human brain. Proc Natl Acad Sci U S A 2015;112(49):E6798–807. pmid:26598686
- 15. Rubinov M, Sporns O, van Leeuwen C, Breakspear M. Symbiotic relationship between brain structure and dynamics. BMC Neurosci 2009; 10:55. pmid:19486538
- 16. Damicelli F, Hilgetag CC, Hütt MT, Messé A. Topological reinforcement as a principle of modularity emergence in brain networks. Network Neuroscience. 2019;3(2):589–605. pmid:31157311
- 17. Manz P, Memmesheimer RM. Purely STDP-based assembly dynamics: Stability, learning, overlaps, drift and aging. PLoS Comput Biol 2023;19(4):e1011006. pmid:37043481
- 18. Li J, Bauer R, Rentzeperis I, van Leeuwen C. Adaptive rewiring: a general principle for neural network development. Front Netw Physiol 2024; 4:1410092. pmid:39534101
- 19. Kirkby LA, Sack GS, Firl A, Feller MB. A role for correlated spontaneous activity in the assembly of neural circuits. Neuron 2013;80(5):1129–44. pmid:24314725
- 20. Shanahan M. Metastable chimera states in community-structured oscillator networks. Chaos 2010; 20:013108. pmid:20370263
- 21. Shanahan M. A spiking neuron model of cortical broadcast and competition. Conscious Cogn 2007; 17:288–303. pmid:17317220
- 22. Schaub MT, Billeh YN, Anastassiou CA, Koch C, Barahona M. Emergence of slow-switching assemblies in structured neuronal networks. PLoS Comput Biol 2015;11(7):e1004196. pmid:26176664
- 23. Fuster J. Unit activity in prefrontal cortex during delayed- response performance: neural correlates of transient memory. J Neurophysiol 1973; 36:61. pmid:4196203
- 24. Miyashita Y, Hayashi T. Neural representation of visual objects: encoding and top-down activation. Curr Opin Neurobiol 2000; 10:187–94. pmid:10753793
- 25. Hubel DH, Wiesel TN. Receptive fields, binocular interactions and functional architecture in the cat’s visual cortex. J Physiol 1962; 160:106–54. pmid:14449617
- 26. Del Giudice P, Mattia M. Long and short-term synaptic plasticity and the formation of working memory: a case study. Neurocomputing. 2001;38:1175–80.
- 27. Del Giudice P, Fusi S, Mattia M. Modelling the formation of working memory with networks of integrate-and-fire neurons connected by plastic synapses. J Physiol Paris 2003;97(4-6):659–81. pmid:15242673
- 28. Amit DJ, Mongillo G. Spike-driven synaptic dynamics generating working memory states. Neural Comput 2003;15(3):565–96. pmid:12620158
- 29. Morrison A, Aertsen A, Diesmann M. Spike-timing-dependent plasticity in balanced random networks. Neural Comput 2007;19(6):1437–1467. pmid:17444756
- 30. Clopath C, Büsing L, Vasilaki E, Gerstner W. Connectivity reflects coding: a model of voltage-based STDP with homeostasis. Nat Neurosci 2010;13(3):344–352. pmid:20098420
- 31. Litwin-Kumar A, Doiron B. Formation and maintenance of neuronal assemblies through synaptic plasticity. Nat Commun 2014; 5:5319. pmid:25395015
- 32. Zenke F, Agnes EJ, Gerstner W. Diverse synaptic plasticity mechanisms orchestrated to form and retrieve memories in spiking neural networks. Nat Commun 2015; 6:6922. pmid:25897632
- 33. Ocker GK, Doiron B. Training and spontaneous reinforcement of neuronal assemblies by spike timing plasticity. Cerebral Cortex 2019;29(3):937–51. pmid:29415191
- 34. Fauth MJ, van Rossum MCW. Self-organized reactivation maintains and reinforces memories despite synaptic turnover. eLIFE 2019; 8:e43717. pmid:31074745
- 35. Miehl C, Onasch S, Festa D, Gjorgijeva J. Formation and computational implications of assemblies in neural circuits. J Physiol 2023;601(15):3071–90. pmid:36068723
- 36. Bergoin R, Torcini A, Deco G, Quoy M, Zamora-López G. Inhibitory neurons control the consolidation of neural assemblies via adaptation to selective stimuli. Sci Rep 2023;13(1):6949. pmid:37117236
- 37. Bergoin R. The role of inhibitory plasticity in the formation and the long-term maintenance of neural assemblies and memories. CY Cergy Paris Université; Universitat Pompeu Fabra; 2023.
- 38. Yang X, La Camera G. Co-existence of synaptic plasticity and metastable dynamics in a spiking model of cortical circuits. PLoS Comput Biol 2024;20(7):e1012220. pmid:38106233
- 39. Gu Y, Gong P. The dynamics of memory retrieval in hierarchical networks. J Comput Neurosci 2016;40(3):247–68. pmid:26922679
- 40. Vogels TP, Sprekeler H, Zenke F, Clopath C, Gerstner W. Inhibitory plasticity balances excitation and inhibition in sensory pathways and memory networks. Science 2011;334(6062):1569–73. pmid:22075724
- 41. Carrillo-Reid L, Yang W, Bando Y, Peterka DS, Yuste R. Imprinting and recalling cortical ensembles. Science 2016;353(6300):691–4. pmid:27516599
- 42. Lagzi F, Bustos MC, Oswald AM, Doiron B. Assembly formation is stabilized by Parvalbumin neurons and accelerated by Somatostatin neurons. BioRxiv. preprint. 2021; pp. 2021–09. http://dx.doi.org/10.1101/2021.09.06.459211
- 43. Van Vreeswijk C, Sompolinsky H. Chaos in neuronal networks with balanced excitatory and inhibitory activity. Science 1996;274(5293):1724–6. pmid:8939866
- 44. Brunel N. Dynamics of sparsely connected networks of excitatory and inhibitory spiking neurons. J Comput Neurosci 2000; 8:183–208. pmid:10809012
- 45. Bi G, Poo M. Synaptic modifications in cultured hippocampal neurons: dependence on spike timing, synaptic strength, and postsynaptic cell type. J Neurosci 1998;18(24):10464–72. pmid:9852584
- 46. Bi Gq, Poo Mm. Synaptic modification by correlated activity: Hebb’s postulate revisited. Annu Rev Neurosci 2001;24(1):139–66. pmid:11283308
- 47. Sjöström J, Gerstner W. Spike-timing dependent plasticity. Scholarpedia 2010;5(2):1362.
- 48. Caporale N, Dan Y. Spike timing-dependent plasticity: a Hebbian learning rule. Annu Rev Neurosci 2008;31(1):25–46. pmid:18275283
- 49. Mikkelsen K, Imparato A, Torcini A. Emergence of slow collective oscillations in neural networks with spike-timing dependent plasticity. Phys Rev Lett 2013;110(20):208101. pmid:25167453
- 50. Bienenstock EL, Cooper LN, Munro PW. Theory for the development of neuron selectivity: orientation specificity and binocular interaction in visual cortex. J Neurosci 1982;2(1):32–48. pmid:7054394
- 51. Lamsa K, Heeroma JH, Kullmann DM. Hebbian LTP in feed-forward inhibitory interneurons and the temporal fidelity of input discrimination. Nat Neurosci 2005;8(7):916–24. pmid:15937481
- 52. Lamsa KP, Heeroma JH, Somogyi P, Rusakov DA, Kullmann DM. Anti-Hebbian long-term potentiation in the hippocampal feedback inhibitory circuit. Science. 2007;315(5816):1262–1266.
- 53. Softky WR, Koch C. The highly irregular firing of cortical cells is inconsistent with temporal integration of random EPSPs. J Neurosci 1993;13(1):334–50. pmid:8423479
- 54. Ermentrout GB, Kopell N. Parabolic bursting in an excitable system coupled with a slow oscillation. SIAM J Appl Math 1986;46(2):233–53.
- 55. Perkel DH, Gerstein GL, Moore GP. Neuronal spike trains and stochastic point processes: II. Simultaneous spike trains. Biophys J 1967;7(4):419–40. pmid:4292792
- 56. De Felipe J, Marco P, Fairén A, Jones E. Inhibitory synaptogenesis in mouse somatosensory cortex. Cereb Cortex 1997;7(7):619–34. pmid:9373018
- 57. Rubinski A, Ziv NE. Remodeling and tenacity of inhibitory synapses: relationships with network activity and neighboring excitatory synapses. PLoS Comput Biol 2015;11(11):e1004632. pmid:26599330
- 58. Karunakaran S, Chowdhury A, Donato F, Quairiaux C, Michel CM, Caroni P. PV plasticity sustained through D1/5 dopamine signaling required for long-term memory consolidation. Nat Neurosci 2016;19(3):454–64. pmid:26807952
- 59. Barron HC, Vogels TP, Behrens TE, Ramaswami M. Inhibitory engrams in perception and memory. Proc Natl Acad Sci U S A 2017;114(26):6666–74. pmid:28611219
- 60. Mongillo G, Rumpel S, Loewenstein Y. Inhibitory connectivity defines the realm of excitatory plasticity. Nat Neurosci 2018;21(10):1463–70. pmid:30224809
- 61. Giorgi C, Marinelli S. Roles and transcriptional responses of inhibitory neurons in learning and memory. Front Mol Neurosci 2021; 14:689952. pmid:34211369
- 62. Arcelli P, Frassoni C, Regondi M, De Biasi S, Spreafico R. GABAergic neurons in mammalian thalamus: a marker of thalamic complexity? Brain Res Bull 1997;42(1):27–37. pmid:8978932
- 63. Avino TA, Barger N, Vargas MV, Carlson EL, Amaral DG, Bauman MD, et al. Neuron numbers increase in the human amygdala from birth to adulthood, but not in autism. Proc Natl Acad Sci U S A 2018;115(14):3710–5. pmid:29559529
- 64. D’Angelo E. Physiology of the cerebellum. Handb Clin Neurol 2018; 154:85–108. pmid:29903454
- 65. Gandolfi D, Mapelli J, Solinas SM, Triebkorn P, D’Angelo E, Jirsa V, et al. Full-scale scaffold model of the human hippocampus CA1 area. Nat Comput Sci 2023;3(3):264–76. pmid:38177882
- 66. Hardman CD, Henderson JM, Finkelstein DI, Horne MK, Paxinos G, Halliday GM. Comparison of the basal ganglia in rats, marmosets, macaques, baboons, and humans: volume and neuronal number for the output, internal relay, and striatal modulating nuclei. J Comp Neurol 2002;445(3):238–55. pmid:11920704
- 67. Herrero MT, Barcia C, Navarro J. Functional anatomy of thalamus and basal ganglia. Childs Nerv Syst 2002;18(8):386–404. pmid:12192499
- 68. Karlsen AS, Pakkenberg B. Total numbers of neurons and glial cells in cortex and basal ganglia of aged brains with Down syndrome—a stereological study. Cereb Cortex 2011;21(11):2519–24. pmid:21427166
- 69. Pelkey KA, Chittajallu R, Craig MT, Tricoire L, Wester JC, McBain CJ. Hippocampal GABAergic inhibitory interneurons. Physiol Rev 2017;97(4):1619–747. pmid:28954853
- 70. Sah P, Faber EL, Lopez de Armentia M, Power J. The amygdaloid complex: anatomy and physiology. Physiol Rev 2003;83(3):803–34. pmid:12843409
- 71. Schröder K, Hopf A, Lange H, Thörner G. Morphometrical-statistical structure analysis of human striatum, pallidum and subthalamic nucleus. J Hirnforsch 1975;16(4):333–50. pmid:1214057
- 72. Singh-Bains MK, Waldvogel HJ, Faull RL. The role of the human globus pallidus in Huntington’s disease. Brain Pathol 2016;26(6):741–51. pmid:27529459
- 73. Spampanato J, Polepalli J, Sah P. Interneurons in the basolateral amygdala. Neuropharmacology 2011;60(5):765–73. pmid:21093462
- 74. Von Bartheld CS, Bahney J, Herculano-Houzel S. The search for true numbers of neurons and glial cells in the human brain: A review of 150 years of cell counting. J Comp Neurol 2016;524(18):3865–95. pmid:27187682
- 75. Erö C, Gewaltig MO, Keller D, Markram H. A cell atlas for the mouse brain. Front Neuroinform 2018; 12:84. pmid:30546301
- 76. Rodarie D, Verasztó C, Roussel Y, Reimann M, Keller D, Ramaswamy S, et al. A method to estimate the cellular composition of the mouse brain from heterogeneous datasets. PLoS Comput Biol 2022;18(12):e1010739. pmid:36542673
- 77. Roussel Y, Verasztó C, Rodarie D, Damart T, Reimann M, Ramaswamy S, et al. Mapping of morpho-electric features to molecular identity of cortical inhibitory neurons. PLoS Comput Biol 2023;19(1):e1010058. pmid:36602951
- 78. McGaugh JL. The amygdala modulates the consolidation of memories of emotionally arousing experiences. Annu Rev Neurosci 2004; 27:1–28. pmid:15217324
- 79. Rigotti M, Barak O, Warden MR, Wang XJ, Daw ND, Miller EK, et al. The importance of mixed selectivity in complex cognitive tasks. Nature 2013;497(7451):585–90. pmid:23685452
- 80. Parthasarathy A, Herikstad R, Bong JH, Medina FS, Libedinsky C, Yen SC. Mixed selectivity morphs population codes in prefrontal cortex. Nat Neurosci 2017;20(12):1770–9. pmid:29184197
- 81. Mi Y, Katkov M, Tsodyks M. Synaptic correlates of working memory capacity. Neuron 2017;93(2):323–330. pmid:28041884
- 82. Taher H, Torcini A, Olmi S. Exact neural mass model for synaptic-based working memory. PLoS Comput Biol 2020;16(12):e1008533. pmid:33320855
- 83. Petilla terminology: nomenclature of features of GABAergic interneurons of the cerebral cortex. Nat Rev Neurosci. 2008;9(7):557–68. pmid:18568015
- 84. Kullmann DM, Moreau AW, Bakiri Y, Nicholson E. Plasticity of inhibition. Neuron 2012;75(6):951–62. pmid:22998865
- 85. Koch G, Ponzo V, Di Lorenzo F, Caltagirone C, Veniero D. Hebbian and anti-Hebbian spike-timing-dependent plasticity of human cortico-cortical connections. J Neurosci 2013;33(23):9725–33. pmid:23739969
- 86. Wu YK, Miehl C, Gjorgjieva J. Regulation of circuit organization and function through inhibitory synaptic plasticity. Trends Neurosci 2022;45(12):884–98. pmid:36404455
- 87. Guy J, Möck M, Staiger JF. Direction selectivity of inhibitory interneurons in mouse barrel cortex differs between interneuron subtypes. Cell Rep 2023;42(1):111936. pmid:36640357
- 88. Politi A, Torcini A. A robust balancing mechanism for spiking neural networks. Chaos 2024;34(4):041102. https://doi.org/10.1063/5.0199298
- 89. Földiák P. Forming sparse representations by local anti-Hebbian learning. Biol Cybern 1990;64(2):165–170. pmid:2291903
- 90. Luz Y, Shamir M. Balancing feed-forward excitation and inhibition via Hebbian inhibitory synaptic plasticity. PLoS Comput Biol 2012;8(1):e1002334. pmid:22291583
- 91. Kleberg FI, Fukai T, Gilson M. Excitatory and inhibitory STDP jointly tune feedforward neural circuits to selectively propagate correlated spiking activity. Front Comput Neurosci 2014; 8:53. pmid:24847242
- 92. Bliss TV, Collingridge GL. A synaptic model of memory: long-term potentiation in the hippocampus. Nature 1993;361(6407):31–9. pmid:8421494
- 93. McGaugh JL. Memory–a century of consolidation. Science 2000;287(5451):248–51. pmid:10634773
- 94. Malenka RC, Bear MF. LTP and LTD: an embarrassment of riches. Neuron 2004;44(1):5–21. pmid:15450156
- 95. Torao-Angosto M, Manasanch A, Mattia M, Sanchez-Vives MV. Up and down states during slow oscillations in slow-wave sleep and different levels of anesthesia. Front Syst Neurosci 2021; 15:609645. pmid:33633546
- 96. Wang JY, Heck KL, Born J, Ngo HVV, Diekelmann S. No difference between slow oscillation up-and down-state cueing for memory consolidation during sleep. J Sleep Res 2022;31(6):e13562. pmid:35166422
- 97. Squire LR, Alvarez P. Retrograde amnesia and memory consolidation: a neurobiological perspective. Curr Opin Neurobiol 1995;5(2):169–77. pmid:7620304
- 98. Steriade M. Impact of network activities on neuronal properties in corticothalamic systems. J Neurophysiol 2001;86(1):1–39. pmid:11431485
- 99. Stickgold R. Sleep-dependent memory consolidation. Nature 2005;437:1272–8.
- 100. Marzano C, Ferrara M, Mauro F, Moroni F, Gorgoni M, Tempesta D, et al. Recalling and forgetting dreams: theta and alpha oscillations during sleep predict subsequent dream recall. J Neurosci 2011;31(18):6674–83. pmid:21543596
- 101. Dudai Y. The restless engram: consolidations never end. Annu Rev Neurosci 2012; 35:227–47. pmid:22443508
- 102. Theodoni P, Rovira B, Wang Y, Roxin A. Theta-modulation drives the emergence of connectivity patterns underlying replay in a network model of place cells. Elife 2018; 7:e37388. pmid:30355442
- 103. Eichenlaub JB, Nicolas A, Daltrozzo J, Redouté J, Costes N, Ruby P. Resting brain activity varies with dream recall frequency between subjects. Neuropsychopharmacology 2014;39(7):1594–602. pmid:24549103
- 104. Buzsaki G, Draguhn A. Neuronal oscillations in cortical networks. Science 2004;304(5679):1926–9. pmid:15218136
- 105. Fries P. A mechanism for cognitive dynamics: neuronal communication through neuronal coherence. Trends Cogn Sci 2005 Oct;9(10):474–80. pmid:16150631
- 106. Rolls ET, Deco G. The noisy brain: stochastic dynamics as a principle of brain function. Oxford University Press; 2010.
- 107. Hopfield JJ. Neural networks and physical systems with emergent collective computational abilities. Proc Natl Acad Sci U S A 1982;79(8):2554–8. pmid:6953413
- 108. Amit DJ, Gutfreund H, Sompolinsky H. Storing infinite numbers of patterns in a spin-glass model of neural networks. Phys Rev Lett 1985;55(14):1530–3. pmid:10031847
- 109. Quiroga RQ, Reddy L, Kreiman G, Koch C, Fried I. Invariant visual representation by single neurons in the human brain. Nature 2005;435(7045):1102–7. pmid:15973409
- 110. Johnston WJ, Palmer SE, Freedman DJ. Nonlinear mixed selectivity supports reliable neural computation. PLoS Comput Biol 2020;16(2):e1007544. pmid:32069273
- 111. Bonifazi P, Goldin M, Picardo MA, Jorquera I, Cattani A, Bianconi G, et al. GABAergic hub neurons orchestrate synchrony in developing hippocampal networks. Science 2009;326(5958):1419–24. pmid:19965761
- 112. Van Den Heuvel MP, Sporns O. Rich-club organization of the human connectome. J Neurosci 2011;31(44):15775–86. pmid:22049421
- 113. Mountcastle VB. Modality and topographic properties of single neurons of cat’s somatic sensory cortex. J Neurophysiol 1957;20(4):408–34. pmid:13439410
- 114. Kaas JH. Topographic maps are fundamental to sensory processing. Brain Res Bull 1997;44(2):107–12. pmid:9292198
- 115. Kaas JH, Collins CE. The organization of sensory cortex. Curr Opin Neurobiol 2001;11(4):498–504. pmid:11502398
- 116. Petreanu L, Mao T, Sternson SM, Svoboda K. The subcellular organization of neocortical excitatory connections. Nature 2009;457(7233):1142–5. pmid:19151697
- 117. Zhang S, Xu M, Kamigaki T, Hoang Do JP, Chang WC, Jenvay S, et al. Long-range and local circuits for top-down modulation of visual cortex processing. Science 2014;345(6197):660–5. pmid:25104383
- 118. Caputi A, Melzer S, Michael M, Monyer H. The long and short of GABAergic neurons. Curr Opin Neurobiol 2013;23(2):179–86. pmid:23394773
- 119. Gerstner W, Kistler WM, Naud R, Paninski L. Neuronal dynamics: from single neurons to networks and models of cognition. Cambridge University Press; 2014.
- 120. Maass W. On the computational power of winner-take-all. Neural Comput 2000;12(11):2519–35. pmid:11110125
- 121. Abeles M. Corticonics: neural circuits of the cerebral cortex. Cambridge University Press; 1991.
- 122. Toral R, Colet P. Stochastic numerical methods: an introduction for students and scientists. John Wiley & Sons; 2014.
- 123. Tallon-Baudry C, Bertrand O, Delpuech C, Pernier J. Stimulus specificity of phase-locked and non-phase-locked 40 Hz visual responses in human. J Neurosci 1996;16(13):4240–9. pmid:8753885
- 124. Herrmann CS. Human EEG responses to 1–100 Hz flicker: resonance phenomena in visual cortex and their potential correlation to cognitive phenomena. Exp Brain Res 2001;137(3–4):346–53. pmid:11355381
- 125. Carlson KD, Richert M, Dutt N, Krichmar JL. Biologically plausible models of homeostasis and STDP: stability and learning in spiking neural networks. In: The 2013 International Joint Conference on Neural Networks (IJCNN). IEEE; 2013. pp. 1–8.
- 126. Wixted JT. The psychology and neuroscience of forgetting. Annu Rev Psychol 2004; 55:235–69. pmid:14744216
- 127. Hardt O, Nader K, Nadel L. Decay happens: the role of active forgetting in memory. Trends Cogn Sci 2013;17(3):111–20. pmid:23369831
- 128. Perez Y, Morin F, Lacaille JC. A Hebbian form of long-term potentiation dependent on mGluR1a in hippocampal inhibitory interneurons. Proc Natl Acad Sci U S A 2001;98(16):9401–6. pmid:11447296
- 129. Plumbley MD. Efficient information transfer and anti-Hebbian neural networks. Neural Networks 1993;6(6):823–33.
- 130. Jin Y, Li P. AP-STDP: A novel self-organizing mechanism for efficient reservoir computing. In: 2016 International Joint Conference on Neural Networks (IJCNN). IEEE; 2016. pp. 1158–65.
- 131. Van Rossum MC, Shippi M, Barrett AB. Soft-bound synaptic plasticity increases storage capacity. PLoS Comput Biol 2012;8(12):e1002836. pmid:23284281
- 132. Gilson M, Fukai T. Stability versus neuronal specialization for STDP: long-tail weight distributions solve the dilemma. PLoS One 2011;6(10):e25339. pmid:22003389
- 133. Kuramoto Y. Chemical oscillations, waves, and turbulence. Courier Corporation; 2003.