Abstract
Understanding the intricate interplay between neural dynamics and metabolic constraints is crucial for unraveling the mysteries of the brain. Despite the significance of this relationship, specific details concerning the impact of metabolism on neuronal dynamics and neural network architecture remain elusive, creating a notable gap in the existing literature. This study employs an energy-dependent neuron and plasticity model to analyze the role of local metabolic constraints in shaping both the dynamics and structure of Spiking Neural Networks (SNN). Specifically, an energy-dependent version of the leaky integrate-and-fire model is utilized, along with a three-factor learning rule that incorporates postsynaptic available energy as the third factor. These models allow for fine-tuning sensitivity in the presence of energy imbalances. Analytical expressions predicting the network’s activity and structure are derived, and a fixed point analysis reveals the emergence of attractor states characterized by neuronal and synaptic sensitivity to energy imbalances. Analytical findings are validated through numerical simulations using an excitatory-inhibitory network. Furthermore, these simulations enable the study of SNN activity and structure under conditions simulating metabolic impairment. In conclusion, by employing energy-dependent models with adjustable sensitivity to energy imbalances, our study advances the understanding of how metabolic constraints shape SNN dynamics and structure. Moreover, in light of compelling evidence linking neuronal metabolic impairment to neurodegenerative diseases, the incorporation of local metabolic constraints into the investigation of neuronal network structure and activity opens an intriguing avenue for inspiring the development of therapeutic interventions.
Author summary
Our study explores the complex relationship between brain activity and energy use, an area crucial for understanding how the brain functions. Despite its importance, the impact of energy availability on brain cell dynamics and network structure is not well understood. To address this, we developed a model that integrates energy constraints into the behavior of neurons and synapses. Using an energy-dependent neuron model and a synaptic plasticity rule that considers energy availability, we analyzed how these constraints affect the brain’s activity and structure. Our model allows us to adjust sensitivity to energy imbalances, offering detailed predictions about network behavior. Through analytical and numerical methods, we discovered that energy constraints can lead to distinct stable states in neural networks. We also simulated scenarios of metabolic impairment, providing insights into how energy deficiencies might affect brain function. This work enhances our understanding of the role of metabolism in brain activity and suggests new directions for research into neurodegenerative diseases, where metabolism is often impaired.
Citation: Jaras I, Orchard ME, Maldonado PE, Vergara RC (2025) Unveiling the role of local metabolic constraints on the structure and activity of spiking neural networks. PLoS Comput Biol 21(6): e1013148. https://doi.org/10.1371/journal.pcbi.1013148
Editor: Fleur Zeldenrust, Radboud Universiteit Donders Institute for Brain, Cognition and Behaviour, Nijmegen, the Netherlands
Received: October 31, 2023; Accepted: May 20, 2025; Published: June 13, 2025
Copyright: © 2025 Jaras et al. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Data Availability: The source code associated with this study is available on GitHub. Researchers and readers can access, download, and collaborate using the provided code to replicate or extend our analyses. The specific repository link is as follows: https://github.com/Wiss/edsnn.
Funding: This work was partially supported by the National Center for Artificial Intelligence CENIA, FB210017, BASAL, ANID, awarded to P.E.M. and R.C.V., by FONDECYT Grant 1250036, and by the Advanced Center for Electrical and Electronic Engineering, BASAL Project AFB240002, awarded to M.E.O. The authors also acknowledge support from ANID-PFCHA/Doctorado Nacional/2019-21190330 for funding Ismael Jaras’s doctoral studies.
Competing interests: The authors have declared that no competing interests exist.
Introduction
Neural tissue consumes a disproportionate amount of energy resources compared to other somatic tissues [1], with electrical activity accounting for up to 75% of a neuron's energy expenditure [2, 3]. This level of resource consumption is not trivial, as neurons are extremely sensitive to resource deprivation, particularly to decreases in oxygen and glucose levels in blood or cerebrospinal fluid (CSF) [4]. Furthermore, many neurodegenerative diseases, such as Parkinson's disease, Leigh syndrome, amyotrophic lateral sclerosis (ALS), and Alzheimer's disease, have been proposed to arise from impaired energy management [5–10]. Despite the recognized relevance of energy management in the nervous system [11, 12], it is rarely considered a relevant factor in neuroscience and in neural modeling.
Energy management also appears relevant in common behavioral activities. Neural activity elicits local increases in blood flow (neurovascular coupling), glucose uptake, and oxygen consumption [13]; this same energy management mechanism is what fMRI exploits to map behavior onto the brain [14]. As such, fMRI evidence can also be interpreted as a correlate between behavior and energy management in neural tissue. However, the energy management of active neurons is even more finely tuned. For instance, neurons outsource glycolytic activity to astrocytes, keeping their energy levels fairly constant over time [15]. Besides the energy substrates supplied by astrocytes during neuronal activity, neurons possess intrinsic mechanisms that buffer ATP levels. Accordingly, neuronal mitochondria raise ATP synthesis in response to increased synaptic stimulation [10, 16–19]. Together, these observations provide robust evidence for the importance of energy administration in maintaining the brain's proper function, motivating the development of modeling strategies that account for metabolic dynamics.
Attempting to establish the significance of energy management as a pivotal factor influencing neural systems, the Energy Homeostasis Principle (EHP) was introduced in [11, 12]. The rationale behind this principle posits that neurons, in meeting their metabolic needs, inadvertently address behavioral challenges as an epiphenomenon. The EHP allows the reinterpretation of several physiological mechanisms. One of them is the synapse, which is reinterpreted as an energetic control mechanism. While neurons lack direct control over presynaptic stimulation, they exert influence over the "weight" or strength of such activity. Therefore, synaptic weights can be used to regulate the energy expenses incurred by incoming activity from presynaptic neurons. Critically, synaptic weight modifications imply changes in neural network dynamics and, ultimately, behavior. Consequently, the core hypothesis derived from the EHP posits that energy management significantly shapes synaptic weights, thereby impacting the broader dynamics of neural systems.
While the Energy Homeostasis Principle (EHP) has made strides in addressing certain gaps in understanding, a comprehensive grasp of the effects of local energy constraints at a network level remains elusive. To advance our comprehension of the impact of metabolic constraints on the brain, an effective strategy involves formalizing neuronal and synaptic models. This formalization should encompass experimentally observed energy dependencies, affording the opportunity to analytically scrutinize the implications of metabolic constraints as well as simulate their effects.
This work addresses that gap by following the approach outlined above. Here we create an energy-dependent spike-timing-dependent plasticity (STDP) rule and, jointly with a previously introduced energy-dependent neuronal model [20], we study the role of local metabolic constraints on the structure and activity of spiking neural networks, particularly excitatory-inhibitory (E-I) networks. Our exploration employs two complementary methods to assess the impact of metabolic constraints at the network level. Initially, we conduct a mathematical analysis to understand the consequences of incorporating energy dependencies into both neurons and synapses. Subsequently, we undertake simulations to further analyze the effects of metabolic constraints, providing a basis for comparison against the analytical results.
Through our mathematical analysis, we find the conditions under which the network converges toward a fixed point. We then compare theoretical predictions with numerical simulations, observing good agreement between the two. Regarding the different simulation cases, we divide the experiments into three main scenarios. First, we study the dynamics and structure of E-I networks focusing solely on synaptic sensitivity to energy imbalances. We then extend our simulations to include neuronal sensitivity to energy imbalances. Finally, motivated by the evidence linking neurodegenerative diseases to metabolic impairments, we investigate the effect of impaired neuronal metabolic production on the dynamics and structure of the network.
This study aims to advance our understanding of how local metabolic constraints influence the structure and activity of spiking neural networks. To achieve this, we introduce a novel energy-dependent plasticity rule and investigate the impact of local metabolic constraints through both analytical methods and numerical simulations. In doing so, we have focused on simulating in vitro neural network conditions, as those experimental settings are the most accessible to explore the interrelation between metabolism and plasticity.
Materials and methods
Energy dependent leaky integrate-and-fire (EDLIF)
The leaky integrate-and-fire (LIF) model is a single-compartment model describing the subthreshold dynamics of the neuron's membrane potential. Despite its simplicity, the LIF model is widely used for modeling neuronal dynamics. The LIF model does not take into account the neuronal energetics affecting its dynamics; ultimately, however, the function of the neuron rests on its ability to balance energy expenditure and production. Within this balance, the sodium-potassium pump is a key actor, as it relies on available ATP to restore the resting potential and thus allow action potentials to occur [21]. Consequently, neuronal behavior depends on an adequate balance between energy production and expenditure. The main objective of the EDLIF model [20] is to include this energy dependence in the neuronal dynamics while maintaining the LIF model's simplicity. To accomplish this, the EDLIF model makes neuronal behavior explicitly dependent on the available neuronal energy through an energy-dependent partial repolarization mechanism.
As in the LIF model, in the EDLIF model the membrane voltage of the neuron is described as follows:

C_m dV(t)/dt = -g_L (V(t) - E_L) + I(t),    (1)

where C_m is the membrane capacitance, E_L the equilibrium potential of the leak channel, g_L is the conductance associated with the current leakage, and I(t) the input current. Unlike the LIF model, in the EDLIF model the reset potential after an action potential occurs is metabolically dependent. This dependency is achieved by the inclusion of a partial repolarization mechanism [22], which works by resetting the potential of the capacitor to V_r = V_th - γ after an action potential occurs, where V_th is the firing threshold and γ is called the reset parameter. In the EDLIF model, the reset parameter (γ) depends on ATP as follows:

γ(A(t)) = γ_0 e^{-α_γ (A_H - A(t))/A_H},    (2)

where γ_0 is the reset parameter at the homeostatic energy level, A_H is the homeostatic ATP level, and A(t) the neuron's ATP level. This relationship represents the membrane repolarization voltage after an action potential occurs as a function of the available ATP in the neuron. Given that the precise curve of the repolarization membrane voltage as a function of ATP is unknown, a sensitivity parameter α_γ is introduced. This parameter provides the flexibility to adjust the intensity at which the repolarization membrane voltage is affected by changes in the ATP level.
When a constant suprathreshold stimulus I is applied to the neuron model with initial condition V(0) = V_r, the membrane potential satisfies the following trajectory:

V(t) = V_∞ + (V_r - V_∞) e^{-t/τ_m},    (3)

where V_∞ = E_L + I/g_L and τ_m = C_m/g_L. The time it takes for a spike to be generated (i.e., the interspike interval, s_0) is obtained by imposing V(s_0) = V_th [23]:

s_0 = τ_m ln( (V_∞ - V_r)/(V_∞ - V_th) ).    (4)

Therefore, the corresponding ATP-dependent spiking rate for constant available energy in the neuron is described by the following equation:

ν(A) = [ τ_ref + τ_m ln( (V_∞ - V_th + γ(A))/(V_∞ - V_th) ) ]^{-1},    (5)

where τ_ref is a time constant accounting for neuronal refractoriness. In our brains, the available energy in neurons changes through time, so it is necessary to include this time dependence in the available energy to estimate the firing rate of the neuron. Fig 1 shows how the neuron's firing-rate response to an injected constant current changes when the neuronal sensitivity to energy imbalances (α_γ) is modified and the available energy changes through time.
The current-frequency mapping depends on the neuronal sensitivity to energy imbalances. If α_γ is higher, the neuron's firing rate saturates earlier (i.e., with a smaller current) than that of a neuron with lower neuronal sensitivity to energy imbalances.
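As an illustration, the ATP-dependent f-I curve can be evaluated numerically. The sketch below assumes an exponential dependence of the reset parameter on the energy deficit, and all parameter values are placeholders rather than the values used in the paper's simulations:

```python
import numpy as np

# Sketch of the ATP-dependent f-I curve. The exponential form of
# gamma(A) and all parameter values are illustrative assumptions,
# not the paper's simulation parameters.
TAU_REF = 2.0      # refractory period (ms)
TAU_M = 10.0       # membrane time constant (ms)
G_L = 0.05         # leak conductance
E_L = -65.0        # leak equilibrium potential (mV)
V_TH = -50.0       # firing threshold (mV)
GAMMA_BAR = 10.0   # reset parameter at the homeostatic energy level (mV)
A_H = 100.0        # homeostatic energy level (%)

def gamma(a, alpha_g):
    """Energy-dependent reset parameter (assumed exponential form)."""
    return GAMMA_BAR * np.exp(-alpha_g * (A_H - a) / A_H)

def firing_rate(i_in, a, alpha_g):
    """ATP-dependent firing rate for a constant input current i_in."""
    v_inf = E_L + i_in / G_L
    if v_inf <= V_TH:
        return 0.0  # subthreshold current: the neuron never fires
    isi = TAU_M * np.log((v_inf - V_TH + gamma(a, alpha_g)) / (v_inf - V_TH))
    return 1.0 / (TAU_REF + isi)  # spikes per ms

# Below the homeostatic level, a more sensitive neuron has a smaller
# reset gap, so its rate approaches the 1/TAU_REF ceiling sooner.
r_insensitive = firing_rate(1.0, 50.0, alpha_g=0.0)
r_sensitive = firing_rate(1.0, 50.0, alpha_g=2.0)
```

Under these assumptions, increasing the sensitivity parameter raises the rate at a given energy deficit, reproducing the earlier saturation described above.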
Neuronal ATP dynamics.
In the EDLIF model, the ATP dynamics are characterized by considering two processes: those that supply ATP to the neuron (A_s(t)) and those that consume ATP (C(t)). The available ATP in the neuron (A(t)) can be formalized as follows [11]:

dA(t)/dt = A_s(t) - C(t),    (6)

and the neuron's homeostatic mechanisms attempt to keep ATP levels close to the homeostatic ATP level (A_H). The ATP production dynamics are mathematically formalized by Eq 7. Thus, ATP production depends on the actual ATP level with respect to the homeostatic one (K(A_H - A(t))) and on a basal production term (A_B) accounting for resting potential and housekeeping activities:

A_s(t) = K(A_H - A(t)) + A_B,    (7)

where the parameter K (in 1/ms units) is the rate at which ATP is produced.
Regarding energy consumption, we consider four energy-consuming activities in this study: the energy related to the neuron's housekeeping activities (E_hk), the energy associated with maintaining the resting potential (E_rp), the total energy expended by the neuron's action potentials (E_ap), and the total energy associated with receiving postsynaptic potentials due to presynaptic neighboring neurons' action potentials (E_syn). Both E_hk and E_rp are consumption rates, measured in energy per time unit. In contrast, E_ap and E_syn refer to the total amount of energy consumed by those activities. To obtain the corresponding consumption rates for E_ap and E_syn, we use temporal kernels. In this manner, the amount of energy expended per time unit associated with action potentials (or postsynaptic potentials) is described as E_ap κ_ap(t) (or E_syn κ_syn(t)), where κ_x is the kernel associated with process x and ∫_0^∞ κ_x(t) dt = 1.
Originally, in the EDLIF model the production and consumption of ATP are measured in mM/ms, while the available ATP A(t) is measured in mM. However, it is experimentally difficult to precisely measure ATP concentrations in single neurons or populations. To overcome this difficulty and to define a more general framework, here we use percentage units to quantify the available energy (A(t)), the homeostatic energy (A_H), and activity-related energy consumption such as action potential energy consumption (E_ap), housekeeping (E_hk), resting potential (E_rp), and postsynaptic energy consumption (E_syn).
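A minimal forward-Euler integration illustrates the homeostatic behavior of Eqs 6 and 7. In the sketch below, parameter values are illustrative and consumption is restricted to the basal term (equal to basal production, as assumed in this work), so the energy trace relaxes to A_H:

```python
# Forward-Euler integration of the ATP balance, Eqs 6 and 7.
# Only basal consumption is included (C(t) = C_B = A_B), so the
# fixed point of the dynamics is the homeostatic level A_H.
# Parameter values are illustrative.
DT = 0.1      # integration step (ms)
K = 0.01      # ATP production rate (1/ms)
A_H = 100.0   # homeostatic ATP level (%)
A_B = 0.2     # basal production (%/ms), matched by basal consumption

def integrate_atp(a0, consumption, n_steps):
    """Trace of dA/dt = K*(A_H - A) + A_B - C(t) from A(0) = a0."""
    a = a0
    trace = [a]
    for step in range(n_steps):
        c = consumption(step * DT)
        a += DT * (K * (A_H - a) + A_B - c)
        trace.append(a)
    return trace

# Starting below A_H with purely basal consumption, A(t) relaxes to A_H.
trace = integrate_atp(60.0, lambda t: A_B, 20000)
```

Passing a time-varying `consumption` function (a hypothetical argument of this sketch) allows probing how activity-related expenses pull A(t) away from the homeostatic level.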
Static synapses and energy consumption in the neuron.
To understand in detail the energy perturbations induced by presynaptic neurons on postsynaptic energy levels through synapses, let us analyze the neuron's energy dynamics when several presynaptic neurons are connected to it. In a network, each neuron's energy evolves following Eq 6, where A_s(t) follows Eq 7, and the energy consumption C can be divided into the following terms:

C(t) = C_B + C_ap(t) + C_syn(t),    (8)

where C_B represents the basal energy consumption associated with maintaining the resting potential and general housekeeping activities (equal to the basal production A_B in this work), C_ap is the energy consumption accounting for the neuron's action potentials, and C_syn is the energy consumption related to receiving action potentials from other neurons through the synapses. Note that C_ap(t) (C_syn(t)) is different from E_ap (E_syn): the former is the action potential (postsynaptic) energy consumption through time, while the latter is the total amount of energy related to that activity. Thus, C_ap (C_syn) is obtained using E_ap (E_syn) and a kernel function κ_ap (κ_syn), which describes how the total energy consumption is spent through time.
The solution to Eq 6 with an arbitrary energy consumption C(t) is described by Eq 9:

A(t) = A(0) e^{-K t} + (A_H + A_B/K)(1 - e^{-K t}) - ∫_0^t e^{-K(t-s)} C(s) ds,    (9)

where K is the rate at which ATP can be produced in the neuron (see Eq 7).
For one presynaptic action potential arriving at time t^f and propagating through a synapse with fixed strength w, the synaptic consumption in the postsynaptic neuron can be described by:

C_syn(t) = E_syn |ŵ| Θ(t - t^f) κ_syn(t - t^f),    (10)

where Θ denotes the Heaviside step function, E_syn is the total energy expended for an incoming presynaptic action potential, |ŵ| is the absolute value of the normalized synaptic strength (ŵ = w/w_max, where w_max is the maximum allowed weight strength), and κ_syn is a kernel describing how the synaptic energy expenditure is spent through time (∫_0^∞ κ_syn(t) dt = 1). If we choose a kernel that can be linearly summed, then the synaptic energy expenditure in the postsynaptic neuron due to several presynaptic cells with their respective spike times t_k^f is:

C_syn(t) = E_syn Σ_k |ŵ_k| Σ_f Θ(t - t_k^f) κ_syn^k(t - t_k^f).    (11)
Similarly, for each action potential in the postsynaptic neuron at time t^f, the energy expenditure can be described by:

C_ap(t) = E_ap Θ(t - t^f) κ_ap(t - t^f),    (12)

where E_ap is the energy expenditure related to one action potential and κ_ap is analogous to κ_syn, but for the postsynaptic action potentials (∫_0^∞ κ_ap(t) dt = 1). If κ_ap can be linearly summed, then Eq 12 reads:

C_ap(t) = E_ap Σ_f Θ(t - t^f) κ_ap(t - t^f).    (13)
To find closed-form expressions for the available energy in the postsynaptic neuron, it is possible to define the κ_syn and κ_ap kernels and plug Eqs 11 and 13 into Eq 6. In particular, if exponential kernels are used for κ_syn and κ_ap, Eq 11 reads:

C_syn(t) = E_syn Σ_k |ŵ_k| Σ_f Θ(t - t_k^f) (1/τ_e^k) e^{-(t - t_k^f)/τ_e^k},    (14)

where τ_e^k is the energy time constant of the kth synapse, which describes how fast the postsynaptic neuron's energy changes given an incoming action potential through the kth synapse. Note that if |ŵ_k| = 1, the total energy consumption in the postsynaptic neuron produced by the presynaptic neuron sending one action potential through the kth synapse is E_syn. Thus, Eq 14 guarantees that for the maximum allowed synaptic weight, the total energy consumption perceived by the postsynaptic neuron is exactly E_syn, and a fraction (|ŵ_k|) of E_syn otherwise.
otherwise. Similarly, using an exponential kernel, Eq 13 can be expressed as:
where is the time constant defining how fast is Eap expended through time. Eq 15 guaranties that the neuron will consume a total amount of Eap for each action potential.
Therefore, plugging Eqs 14 and 15 into Eq 9, we have a closed-form expression for the neuron's energy dynamics under exponential kernels (κ_syn and κ_ap) for energy consumption:

A(t) = A(0) e^{-K t} + (A_H + (A_B - C_B)/K)(1 - e^{-K t})
       - E_syn Σ_k |ŵ_k| Σ_f Θ(t - t_k^f) [ e^{-(t - t_k^f)/τ_e^k} - e^{-K(t - t_k^f)} ] / (K τ_e^k - 1)
       - E_ap Σ_f Θ(t - t^f) [ e^{-(t - t^f)/τ_ap} - e^{-K(t - t^f)} ] / (K τ_ap - 1).    (16)
Eq 16 describes the energy dynamics of a single neuron within a network, given all incoming synaptic strengths and spike times. Thus, it allows one to precisely quantify and analyze the energy dynamics of each neuron within a network with arbitrary architecture.
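The closed-form solution can be checked against a direct numerical integration of Eq 6. The sketch below does this for a single presynaptic spike through one synapse with an exponential energy kernel; basal production and consumption cancel, and parameter values are illustrative:

```python
import math

# Check of the closed-form energy response (Eq 16) for a single
# presynaptic spike against forward-Euler integration of Eq 6.
# Basal production and consumption cancel; values are illustrative.
K = 0.01      # production rate (1/ms)
A_H = 100.0   # homeostatic energy level (%)
E_SYN = 5.0   # total energy per presynaptic spike (%)
W_HAT = 0.5   # normalized synaptic strength
TAU_E = 20.0  # synaptic energy kernel time constant (ms)
T_F = 10.0    # presynaptic spike time (ms)

def a_closed(t):
    """Closed-form A(t) with A(0) = A_H and one spike at T_F."""
    if t <= T_F:
        return A_H
    d = t - T_F
    pulse = (math.exp(-d / TAU_E) - math.exp(-K * d)) / (K * TAU_E - 1.0)
    return A_H - E_SYN * W_HAT * pulse

def a_euler(t, dt=1e-3):
    """Forward-Euler solution of dA/dt = K*(A_H - A) - C_syn(t)."""
    a = A_H
    for i in range(int(t / dt)):
        s = i * dt
        c = 0.0
        if s >= T_F:
            c = E_SYN * W_HAT * math.exp(-(s - T_F) / TAU_E) / TAU_E
        a += dt * (K * (A_H - a) - c)
    return a

# Both solutions should agree to numerical precision.
err = abs(a_closed(60.0) - a_euler(60.0))
```

The transient dips below A_H after the spike and then recovers at the production rate K, as the analytical expression predicts.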
Energy dependent Spike-timing-dependent plasticity
We aim to investigate how local energy limitations influence network attributes. While having an energy-dependent single-neuron model is crucial, it is not the only consideration: we must also factor in the metabolic impact on synapses. The neuron's energy administration is essential for its adequate operation, and considering the significant effect of metabolic restrictions on synaptic activity [9, 10, 24, 25], the need for synaptic transmission rules that depend on the energy management of the cell naturally emerges.
Energy dependent Spike-timing-dependent plasticity model.
Here we leverage previously defined plasticity models to introduce an energy-dependent synaptic plasticity rule.
Particularly, we introduce an energy-dependent plasticity rule for synaptic strength. We follow the formalization of STDP introduced in [26]:

Δw = λ f_+(w) K(Δt)     if Δt > 0,
Δw = -λ f_-(w) K(Δt)    if Δt ≤ 0,    (17)

where the synaptic efficacy w is normalized to [0,1]. The learning rate (λ) scales the magnitude of the individual weight change, K(Δt) = e^{-|Δt|/τ_±} is a temporal filter (with time constants τ_+ for potentiation and τ_- for depression, and Δt = t_post - t_pre the difference between post- and presynaptic spike times), and f_±(w) is defined as follows:

f_+(w) = (1 - w)^μ,    f_-(w) = α w^μ,    (18)

with α accounting for the asymmetry between the scales of potentiation and depression.
In general, neurons harbor energetic/metabolic sensors such as AMP-activated protein kinase, which tends to restore ATP concentration by decreasing energy consumption while increasing energy production after different energetic challenges [27–29]. This same mechanism has been linked to plasticity [29], where it was shown that energetic stress can suppress LTP, while treatments targeting energetic pathways can rescue it by removing the energetic stress. These findings inspired our energy-dependent LTP rule. Specifically, when glycolysis is pharmacologically inhibited (so that ATP production from glucose is suppressed), LTP is suppressed [29]. Therefore, our energy-dependent STDP rule should suppress LTP when the energy level is low. Following this observation, we proceed to extend Eq 17 to account for energetics, keeping in mind that this is a practical simplification of a more complex biological phenomenon, but one that allows deepening our understanding of the interactions between metabolism and plasticity and their impact on neural network dynamics.
To account for energetics in STDP, it is possible to extend f_+(w) in Eq 18 to include the energy level A of the postsynaptic neuron, obtaining f_+(w,A). There are several ways to mathematically formalize the inclusion of the postsynaptic energy level in f_+(w), but we follow a rationale similar to the one used in the EDLIF neuron: we want energy-dependent STDP to be sensitive to energy imbalance. Hence, we can investigate not only how energy affects STDP, but also how energy imbalance affects STDP given a certain level of energy imbalance sensitivity in that synapse's plasticity. Using this rationale, we extend f_+(w) to f_+(w,A) as follows:

f_+(w, A) = (1 - w)^μ e^{-η (A_H - A)/A_H},    (19)

where η is the sensitivity of the synapse's plasticity to energy imbalances, A_H is the homeostatic energy level, and A is the postsynaptic energy level. STDP effects can be thought of as a change in postsynaptic receptor density. Given that the postsynaptic neuron has to modify its receptor density, its energy level affects receptor density modifications. Note that if the synapse is not sensitive to energy imbalances (i.e., η = 0), then we recover the original STDP rule.
The postsynaptic energy level can be interpreted as a plasticity moderator. Thus, our energy-dependent STDP rule is a special case of a three-factor learning rule, with postsynaptic available energy A as modulator.
For the depression updating function f_-(w) we keep the original formulation (f_-(w) = α w^μ, see Eq 18), because we lack conclusive evidence justifying a dependence of depression on postsynaptic available energy. However, energy may affect depression as well.
Plugging Eq 19 into Eq 17, it is possible to describe the weight update accounting for pre- and postsynaptic spike times and the postsynaptic energy level:

Δw = λ (1 - w)^μ e^{-η(A_H - A)/A_H} e^{-Δt/τ_+}    if Δt > 0,
Δw = -λ α w^μ e^{Δt/τ_-}                            if Δt ≤ 0.    (20)

In principle, Eq 20 allows the updating rule to include dependencies on the weight w if μ ≠ 0. In those cases, the updating rule is called multiplicative, while if μ = 0 the updating rule is additive, because the weight update is independent of the current weight value. Multiplicative rules generate unimodal weight distributions, while additive rules generate bimodal weight distributions [26]. Bimodal weight distributions are observed in biology, and because additive STDP rules are more mathematically tractable than multiplicative ones, in what follows we focus on the additive update rule (μ = 0). However, there is also significant biological evidence supporting multiplicative STDP. For this reason, we define Eq 20 in a general form, allowing the exploration of multiplicative energy-dependent STDP in future work.
Network model
There is biological evidence suggesting the existence of excitatory-inhibitory (E-I) balanced networks in different parts of the brain, such as the CA3 region of the hippocampus, basal ganglia, and the primary visual cortex [30–33]. Thus, E-I networks appear as a suitable model to explore neural networks’ dynamics and structure. However, in general, energy or metabolic constraints are not considered when simulating and studying E-I networks.
To mathematically describe an E-I network with n_E excitatory and n_I inhibitory neurons, we introduce the network's weight matrix w of dimension (n_E + n_I) × (n_E + n_I), where the element w_ij of w corresponds to the synapse from the jth presynaptic neuron to the ith postsynaptic neuron. w^{EE} is the n_E × n_E submatrix containing all the excitatory-excitatory connections, w^{EI} is the n_I × n_E submatrix containing all the excitatory-to-inhibitory connections, w^{IE} is the n_E × n_I submatrix containing all the inhibitory-to-excitatory connections, and w^{II} is the n_I × n_I submatrix containing all the inhibitory-inhibitory connections. Because both LTP and LTD depend on the activation of NMDA receptors and are absent when the postsynaptic neurons are GABAergic [34], and because LTP and LTD mechanisms in inhibitory neurons are widely diverse in their physiology [35], only the excitatory-excitatory connections (w^{EE}) are plastic in our E-I network. The main reason for this is that we cannot generalize the energy dependence observed for excitatory neurons, given that inhibitory neurons have relevant physiological differences compared to excitatory neurons.
The mean firing rate of each neuron in the interval [t, t + Δt] is described by the (n_E + n_I)-dimensional vector ν = (ν_E, ν_I), where ν_E is the n_E-dimensional vector containing the excitatory mean firing rates, whereas ν_I contains the inhibitory mean firing rates in an n_I-dimensional vector.
Network’s firing-rate.
To improve our analytical understanding of the E-I network, it is practical to have an approximation of the network's firing rate ν. In principle, using the LIF model (or the EDLIF model with no sensitivity to energy imbalances, i.e., α_γ = 0) and knowing the model's parameters, the firing rate of a neuron excited by a constant current can be easily calculated. More generally, the firing rate of a neuron can be calculated by defining a function F(I), mapping stimulation current to firing rate. If the neuron is insensitive to energy imbalances (i.e., α_γ = 0) and is excited with a supra-threshold current I, then the function F can be found utilizing Eq 21:

F(I) = [ τ_ref + τ_m ln( (V_∞ - V_r)/(V_∞ - V_th) ) ]^{-1},    (21)

where τ_ref is the neuron's refractory period, τ_m = C_m/g_L is the neuron's time constant, and V_∞ = E_L + I/g_L. In this case, ν = F(I), where I is the constant incoming current to the neuron. Thus, the firing rate of each neuron under constant current stimulation can be calculated knowing the neuron's parameters, the incoming constant current, and the F function, which is monotonically increasing in I. Therefore, if we can approximate the mean incoming current Ī_i to a neuron in a small time window, it is possible to approximate the neuron's firing rate as ν_i ≈ F(Ī_i). Hence, to approximate each neuron's firing rate, let us approximate the mean incoming current Ī_i to each neuron:

Ī_i(t) = Σ_j w_ij Σ_f (1/Δt) ∫_t^{t+Δt} ε(s - t_j^f) ds + I_i^{ext}.    (22)
Therefore, the mean incoming current to a neuron is the mean incoming synaptic current plus the constant external stimulation current injected into the neuron. Eq 22 describes the mean incoming current to a neuron utilizing an interaction kernel ε. If we use a Dirac delta interaction kernel (i.e., ε(t) = δ(t)), Eq 22 reads:

Ī_i = Σ_j w_ij ν_j + I_i^{ext}.    (23)
If we define the constant incoming stimulation current to each neuron as the (n_E + n_I)-dimensional vector I^{ext}, and the mean incoming synaptic current as the (n_E + n_I)-dimensional vector w ν, Eq 23 can be expressed in matrix form as follows:

Ī = w ν + I^{ext}.    (24)
It is possible to apply F point-wise to obtain the approximated network's firing rate:

ν = F(w ν + I^{ext}).    (25)

Eq 25 imposes a current-frequency constraint on the network. In general, F defines a nonlinear relationship between the incoming current and the neuron's firing rate. However, we can Taylor expand Eq 25 around ν_0:

ν ≈ [1 - D w]^{-1} [ F(Ī_0) - D w ν_0 ],  with Ī_0 = w ν_0 + I^{ext} and D = diag(F'(Ī_0)),    (26)

where 1 is the identity matrix. Moreover, if F is a linear transformation such as F(I) = aI + b, the following relationship holds:

ν = (1 - a w)^{-1} (a I^{ext} + b),    (27)

with b applied element-wise. Eqs 26 and 27 show that, if the weights are static on average (⟨Δw⟩ = 0), then the firing rate of the network converges toward an equilibrium. This observation is important because it highlights the fact that, in general, constant weights are needed to achieve a constant firing rate.
Eq 25 defines a relation determining plausible weight-frequency values, which in principle, are independent of the network’s energy dynamics. In the Results section, we will explore how these constraints interact with the metabolic constraints at the network level.
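The self-consistency between rates and currents can be illustrated with a fixed-point iteration of the rate equation on a toy two-neuron network; the weight values, current scaling, and neuron parameters below are illustrative placeholders:

```python
import numpy as np

# Fixed-point iteration of the self-consistent rate equation,
# r = F(I_ext + W r), for a toy two-neuron network. Weights, currents,
# and neuron parameters are illustrative placeholders.
TAU_REF = 2.0   # refractory period (ms)
TAU_M = 10.0    # membrane time constant (ms)
G_L = 0.05      # leak conductance
E_L = -65.0     # leak equilibrium potential (mV)
V_TH = -50.0    # firing threshold (mV)
V_R = -65.0     # reset potential (mV)

def f_rate(i_in):
    """LIF current-to-rate map, applied element-wise (zero if subthreshold)."""
    v_inf = E_L + i_in / G_L
    rate = np.zeros_like(i_in)
    supra = v_inf > V_TH
    isi = TAU_M * np.log((v_inf[supra] - V_R) / (v_inf[supra] - V_TH))
    rate[supra] = 1.0 / (TAU_REF + isi)
    return rate

W = np.array([[0.0, -0.3],   # inhibitory synapse onto neuron 0
              [0.4, 0.0]])   # excitatory synapse onto neuron 1
I_EXT = np.array([1.2, 1.0])

r = np.zeros(2)
for _ in range(200):  # iterate until the rates are self-consistent
    r = f_rate(I_EXT + W @ r)
```

Because the synaptic feedback is weak here, the iteration contracts and converges to a rate vector that simultaneously satisfies the current and frequency constraints.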
Simulation details.
To numerically test our theoretical predictions, the network is simulated using the Neural Simulation Tool (NEST) [36]; the EDLIF neuronal model and the ED-STDP synaptic model are specified using NESTML [37], a domain-specific language tailored for the NEST spiking neural network simulator. Models and code are available on GitHub (https://github.com/Wiss/edsnn).
The simulated network has n = 500 neurons, with an excitatory-inhibitory ratio chosen to follow biologically realistic values [38].
Given the need for a mathematically tractable framework, the network architecture is defined with all-to-all structural connectivity. In addition, following in vitro measurements of weight strengths in neuronal cell assemblies [39], the initial synaptic strength values for the simulated network are drawn from an exponential distribution (see Eq 28):

p(w) = (1/β) e^{-w/β},    (28)

where p(w) is the probability density function of the exponential distribution. This means that the distribution of the weights w is described by an exponential distribution with scale parameter β. Therefore, the expected value of the weights is E[w] = β and their variance is Var[w] = β². For the simulations we used a fixed value of β.
To emulate the incoming activity to each neuron from other brain areas, we inject into each neuron a constant current drawn from a Gaussian distribution with mean μ_I and standard deviation σ_I. By assuming that all neurons share the same parameter values, we study the homogeneous-parameter scenario (neuronal parameters are listed in S1 Table).
Results
Energy-dependent plasticity and equilibrium
While studying static synapses and energy consumption in the neuron, closed-form expressions for the available energy level in the cell were introduced. In Eq 16, the neuron's energy expenditure due to static synaptic activity is weighted by the normalized synaptic strength |ŵ_k|; however, the strength between neurons changes over time (i.e., dw/dt ≠ 0). Here we analytically explore how the available energy in the neuron, jointly with the synaptic sensitivity to energy imbalances η, impacts the synaptic plasticity between neurons. Specifically, we seek to understand how metabolic constraints dictate synaptic plasticity equilibrium points, where synaptic potentiation and depression balance each other on average.
As we previously explained, our energy-dependent STDP rule uses the postsynaptic available energy as a neuromodulator. Therefore, for low postsynaptic energy levels, potentiation is suppressed, and the suppression depends on the sensitivity parameter η. For the following explanation, let us assume equal synaptic time constants (i.e., τ_+ = τ_- = τ in Eq 20). Intuitively, given Eq 20 and initial postsynaptic available energy A(t = 0) = A_H, if peak potentiation is stronger than peak depression (i.e., α < 1) then, for uniformly random pre- and postsynaptic spike times in a short simulation window, weights tend to increase on average (see Fig 2A).
Potentiation suppression depends on the postsynaptic energy level A. (a) For A > A*, potentiation is favored, while for A < A* depression is favored (b). (c) The weight drift (Eq 32) is shown for varying levels of postsynaptic available energy A and different synaptic sensitivities to energy imbalances η. The points of intersection between the horizontal gray line and the colored curves indicate zero drift and the equilibrium energy level A* for different η values.
Higher weights induce more energy consumption in the postsynaptic neuron (because of the increased energy expenditure related to postsynaptic potentials and also because postsynaptic neurons will integrate more current, potentially increasing the firing rate, which also contributes to energy consumption). Therefore, postsynaptic available energy decreases, suppressing potentiation. If potentiation is strongly suppressed, then, for uniformly random pre- and postsynaptic spike times, in a short-time window simulation, depression will have a greater effect, generating a net depression effect on weights (see Fig 2B). This net depression forces weights to decrease on average, thus decreasing the postsynaptic firing rate and also diminishing postsynaptic energy consumption. Consequently, postsynaptic available energy increases, favoring potentiation over depression and producing a potentiation-depression loop (see Fig 2C). However, it is also possible that the postsynaptic available energy reaches an equilibrium level where potentiation and depression balance each other. Let us formally explain this intuition. If we assume uniformly distributed random inter-spike intervals :
Given the postsynaptic available energy A, the net effect, also called drift, experienced by weights is the integral of from Eq 20 over all possible
:
If , Eq 31 reads:
Consequently, if , then potentiation is favored (i.e.,
). Whereas if
depression is favored. Similarly, by imposing
we can find the update weight fixed point (i.e.,
), which gives the energy level
value for which potentiation and depression mutually balance each other:
Eq 33 predicts the available energy level on the postsynaptic neuron for which, on average, potentiation and depression are compensated.
Moreover, the equilibrium level for postsynaptic available energy is inversely proportional to the sensitivity parameter
. As a consequence, if the plasticity rule is highly sensitive to energy imbalances, then Eq 33 guarantees that the postsynaptic energy level stays close to AH (see Fig 3).
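This zero-drift calculation can be sketched numerically. The exact kernels of Eq 20 are not reproduced here; as a hypothetical stand-in, the potentiation branch below is scaled by the factor exp(−χ(AH − A)/AH), where χ plays the role of the synaptic sensitivity parameter. All constants are assumed values:

```python
import math

A_H = 1.0            # homeostatic energy level (arbitrary units)
A_p, A_m = 1.2, 1.0  # peak potentiation / depression (hypothetical values)
tau = 20e-3          # shared STDP time constant (s)

def drift(A, chi, T=0.2):
    """Average weight change over spike-time differences uniform in [-T, T].

    The potentiation branch is scaled by exp(-chi*(A_H - A)/A_H), a
    hypothetical stand-in for the energy-dependent third factor."""
    suppression = math.exp(-chi * (A_H - A) / A_H)
    pot = A_p * suppression * tau * (1 - math.exp(-T / tau))  # dt > 0 branch
    dep = A_m * tau * (1 - math.exp(-T / tau))                # dt < 0 branch
    return (pot - dep) / (2 * T)

def A_eq(chi):
    """Energy level where potentiation and depression cancel (zero drift)."""
    return A_H * (1 - math.log(A_p / A_m) / chi)

for chi in (2.0, 5.0, 20.0):
    print(chi, A_eq(chi), drift(A_eq(chi), chi))
```

Under this assumed form, the zero crossing of the drift moves toward AH as χ grows, matching the statement that a highly sensitive plasticity rule keeps the postsynaptic energy level close to the homeostatic value.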
Postsynaptic energy equilibrium level as a function of the sensitivity parameter
and depression scaling factor
.
.
As we will explain, this fixed point is crucial for the network dynamics because, for , the weights are static (on average); thus, the average incoming current to the neuron should stay approximately constant if the average is calculated over a time period Tw greater than the membrane and synaptic time constants (i.e.,
). Thus, we should expect a fixed point for the firing rate.
Network’s energy dynamic and fixed points under metabolic constraints
Building on the energy-dependent single-neuron model and the energy-dependent synaptic plasticity model, we now formalize, simulate, and investigate the dynamics and structure of E-I networks that include energy constraints in both the single-neuron model and the synaptic plasticity rule. The developed models allow us to investigate how different sensitivities to energy imbalances at the single-neuron and synaptic levels affect the network’s dynamics and structure.
Energy dynamic and fixed point analysis.
When studying the network’s dynamics, we are particularly interested in knowing whether the network converges or oscillates toward a specific state. If the network evolves toward a specific state independently of the initial conditions, that state is an attractor of the network. Therefore, to find fixed points in the network dynamics, we start by asking whether there are energy attractors in the network dynamics when all the neurons share the same parameters (the homogeneous case). Accordingly, we analyze stable fixed points for energy states of an arbitrary neuron in the network.
The total energy change of any neuron in the network in a time interval T is calculated by summing the contributions of energy production and consumption occurring in the time interval
. From Eq 6 we obtain:
Previously, we formalized the synaptic energy consumption Csyn (see Eq 11) as well as the neuron’s own action potential energy consumption Cap (see Eq 13), for arbitrary and
kernels. To improve mathematical tractability, we now choose the delta kernel
for
and
. Thus, Eq 34 reads:
Because weights change slowly (i.e., in Eq 20), for small T we can assume they are constant in the
interval (i.e.,
). Formally, we are assuming that time scales of learning and neuronal spike dynamics can be separated [40]. In addition, because of the linearity of the integral operator:
where is the mean energy level of the neuron in the
interval, and in the second and third terms we obtain the neuron’s own firing rate and its connected neighbors’ firing rates, respectively. Now, if we impose
in Eq 36, it is possible to find neuron’s firing rates, incoming weight, and energy expenditure values that satisfy the energy fixed point constraints:
which in matrix form reads:
where . Eq 37 holds if the neuron is in an energy fixed point and, to achieve the energy fixed point, the excitatory-excitatory connections to the neuron under study also need to reach a fixed point (the other connections are static). Otherwise, the synaptic energy consumption (Csyn) will keep varying, preventing the postsynaptic excitatory neuron from reaching an energy fixed point. Note that, if the excitatory-excitatory connections reach a fixed point, as well as the presynaptic neurons’ firing rates, then the firing rate of the postsynaptic neuron also reaches a fixed point (i.e.,
).
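The balance behind this fixed-point condition can be illustrated with a minimal computation. As a hypothetical form (the exact production term of the model is not reproduced here), assume on-demand production K(AH − A) and consumption split between the neuron's own spikes and incoming weighted presynaptic spikes; setting the net energy change to zero yields the fixed-point energy level:

```python
# Hypothetical parameter values; the structure of the balance, not the
# numbers, is the point of this sketch.
K = 1.0          # on-demand energy production rate
A_H = 1.0        # homeostatic energy level
E_ap = 0.001     # energy cost per own action potential (C_ap analogue)
E_syn = 0.0005   # energy cost per weighted presynaptic spike (C_syn analogue)

def energy_fixed_point(nu_self, weights, nu_pre):
    """Energy level A* where production K*(A_H - A*) equals consumption."""
    consumption = E_ap * nu_self + E_syn * sum(
        w * nu for w, nu in zip(weights, nu_pre))
    return A_H - consumption / K

# One neuron firing at 50 Hz, driven by three presynaptic neurons.
A_star = energy_fixed_point(50.0, [0.5, 0.8, 0.3], [40.0, 60.0, 30.0])
```

Under these assumptions, higher rates or stronger incoming weights push the fixed-point energy A* further below AH, in line with the trade-off discussed next.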
Excitatory-excitatory connections stabilize at a fixed point when the postsynaptic excitatory neuron’s energy level matches . However, for silent neurons, incoming weights remain unchanged, leading to a constant energy level in the neuron (
), regardless of whether
. Therefore, silent neurons (neurons with firing rate
equal to zero) conform to Eq 37. If there is no rise in their incoming current, they will stay at the fixed point where
, even without needing to meet the
condition. Thus, for non-silent neurons in the fixed point, we can approximate
, obtaining:
By rearranging Eq 39, we arrive at the following relationship constraining the weights and firing rates of each neuron in the network:
If all the neurons in the network converge towards a fixed point, they must satisfy Eq 37 and, in particular, non-silent excitatory neurons must satisfy Eq 40. Consequently, in this state, all the excitatory-excitatory connections in the network remain constant on average (i.e., ), as do the firing rates (i.e.,
). As a result, Eq 40 allows us to predict the energy consumption of each non-silent excitatory neuron in the network after it converges towards a fixed point.
The energy consumption induced by action potentials can be expressed in terms of the energy consumption induced by synapses, i.e., (experimental findings indicate that m is approximately 1/3 [41]). Thus, Eq 40 can be rewritten as:
Consequently,
Eq 37 reveals a metabolic constraint over incoming synaptic strengths and their respective presynaptic neuronal firing rates, affecting non-silent excitatory neurons in the energy fixed point. In particular, Eq 40 shows that, to achieve a metabolic fixed point in the excitatory postsynaptic neurons, there is a trade-off between synaptic strength and the corresponding presynaptic neuronal firing rate. Interestingly, the metabolic constraint in Eq 40 dictates an inverse relationship between weights and firing rates, which is a completely new constraint in the system, given that previous models generally do not account for metabolic activity. Moreover, the constraint above enables the emergence of an attractor state in the network, arising at the intersection between the metabolic relation (see Eq 40) and the physically plausible network states given by the previously introduced weight-frequency relation (see Eq 25).
On the one hand, Eq 25 gives a weight-rate relationship where increasing excitatory-excitatory weights generates higher excitatory firing rates
, due to higher mean incoming synaptic currents to postsynaptic neurons. On the other hand, if either excitatory-excitatory weights or excitatory firing rates increase, the postsynaptic energy levels drop (see Eq 38). In addition, at the metabolic fixed point and under the previously described assumptions, non-silent excitatory neurons fulfill Eq 42. As a consequence, if excitatory firing rates (excitatory-excitatory weights) increase, then excitatory-excitatory weights (excitatory firing rates) must decrease. Thus, Eq 42 imposes an inverse relationship between excitatory-excitatory weights and excitatory firing rates, contrary to Eq 25, where the relationship between the two variables is direct. Fig 4 shows a conceptualization of the previous reasoning.
The gray curve describes excitatory firing rate magnitude as a function of excitatory-excitatory weights magnitude (see Eq 25), whereas the red curve conceptualizes the inverse relation between excitatory firing rates and excitatory-excitatory weights magnitudes given by metabolic constraints (see Eq 42) affecting non-silent excitatory neurons in the energy fixed point. On the blue region, the postsynaptic energy level drops below the energy level fixed point . Thus, in the blue region,
experience a net depression drift (see Eq 31), whereas in the white region, the postsynaptic energy level is above
. Consequently, in the white region, there is a net potentiation drift. Therefore, if the network state is in the blue region, energy-dependent plasticity rules push the network state toward the intersection between the two curves (following the green arrows). Likewise, if the network is in the white region, energy-dependent plasticity pushes the network state toward the intersection point between the two curves (following the blue arrows). Finally, in the intersection between the two curves (the system’s metabolic fixed point) the postsynaptic energy level is
and, as a consequence,
on average. For a detailed fixed point stability analysis, please refer to S1 Text.
With an analytical understanding of how metabolic constraints affect the structure and dynamics of the network, let us now explore the impact of metabolic constraints on the network through numerical simulations.
Network activity and structure without energy constraints.
Before analyzing the network’s structure and dynamics under energy constraints in simulations, we first simulate the network without energy constraints (i.e., ). Thus, the current-rate relation from Eq 25 as well as the available energy in the neuron dictated by Eq 38 hold, but Eq 40 does not, because Eq 40 holds for the excitatory population only if energy constraints affect
plasticity (i.e.,
).
Simulating the network without metabolic constraints is useful as a base case for comparison before including metabolic constraints (all the simulations presented in the Results section have the same initial conditions). Fig 5 shows the network’s dynamics and structure when no metabolic constraints are included, with panels (b) and (c) included for direct comparison to simulations with metabolic constraints. As expected, firing rates increase until saturation due to favored synaptic potentiation (). In this regard, the refractory period for each neuron in the network is
. Thus, the maximum theoretical firing rate for an infinite stimulation current is
(see Eq 21). Therefore, the network’s firing rate is close to the saturation state. Moreover, the mean energy level of the excitatory population (Fig 5) decreases until stabilizing at
. The energy stabilization point occurs when the firing rates and excitatory-excitatory strengths saturate.
Network simulation when there are no metabolic constraints in the network. Mean available energy in the excitatory population keeps dropping until neuronal firing rates are saturated. (a) shows mean energy level and mean firing rate for the excitatory population, while (b) shows final incoming synaptic strength per neuron () and mean firing rate per neuron. The mean firing rates in (b) and (c) are calculated considering the last
of the simulation.
The previous observation is supported by noting that, if firing rates and excitatory-excitatory weights are saturated, the energy consumption in each neuron can be approximated by:
in agreement with what is observed in Fig 5A. However, in principle, without energy constraints, each neuron’s energy level can decrease until reaching A = 0. This is the case if we, for instance, sufficiently increase the number of neurons in the network, or increase the energy consumption related to postsynaptic potentials Esyn. Eq 37 describes which variables affect energy consumption and the available energy in the neuron.
However, we do not analyze the limit in this work, as it is outside the scope of our study.
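The saturation argument amounts to a two-line computation. The refractory period, per-spike costs, weight ceiling, and in-degree below are hypothetical values, since the simulation parameters are listed in S1 Table rather than here:

```python
t_ref = 0.002         # hypothetical refractory period (s)
nu_max = 1.0 / t_ref  # maximum theoretical firing rate for infinite current

# Saturated consumption: all rates at nu_max, all plastic weights at w_max.
E_ap, E_syn = 0.001, 0.0005  # hypothetical per-spike energy costs
w_max, n_exc = 1.0, 80       # hypothetical weight ceiling and in-degree

C_saturated = E_ap * nu_max + E_syn * n_exc * w_max * nu_max
```

Once rates and weights are pinned at their ceilings, every term of the consumption is constant, which is why the mean energy level in Fig 5A stops dropping exactly at saturation.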
Exploring synaptic energy imbalances sensitivity
.
From our previous theoretical developments, for higher synaptic energy imbalance sensitivities we expect higher energy level equilibrium values
(Eq 33). To achieve
closer to AH, energy consumption needs to decrease, and postsynaptic energy consumption decreases if the presynaptic rate decreases, or if incoming synaptic strengths decrease (Eq 41). Therefore, as we increase
, we expect to observe both lower firing rates and weaker excitatory-to-excitatory synaptic strengths. This trend is conceptually illustrated in Fig 4 and is numerically validated by the data presented in the first row of S3 Fig. In addition, to satisfy Eq 41 an inverse relation between
and
should be observable in the simulations.
Fig 6 shows the available energy per neuron and the mean neuronal firing rate through the simulation, for the excitatory population with different synaptic energy imbalance sensitivities (). Consistent with Eq 33, as the synaptic energy imbalance sensitivity
increases, the energy level equilibrium point increases and stays close to our analytical prediction (dashed line). Also, while the energy fixed point
increases, the mean neuronal firing rate decreases, which aligns with the theoretical description represented in Fig 4.
(a) Available energy per neuron for different synaptic sensitivities . (b) Mean firing rate for the excitatory population for different synaptic sensitivities
. (c) Mean firing rate versus incoming synaptic strength per neuron. (d) Mean energy consumption per neuron due to presynaptic excitatory and inhibitory neurons; the dashed lines represent the theoretical relationship described in Eq 41. In all simulations there is no neuronal sensitivity to energy imbalances (i.e.,
). In (c) and (d), only the last
of the simulation is considered.
Fig 6C shows the neuronal firing rate as a function of incoming synaptic strength, for different synaptic sensitivity values. In agreement with Eq 25, as the incoming synaptic strength increases, the firing rate increases. Also, when
increases, neuronal rates and excitatory-excitatory weights decrease, as can be observed from Fig 6C with
(blue) having higher firing rates and stronger excitatory-excitatory synapses than the ones observed when
(purple). The previous observation is also valid when contrasting Fig 6C for
against
(red). Thus, when synaptic sensitivity
increases, excitatory firing rates tend to decrease.
Regarding the inverse relation between and
dictated by Eq 41, Fig 6D shows the relationship between these two quantities for simulations with increasing
. For a low synaptic sensitivity to energy imbalance (
(blue)), there seems to be an inverse relation, although not very clear. However, if synaptic sensitivity
increases, the inverse relationship between the two variables becomes more evident, as shown in the
case (purple). Interestingly, when the synapses are highly sensitive to energy imbalance (the
case (red) in Fig 6C and Fig 6D), it is not possible for all the neurons to achieve the metabolic fixed point and some neurons remain silent (neurons with firing rate equal to zero in Fig 6C).
It is not surprising that silent neurons have higher inhibitory weights. This happens because, just before the separation of the network into two subpopulations (), there is a slight decrease in available energy coexisting with an increase in the firing rate, followed by an increase in available energy joined with a reduction in the firing rate. This last increment in available energy must be accompanied by a general decrease in excitatory-excitatory weights (
implies a net negative drift in excitatory-excitatory weights, Eq 32). If there is a general decrement in excitatory-excitatory weights, then the first neurons to become silent are those with higher inhibitory incoming weights. After entering the silent state, their excitatory-excitatory weights remain constant, so the only way for them to stop being silent is to receive a higher incoming current. This is not possible, however, because the silent neurons allow a decrease in the energy consumption of the non-silent neurons, enabling them to satisfy the metabolic fixed point where
. As a consequence, the silence of those neurons enables the network to converge toward a stable fixed point, with one subpopulation staying in the metabolic fixed point, while another subpopulation remains in the silent fixed point. In this scenario, one group of neurons still tries to satisfy the
energy constraints, and their energy equilibrium point is close to the one predicted by Eq 40, as shown in red in Fig 6A. The second group of silent neurons has higher available energy, but their energy consumption is not zero. This is not the only case in which there must be silent neurons to achieve a global fixed point. For instance, if the neuronal population is large, then this phenomenon also occurs (although there is no need for extreme synaptic sensitivity to energy imbalances in this case), or if a metabolic impairment is simulated, as we will show later.
Under certain conditions, the separation of excitatory populations into two subpopulations can be interpreted as a specialization. Surprisingly, under the developed framework, silent neurons emerge as a consequence of local energy constraints in the network. If these silent neurons are not present, then it is not possible to achieve a global fixed point in the network.
Including neuronal sensitivities to energy imbalances
.
So far, the developed theoretical analysis of the energy-dependent network does not account explicitly for neuronal energy imbalance sensitivity . However, we know that neuronal sensitivity to energy imbalances
affects neuronal firing rate. Thus, we can include the neuronal sensitivity parameter in the current-rate mapping
. In particular, if the energy level in a neuron is below the homeostatic energy level (i.e.,
), for the same incoming input current, the neuron’s firing rate is higher for higher
, as demonstrated by Eq 5 and shown in Fig 1. Consequently, if two networks (A and B) are equal (in particular, have the same initial excitatory-excitatory weights), but the neurons in network A have higher
sensitivity than neurons in network B, then network A must have excitatory firing rates at least as high as those of network B. As a consequence, to achieve an energy equilibrium point, network A needs to decrease its excitatory-excitatory synaptic strengths. This idea is conceptualized in Fig 4, with the gray dashed line representing the current-rate mapping when
is increased. Following the previous explanation, for higher
we expect the network to have weaker excitatory-excitatory synaptic strengths as well as higher firing rates.
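A minimal sketch of this effect follows. Eq 5 is not reproduced here, so the threshold modulation below is an assumed form: the energy deficit (AH − A) lowers the effective firing threshold in proportion to a neuronal sensitivity parameter gamma, which makes the standard LIF rate higher for the same input current whenever A < AH:

```python
import math

tau_m = 0.02   # membrane time constant (s)
t_ref = 0.002  # refractory period (s)
V_th = 1.0     # baseline firing threshold (normalized units)
A_H = 1.0      # homeostatic energy level

def lif_rate(I, A, gamma):
    """Standard LIF current-rate curve with a hypothetical energy-dependent
    threshold: the deficit (A_H - A) lowers the effective threshold in
    proportion to the neuronal sensitivity gamma."""
    V_eff = V_th * (1.0 - gamma * max(0.0, A_H - A) / A_H)
    if I <= V_eff:
        return 0.0  # subthreshold: the neuron never fires
    return 1.0 / (t_ref + tau_m * math.log(I / (I - V_eff)))

# Same current, same energy deficit: higher sensitivity -> higher rate.
I, A = 1.5, 0.8
rates = [lif_rate(I, A, g) for g in (0.0, 0.5, 1.0)]
```

Whatever the exact modulation in Eq 5, the qualitative point is the one used in the text: under an energy deficit, the current-rate mapping of a more sensitive neuron sits above that of a less sensitive one.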
To numerically explore the effect of modifying , we maintain synaptic sensitivity
fixed and vary the neuronal sensitivity to energy imbalances
. Because the synaptic sensitivity
is constant for all cases in Fig 7 and the energy level fixed point is independent of the neuronal sensitivity parameter
, the theory predicts that networks with different neuronal sensitivity
(and all other parameters being equal) should converge towards the same energy level. The previous prediction is confirmed by observing Fig 7, where for different
values, the energy fixed points are almost the same. However, as already anticipated by the theory, in the metabolic fixed point, higher
values are associated with higher mean excitatory firing rates
. This prediction is confirmed by observing how increasing
shifts the dots to the upper-left corner in Fig 7, thus numerically demonstrating that the energy sensitivity parameter
affects the population’s firing rate, but does not change the energy equilibrium point
.
(a) Available energy per neuron for different neuronal sensitivities . (b) Mean firing rate for the excitatory population for different neuronal sensitivities
. (c) Mean firing rate versus incoming synaptic strength per neuron. (d) Mean energy consumption per neuron due to presynaptic excitatory and inhibitory neurons; the dashed lines represent the theoretical relationship described in Eq 41. In all simulations there is a constant synaptic sensitivity to energy imbalances (
). In (c) and (d), only the last
of the simulation is considered.
From Fig 7D, it is clear that the relationship defined by Eq 41 is satisfied, thus respecting the inverse relationship between and
.
Regarding excitatory-excitatory synaptic strength, simulations show that excitatory-excitatory weights slightly decrease as neuronal sensitivity increases. This observation is supported by noting that when
increases, the energy fixed point does not change, but the firing rates increase (Fig 7A and 7B). This is only possible because the excitatory-excitatory weights decrease, compensating for the increased firing rates and thus keeping the energy consumption unchanged. Thus, the theoretical prediction regarding the effect of increasing
while keeping
fixed is confirmed and in good agreement with the numerical experiments.
Impaired metabolic production.
Regarding biology, our theoretical and simulation framework allows us to study what happens when impaired metabolic production affects neurons. This question is relevant because evidence suggests that metabolic impairments are a common cause of various neurodegenerative diseases [5–10]. Therefore, improving the understanding of how metabolic impairments affect network dynamics and structure is potentially valuable for clinical purposes.
From our theoretical developments, if the ATP production rate controlled by parameter K decreases, we expect:
- Given that the energy level equilibrium point
guaranteeing
on average is independent of K (see Eq 33), if there is a mean energy level fixed point for non-silent neurons, it should stay constant as K varies.
- By analyzing Eq 41, for non-silent neurons, if K decreases,
or
needs to decrease in order to satisfy the aforementioned metabolic relation. The only plastic synapses in the network are
. Therefore, if the metabolic production is impaired, we expect lower firing rates as well as weaker excitatory-excitatory synaptic strengths.
- In neurons with impaired metabolism (lower energy production rate K), slower responses to energy consumption create a delay between production and consumption, leading to oscillatory behavior in energy levels. When the initial available energy starts at the homeostatic level (A(t = 0) = AH), but the energy fixed point is below AH (
), energy dynamics undergo a cyclic process: excitatory-excitatory weights and firing rates initially rise, increasing energy consumption and causing available energy to drop. As energy falls below the fixed point, weights and firing rates decrease until energy production surpasses consumption, raising available energy above the fixed point. This feedback loop continues until energy production and consumption align, stabilizing energy levels near the equilibrium. When neurons’ energy production is more acutely impaired, the mismatch between energy production and consumption increases, resulting in more pronounced oscillations in the system. Our analysis further shows that for sufficiently small K, the eigenvalues of the network around the fixed point become complex conjugates, generating oscillatory dynamics (see S2 Fig).
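The eigenvalue argument can be reproduced on a two-dimensional linearization. Taking, as an assumed reduced model, dA/dt = K(AH − A) − c·w and dw/dt = η(A − A*), the Jacobian around the fixed point is [[−K, −c], [η, 0]], whose eigenvalues become complex conjugates exactly when K² < 4cη, so a small production rate K yields oscillatory relaxation:

```python
import numpy as np

c, eta = 1.0, 1.0  # hypothetical consumption and plasticity gains

def eigenvalues(K):
    """Eigenvalues of the reduced (A, w) system linearized at its fixed point.

    The characteristic polynomial is lambda^2 + K*lambda + c*eta, so the
    eigenvalues are complex whenever K^2 < 4*c*eta."""
    J = np.array([[-K, -c],
                  [eta, 0.0]])
    return np.linalg.eigvals(J)

# Impaired production (small K): complex eigenvalues -> oscillations.
# Healthy production (large K): real eigenvalues -> monotone relaxation.
for K in (0.5, 3.0):
    print(K, eigenvalues(K))
```

In this reduced picture, K plays the role of a damping coefficient: the real part of the complex pair is −K/2, so weaker production both triggers oscillations and slows their decay, consistent with S2 Fig.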
For the metabolically impaired simulations we vary K, but keep the synaptic sensitivity to energy imbalances and the neuronal sensitivity
(purple traces in Fig 7 show the healthy base case for comparison, where there is no metabolic impairment in energy production). In Fig 8 it is possible to observe that for a metabolic production rate of K = 0.7, the energy fixed point is the same as the one obtained in the healthy case (Fig 7 (purple)), confirming that if there is an energy fixed point for non-silent neurons, it is independent of the ATP production rate K, in agreement with our first theoretical prediction.
(a) Available energy per neuron for different ATP production rates K. (b) Mean firing rate for the excitatory population for different ATP production rates K. (c) Mean firing rate versus incoming synaptic strength per neuron. (d) Mean energy consumption per neuron due to presynaptic excitatory and inhibitory neurons; the dashed lines represent the theoretical relationship described in Eq 41. In all simulations there is a constant neuronal and synaptic sensitivity to energy imbalances (). In (c) and (d), only the last
of the simulation is considered.
Moreover, comparing Fig 7D () with Fig 8D (K = 0.7), both simulations satisfy the energy relationship defined by Eq 41 (dashed lines). However, in Fig 7D (K = 1) the mean excitatory energy consumption (
) is much higher than in Fig 8D (K = 0.7), aligning with the second theory prediction. Additionally, contrasting Fig 7B with Fig 8B, we observe a decrease in mean firing rate for the excitatory population when ATP production rate K decreases, also supporting the second theory prediction. The decrease in excitatory population firing rate with decreasing K is also evident when comparing Fig 7C (
, purple. Firing rates ranging from approximately 90 to 110 Hz) to Fig 8C (K = 0.7, blue. Firing rates ranging from approximately 50 to 90 Hz). Therefore, if energy production is impaired in any neuron, then the energy consumption of that neuron needs to decrease to achieve a fixed point.
Hence, if we keep decreasing K (a more severe production impairment), energy consumption needs to drop to be consistent with the theory. When K = 0.5 (purple in Fig 8), the mean firing rate and excitatory-excitatory weights decrease compared to the K = 0.7 case. However, the network can still converge to a global fixed point if a subpopulation of neurons becomes silent. This behavior aligns with the energy constraints of the network. As the ATP production parameter K decreases, energy consumption must also decrease. To achieve this, excitatory-excitatory weights and mean excitatory firing rates decrease, resulting in a subpopulation of silent neurons that reduces energy consumption.
This phenomenon is similar to what happens when there is high synaptic sensitivity to energy imbalances ( in Fig 7), but the cause is different. With high synaptic energy imbalance sensitivity (
), the energy level fixed point
is close to the homeostatic energy level AH. This requires a reduction in energy consumption to reach a global fixed point.
In the case of metabolic impairment, the energy level fixed point is not necessarily close to AH. Here, energy consumption needs to drop because impaired energy production cannot meet high energy demands. This balance between energy consumption and production is necessary for the network to reach a fixed point. Therefore, the constraint in the metabolic impairment case is imposed by the neurons’ energy production capability, not by synaptic energy sensitivity. Despite the different causes, the outcome is similar: silent neurons in both scenarios are those with higher inhibitory synapses.
This is because, before the network divides into two subpopulations, an increase in available energy is followed by a decrease in excitatory-excitatory weights. The first neurons to become silent are those with higher inhibitory weights as they receive less incoming excitatory current.
If we continue aggravating the metabolic production impairment and the network is able to achieve a fixed point, theoretically, we expect even weaker excitatory-excitatory synaptic strengths as well as lower firing rates in the equilibrium state. Accordingly, we simulate the network with K = 0.1, emulating the case where neurons can produce energy at a rate with respect to the healthy case (i.e., K = 1). Fig 8A and 8B show that energy production is so impaired that the network is not able to achieve an equilibrium point and keeps oscillating, exploring excitatory-excitatory weight strengths that would allow it to achieve an equilibrium state. The problem appears to be that there are no excitatory-excitatory weight values that allow a firing rate and energy consumption which can be compensated by the impaired energy production. Specifically, an abrupt change in firing rate due to a small variation in excitatory-excitatory strengths seems to be part of the problem. Thus, small excitatory-excitatory weight variations generate energy consumption that cannot be compensated by on-demand energy production. This is coherent with the third prediction and our S1 Text (S2 Fig), highlighting an extreme case where energy production is so impaired that it is not possible to converge towards a homeostatic balance, where energy production and consumption match each other.
To understand this intuitively, it is useful to remember that neurons with impaired metabolism (small K) have slower responses to energy consumption, causing a higher delay between consumption and production. Also, excitatory-excitatory weight changes depend on the available energy in the neurons (see Eqs 32 and 2).
If we start the simulation with all neurons at the homeostatic energy level A(t = 0) = AH, and the energy fixed point is lower than the initial energy (), then initially, weights will increase, and firing rates will rise due to the current-rate relationship. This leads to increased energy consumption, but because energy production is delayed, the available energy will tend to decrease (
).
As available energy continues to drop, eventually it will be lower than the energy fixed point (). When this happens, excitatory-excitatory weights will decrease, reducing firing rates. This process will continue until energy consumption decreases enough for energy production to surpass it. When energy production surpasses consumption, the available energy in neurons starts to increase (
) until it matches the energy equilibrium point (
).
Because energy production is always active below the fixed point (), this allows the available energy to rise above
. When the available energy exceeds the equilibrium value, the cycle repeats: weights and firing rates increase, leading to higher energy consumption than the neuron can handle, causing another drop in available energy (
).
This cyclical behavior stops when energy production and consumption match, maintaining available energy near the fixed point (). Achieving this equilibrium is easier when on-demand energy production can compensate for consumption in neurons. For further exploration of the impact of K on network stability, please refer to S1 Text and S2 Fig.
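The cycle described above can be simulated directly with the same reduced (A, w) model used for the eigenvalue argument (an assumed two-variable form, not the full network): forward-Euler integration shows the oscillation in available energy persisting far longer when the production rate K is small than when on-demand production can track consumption:

```python
def simulate(K, c=1.0, eta=1.0, A_H=1.0, A_star=0.9,
             dt=1e-3, t_end=30.0):
    """Forward-Euler run of dA/dt = K*(A_H - A) - c*w and
    dw/dt = eta*(A - A_star), started at the homeostatic level."""
    A, w = A_H, 0.0  # A(0) = A_H, plastic weights start low
    trace = []
    for _ in range(int(t_end / dt)):
        dA = K * (A_H - A) - c * w
        dw = eta * (A - A_star)
        A += dt * dA
        w += dt * dw
        trace.append(A)
    return trace

def late_amplitude(trace, frac=0.5):
    """Peak-to-peak excursion of A over the last part of the run."""
    tail = trace[int(len(trace) * frac):]
    return max(tail) - min(tail)

# Severely impaired production keeps oscillating; healthier production settles.
amp_impaired = late_amplitude(simulate(K=0.2))
amp_healthy = late_amplitude(simulate(K=1.5))
```

In this toy system the oscillation envelope decays like exp(−Kt/2), so the residual swing in available energy at the end of the run is orders of magnitude larger for the impaired case, mirroring the behavior in Fig 8.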
According to our understanding of how energy constraints affect the network, one possible solution to the non-converging system caused by a dramatic ATP production impairment (K = 0.1) is to decrease the synaptic sensitivity to energy imbalances . In this manner, synapses experience softer strength-update transitions when they are close to the analytical fixed point
. This diminishes the oscillations due to changes in the peak energy-dependent potentiation described in Eq 20. To test this hypothesis, we simulate the network (Fig 9) with the same initial conditions and parameters as the one shown in Fig 8 with K = 0.1, but decreasing the synaptic sensitivity to energy imbalances to
. The proposed solution alleviates the oscillations in the system. However, the equilibrium is achieved by silencing a subpopulation of excitatory neurons. The logic behind this phenomenon is the same as the one explained in the
case.
(Fig 9 caption: simulations obtained with the same parameters as Fig 8, except for the reduced synaptic sensitivity; the figure shows results for the excitatory population.)
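A toy illustration of why lowering the synaptic sensitivity softens the update transitions near the fixed point (the sigmoidal form and all constants here are our assumptions, standing in for the role the energy-dependent peak potentiation of Eq 20 plays in the model):

```python
import math

def potentiation_factor(A, A_star=0.7, s=10.0):
    """Hypothetical energy modulation of the peak potentiation: a sigmoid
    of the energy imbalance A - A_star, with slope s playing the role of
    the synaptic sensitivity to energy imbalances."""
    return 1.0 / (1.0 + math.exp(-s * (A - A_star)))

def update_jump(s, delta=0.05):
    """Change in the potentiation factor for a small energy swing of
    +/- delta around the fixed point A_star = 0.7."""
    return potentiation_factor(0.7 + delta, s=s) - potentiation_factor(0.7 - delta, s=s)
```

A lower slope s yields a smaller `update_jump`, i.e., softer weight-update transitions for the same energy fluctuation, which is the mechanism the text invokes to damp the oscillations.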
Fig 9 shows that the oscillations of networks that fail to converge due to a dramatic metabolic impairment can be alleviated by modifying other parameters of the network. Other solutions might also alleviate the oscillations. For instance, a simple one is to decrease the energy expenditure due to postsynaptic potentials, Esyn, thus forcing a decrease in energy consumption. Another possibility is to modify the neuron’s time constant so that each neuron has slower dynamics, thereby decreasing the slope of the current-rate relation. As a consequence, new combinations of weights and rates can satisfy the required energy constraints, but with lower rates and hence slower weight updates (weights update when pre- and postsynaptic spikes are present, so lower rates imply fewer spikes and therefore slower weight updates). In addition, if the slope of the current-rate mapping decreases, weight modifications produce softer transitions in the rates, which helps reduce the oscillations caused by the sensitivity of the firing rates to weight modifications.
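The argument about the current-rate slope can be made concrete with the textbook LIF f-I curve (no energy dependence; a proxy for the full EDLIF relation, with illustrative constants):

```python
import math

def lif_rate(I, tau_m=0.02, R=1.0, V_th=1.0, t_ref=0.002):
    """Standard LIF current-rate (f-I) relation with reset to 0:
    f = 1 / (t_ref + tau_m * ln(I*R / (I*R - V_th))), valid for I*R > V_th;
    the neuron is silent otherwise."""
    drive = I * R
    if drive <= V_th:
        return 0.0
    return 1.0 / (t_ref + tau_m * math.log(drive / (drive - V_th)))

# Slower membrane dynamics (larger tau_m) flatten the f-I curve, so the
# same weight (input current) change produces a smaller rate change.
slope_fast = lif_rate(2.1) - lif_rate(2.0)
slope_slow = lif_rate(2.1, tau_m=0.04) - lif_rate(2.0, tau_m=0.04)
```

Under these toy numbers, doubling the membrane time constant roughly halves both the rate and the local slope of the f-I curve, which is the softening effect described above.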
Discussion
In this work, inspired by spike-timing-dependent plasticity models, we develop a phenomenological energy-dependent plasticity model that qualitatively replicates wet-biology experimental results, namely the suppression of long-term potentiation when the available energy level of the postsynaptic neuron drops [29]. The model conditions resemble those of studies such as [29], mainly in vitro experimental setups, where metabolism is usually evaluated. Additionally, we derived a general mathematical expression for the available energy of every neuron in a network with arbitrary architecture.
Energy-dependent plasticity model
The energy-dependent STDP model can be interpreted as a three-factor plasticity rule in which the postsynaptic available energy modulates plasticity. The newly introduced model has a general structure that allows the implementation of both multiplicative and additive learning rules. We focus on the additive learning rule, however, and find a closed-form expression predicting the equilibrium point of the available energy on the postsynaptic neuron.
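As a sketch of what such a three-factor rule can look like (an illustrative additive form with assumed constants and an assumed sigmoidal energy gate, not the paper's exact Eq 20):

```python
import math

def ed_stdp_dw(dt_spike, A_post, A_star=0.7, alpha=5.0,
               A_plus=0.01, A_minus=0.012, tau=0.02):
    """Illustrative additive three-factor STDP update: the postsynaptic
    available energy A_post gates potentiation, while depression is left
    energy-independent (as in the main text's focus on potentiation).

    dt_spike = t_post - t_pre (seconds); positive pairings potentiate.
    """
    if dt_spike > 0:  # pre-before-post: energy-gated potentiation
        gate = 1.0 / (1.0 + math.exp(-alpha * (A_post - A_star)))
        return A_plus * gate * math.exp(-dt_spike / tau)
    else:             # post-before-pre: plain depression
        return -A_minus * math.exp(dt_spike / tau)
```

Potentiation shrinks when the postsynaptic neuron is energy-depleted, qualitatively reproducing the suppressed LTP reported in [29], while the depression branch is untouched by the third factor.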
Our model focuses on the effect of the postsynaptic available energy on excitatory-excitatory synaptic potentiation; however, the available energy possibly also affects depression. The mathematical formulation of ED-STDP in Eq 20 allows the proposed model to be extended to account for energy dependencies in depression, and this is part of future work related to the energy homeostatic principle.
Network dynamics under metabolic constraints
Based on the EDLIF model [20] and the ED-STDP plasticity rule introduced in this work, we mathematically formalized and simulated spiking neuronal networks under metabolic constraints. The work focuses on the emergence of dynamics, structure, and the study of attractors under energy constraints for homogeneous neuronal populations in E-I networks. In general terms, the developed theory allows us to predict behaviors observed in numerical experiments. Moreover, the introduced neuronal and synaptic energy constraints generate a new attractor in E-I networks due to the intersection of classic physical constraints (neuronal current-rate relations Eq 25) and the new constraints that emerge from the local energy constraint imposed on each neuron in the network. We mathematically describe this phenomenon and conceptualize it in Fig 4.
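The intersection giving rise to the new attractor can be sketched numerically, with a threshold-linear gain standing in for the current-rate relation and a linear consumption model; every functional form and constant here is an illustrative assumption, not the paper's equations:

```python
def rate_from_weight(w, r_pre=10.0, gain=2.0, theta=1.0):
    # (i) threshold-linear stand-in for the current-rate relation (Eq 25 analog)
    return max(0.0, gain * (w * r_pre - theta))

def weight_from_rate(r, r_pre=10.0, gain=2.0, theta=1.0):
    # inverse of (i) on its linear branch
    return (r / gain + theta) / r_pre

def energy_balance_rate(K=1.0, A_H=1.0, A_star=0.7, e_per_spike=0.05):
    # (ii) the rate at which on-demand production K*(A_H - A*) matches
    # activity-driven consumption e_per_spike * r
    return K * (A_H - A_star) / e_per_spike

r_star = energy_balance_rate()     # rate demanded by the energy constraint
w_star = weight_from_rate(r_star)  # attractor weight read off from (i)
```

The attractor (w*, r*) is simply the point where the physical constraint (i) and the local energy constraint (ii) intersect, which is the construction conceptualized in Fig 4.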
Regarding the network’s fixed points, the synaptic sensitivity to energy imbalances is the main parameter affecting the available-energy fixed point of each neuron. In particular, the synaptic sensitivity has a greater impact than the neuronal sensitivity on the network dynamics and fixed point. In fact, the neuronal sensitivity does not affect the energy equilibrium point (see Eq 33). However, because the neuronal sensitivity modifies the neuron’s activation function, all else being equal, different values of it induce different combinations of weights and firing rates that achieve the same energy equilibrium (Fig 4). This is shown in Fig 7, where different values of the neuronal sensitivity leave the energy equilibrium point unchanged (Fig 7A), while the specific firing rates and weight values change to achieve that equilibrium point (Figs 7C–7D and S3C–S3F). On the other hand, if we change only the synaptic sensitivity to energy imbalances, then the energy equilibrium point changes, along with the weights and firing rates (Figs 6 and S3A–S3C). This confirms that the synaptic sensitivity parameter has a greater impact on the network dynamics and fixed point than the neuronal sensitivity parameter.
In our model, through energy-dependent plasticity, the synaptic strengths change until the non-silent neurons reach the energy equilibrium point. When the weights change, the neurons’ firing rates change accordingly. We mathematically describe the constraint on the postsynaptic neuron relating the inhibitory and excitatory presynaptic energy consumption required to reach the energy equilibrium point (Eq 41). Surprisingly, if the synaptic sensitivity to energy imbalances is too high, then, to achieve a global fixed point in the network, the excitatory population divides into two subpopulations, one of them composed of silent neurons.
The occurrence of this phenomenon is consistent with biology [42], shedding light on why silencing excitatory neurons can be an effective way to diminish energy consumption in neuronal networks, thus allowing the network’s convergence towards a fixed point. Moreover, this phenomenon can be interpreted as a specialization within the excitatory population. The appearance of a silent subpopulation is interesting because silent neurons are present in our brains, and it is not clear why we have them. In fact, silent neurons have been referred to as “dark neurons”, in analogy to the astrophysical observation that much of the matter in the universe is undetectable, or dark [42]. In our framework, silent neurons play an important role from a metabolic point of view. One hypothesis is that, given the local energy constraints in biological neural networks, the price to pay for having redundant neurons in the brain is silencing some of them to achieve a network-wide energy equilibrium.
Impact of metabolic impairment and neurodegenerative diseases
Regarding neurodegenerative diseases and given the biological evidence suggesting their relation with metabolic impairment, we study how neurons with impaired energy production affect the network dynamics and structure. Our theoretical developments predict that energy consumption needs to drop in cases of impaired energy production. Consequently, lower firing rates and excitatory-excitatory synapses are expected. These predictions are confirmed by numerical experiments (Fig 8). However, there are other important details in the metabolic impairment scenario. For instance, if the metabolic impairment is high enough, it is also possible that the excitatory population divides into two subpopulations, with one of them composed of silent neurons, although the source of this phenomenon (neuronal incapacity to produce enough energy on demand) is different from the one explained previously when too high synaptic energy sensitivity is present in the network (the energy fixed point must be close to AH, forcing low energy consumption). Even more dramatic energy production impairments may produce constant oscillations in the network, preventing the convergence toward an attractor (K = 0.1 case in Fig 8 and S2 Fig). We show how these oscillations can be alleviated by modifying other parameters of the simulations. Trying to alleviate the oscillations present in dramatic metabolic impairment scenarios is important because it could help in developing new treatments for neurodegenerative diseases. A detailed study of how to alleviate pathological behaviors due to metabolic impairments is out of the scope of this work. However, the proposed theory and the simulation framework could be valuable in deepening the knowledge about the relationship between neurodegenerative diseases and metabolic impairments at the neuronal, synaptic, and network levels.
It is critical to note that neurodegeneration is usually linked to mitochondrial dysfunction or oxidative stress [43]. Accordingly, neurodegeneration has been treated by increasing mitochondrial energy production or by modulating activity through neurotransmission (e.g., pregabalin or levodopa). This study proposes an additional avenue for treating neurodegeneration: acting through the synaptic weights. The challenge is to finely regulate energy expenditure and production, rather than increasing or decreasing them in general terms, as is currently done. The rationale is that neurons have a mainly aerobic metabolism, with anaerobic activity delegated to astrocytes during increased synaptic activity [15]. Neurons are highly oxidative, and some neuronal populations can be vulnerable to oxidative stress [44]. To our knowledge, neurons maintain a constant energy availability regardless of activity level [45], at least under stimulation that does not exceed physiological levels. Reducing synaptic weights is an important way for a neuron to regulate the incoming activity from presynaptic neurons. Synaptic scaling experiments demonstrate this phenomenon: when activity is increased by adding a GABAergic antagonist (bicuculline) to cultured cortical neurons for long periods (48 hours) [46, 47], neurons transiently increase their mean firing rate but later return to a set-point value by modulating synaptic weights [48]. This suggests that exploring potential drugs that boost the speed of synaptic weight updates may contribute to controlling neurodegenerative processes derived from oxidative stress. Our study also stresses the need for a better understanding of the finely tuned energy-regulation systems in neurons for the development of new therapeutic avenues.
Limitations and future directions
It is important to mention some limitations of this work. Firstly, we use a simple single-neuron model that neglects the neuron’s morphology and the effect of some relevant ion dynamics, such as calcium kinetics, on the neuron’s activity. Regarding the decision to use the EDLIF model: in our opinion, EDLIF is the simplest model including energy dependencies that allows for a detailed analytical exploration of the model’s dynamics at the single-neuron and network levels. However, other simple models also include energy dependencies (eLIF and mAdEx [49], and Model 2 [50]). EDLIF, eLIF, and mAdEx share the same fundamental rationale, in the sense that a decrease in the available energy inhibits the Na/K pump function, thus leading to sodium accumulation inside the cell; however, EDLIF achieves this behavior through a different mechanism than eLIF and mAdEx. One drawback of EDLIF compared to eLIF, mAdEx, and Model 2 is the lack of a richer behavioral repertoire at the neuron level. Despite its simplicity, EDLIF has an interesting feature that is not present in eLIF, mAdEx, or Model 2. In addition to the “energetic health” parameter (K in EDLIF, and its analogs in eLIF, mAdEx, and Model 2), EDLIF includes a parameter controlling the sensitivity of the neuron to an energy imbalance (Eq 2). This parameter allows us to understand the impact of an energy imbalance under different neuronal sensitivities, and thus to differentiate how the same metabolic impairment (or energetic perturbation) affects two different neurons. Another distinction between EDLIF and the eLIF, mAdEx, and Model 2 models is how energy consumption is modeled. Under eLIF, mAdEx, and Model 2, a spike produces an instantaneous decrease in the available energy, so the dynamics of a spike’s energy consumption are reduced to an instant. Under the EDLIF model, by contrast, energy consumption has its own dynamics (Eqs 10 and 12). This is an important distinction because energy consumption is not an instantaneous process; in general, the time constants associated with neuronal energy consumption are much larger than the membrane time constant, and the rate at which energy is consumed has an impact on the neuron’s behavior. This is particularly important when studying impaired neuronal energy production, as the same total energy consumption per unit of activity could lead to different behaviors if the rate at which that energy is consumed differs. Therefore, including energy-consumption dynamics is closer to biological reality and is crucial for studying how energy dynamics affect neuronal behavior. These dynamics could also be included in eLIF, mAdEx, and Model 2 and, given their richer repertoire of behaviors, this inclusion could improve our understanding of the impact of energy dynamics on neuronal behavior.
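The distinction between instantaneous and dynamic consumption can be illustrated by comparing the two schemes under an identical energy cost per spike (a toy reduction with assumed constants, not the models' actual equations):

```python
import numpy as np

def available_energy(spike_times, mode, E_spike=0.1, tau_c=0.05,
                     K=2.0, A_H=1.0, dt=1e-3, T=1.0):
    """Compare two consumption models with identical energy per spike:
    'instant' subtracts E_spike at the spike time (eLIF/mAdEx/Model-2
    style), 'dynamic' spreads the same cost as a first-order kernel with
    time constant tau_c (EDLIF style). Production pushes the available
    energy A back toward A_H at rate K."""
    n = int(T / dt)
    A = np.full(n, A_H)
    c = 0.0  # ongoing consumption rate (dynamic mode only)
    spikes = set(int(round(t / dt)) for t in spike_times)
    for i in range(1, n):
        a = A[i - 1]
        if mode == "dynamic":
            if i in spikes:
                c += E_spike / tau_c          # inject consumption mass
            c -= dt * c / tau_c               # exponential decay of consumption
            a += dt * (K * (A_H - a) - c)
        else:
            if i in spikes:
                a -= E_spike                  # all energy spent at once
            a += dt * K * (A_H - a)
        A[i] = a
    return A

burst = [0.1, 0.12, 0.14]
A_inst = available_energy(burst, "instant")
A_dyn = available_energy(burst, "dynamic")
```

For the same spike burst and the same total energy consumed, the instantaneous scheme produces a deeper transient energy dip than the dynamic one, which is why the rate at which energy is consumed, and not only its total, can change the neuron's behavior.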
Another limitation of our model is the inclusion of only one type of plasticity, which modifies excitatory-excitatory connections. We thus neglect other types of plasticity acting at different time scales, such as short-term plasticity or synaptic scaling. In addition, all connections in the model except the excitatory-excitatory ones are static. This is also an important limitation, because different types of plasticity acting on the other connections could have a significant impact on the network’s activity and structure [51, 52]. Finally, our model was not contrasted with experimental data. To the best of our knowledge, no available dataset simultaneously records STDP, metabolism, and electrophysiology. This is a relevant caveat, as the temporal dynamics of metabolism are significantly slower than those of neural activity, and this kind of experimental data would be of great value for better understanding the synaptic-energy dependence. To address this issue and make our simulations more representative of biological conditions, we chose the synaptic energy parameters so that our numerical experiments align with the energy budget presented in [41], which reports the ratio between the total energy consumed by synaptic transmission and by action potential generation. By ensuring that our models are consistent with established energy-expenditure ratios, we aim to provide more biologically relevant insights into the network dynamics.
While most neuronal and synaptic models do not account for energy dependencies, a few notable exceptions address this aspect (see [21, 49, 50, 53–56]). Accordingly, the developed framework, analysis of dynamics, and structure in spiking neural networks under metabolic impairment, as well as the analysis of the effect of energy constraints in E-I networks, are the main contributions of this work.
Regarding future work, there are a few avenues along which to continue and extend the work presented here. Firstly, while our focus was on integrating energy-dependent long-term plasticity within excitatory-excitatory connections, other connection types could similarly undergo energy-dependent plastic alterations, warranting broader model inclusion [51]. Additionally, our emphasis on long-term synaptic modifications might benefit from an expanded scope that captures plasticity across diverse temporal scales. For instance, incorporating short-term plasticity rules, especially with insights from established models such as the formalization introduced by Tsodyks [57], could offer a richer understanding of synaptic dynamics, particularly when energy dependencies in processes like synaptic vesicle recycling are considered [9, 10]. Moreover, our study’s homogeneous approach, which presumed uniform neuron parameters, might be broadened to encompass the inherent variability of biological systems by investigating heterogeneous neural networks under metabolic constraints. Lastly, a more detailed exploration of the coupling phenomena among neurons, given our plasticity mechanism’s reliance on spike timings, could shed light on its structural implications for networks. This would involve both analytical studies and simulations to discern the overarching effects on network architecture and function.
Supporting information
S1 Text. Supporting information for ‘Unveiling the role of local metabolic constraints on the structure and activity of spiking neural networks’ including stability analysis.
https://doi.org/10.1371/journal.pcbi.1013148.s001
(PDF)
S2 Text. Supporting information for ‘Unveiling the role of local metabolic constraints on the structure and activity of spiking neural networks’ including average synaptic weight evolution under different parameter conditions.
https://doi.org/10.1371/journal.pcbi.1013148.s002
(PDF)
References
- 1. Shulman RG, Rothman DL, Behar KL, Hyder F. Energetic basis of brain activity: implications for neuroimaging. Trends Neurosci. 2004;27(8):489–95. pmid:15271497
- 2. Harris JJ, Jolivet R, Attwell D. Synaptic energy use and supply. Neuron. 2012;75(5):762–77. pmid:22958818
- 3. Attwell D, Laughlin SB. An energy budget for signaling in the grey matter of the brain. J Cereb Blood Flow Metab. 2001;21(10):1133–45. pmid:11598490
- 4. Ames A 3rd. CNS energy metabolism as related to function. Brain Res Brain Res Rev. 2000;34(1–2):42–68. pmid:11086186
- 5. Lin MT, Beal MF. Mitochondrial dysfunction and oxidative stress in neurodegenerative diseases. Nature. 2006;443(7113):787–95. pmid:17051205
- 6. Kawamata H, Manfredi G. Mitochondrial dysfunction and intracellular calcium dysregulation in ALS. Mech Ageing Dev. 2010;131(7–8):517–26. pmid:20493207
- 7. Maruszak A, Żekanowski C. Mitochondrial dysfunction and Alzheimer’s disease. Prog Neuropsychopharmacol Biol Psychiatry. 2011;35(2):320–30. pmid:20624441
- 8. Park J-S, Davis RL, Sue CM. Mitochondrial dysfunction in Parkinson’s disease: new mechanistic insights and therapeutic perspectives. Curr Neurol Neurosci Rep. 2018;18(5):21. pmid:29616350
- 9. Pathak D, Shields LY, Mendelsohn BA, Haddad D, Lin W, Gerencser AA, et al. The role of mitochondrially derived ATP in synaptic vesicle recycling. J Biol Chem. 2015;290(37):22325–36. pmid:26126824
- 10. Rangaraju V, Calloway N, Ryan TA. Activity-driven local ATP synthesis is required for synaptic function. Cell. 2014;156(4):825–35. pmid:24529383
- 11. Vergara RC, Jaramillo-Riveri S, Luarte A, Moënne-Loccoz C, Fuentes R, Couve A, et al. The energy homeostasis principle: neuronal energy regulation drives local network dynamics generating behavior. Front Comput Neurosci. 2019;13:49. pmid:31396067
- 12. Vicencio-Jimenez S, Villalobos M, Maldonado PE, Vergara RC. The energy homeostasis principle: a naturalistic approach to explain the emergence of behavior. Front Syst Neurosci. 2022;15:782781. pmid:35069133
- 13. Sokoloff L. The physiological and biochemical bases of functional brain imaging. Cogn Neurodyn. 2008;2(1):1–5. pmid:19003468
- 14. Schulz K, Sydekum E, Krueppel R, Engelbrecht CJ, Schlegel F, Schröter A, et al. Simultaneous BOLD fMRI and fiber-optic calcium recording in rat neocortex. Nat Methods. 2012;9(6):597–602. pmid:22561989
- 15. Weber B, Barros LF. The astrocyte: powerhouse and recycling center. Cold Spring Harb Perspect Biol. 2015;7(12):a020396. pmid:25680832
- 16. Connolly NMC, Düssmann H, Anilkumar U, Huber HJ, Prehn JHM. Single-cell imaging of bioenergetic responses to neuronal excitotoxicity and oxygen and glucose deprivation. J Neurosci. 2014;34(31):10192–205. pmid:25080581
- 17. Jekabsons MB, Nicholls DG. In situ respiration and bioenergetic status of mitochondria in primary cerebellar granule neuronal cultures exposed continuously to glutamate. J Biol Chem. 2004;279(31):32989–3000. pmid:15166243
- 18. Lange SC, Winkler U, Andresen L, Byhrø M, Waagepetersen HS, Hirrlinger J, et al. Dynamic changes in cytosolic ATP levels in cultured glutamatergic neurons during NMDA-induced synaptic activity supported by glucose or lactate. Neurochem Res. 2015;40(12):2517–26. pmid:26184116
- 19. Toloe J, Mollajew R, Kügler S, Mironov SL. Metabolic differences in hippocampal “Rett” neurons revealed by ATP imaging. Mol Cell Neurosci. 2014;59:47–56. pmid:24394521
- 20. Jaras I, Harada T, Orchard ME, Maldonado PE, Vergara RC. Extending the integrate-and-fire model to account for metabolic dependencies. Eur J Neurosci. 2021;54(4):5249–60. pmid:34109698
- 21. LeMasson G, Przedborski S, Abbott LF. A computational model of motor neuron degeneration. Neuron. 2014;83(4):990.
- 22. Bugmann G, Christodoulou C, Taylor JG. Role of temporal integration and fluctuation detection in the highly irregular firing of a leaky integrator neuron model with partial reset. Neural Comput. 1997;9:985–1000.
- 23. Burkitt AN. A review of the integrate-and-fire neuron model: I. Homogeneous synaptic input. Biol Cybern. 2006;95(1):1–19. pmid:16622699
- 24. Engl E, Attwell D. Non-signalling energy use in the brain. J Physiol. 2015;593(16):3417–29. pmid:25639777
- 25. Todorova V, Blokland A. Mitochondria and synaptic plasticity in the mature and aging nervous system. Curr Neuropharmacol. 2017;15(1):166–73. pmid:27075203
- 26. Gütig R, Aharonov R, Rotter S, Sompolinsky H. Learning input correlations through nonlinear temporally asymmetric Hebbian plasticity. J Neurosci. 2003;23(9):3697–714. pmid:12736341
- 27. Hardie DG. Sensing of energy and nutrients by AMP-activated protein kinase. Am J Clin Nutr. 2011;93(4):891S – 6. pmid:21325438
- 28. Hardie DG, Ross FA, Hawley SA. AMPK: a nutrient and energy sensor that maintains energy homeostasis. Nat Rev Mol Cell Biol. 2012;13(4):251–62. pmid:22436748
- 29. Potter WB, O’Riordan KJ, Barnett D, Osting SMK, Wagoner M, Burger C, et al. Metabolic regulation of neuronal plasticity by the energy sensor AMPK. PLoS One. 2010;5(2):e8996. pmid:20126541
- 30. Rockel AJ, Hiorns RW, Powell TP. The basic uniformity in structure of the neocortex. Brain. 1980;103(2):221–44. pmid:6772266
- 31. Hendry SH, Jones EG, DeFelipe J, Schmechel D, Brandon C, Emson PC. Neuropeptide-containing neurons of the cerebral cortex are also GABAergic. Proc Natl Acad Sci U S A. 1984;81(20):6526–30. pmid:6149547
- 32. Parnavelas JG, Barfield JA, Franke E, Luskin MB. Separate progenitor cells give rise to pyramidal and nonpyramidal neurons in the rat telencephalon. Cereb Cortex. 1991;1(6):463–8. pmid:1822752
- 33. Contreras D. Electrophysiological classes of neocortical neurons. Neural Netw. 2004;17(5–6):633–46. pmid:15288889
- 34. Bi GQ, Poo MM. Synaptic modifications in cultured hippocampal neurons: dependence on spike timing, synaptic strength, and postsynaptic cell type. J Neurosci. 1998;18(24):10464–72. pmid:9852584
- 35. Capogna M, Castillo PE, Maffei A. The ins and outs of inhibitory synaptic plasticity: neuron types, molecular mechanisms and functional roles. Eur J Neurosci. 2021;54(8):6882–901. pmid:32663353
- 36. Spreizer S, Mitchell J, Jordan J, Wybo W, Kurth A, Vennemo SB, et al. NEST 3.3; 2022. 10.5281/zenodo.6368024
- 37. Linssen CAP, Babu PN, Benelhedi A, De Schepper R, Fardet T, Eppler JM. NESTML 5.0.0; 2022. 10.5281/zenodo.5784175
- 38. Sahara S, Yanagawa Y, O’Leary DDM, Stevens CF. The fraction of cortical GABAergic neurons is constant from near the start of cortical neurogenesis to adulthood. J Neurosci. 2012;32(14):4755–61. pmid:22492031
- 39. Bettencourt LMA, Stephens GJ, Ham MI, Gross GW. Functional structure of cortical neuronal networks grown in vitro. Phys Rev E Stat Nonlin Soft Matter Phys. 2007;75(2 Pt 1):021915. pmid:17358375
- 40. Kempter R, Gerstner W, van Hemmen JL. Hebbian learning and spiking neurons. Phys Rev E. 1999;59(4):4498–514.
- 41. Howarth C, Gleeson P, Attwell D. Updated energy budgets for neural computation in the neocortex and cerebellum. J Cereb Blood Flow Metab. 2012;32(7):1222–32. pmid:22434069
- 42. Shoham S, O’Connor DH, Segev R. How silent is the brain: is there a “dark matter” problem in neuroscience?. J Comp Physiol A Neuroethol Sens Neural Behav Physiol. 2006;192(8):777–84. pmid:16550391
- 43. Cenini G, Lloret A, Cascella R. Oxidative stress in neurodegenerative diseases: from a mitochondrial point of view. Oxid Med Cell Longev. 2019;2019:2105607. pmid:31210837
- 44. Wang X, Michaelis EK. Selective neuronal vulnerability to oxidative stress in the brain. Front Aging Neurosci. 2010;2:12. pmid:20552050
- 45. Baeza-Lehnert F, Saab AS, Gutiérrez R, Larenas V, Díaz E, Horn M, et al. Non-canonical control of neuronal energy status by the Na+ pump. Cell Metab. 2019;29(3):668-680.e4. pmid:30527744
- 46. Turrigiano GG. The self-tuning neuron: synaptic scaling of excitatory synapses. Cell. 2008;135(3):422–35. pmid:18984155
- 47. Turrigiano G. Homeostatic synaptic plasticity: local and global mechanisms for stabilizing neuronal function. Cold Spring Harb Perspect Biol. 2012;4(1):a005736. pmid:22086977
- 48. Ibata K, Sun Q, Turrigiano GG. Rapid synaptic scaling induced by changes in postsynaptic firing. Neuron. 2008;57(6):819–26. pmid:18367083
- 49. Fardet T, Levina A. Simple models including energy and spike constraints reproduce complex activity patterns and metabolic disruptions. PLoS Comput Biol. 2020;16(12):e1008503. pmid:33347433
- 50. Chhabria K, Chakravarthy VS. Low-dimensional models of “neuro-glio-vascular unit” for describing neural dynamics under normal and energy-starved conditions. Front Neurol. 2016;7.
- 51. Froemke RC. Plasticity of cortical excitatory-inhibitory balance. Annu Rev Neurosci. 2015;38:195–219. pmid:25897875
- 52. Hennequin G, Agnes EJ, Vogels TP. Inhibitory plasticity: balance, control, and codependence. Annu Rev Neurosci. 2017;40:557–79. pmid:28598717
- 53. Jolivet R, Coggan JS, Allaman I, Magistretti PJ. Multi-timescale modeling of activity-dependent metabolic coupling in the neuron-glia-vasculature ensemble. PLoS Comput Biol. 2015;11(2):e1004036. pmid:25719367
- 54. Wei Y, Ullah G, Schiff SJ. Unification of neuronal spikes, seizures, and spreading depression. J Neurosci. 2014;34(35):11733–43. pmid:25164668
- 55. Knowlton C, Kutterer S, Roeper J, Canavier CC. Calcium dynamics control K-ATP channel-mediated bursting in substantia nigra dopamine neurons: a combined experimental and modeling study. J Neurophysiol. 2018;119(1):84–95. pmid:28978764
- 56. Yao C, He Z, Nakano T, Shuai J. Spiking patterns of a neuron model to stimulus: rich dynamics and oxygen’s role. Chaos. 2018;28(8):083112. pmid:30180647
- 57. Tsodyks MV, Markram H. The neural code between neocortical pyramidal neurons depends on neurotransmitter release probability. Proc Natl Acad Sci U S A. 1997;94(2):719–23. pmid:9012851