
Extreme value statistics of nerve transmission delay

Abstract

Delays in nerve transmission are an important topic in the field of neuroscience. Spike signals, fired or received at the dendrites of a neuron, travel along the axon to the presynaptic cell. The spike signal then triggers a chemical reaction at the synapse, wherein the presynaptic cell transfers neurotransmitters to the postsynaptic cell, which regenerates electrical signals via ion channels and transmits them to neighboring neurons. By describing this complex physiological reaction process as a stochastic process, this study aimed to show that the distribution of the maximum time interval of spike signals follows extreme-order statistics. By considering statistical variance in the time constant of the leaky Integrate-and-Fire model, a deterministic time evolution model for spike signals, we introduced randomness into the time interval of the spike signals. When the time constant follows an exponential distribution function, the time interval of the spike signal also follows an exponential distribution. In this case, our theory and simulations confirmed that the histogram of the maximum time interval follows the Gumbel distribution, one of the three forms of extreme-value statistics. We further confirmed that the histogram of the maximum time interval followed a Fréchet distribution when the time interval of the spike signal followed a Pareto distribution. These findings confirm that nerve transmission delay can be described using extreme value statistics and can therefore be used as a new indicator of transmission delay.

Introduction

The nervous system connects multiple neurons. Spike signaling between two neurons is a complex physiological process. A schematic representation of a single neuron is shown in Fig 1(a), and a summary of the nerve transmission mechanism has been provided previously [1–3]. Each neuron has a physiological parallel resistor–capacitor circuit, in which K+ and Na+ ion channels and K+ leak channels maintain the membrane potential at the resting potential in a non-stimulated state. However, an influx of external current causes the depolarization of neurons, producing a sudden and sharp increase in potential, termed a spike. During the spike, the membrane potential exceeds a threshold value, which is followed by repolarization and hyperpolarization, and subsequently a period of preparation for the next spike generation. During this time, the Na+ ion channels enter a temporarily inactive state termed the refractory period, which prevents neurotransmission backflow and ensures that the generated spike signal is transmitted in one direction in the neuron. The spike signal is then transmitted to the synapse, that is, a structure that allows a neuron to communicate with another neuron. Each synapse is composed of a pair of separate presynaptic and postsynaptic cells. Transmission begins with the release and diffusion of neurotransmitters into the synaptic cleft by the presynaptic cell, followed by the reception of the transmitters by receptors in the membrane of the postsynaptic cell, and subsequently by ion channels in the postsynaptic membrane that convert the transmitters into electrical signals that can then be transmitted to neighboring neurons. In this way, a neural transmission between two neurons (hereafter referred to as a “nerve transmission unit”, or “NTU”) occurs in a finite amount of time. The time required by an NTU fluctuates around its average value because of the physiological and chemical reactions of numerous minute particles, such as the transport of K+ and Na+ ions through the membrane or the many neurotransmitters at the synapse. Thus, the time required for signal transmission at the NTU can be described as a statistical variable rather than a deterministic constant [4–7]. From the viewpoint of queueing theory, the time between two spike signals corresponds to a waiting time in the transmission process of the entire nervous system, regarded as a chain of NTUs. Hereinafter, we refer to this waiting time as the “nerve transmission period” or “NTP,” defined as the total transmission time taken by a spike signal to pass through the dendrites and axon (the nerve part, or NP) plus the “synaptic delay” as defined in previous literature [4].

Fig 1. Schematic representation of a systematic description of a single neuron.

(a) illustrates a single neuron and (b) represents a Petri net model graphically describing a state transition system. The first place in (b) corresponds to part of a neuron comprising an NP and a presynaptic cell in (a), and the second place in (b) corresponds to a postsynaptic cell in (a), with unidirectional transmissions from the presynaptic to the postsynaptic cell. A comparison of (a) and (b) shows that we can interpret NTU as a two-state transition problem.

https://doi.org/10.1371/journal.pone.0306605.g001

To date, several studies have reported that a delay or disruption in spike signals can cause diseases such as schizophrenia [8, 9]. Nevertheless, investigations regarding the detailed relationship between NTPs and these nerve diseases are limited. Elucidating the statistical behavior of NTUs may be one way to gain deeper insight into their behavior. Estimating the frequency distribution of the maximum delays and their upper limits from a set of time-series data for NTPs could further allow the metrics and scale parameters of the distribution to be developed as new indicators for evaluating the statistical properties of the nervous system. The impact of a disease on the nervous system can then be quantitatively evaluated by comparing the degree to which these statistical metrics change from their normal state before and after the disease. Observing NTPs over several days can help derive the theoretical distribution to which the histogram of maximum NTPs is asymptotic, and the parameters of that theoretical distribution can be used as a new mathematical tool for evaluating neurological dysfunction.

Historically, the mechanism of action of NTUs has been studied by both modeling and experimentation. Hodgkin and Huxley presented deterministic time-evolution equations for an equivalent circuit model describing membrane potential changes in a neuron [10–15]. We will refer to this model as the Hodgkin–Huxley model, or HHM. The HHM describes the detailed mechanism of the NTU as a physiological parallel resistor–capacitor circuit, as mentioned at the beginning of this section. The Leaky Integrate-and-Fire model (LIFM) [16–21] is a simplified version of the HHM that focuses on the timing and frequency of spike generation, without considering the detailed generation processes, to reduce the number of variables. The LIFM is more computationally friendly than the HHM and is well suited for simulating large-scale systems [20, 21]. The HHM or LIFM can be combined with exponential synapse models [22–24] that express the decay of the electric current at the synapse, representing the behavior of synaptic delays.

In terms of stochastic processes, the NTU can be interpreted as a two-state transition problem, while the NTP can be formulated as a random variable. The detailed reason for this is explained later using the Petri net model [25–27] in the Methods section. In this case, the discrete elapsed time Ti (i = 1, 2, ⋯, n), used as a random variable sequence, is defined as the time at which a spike signal transmission is converted into an electrical signal. In addition, the random variable sequence of time differences is defined as di ≔ Ti − Ti−1 (i = 1, 2, ⋯, n), and dmax is defined as the maximum of di (i = 1, 2, ⋯, n) for each measurement. Notably, for a sufficiently large number of measurements m, a probability density distribution, D, to which the histogram (frequency distribution) of dmax is asymptotic, can be obtained using extreme value theory (EVT) [28–31]. EVT began with theoretical studies of the statistical distribution followed by maximum values and was later established through the Trinity Theorem, which shows that the distribution followed by such a maximum, if it exists, belongs to the attraction domain of only one of three distributions: the Gumbel, Fréchet, and Weibull distributions [28–31]. Therefore, if D is analytically determined using EVT, D can be used as a new indicator for objectively evaluating the maximum distribution of NTPs. Importantly, if a random variable sequence Xi (i = 1, 2, ⋯, n) follows an exponential or Pareto distribution, the distribution of the maximum order statistic of Xi is asymptotic to the Gumbel or Fréchet distribution of EVT, respectively [28–31] (please refer to the Methods section for a mathematical description of EVT). Intriguingly, the transmission in a single NTU can be assumed to follow a Poisson process. Several experiments have shown that the frequency distribution of synaptic delays follows an exponential function, strongly supporting the idea that Ti follows a Poisson process and that the sequence of random variables for the time interval di follows an exponential distribution [32–34]. It follows that the asymptotic distribution of the maximum order statistic of the time interval di can be determined if di follows an exponential or Pareto distribution. As mentioned above, several experiments have confirmed that NTPs follow an exponential distribution [32–34]. Therefore, the distribution of the maximum values of di obtained by solving the time evolution of the HHM or LIFM is expected to follow the Gumbel distribution.

This study demonstrates the existence of an extreme value distribution of nerve transmission delays using theory and a single-neuron simulation. To the best of our knowledge, this is the first study to apply EVT to the problem of delays in neural spike signals. Prior research has recognized that delay occurs in the transmission of neural signals, resulting in physiological effects such as synaptic plasticity [35–37]. However, no studies have yet elucidated the theoretical distribution of the maximum values of nerve transmission delays. In particular, we applied EVT to the problem of delays in neural spike signals that follow a Poisson process in the time direction. For applications, once the asymptotic distribution of the histogram of the maximum value of NTPs in nerve transmission is identified, the upper bounds and their probability of occurrence can be predicted; this could be helpful in clinical practice. For example, we could use electroencephalography (EEG) to measure the delays in the NTPs and EVT to estimate the upper bounds of their distribution. This could serve as a stepping stone for the development of preventive measures. In addition, extreme value statistics shows that the generalized extreme value distribution (GEVD) [38–40] encompasses these three distribution types. As the GEVD is adaptable and suitable for fitting measurement data, studies that use the GEVD to analyze real data are becoming popular. Semi-empirical approaches that apply the GEVD to measured distributions and estimate its model parameters using maximum likelihood or similar methods have dominated research on spatiotemporal data [41–44]. In particular, in areas related to neuroscience, one study reported the analysis of experimental data on the velocities of dynein and kinesin in neurons with the GEVD [45]. In contrast to these data-analysis studies, our study aimed to demonstrate that the histogram of maximum time intervals of neural spike signals follows EVT, both theoretically and by simulation.

From an interdisciplinary perspective, EVT has been used in various fields, such as predicting the maximum amount of precipitation [46–48], the maximum age limit in a region, and the maximum speed at which an athlete can run a marathon [49–51]. However, there have been few examples of the application of EVT to traffic and transportation systems, even when we examine other related fields. In the fields of physics and aeronautics, we previously considered the spot assignment problem in airport ground traffic from a physical perspective [52]. Focusing on a roadway consisting of a single lane with a parking lane, we solved the problem of a vehicle moving along the roadway in one direction, stopping at one of the randomly selected spots in an adjacent parking lane, and starting again after a certain period of time, using a stochastic model known as the totally asymmetric simple exclusion process (TASEP) [53–55]. In our previous work [52], we examined the stochastic process of a vehicle stopping at a spot in a parking lane while moving along a road. We found that this scenario follows the so-called generation–death process, and that it is necessary to consider the order statistics of spot usage in order to obtain the frequency distribution of spot usage. Accordingly, the spot usage distribution was described using an extreme-value statistical distribution [52]. This is an example of the application of EVT to stochastic processes in the asymmetric spatial direction. By contrast, this study applied EVT to the problem of nerve transmission delays in the chronological direction.

The HHM and LIFM are deterministic time evolution models; however, the actual spike signal time interval has been confirmed to follow an exponential distribution. As such, it is necessary to modify these deterministic time evolution models into stochastic ones. This study reproduces the stochastic properties in the LIFM by providing a stochastic fluctuation to the time constant τ, which determines the decay rate of the spike signal (see the Methods section). Briefly, each time a spike occurs, τ is updated based on a certain probability distribution. In real neurotransmission processes, the time dependence of the voltage drop fluctuates slightly with each signal, so treating the time constant as a random variable is a more accurate representation of the real physical phenomenon. Therefore, this is a reasonable method for converting a deterministic time evolution model into a stochastic one. In summary, we demonstrated our theory that NTPs follow extreme-order statistics in a nerve transmission simulation for a single NTU using a stochastic LIFM combined with an exponential synapse model, which is also an emphasis of this study.

Methods

Single neuron models

The first detailed mathematical model of the transmission process of a single neuron was the HHM, formulated in 1952 by Hodgkin and Huxley based on experimental results of action potentials in squid axons [10–15]. In that study, each neuron was shown to be a parallel resistance–capacitor circuit comprising K+ ion channels, Na+ ion channels, and K+ leak channels, while the spike signal was shown to propagate with a finite-width waveform as a solution to the time-dependent differential equation of the electrical circuit. The HHM accurately reproduces the neurotransmission process, including the waveform of the spike signal. However, the HHM has some practical drawbacks, such as the fact that it retains four variables per neuron and requires a relatively accurate computational method owing to the complexity of the equations. In physiological experiments, often only the timing of the spike signal is of interest, not its waveform. Accordingly, the LIFM is a simplified model that reduces the HHM by focusing only on the membrane potential shift and reducing the number of variables to one. The time evolution equation for the membrane potential in the LIFM is as follows [16–21]:

τ dV(t)/dt = −(V(t) − Vrest) + R Iext, with s(t) = 1 and V(t) → Vreset if V(t) > θ, and V(0) = Vinit, (1)

where τ is the time constant, Vrest is the resting potential, V(t) is the membrane potential at time t, R is the membrane resistance, Iext is a constant external current, and θ is the threshold for spike firing. Vreset is the reset potential and Vinit is the initial value of the membrane potential. The membrane potential starts from the equilibrium point at Vrest and asymptotes to Vrest + RIext at a rate determined by τ. In addition, s(t) is a discrete function equal to 1 if a spike is fired at time t and zero otherwise. When the membrane potential V exceeds θ, we set s(t) = 1 to express that a spike is fired at that time while simultaneously resetting the membrane potential V to Vreset. According to Eq (1), the change and resetting of the membrane potential are repeated, and the spike signal travels through the neuron as a periodic signal.
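To make the dynamics of Eq (1) concrete, the following minimal forward-Euler sketch integrates the deterministic LIFM. The parameter values follow those listed later in the Analysis section; the variable names, the integration scheme, and the printout are our own illustrative choices rather than the authors' code.

```python
import numpy as np

# Minimal forward-Euler sketch of the deterministic LIFM in Eq (1).
# Parameter values follow the Analysis section; names and printout are illustrative.
tau, V_rest, V_reset, theta = 20.0, -65.0, -65.0, -55.0   # ms, mV, mV, mV
R, I_ext, V_init = 1.0, 12.0, -65.0                       # MOhm, nA (R*I_ext = 12 mV), mV
dt, n_steps = 1.0, 1000                                   # ms, steps

V = V_init
spike_times = []
for step in range(n_steps):
    t = step * dt
    # dV/dt = (-(V - V_rest) + R*I_ext) / tau
    V += dt * (-(V - V_rest) + R * I_ext) / tau
    if V > theta:                      # spike fired: s(t) = 1, then reset
        spike_times.append(t)
        V = V_reset

print("number of spikes:", len(spike_times))
print("first inter-spike intervals (ms):", np.diff(spike_times)[:5])
```

Because Vrest + RIext = −53 mV exceeds θ = −55 mV, the membrane potential repeatedly crosses the threshold and produces the periodic spike train described above.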

In contrast to the original LIFM, we convert Eq (1), a deterministic differential equation, into a stochastic differential equation by replacing the time constant τ with ξ(μ, σ), where ξ(μ, σ) is an independent, identically distributed random variable defined by the two parameters μ and σ, corresponding to the location and rate parameters for the exponential distribution, and to the location and shape parameters for the Pareto distribution, respectively. We computed the variation in the membrane potential according to Eq (1) after replacing the time constant τ with ξ(μ, σ). We assumed that the current spike signal was transmitted to the presynaptic cell when the threshold θ was reached, and then calculated the change in current at the synapse using an exponential decay model, expressed as dIs(t)/dt = −Is(t)/τs, where Is(t) is the current at the synapse at time t and τs indicates the time required for the chemical reaction process at the synapse, that is, the synaptic delay. The membrane potential is reset when both conditions are met: the membrane potential exceeds the threshold θ and the synaptic current has decayed below ω. Therefore, the condition V(t) > θ for resetting the membrane potential in Eq (1) is modified as V(t) > θ ∧ Is(t) < ω, where ω corresponds to Is(0)e^{−1}. Accordingly, we calculated both the spike signal transmission and the synaptic delay, which makes the description closer to actual nerve transmission than Eq (1). Nevertheless, in practice, the effect of the synaptic delay is negligible in a single-neuron simulation in one direction. This can be explained as follows: in the case of multiple neurons, the current Is at the synapse can feed back into the variation in the membrane potential as Iext. However, in a single-neuron problem, the signal flows in one direction. In addition, Is decays almost immediately compared with the variation in the membrane voltage. Therefore, Is does not affect the behavior of the entire system in our case, because we focus on the distribution of the maximum time intervals when the synaptic time intervals follow an exponential distribution. In summary, the modified version of Eq (1) can be expressed as follows:

ξ(μ, σ) dV(t)/dt = −(V(t) − Vrest) + R Iext, dIs(t)/dt = −Is(t)/τs, with s(t) = 1 and V(t) → Vreset if V(t) > θ ∧ Is(t) < ω. (2)
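A minimal sketch of this stochastic variant, under our reading of Eq (2), is shown below: the time constant is redrawn from ξ(μ, σ) at every spike, and the synaptic current decays exponentially. The synaptic time constant τs, the random seed, and all variable names are our own illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Sketch of the stochastic LIFM of Eq (2): xi is redrawn at every spike and the
# synaptic current I_s decays exponentially with an (illustrative) time constant tau_s.
V_rest, V_reset, theta, R, I_ext = -65.0, -65.0, -55.0, 1.0, 12.0
mu, sigma = 20.0, 5.0            # location and rate parameters of xi (exponential case)
tau_s, dt, n_steps = 2.0, 1.0, 1000

def draw_xi():
    # xi(mu, sigma) = mu + Exp(rate=sigma), matching (-1/sigma)*log(U) + mu
    return mu + rng.exponential(1.0 / sigma)

V, xi, I_s = V_rest, draw_xi(), 0.0
omega = np.exp(-1.0)             # omega corresponds to I_s(0) * e^{-1} with I_s(0) = 1
intervals, last_spike = [], 0.0

for step in range(n_steps):
    t = step * dt
    V += dt * (-(V - V_rest) + R * I_ext) / xi      # membrane equation with xi in place of tau
    I_s -= dt * I_s / tau_s                         # exponential synaptic decay
    if V > theta and I_s < omega:                   # modified reset condition of Eq (2)
        intervals.append(t - last_spike)
        last_spike = t
        V, xi, I_s = V_reset, draw_xi(), 1.0        # reset V, redraw xi, re-inject I_s(0) = 1

print("number of spikes:", len(intervals))
print("mean inter-spike interval (ms):", round(float(np.mean(intervals)), 2))
```

Because the synaptic current falls below ω within a few milliseconds while the membrane potential needs tens of milliseconds to reach θ, the extra condition Is(t) < ω rarely delays a reset, consistent with the statement above that Is does not affect the behavior of the single-neuron system.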

Assumption of the Poisson process

Neural transmission in an NTU can be expressed using a directed graph consisting of two nodes. For clarity, we explain the nerve transmission system using the classical Petri net model [25–27], which is an established representation of state-transition diagrams. We outline this Petri net model below with reference to Fig 1(b). First, the components of a system or their states are depicted by a circular symbol called a “place,” and an event is depicted by a bar-shaped symbol called a “transition.” The correlation between places and transitions is expressed by a directed arrow called an “arc.” The objects processed by the system are represented by black filled circles called “tokens.” The flow of objects is traced on a Petri net diagram using these tokens. The first place in (b) corresponds to a part of a neuron consisting of an NP and a presynaptic cell in (a), and the second place in (b) corresponds to a postsynaptic cell in (a), with unidirectional transmission from the presynaptic to the postsynaptic cell. Additionally, the “transition” covers all the events, including spike generation and the release, diffusion, reception, and conversion of neurotransmitters into electrical signals. The “token,” shown as a black circle in (b), indicates the spike signal flowing through the NP or the neurotransmitters at the synapse. The weights w1 and w2 represent the conditions that can fire the transition in each arc; these weights usually represent the firing criteria. When the number of tokens exceeds the weight, the tokens are ready to pass through the arc. In nerve transmission problems, we regard arcs as being in always-transmittable states as long as we ignore exceptional dysfunction phenomena such as synaptic fatigue [56], where neurotransmitters in the synapse become scarce and transmission is aborted. Hence, we can ignore the conditions for w1 and w2 in the first approximation and set them to 1. A comparison of (a) and (b) clearly shows that we can interpret the NTU as a two-state transition problem.

The transmission in a single NTU can be assumed to follow a Poisson process for the following reasons. Considering a single NTU in a nerve chain connecting multiple NTUs, which is sufficiently long to ignore the effect of the edges of the chain, we can assume an open boundary condition. Owing to the refractory period, the postsynaptic cell in the NTU does not receive other signals during each signal transmission. Therefore, we can assume each event to be independent, where each “event” is defined as the transmission process from the generation of a spike signal at a neuron to its reception by receptors, as well as its conversion into electrical signals at a postsynaptic cell. As a thought experiment, we consider observing the transmission process in the NTU for a finite time T. Owing to physiological homeostasis, it is reasonable to consider the transmission flow to be in a steady state. In addition, in a zero-order approximation, the transmission rate can be assumed to be invariant during the observation. The invariant transmission rate and the independence of each event imply that the probability of an event occurring in any time interval (ti, ti + Δt) is independent of the events that occurred before ti. Therefore, the system is memoryless. The reason for stating “in the zero-order approximation” is that, at this stage, we ignore synaptic fatigue [56], where neurotransmitters in the synapse become scarce and transmission is aborted. Consider a discrete time Δt that is short, on the order of di. The probability of an event occurring more than once during Δt is negligible, because di is the interval in which an event occurs. Accordingly, this system exhibits event scarcity. Thus, the system satisfies the conditions to be considered a Poisson process: stationarity, independence, memorylessness, and scarcity. Therefore, Ti is a sequence of random variables following a Poisson process. Recall that di is the sequence of random variables for the time interval Ti − Ti−1 (i = 1, 2, ⋯, n). If a sequence of random variables follows a Poisson process, then the sequence of their differences follows an exponential distribution [57]. Therefore, di is expected to follow an exponential distribution. As mentioned above, several experiments have shown that the frequency distribution of synaptic delays follows an exponential function, strongly supporting the idea that Ti follows a Poisson process and that the sequence of random variables for the time interval di follows an exponential distribution [32–34]. A numerical illustration of this reasoning is sketched below.
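The sketch below (ours, not part of the original study) discretizes time into small steps and lets an event occur in each step independently with a fixed small probability, mirroring the stationarity, independence, and scarcity assumptions; the resulting inter-event times are then compared against the exponential, memoryless prediction. The rate, step size, and sample size are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative check: events occur independently in each small step dt with
# probability lam*dt (stationarity, independence, scarcity). The inter-event
# times d_i = T_i - T_{i-1} should then be approximately exponential with mean 1/lam.
lam, dt, n_steps = 0.05, 0.1, 2_000_000      # rate per ms, step in ms, total steps
events = rng.random(n_steps) < lam * dt
times = np.flatnonzero(events) * dt          # event times T_i
d = np.diff(times)                           # intervals d_i

print("mean interval (ms):", round(float(d.mean()), 2))    # close to 1/lam = 20 ms
# Memorylessness: P(d > s + t | d > s) should be close to P(d > t)
s, t = 10.0, 15.0
print(round(float((d[d > s] > s + t).mean()), 3), round(float((d > t).mean()), 3))
```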

If a random variable sequence Xi (i = 1, 2, ⋯, n) follows an exponential or Pareto distribution, then the distribution of the maximum order statistic of Xi is asymptotic to the Gumbel or Fréchet distribution of EVT, respectively [28–31] (see below for the mathematical description of EVT). It subsequently follows that the asymptotic distribution of the maximum order statistic of the time interval di can be determined if di follows an exponential or Pareto distribution. Since NTPs can be assumed to follow an exponential distribution [32–34], the distribution of the maximum values of di obtained by solving the time evolution of the HHM or LIFM is expected to follow the Gumbel distribution.

Extreme value theory (EVT)

We define the maximum order statistic Zn of the independent, identically distributed random variables Xj (j = 1, 2, ⋯, n) as Zn ≔ max{X1, X2, …, Xn} = max Xj (j = 1, 2, …, n). Under this definition, EVT discusses the asymptotic distribution of Zn given sufficiently many measurements of Zn. Fig 2 schematically explains the details of the extreme-value statistics. Suppose we measure m datasets of Zn; in the example shown in Fig 2, Zn is d4 for Case 1, d2 for Case 2, and d1 for Case m. If m is sufficiently large, we can obtain the histogram (frequency distribution) of Zn. In particular, if we find k counts of data such that Zn takes a value Y, then there are C(m, k) possible combinations in which Y can be observed k times among the m measurements. It is necessary to consider all of these combinations in order to derive the probability density function (PDF) for Zn. Because the PDF can be obtained as the first derivative of the cumulative distribution function (CDF), EVT begins by deriving the CDF of Zn. Now, consider the case in which the independent, identically distributed variables Xj have a population distribution F. The CDF of Zn is expressed as P(Zn ≤ x), where P is the probability and x is a continuous point. From the definition of Zn, we can derive the following relationship:

P(Zn ≤ x) = P(X1 ≤ x, X2 ≤ x, …, Xn ≤ x) = F^n(x). (3)

Fig 2. Schematic view of the extreme order statistics.

The upper part of the figure schematically describes the maximum order statistic when m datasets, each consisting of n data points, are measured. The maximum order statistic is the statistic of the maximum values sampled from each of the m datasets, where both m and n are sufficiently large. Extreme value theory (EVT) examines the asymptotic distribution of the maximum order statistic, revealing that there are three types of extreme value distributions: the Gumbel, Fréchet, and Weibull distributions.

https://doi.org/10.1371/journal.pone.0306605.g002

Eq (3) shows that the CDF of Zn is equal to the population distribution F raised to the power of n. Thus, we focus on the distribution to which F^n(x) converges. Fisher and Tippett [28] proved the following equivalence relationship with respect to the convergence of F^n(x):

F ∈ D(G) ⟺ there exist sequences an > 0 and bn such that F^n(anx + bn) → G(x) as n → ∞. (4)

Here, G is a continuous distribution that is neither degenerate nor divergent, and F ∈ D(G) indicates that F belongs to the attraction domain of G. Eq (4) states the following: if F ∈ D(G), i.e., if the population distribution F of the independent, identically distributed random variables Xj (1 ≤ j ≤ n) belongs to the attraction domain of G, which is the asymptotic distribution of the maximum order statistic Zn and is neither degenerate nor divergent, then an and bn exist for which F^n(anx + bn) converges to G for sufficiently large n. Conversely, if we find a pair an and bn for which F^n(anx + bn) converges to G, then F belongs to the attraction domain of G. Eq (4) also implies that a linear transformation of G with respect to the scale an and location bn is permitted to avoid degeneration or divergence. This can be understood by substituting y = anx + bn and rewriting Eq (4) in terms of y.

The Trinity Theorem by Fisher and Tippett [28], Fréchet [29], and Gnedenko [30] proves that only three types of extreme value distributions satisfy Eq (4): the Gumbel, Fréchet, and Weibull distributions. This theorem also proves that any population distribution F is asymptotic to one of the three extreme value distributions if F ∈ D(G) for some G. The CDFs of the Gumbel, Fréchet, and Weibull distributions are as follows:

G(x) = exp(−exp(−x)), −∞ < x < ∞, (5)

G(x) = exp(−x^{−α}) for x > 0, and G(x) = 0 for x ≤ 0, with α > 0, (6)

G(x) = exp(−(−x)^α) for x ≤ 0, and G(x) = 1 for x > 0, with α > 0. (7)

If Xj (j = 1, 2, ⋯, n) follows an exponential distribution, F can be expressed as F(x) = 1 − exp(−x) for x ≥ 0. In this case, the convergence of F^n(x) for sufficiently large n can be identified by selecting an and bn as 1 and log(n), respectively, as follows [52]:

F^n(x + log n) = (1 − exp(−x − log n))^n = (1 − exp(−x)/n)^n → exp(−exp(−x)) as n → ∞. (8)

Therefore, the maximum order statistic Zn of the random variables Xj (1 ≤ j ≤ n) converges to the Gumbel distribution if Xj follows an exponential distribution. Meanwhile, if Xj (1 ≤ j ≤ n) follows a Pareto distribution, F is expressed as F(x) = 1 − 1/x^α, where α > 0 and x ≥ 1. In this case, by using a derivation similar to that for the exponential distribution, F^n(anx + bn) converges to the Fréchet distribution by selecting an and bn as n^{1/α} and 0, respectively:

F^n(n^{1/α}x) = (1 − 1/(n x^α))^n → exp(−x^{−α}) as n → ∞. (9)

In summary, Zn converges to the Gumbel distribution if Xj (1 ≤ j ≤ n) follows an exponential distribution, and to the Fréchet distribution if Xj (1 ≤ j ≤ n) follows a Pareto distribution.
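An empirical sanity check of these two limits can be written in a few lines; the sketch below (ours, with illustrative sample sizes, a fixed seed, and α = 3) compares the empirical CDF of rescaled maxima against the Gumbel and Fréchet forms in Eqs (5) and (6).

```python
import numpy as np

rng = np.random.default_rng(2)

# Sketch: maxima of n standard-exponential samples shifted by log(n) should follow
# the Gumbel CDF exp(-exp(-x)); maxima of n Pareto(alpha, x_m = 1) samples scaled by
# n^(1/alpha) should follow the Frechet CDF exp(-x^(-alpha)). Sizes are illustrative.
n, m, alpha = 1000, 10_000, 3.0

z_exp = rng.exponential(1.0, size=(m, n)).max(axis=1) - np.log(n)              # a_n = 1, b_n = log n
z_par = (rng.pareto(alpha, size=(m, n)) + 1.0).max(axis=1) / n ** (1 / alpha)  # a_n = n^(1/alpha), b_n = 0

for x in (0.5, 1.0, 2.0):
    print(x,
          round(float((z_exp <= x).mean()), 3), round(float(np.exp(-np.exp(-x))), 3),   # empirical vs Gumbel
          round(float((z_par <= x).mean()), 3), round(float(np.exp(-x ** -alpha)), 3))  # empirical vs Frechet
```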

Analysis

Stochastic LIFM simulations were performed using Eq (2) for two different cases, in which ξ(μ, σ) follows an exponential or a Pareto distribution. In our tests, we set (Vrest, Vreset, θ, R, Iext) to (−65 mV, −65 mV, −55 mV, 1.0 MOhm, 12 nA) by reference to the literature [58–60]. We further set the small time Δt and the number of time steps n to 1 ms and 1000 iterative steps, respectively; 1000 ms of physical time was computed in each simulation. We used the inverse transformation method [61, 62] to generate a random sequence of time intervals di (i = 1, 2, ⋯, n) of spike signals following exponential and Pareto distributions. In the inverse transformation method, the random sequence di (i = 1, 2, ⋯, n) is given by di = ξ(μ, σ) = (−1.0/σ)log(U) + μ for the exponential distribution, where U denotes a uniform random variable on (0, 1). In this formula, σ corresponds to the rate parameter of the exponential function that determines the distribution scale. The parameter μ shifts the distribution in the positive direction so that its lower bound is μ; this parameter corresponds to the parameter τ in Eq (1). Accordingly, we set μ to 20 ms and σ to 5 ms. By contrast, for the Pareto distribution, the random sequence di (i = 1, 2, ⋯, n) is given by di = ξ(μ, σ) = μ/U^{1/σ}. In this model, the location and attenuation rate are given by the parameters μ and σ, respectively. We set the parameter μ to 20 ms, and we performed simulations for two different cases of the shape parameter: σ = 7.5 and σ = 20.
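For reference, a compact sketch of this inverse-transform sampling is given below. The formulas are those quoted above; the uniform draws, seed, and sample size are our illustrative additions.

```python
import numpy as np

rng = np.random.default_rng(3)

# Inverse-transform sampling of the two cases described above (sketch).
mu, sigma_exp = 20.0, 5.0        # exponential case: location mu, rate sigma
mu_p, sigma_par = 20.0, 7.5      # Pareto case: location mu, shape sigma

U = 1.0 - rng.uniform(size=100_000)              # uniform on (0, 1], avoids log(0)
d_exp = (-1.0 / sigma_exp) * np.log(U) + mu      # d_i = xi(mu, sigma), exponential case
d_par = mu_p / U ** (1.0 / sigma_par)            # d_i = xi(mu, sigma), Pareto case

print("exponential case: min %.2f ms, mean %.2f ms" % (d_exp.min(), d_exp.mean()))
print("Pareto case:      min %.2f ms, mean %.2f ms" % (d_par.min(), d_par.mean()))
```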

Fig 3 presents the simulation results when ξ(μ, σ) follows an exponential distribution. First, Fig 3(a) confirms that ξ(μ, σ) follows an exponential distribution. Fig 3(b) shows the resulting time intervals of the spike signals obtained by solving Eq (2) with ξ randomly updated each time a spike signal is generated. This confirms that the time interval of the generated spike signals follows an exponential distribution. Fig 3(c) shows the variation in the current I (which corresponds to Is in the Methods section) at the synapse and the membrane voltage V. Once the voltage reached −55 mV, it triggered the generation of a spike signal, which was immediately transmitted to the synapse, where the current was attenuated according to the exponential decay model. As mentioned earlier, in a single-neuron problem, the signal flows in one direction, and the current Is decays immediately compared with the variation in the membrane voltage. Consequently, Is does not affect the behavior of the entire system. The circular dots in Fig 3(d) present a histogram of the maximum time interval measured over 100,000 trials. Each trial iterated 1000 time steps of the time evolution of Eq (2). The histogram was normalized such that the distribution sum was 1. Notably, the obtained histogram followed the Gumbel distribution, which is consistent with EVT. The coefficient of determination (R-squared) of the fits was greater than 0.99 in all cases. Accordingly, we demonstrated that in single-neuron problems, the histogram of the maximum time intervals follows a Gumbel distribution when the time intervals follow an exponential distribution.
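A procedure of this kind can be reproduced with standard tools; the sketch below (our choice of scipy.stats.gumbel_r and of all sizes, not necessarily the authors' pipeline) collects per-trial maxima of exponentially distributed intervals, fits a Gumbel distribution by maximum likelihood, and reports an R-squared value against the normalized histogram.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)

# Sketch: maxima of exponentially distributed inter-spike intervals, fitted by a
# Gumbel distribution. Trial count is reduced from the 100,000 used in the study.
mu, sigma, n_per_trial, n_trials = 20.0, 5.0, 1000, 10_000
d = mu + rng.exponential(1.0 / sigma, size=(n_trials, n_per_trial))
d_max = d.max(axis=1)                                   # one maximum per trial

loc, scale = stats.gumbel_r.fit(d_max)                  # maximum-likelihood fit
hist, edges = np.histogram(d_max, bins=50, density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
pred = stats.gumbel_r.pdf(centers, loc=loc, scale=scale)
r2 = 1.0 - np.sum((hist - pred) ** 2) / np.sum((hist - hist.mean()) ** 2)
print(f"loc = {loc:.3f}, scale = {scale:.3f}, R^2 = {r2:.4f}")
```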

Fig 3. Simulation results when ξ(μ, σ) follows an exponential distribution.

(a) distribution of ξ(μ, σ), (b) distribution of the time interval, (c) example snapshot of spike signals, and (d) obtained Gumbel distribution.

https://doi.org/10.1371/journal.pone.0306605.g003

Fig 4 shows the simulation results for the case in which ξ(μ, σ) follows a Pareto distribution, for a small and a large shape parameter, σ = 7.5 and σ = 20. As reported in previous transmission delay experiments, the distribution of NTPs primarily follows an exponential distribution [32–34]. However, in some cases, the NTP distributions exhibited sharper peaks and longer tails, similar to those of the Pareto distribution. In response to these observations, we investigated the histogram of the maximum NTPs for the Pareto distribution. Fig 4(a)–4(d) shows the results for σ = 20. First, (a) shows an example snapshot of the spike signals for reference, while (b) shows the obtained distribution of time intervals and the fitting results using a Pareto distribution. The results indicate that the measured distribution followed a Pareto distribution. Because the maximum and minimum values are widely separated in the Pareto distribution, we show the logarithmic scale of (b) in (c) to confirm the existence of the long tail, which is a hallmark of the Pareto distribution. Fig 4(d) shows a histogram of the maximum time intervals measured in one million trials. Each trial iterated 1000 time steps of the time evolution of Eq (2). The histogram was normalized such that the distribution sum was 1. The coefficient of determination (R-squared) was greater than 0.99. Notably, the obtained histogram followed the Fréchet distribution, which is consistent with EVT.

Fig 4. Simulation results when ξ(μ, σ) follows a Pareto distribution.

(a)–(d) show the results for σ = 20: (a) example snapshot of spike signals, (b) distribution of the time interval, (c) logarithmic scale of (b), and (d) obtained Fréchet distribution. In contrast, (e)–(h) show the results for σ = 7.5: (e) example snapshot of spike signals, (f) distribution of the time interval, (g) logarithmic scale of (f), and (h) obtained Fréchet distribution.

https://doi.org/10.1371/journal.pone.0306605.g004

Fig 4(e)–4(h) shows the results for σ = 7.5. Fig 4(e) shows a snapshot of a spike signal. Fig 4(f) and 4(g) show the distribution of time intervals on normal and logarithmic scales, respectively. Unlike the case of σ = 20, a drastically delayed case of d > 430 ms was observed for σ = 7.5. Even in this extreme case, a Pareto distribution fits the measured distribution of the time intervals while maintaining its characteristic high peak and long tail. Fig 4(h) shows a histogram of the maximum time intervals measured in one million trials. Each trial iterated 1000 time steps of the time evolution of Eq (2). The histogram was normalized such that the distribution sum was 1. Notably, the obtained histogram followed the Fréchet distribution, which is consistent with the theory. Accordingly, we demonstrated that in single-neuron problems, the histogram of the maximum time intervals follows the Fréchet distribution when the time intervals follow a Pareto distribution.

Discussion

For classical single-neuron experiments such as the aforementioned squid experiments [10–15], the extreme value distributions obtained in this study can directly explain the frequency distribution of the maximum delay between the arrival intervals of spike signals with statistical fluctuations. Conversely, for a more realistic neuronal series with a myriad of connected neurons, we should note that Boudkkazi et al. [63–65] pointed out that both short- and long-term synaptic plasticity may be caused by the modulation of synaptic delay. They further suggested that the amplitude and duration of presynaptic action potentials determine the synaptic delay at excitatory synapses in the hippocampus and neocortex. In brief, they argued that the features (amplitude and duration) of the spike signal may determine synaptic plasticity. This situation is consistent with that described by the LIFM with stochastic fluctuations in the time constant presented in this study. Therefore, the extreme-value statistical distribution, which is the frequency distribution of the maximum delay of the spike signal obtained in this study, may directly represent the frequency distribution that causes short- or long-term synaptic plasticity. However, it may be difficult to distinguish between the two because of the transition from short- to long-term synaptic plasticity. Synaptic plasticity is strongly associated with learning and memory [35–37]. Thus, the extreme-value statistical distribution, which is the frequency distribution of the maximum latency, may be an indicator of memory capacity or, conversely, forgetfulness, i.e., the frequency of short-term memory lapses.

On the other hand, in real neural networks, neurons are connected not only in series but also in parallel. In a parallel-connected system, the probability of receiving a spike signal at neuron i is the sum of the probabilities that a spike signal is transmitted to neuron i from each of several input neurons independently connected to neuron i. Consider the stochastic process of an event in which a spike signal arrives at neuron i. The probability of receiving a spike signal from one neuron k (k = 1, 2, ⋯, na) of the na neurons connected to neuron i can be represented as a Poisson process characterized by the parameter λk. In this case, the sequence of random variables for the arrival times of spike signals at neuron i is characterized by the parameter λ = λ1 + λ2 + ⋯ + λna. Therefore, extreme-value statistical distributions can also be discussed in the same manner as in this study. Nevertheless, to obtain the extreme value distribution of the maximum delay in a real neural network with a parallel system, it may be easier to fit the frequency distribution of the maximum values of the measured data in a neuron (corresponding to neuron i above) using the GEVD rather than deriving the distribution analytically or theoretically. Izhikevich further reported on the observation of polychronization phenomena in parallel neuronal systems [66]. Several studies in related fields have reported the synchronization of events characterized by extreme-value distributions [67, 68]. This suggests that if the frequency distribution of the maximum delay specific to each group of neurons in a parallel-connected system can be obtained using the GEVD, then the synchronization phenomena of their different extreme-value distributions may also be investigated. In other words, it is possible to discuss the relationship between the synchronization patterns of different extreme value distributions and physiological phenomena such as learning, memory, and forgetting. These issues should be investigated further in future studies. A small numerical illustration of the superposition argument is given below.
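The superposition step can be checked numerically; in the sketch below (ours, with illustrative rates and window length), independent Poisson spike trains from na = 4 input neurons are merged, and the mean inter-arrival time of the merged train is compared with 1/(λ1 + … + λna).

```python
import numpy as np

rng = np.random.default_rng(5)

# Sketch: superposing independent Poisson spike trains gives a Poisson process whose
# rate is the sum of the individual rates, so the merged inter-arrival times are
# again exponential with mean 1/sum(lam_k). Rates and window are illustrative.
lam = np.array([0.01, 0.02, 0.03, 0.04])     # rates per ms of n_a = 4 input neurons
T = 200_000.0                                # observation window (ms)

trains = [np.cumsum(rng.exponential(1.0 / l, size=int(1.5 * l * T))) for l in lam]
merged = np.sort(np.concatenate([tr[tr < T] for tr in trains]))
d = np.diff(merged)                          # inter-arrival times at the receiving neuron

print("expected mean interval (ms):", 1.0 / lam.sum())        # 10 ms
print("measured mean interval (ms):", round(float(d.mean()), 2))
```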

Conclusion

This study described nerve transmission, a complex physiological reaction, as a stochastic process and showed that the distribution of the maximum time interval of spike signals follows extreme-order statistics. By introducing statistical variance into the time constant of the Leaky Integrate-and-Fire model, a deterministic time evolution model of spike signals, we allowed randomness to emerge in the time interval of the spike signals. We confirmed that the time interval of the spike signal follows an exponential distribution if the time constant follows an exponential distribution function. In this case, our theory and simulations confirmed that the histogram of the maximum time interval follows the Gumbel distribution, which is one of the three types of extreme-value statistics. In addition, we confirmed that the histogram of the maximum time interval follows a Fréchet distribution when the time interval of the spike signal follows a Pareto distribution. Our results show that neurotransmission delays can be described using extreme value statistics. In this study, we used the classical LIFM. Although several improved LIFM variants have been presented previously, as long as the elapsed time of spike signal transmission follows a Poisson process, the results from such improved models with excellent waveform reproducibility are expected to follow the same patterns as those presented in this study. In addition, we discussed the relationship between the extreme value distribution of spike signals and physiological phenomena, such as synaptic plasticity, memory, and forgetting, as well as the potential for applying the findings to more realistic networks. Overall, this study successfully demonstrated that the distribution of extreme values can serve as a new indicator for nerve transmission problems.

Acknowledgments

The authors would like to thank Editage (www.editage.jp) for the English language editing.

References

1. Pereda AE. Electrical synapses and their functional interactions with chemical synapses. Nature Reviews Neuroscience. 2014;15(4):250–263. pmid:24619342
2. Henley C. Foundations of neuroscience. [sn]; 2021.
3. Xiang Z, Tang C, Chang C, Liu G. A new viewpoint and model of neural signal generation and transmission: Signal transmission on unmyelinated neurons. Nano Research. 2021;14(3):590–600.
4. Katz B, Miledi R. The measurement of synaptic delay, and the time course of acetylcholine release at the neuromuscular junction. Proceedings of the Royal Society of London Series B Biological Sciences. 1965;161(985):483–495. pmid:14278409
5. Wang J, Miller MI, Ogielski AT. In: Eeckman FH, editor. A Stochastic Model Of Synaptic Transmission and Auditory Nerve Discharge (Part I). Boston, MA: Springer US; 1994. p. 147–152. Available from: https://doi.org/10.1007/978-1-4615-2714-5_24.
6. Sauer M, Stannat W. Reliability of signal transmission in stochastic nerve axon equations. Journal of Computational Neuroscience. 2016;40(1):103–111. pmid:26781640
7. Dutta S, Detorakis G, Khanna A, Grisafe B, Neftci E, Datta S. Neural sampling machine with stochastic synapse allows brain-like learning and inference. Nature Communications. 2022;13(1):2571. pmid:35546144
8. Shin YW, O’Donnell BF, Youn S, Kwon JS. Gamma oscillation in schizophrenia. Psychiatry Investig. 2011;8(4):288–296. pmid:22216037
9. Becker S, Nold A, Tchumatchenko T. Formation and synaptic control of active transient working memory representations. bioRxiv. 2020.
10. Hodgkin AL, Huxley AF. A quantitative description of membrane current and its application to conduction and excitation in nerve. The Journal of Physiology. 1952;117(4):500. pmid:12991237
11. Armstrong CM, Bezanilla F. Currents Related to Movement of the Gating Particles of the Sodium Channels. Nature. 1973;242(5398):459–461. pmid:4700900
12. Aldrich RW, Corey DP, Stevens CF. A reinterpretation of mammalian sodium channel gating based on single channel recording. Nature. 1983;306(5942):436–441. pmid:6316158
13. Horn R, Vandenberg CA. Statistical properties of single sodium channels. Journal of General Physiology. 1984;84(4):505–534. pmid:6094703
14. Clancy CE, Rudy Y. Linking a genetic defect to its cellular phenotype in a cardiac arrhythmia. Nature. 1999;400(6744):566–569. pmid:10448858
15. Schwiening CJ. A brief historical perspective: Hodgkin and Huxley. J Physiol. 2012;590(11):2571–2575. pmid:22787170
16. Stein RB. Some models of neuronal variability. Biophys J. 2008;7(1):37–68.
17. Feng J. Is the integrate-and-fire model good enough?–review. Neural Networks. 2001;14(6):955–975. pmid:11665785
18. Gerstner W, Kistler WM. Spiking neuron models: single neurons, populations, plasticity; 2002. Available from: https://doi.org/10.1017/CBO9780511815706.
19. Lansky P, Sanda P, He J. The parameters of the stochastic leaky integrate-and-fire neuronal model. Journal of Computational Neuroscience. 2006;21(2):211–223. pmid:16871351
20. Igarashi J, Yamaura H, Yamazaki T. Large-Scale Simulation of a Layered Cortical Sheet of Spiking Network Model Using a Tile Partitioning Method. Frontiers in Neuroinformatics. 2019;13. pmid:31849631
21. Igarashi J, Shouno O, Fukai T, Tsujino H. Real-time simulation of a spiking neural network model of the basal ganglia circuitry using general purpose computing on graphics processing units. Neural Networks. 2011;24(9):950–960. pmid:21764258
22. Srinivasan R, Chiel HJ. Fast Calculation of Synaptic Conductances. Neural Computation. 1993;5(2):200–204.
23. Hendricks WD, Westbrook GL, Schnell E. Early detonation by sprouted mossy fibers enables aberrant dentate network activity. Proceedings of the National Academy of Sciences. 2019;116(22):10994–10999. pmid:31085654
24. Humphries M, Lepora N, Wood R, Gurney K. Capturing dopaminergic modulation and bimodal membrane behaviour of striatal medium spiny neurons in accurate, reduced models. Frontiers in Computational Neuroscience. 2009;3. pmid:20011223
25. Brauer W, Reisig W. Carl Adam Petri and “Petri nets”. Fundamental Concepts in Computer Science. 2009;3(5):129–139.
26. Petri CA. Communication with automata: Volume 1 Supplement 1. DTIC Document; 1966.
27. Thong WJ, Ameedeen MA. A Survey of Petri Net Tools. In: Sulaiman HA, Othman MA, Othman MFI, Rahim YA, Pee NC, editors. Advanced Computer and Communication Engineering Technology. Cham: Springer International Publishing; 2015. p. 537–551.
28. Fisher RA, Tippett LHC. Limiting forms of the frequency distribution of the largest or smallest member of a sample. Mathematical Proceedings of the Cambridge Philosophical Society. 1928;24(2):180–190.
29. Fréchet M. Sur la loi de probabilité de l’écart maximum. Annales de la Société Polonaise de Mathématique. 1927. Available from: https://cybra.lodz.pl/Content/6198/AnnSocPolMathe_t.VI_1927.pdf.
30. Gnedenko B. Sur La Distribution Limite Du Terme Maximum D’Une Serie Aleatoire. Annals of Mathematics. 1943;44(3):423–453.
31. Taylor AE. A study of Maurice Fréchet: II. Mainly about his work on general topology, 1909–1928. Archive for History of Exact Sciences. 1985;34(4):279–380.
32. Wierenga CJ, Wadman WJ. Miniature Inhibitory Postsynaptic Currents in CA1 Pyramidal Neurons After Kindling Epileptogenesis. Journal of Neurophysiology. 1999;82(3):1352–1362. pmid:10482754
33. Trigo FF, Sakaba T, Ogden D, Marty A. Readily releasable pool of synaptic vesicles measured at single synaptic contacts. Proceedings of the National Academy of Sciences. 2012;109(44):18138–18143. pmid:23074252
34. Saveliev A, Khuzakhmetova V, Samigullin D, Skorinkin A, Kovyazina I, Nikolsky E, et al. Bayesian analysis of the kinetics of quantal transmitter secretion at the neuromuscular junction. Journal of Computational Neuroscience. 2015;39(2):119–129. pmid:26129670
35. Knoblauch A, Körner E, Körner U, Sommer FT. Structural Synaptic Plasticity Has High Memory Capacity and Can Explain Graded Amnesia, Catastrophic Forgetting, and the Spacing Effect. PLOS ONE. 2014;9(5):1–19. pmid:24858841
36. Kauwe G, Pareja-Navarro KA, Yao L, Chen JH, Wong I, Saloner R, et al. KIBRA repairs synaptic plasticity and promotes resilience to tauopathy-related memory loss. The Journal of Clinical Investigation. 2024;134(3). pmid:38299587
37. van Rossum MCW, Shippi M, Barrett AB. Soft-bound Synaptic Plasticity Increases Storage Capacity. PLOS Computational Biology. 2012;8(12):1–11. pmid:23284281
38. Hosking JRM, Wallis JR, Wood EF. Estimation of the Generalized Extreme-Value Distribution by the Method of Probability-Weighted Moments. Technometrics. 1985;27(3):251–261.
39. Singh VP. In: Generalized Extreme Value Distribution. Dordrecht: Springer Netherlands; 1998. p. 169–183.
40. Abdulali BAA, Abu Bakar MA, Ibrahim K, Mohd Ariff N. Extreme Value Distributions: An Overview of Estimation and Simulation. Journal of Probability and Statistics. 2022;2022:5449751.
41. Amin MT, Khan F, Ahmed S, Imtiaz S. Risk-based fault detection and diagnosis for nonlinear and non-Gaussian process systems using R-vine copula. Process Safety and Environmental Protection. 2021;150:123–136.
42. Hubert S, Baur F, Delgado A, Helmers T, Räbiger N. Simulation modeling of bottling line water demand levels using reference nets and stochastic models. In: 2015 Winter Simulation Conference (WSC); 2015. p. 2272–2282.
43. Geng F, Dubos GF, Saleh JH. Spacecraft obsolescence: Modeling, value analysis, and implications for design and acquisition. In: 2016 IEEE Aerospace Conference; 2016. p. 1–13.
44. Rama D, Andrews JD. A reliability analysis of railway switches. Proceedings of the Institution of Mechanical Engineers, Part F: Journal of Rail and Rapid Transit. 2013;227(4):344–363.
45. Naoi T, Kagawa Y, Nagino K, Niwa S, Hayashi K. Extreme value analysis of the velocity of axonal transport by kinesin and dynein. Biophysical Journal. 2022;121(3, Supplement 1):400a.
46. Fonseca A, Porto F, Ferro M, Ogasawara E, Bezerra E. Analysis of precipitation data in Rio de Janeiro city using Extreme Value Theory. In: Anais Estendidos do XXXVII Simpósio Brasileiro de Bancos de Dados. Porto Alegre, RS, Brasil: SBC; 2022. p. 193–198.
47. Artha AA, Sofro A. Rainfall Prediction in East Java Using Spatial Extreme Value Theory. Journal of Physics: Conference Series. 2019;1417(1):012010.
48. Guermah T, Rassoul A. Study of extreme rainfalls using extreme value theory (case study: Khemis-Miliana region, Algeria). Communications in Statistics: Case Studies, Data Analysis and Applications. 2020;6(3):364–379.
49. Einmahl JH, Magnus JR. Records in athletics through extreme-value theory. Journal of the American Statistical Association. 2008;103(484):1382–1391.
50. Einmahl JHJ, Smeets SGWR. Ultimate 100-m world records through extreme-value theory. Statistica Neerlandica. 2011;65(1):32–42.
51. Arderiu A, de Fondeville R. Influence of advanced footwear technology on sub-2 hour marathon and other top running performances. Journal of Quantitative Analysis in Sports. 2022;18(1):73–86.
52. Tsuzuki S, Yanagisawa D, Nishinari K. Effect of walking distance on a queuing system of a totally asymmetric simple exclusion process equipped with functions of site assignments. Phys Rev E. 2018;98:042102.
53. Arita C. Queueing process with excluded-volume effect. Phys Rev E. 2009;80:051119. pmid:20364959
54. Arita C, Schadschneider A. Exact dynamical state of the exclusive queueing process with deterministic hopping. Phys Rev E. 2011;84:051127. pmid:22181388
55. Arita C, Schadschneider A. Dynamical analysis of the exclusive queueing process. Phys Rev E. 2011;83:051128. pmid:21728511
56. Simons-Weidenmaier NS, Weber M, Plappert CF, Pilz PK, Schmid S. Synaptic depression and short-term habituation are located in the sensory part of the mammalian startle pathway. BMC Neuroscience. 2006;7(1):38. pmid:16684348
57. Cinlar E. Introduction to stochastic processes. Courier Corporation; 2013.
58. Sekirnjak C, du Lac S. Intrinsic firing dynamics of vestibular nucleus neurons. J Neurosci. 2002;22(6):2083–2095. pmid:11896148
59. Geisler C, Brunel N, Wang XJ. Contributions of Intrinsic Membrane Dynamics to Fast Network Oscillations With Irregular Neuronal Discharges. Journal of Neurophysiology. 2005;94(6):4344–4361. pmid:16093332
60. Negro CAD, Chandler SH. Physiological and Theoretical Analysis of K+ Currents Controlling Discharge in Neonatal Rat Mesencephalic Trigeminal Neurons. Journal of Neurophysiology. 1997;77(2):537–553. pmid:9065827
61. Vogel C. Computational Methods for Inverse Problems. The SIAM series on Frontiers in Applied Mathematics; 2002.
62. Devroye L. Non-uniform Random Variate Generation. Springer-Verlag; 1986.
63. Boudkkazi S, Carlier E, Ankri N, Caillard O, Giraud P, Fronzaroli-Molinieres L, et al. Release-dependent variations in synaptic latency: a putative code for short- and long-term synaptic dynamics. Neuron. 2007;56(6):1048–1060. pmid:18093526
64. Boudkkazi S, Fronzaroli-Molinieres L, Debanne D. Presynaptic action potential waveform determines cortical synaptic latency. J Physiol. 2011;589(Pt 5):1117–1131. pmid:21224227
65. Boudkkazi S, Debanne D. Enhanced Release Probability without Changes in Synaptic Delay during Analogue-Digital Facilitation. Cells. 2024;13(7). pmid:38607012
66. Izhikevich EM. Polychronization: computation with spikes. Neural Comput. 2006;18(2):245–282. pmid:16378515
67. Chowdhury SN, Majhi S, Ozer M, Ghosh D, Perc M. Synchronization to extreme events in moving agents. New Journal of Physics. 2019;21(7):073048.
68. Gao M, Zhao Y, Wang Z, Wang Y. A modified extreme event-based synchronicity measure for climate time series. Chaos: An Interdisciplinary Journal of Nonlinear Science. 2023;33(2):023105. pmid:36859221