
Fig 1.

Overview of the extended Arbor framework.

(A) Graphical representation of cell morphologies via Arbor GUI (v0.8.0-dev-065b1c9 shown here). (B) Key code features for model simulation with Arbor. On the left: facilitated definition of models via the Python frontend by setting up a so-called recipe (where in some cases, there are shortcuts that further accelerate the definition of certain simple models). On the right: modular integration of high performance computing (HPC) hardware such as graphics processing units (GPUs) and message passing interface (MPI) ranks. (C) General approach of the new plasticity framework, enabling the simulation of a wide variety of synaptic plasticity mechanisms in arbitrary model paradigms (examples of which are provided in Sect 3).


Table 1.

Parameters for the plain STDP model.

Where two values are given, the first was used for the detailed analysis shown in Fig 2C and in Fig A in S1 Appendix, and the second was used to obtain the classical curve shown in Fig 2D (also cf. [37]).


Fig 2.

Classical spike-timing-dependent plasticity (STDP), spike-driven homeostasis, and calcium-driven synaptic plasticity in Arbor.

Arbor implementations (in lighter blue) are cross-validated by comparison to Brian 2 (in orange) or a custom simulator. New features of the Arbor core code are highlighted in italic. (A) STDP paradigm where two Poisson spike sources stimulate an excitatory and an inhibitory synapse connecting to a single neuron (spikes shown in blue and red, respectively). The excitatory connection undergoes STDP (results shown in (C)). Image of dice from Karen Arnold/publicdomainpictures.net. (B) STDP paradigm where two regular spike trains, phase-shifted by delay , drive the weight dynamics of a single synapse (results shown in (D)). (C) Strength of the excitatory synapse, subject to STDP, as shown in (A) (goodness of fit between Arbor and Brian 2: , ). (D) Classical STDP curve, obtained as detailed in (B) (, ). (E) Homeostasis with two Poisson spike sources connected to an LIF neuron via current-based delta synapses (results shown in (G–I), averaged over 50 trials). One of these Poisson inputs spikes at a fixed rate and is plastic, while the other spikes at a varying rate and is static. (F) Paradigm of calcium-based, spike-timing- and rate-dependent synaptic plasticity, using the model by Graupner & Brunel [21]. Two regular spike trains, phase-shifted by delay , drive the stochastic weight dynamics of a single synapse (results shown in (J)). (G) Time course of the varying rate of the input in (E). (H) Strength of the plastic synapse, subject to the dynamics given in (E) (, ). (I) Firing rate of the neuron shown in (E) in the presence of homeostatic plasticity dynamics (, ). (J) Calcium-driven synaptic plasticity as shown in (F). Reproduction of the numerical DP curve from Fig 2 of the related paper [21] (the mean is given by the dark dashed line). Every synapse is subject to 60 spike pairs presented at . Arbor results were averaged over 4000 trials, the solid blue line indicates the mean and the shaded region the 95% confidence interval. Quantification of deviation between the mean curves: , . Note that the generation of this plot is now also demonstrated in an Arbor tutorial [38].
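For orientation, the pair-based rule that yields a curve of the kind shown in (D) can be sketched in a few lines of plain Python. The parameter values (a_plus, a_minus, tau_plus, tau_minus) are generic placeholders, not the values from Table 1:

```python
import math

def stdp_dw(dt, a_plus=0.01, a_minus=0.012, tau_plus=20.0, tau_minus=20.0):
    """Weight change for a single pre/post spike pair under pair-based STDP.

    dt = t_post - t_pre in ms. Positive dt (pre before post) potentiates,
    negative dt depresses. All parameter values are illustrative placeholders.
    """
    if dt > 0:
        return a_plus * math.exp(-dt / tau_plus)
    if dt < 0:
        return -a_minus * math.exp(dt / tau_minus)
    return 0.0

# Sampling the curve over a range of phase shifts, as in panel (D).
curve = [(dt, stdp_dw(dt)) for dt in range(-50, 51, 10)]
```

Sweeping the phase shift between the two regular spike trains and recording the resulting weight change per pair traces out the two exponential branches of the classical STDP curve.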


Table 2.

Parameters for the homeostatic plasticity model.
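A minimal sketch of a spike-driven homeostasis rule of the kind used in the paradigm of Fig 2E is given below. The specific update rule and the learning rate alpha are illustrative placeholders, not the exact model from the table:

```python
def homeostasis_step(w, post_spiked, nu_target, dt, alpha=1e-4):
    """One Euler step of a simple spike-driven homeostasis rule (a
    schematic placeholder, not necessarily the exact rule parameterized
    in Table 2): the weight grows at a rate proportional to the target
    firing rate and is reduced by each postsynaptic spike, so it settles
    where the actual output rate matches the target.
    """
    w += alpha * nu_target * dt  # steady drive toward a higher weight
    if post_spiked:
        w -= alpha               # each output spike pushes the weight back
    return w
```

At the fixed point, the spike-triggered reductions per unit time (alpha times the output rate) balance the steady growth term, which pins the output rate to nu_target.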


Table 3.

Parameters for the implementation of the calcium-based plasticity model by Graupner & Brunel [21].

Note that, as in the original mathematical model, the weights are kept dimensionless.
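The structure of the Graupner & Brunel model can be sketched as a single Euler-Maruyama update step in plain Python. All parameter values below are rough placeholders; the values actually used in the article are those listed in Table 3:

```python
import math
import random

def gb_step(rho, c, dt, pre_spike, post_spike,
            tau_ca=20.0, c_pre=1.0, c_post=2.0,
            theta_d=1.0, theta_p=1.3,
            gamma_d=100.0, gamma_p=200.0,
            rho_star=0.5, tau=1000.0, sigma=0.0):
    """One Euler-Maruyama step of a Graupner & Brunel (2012)-style synapse.

    rho is the synaptic efficacy, c the dimensionless calcium trace.
    All parameter values are placeholders; see Table 3 for the ones used.
    """
    # Calcium trace: exponential decay plus jumps on pre-/postsynaptic spikes.
    c += -c / tau_ca * dt
    if pre_spike:
        c += c_pre
    if post_spike:
        c += c_post
    # Calcium-gated potentiation/depression indicator terms.
    above_p = 1.0 if c >= theta_p else 0.0
    above_d = 1.0 if c >= theta_d else 0.0
    # Bistable drift keeps rho near 0 (DOWN) or 1 (UP) in the absence of input.
    drift = (-rho * (1.0 - rho) * (rho_star - rho)
             + gamma_p * (1.0 - rho) * above_p
             - gamma_d * rho * above_d)
    rho += drift * dt / tau
    # Activity-dependent noise, active only while calcium exceeds a threshold.
    rho += sigma * math.sqrt((above_p + above_d) * dt / tau) * random.gauss(0.0, 1.0)
    return rho, c
```

Iterating this step over a train of phase-shifted spike pairs and averaging over many noise realizations yields curves of the kind shown in Fig 2J.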


Fig 3.

Calcium-driven heterosynaptic plasticity in four spines on a dendritic branch.

Calcium is first introduced in spines 1 and 3 through synaptic input. Subsequently, calcium spatially distributes across the dendrite (according to Eq 14), which influences synaptic plasticity at other synapses, promoting either depression or potentiation. The parameter values are provided in Table 4. (A) Illustration generated using Arbor GUI [55]. Each segment is represented by a different color, and a segment can consist of multiple compartments. Spines 1–4 are located at , respectively. For the purpose of visualization, the morphology has been clipped at and , before scaling the dendrite along the x-axis by 2. (B) Paradigm of synaptic plasticity that depends on a spike-timing- and rate-dependent, diffusive calcium concentration (cf. [46]). Regular spike trains induce calcium injection in specific spines, eventually leading to weight changes (results shown in (C–D)). New features of the Arbor core code are highlighted in italic. (C) The change in the calcium level of each spine, and of the dendritic segments in between, in response to the stimulation to spines 1 and 3. Quantification of deviation between the simulators given by (): spine 1: (0.991, ); spine 2: (0.998, ); spine 3: (0.989, ); spine 4: (0.967, ); dendrite location 1: (0.998, ); dendrite location 2: (0.998, ); dendrite location 3: (0.998, ). (D) Synaptic weight changes, which follow the calcium level of the spines. Spines 1–3 undergo synaptic potentiation (elevated synaptic weights), while spine 4 undergoes depression (reduced synaptic weight). Quantification of deviation between the simulators (): spine 1: (0.982, 0.015); spine 2: (0.993, 0.014); spine 3: (0.981, 0.016); spine 4: (0.996, 0.003).
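The spatial spread of calcium shown in (C) amounts to solving a one-dimensional diffusion equation along the dendrite. The sketch below is a minimal explicit finite-difference scheme, a generic stand-in for Eq 14 with hypothetical step sizes rather than the article's discretization:

```python
def diffuse_1d(conc, d_coef, dx, dt, steps):
    """Explicit finite-difference integration of 1-D diffusion
    (d[Ca]/dt = D * d2[Ca]/dx2) with zero-flux ('sealed-end') boundaries,
    roughly sketching how calcium injected at individual spines spreads
    along the dendrite. Requires d_coef * dt / dx**2 <= 0.5 for stability.
    """
    r = d_coef * dt / dx**2
    assert r <= 0.5, "explicit scheme unstable for this step size"
    c = list(conc)
    for _ in range(steps):
        # Pad with mirrored end values to implement zero-flux boundaries.
        padded = [c[0]] + c + [c[-1]]
        c = [c[i] + r * (padded[i] - 2.0 * c[i] + padded[i + 2])
             for i in range(len(c))]
    return c

# A calcium pulse injected at a single 'spine' spreads to its neighbours.
profile = [0.0] * 9
profile[4] = 1.0
out = diffuse_1d(profile, d_coef=1.0, dx=1.0, dt=0.25, steps=20)
```

With zero-flux boundaries the total amount of calcium is conserved, which provides a quick sanity check for the scheme.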


Table 4.

Parameters for the calcium-based heterosynaptic plasticity model (also cf. Fig 3).

Note that the injection current amplitude I0 varies across implementations due to the differences mentioned in the main text.


Table 5.

Parameters for the model with calcium-based early-phase plasticity and STC-based late-phase plasticity based on [64] and [23].

The calcium concentration in this model is a dimensionless quantity since it is only considered in the synapses (see main text). We use parameters for the calcium-based early-phase model that were fitted to hippocampal slice data [21,71]. For networks, the calcium contribution parameters are corrected by a factor of 0.6 to account for in vivo conditions (cf. [70]).


Fig 4.

Basic early- and late-phase plasticity with synaptic tagging and capture (STC), cross-validated with stand-alone simulator, and memory recall performance with single-compartment model.

(A) Paradigm of two-phase synaptic plasticity with calcium-based early phase and late phase described by synaptic tagging and capture (see [23]). Specific spiking input drives the weight dynamics, which further depend on stochastic dynamics and diffusion of PRP (results shown in (C–G)). New features of the Arbor core code are highlighted in italic. (B) Fraction of a neuronal network consisting of excitatory (blue and dark blue circles) and inhibitory neurons (red circles). Following external input, the synapses between excitatory neurons undergo plastic changes implemented as detailed in (A), forming a Hebbian cell assembly (related results in (H)). (C) Averaged noisy early-phase synaptic weight (cf. Eq 17). The synapse receives spiking input at pre-defined times (indicated by bold gray arrows). Goodness of fit between the mean curves: , . (D) Limit cases of early- and late-phase synaptic weight (cf. Eqs 17 and 20). The presynaptic neuron is stimulated to spike at maximal rate (indicated by gray bar). The late-phase weight has been shifted for graphical reasons (cf. Eq 20; early phase: , ; late phase: , ). (E) Postsynaptic calcium concentration, successively crossing the thresholds for depression (LTD) and potentiation (LTP) (cf. Eq 19; , ). (F) The postsynaptic PRP concentration rises until it reaches its maximum through the continued stimulation (cf. Eq 21; , ). (G) Membrane potential of the postsynaptic neuron (, ). Basic early-phase plasticity dynamics (C,E,G): average across 10 batches, each consisting of 100 trials. Baseline levels are represented by fine, dotted lines. Basic late-phase plasticity dynamics (D,F): average across 10 batches, each consisting of 10 trials. Noise seeds were drawn independently for each trial. Results of Arbor are represented by continuous lines, results of the stand-alone simulator [60] by darker, dashed lines. For each curve, error bands represent the standard error of the mean (mostly too small to be visible). (H) Memory recall in networks of single-compartment neurons simulated with Arbor (qualitatively reproducing the point-neuron results of [23]). Pattern completion is measured by the coefficient Q (see Eq 22) for stimulated patterns of varied size (a varied number of neurons are stimulated for learning/recall). Average over 100 network realizations; error bars represent the 95% confidence interval.
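The tag-and-capture logic behind the late phase in (A) can be illustrated by a schematic update step in plain Python. This is a simplified sketch of the STC principle only, not the exact Eqs 20–21; all thresholds and time constants are placeholders:

```python
def stc_step(h, z, p, dt, h0=1.0,
             theta_tag=0.2, theta_pro=0.5, alpha=1.0,
             tau_p=3600.0, tau_z=3600.0):
    """One Euler step of a schematic synaptic tagging-and-capture (STC)
    rule (a simplified stand-in for Eqs 20-21; placeholder parameters).

    h: early-phase weight, z: late-phase weight, p: PRP concentration.
    A synapse is 'tagged' when its early-phase change is large enough;
    PRPs are synthesized when the change is even larger; the late-phase
    weight only moves where tag and PRPs coincide.
    """
    tagged_up = (h - h0) > theta_tag    # LTP tag
    tagged_down = (h0 - h) > theta_tag  # LTD tag
    # PRP synthesis is triggered by strong plasticity, then decays.
    synth = alpha if abs(h - h0) > theta_pro else 0.0
    p += (-p + synth) * dt / tau_p
    # Capture: the late-phase weight changes only with tag AND proteins.
    if tagged_up:
        z += p * (1.0 - z) * dt / tau_z
    elif tagged_down:
        z -= p * (z + 0.5) * dt / tau_z
    return z, p
```

The separation of a fast, decaying early phase from a slow, protein-gated late phase is what lets weak early-phase changes become persistent only when PRPs, possibly diffusing in from strongly stimulated synapses, are available for capture.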


Fig 5.

Memory recall in a recurrent network of multi-compartment neurons after learning and after consolidation.

Results obtained with Arbor for networks of different kinds of multi-compartment neurons, demonstrating the impact of different values of the PRP diffusivity on memory consolidation. Networks consist of ‘small’ cells (radius of ) or of ‘large’ cells (radius of ), with either short or long dendrites (in which cases each neuron comprises in total 31 or 48 compartments, respectively). The radius and length values are given in Table 6. (A,B) Illustrations of the cell structures used, generated using Arbor GUI [55]. Each segment is represented by a different color, and a segment can consist of multiple compartments. The illustrations are overlaid with sketches of more realistic neuron structures that would have roughly similar functional properties. (A) a small (left) and a large (right) cell with short dendrites, (B) the same with long dendrites (cf. Table 6). (C) Paradigm of two-phase synaptic plasticity with calcium-based early phase and late phase described by synaptic tagging and capture. The impact of the diffusion of PRPs can be examined using different morphological neuron structures. New features of the Arbor core code are highlighted in italic. (D) Fraction of a neuronal network consisting of excitatory multi-compartment (blue and dark blue circles) and inhibitory neurons (red circles). Following external input, the synapses between excitatory neurons undergo plastic changes implemented as detailed in (C), forming a Hebbian cell assembly (related results in (E–L)). (E–H) Memory recall measured by pattern completion coefficient Q (see Eq 22) for a stimulated subset of varied size (a varied number of neurons are stimulated for learning/recall). A value of Q > 0 indicates the successful recall of a memory representation. Average over 100 network realizations. Error bars represent the 95% confidence interval. (E) Recall stimulation at after learning (technically, , but late-phase plasticity does not occur on such short timescales). (F) Recall stimulation at after learning, . (G) Recall stimulation at after learning, . (H) Recall stimulation at after learning, . (I–L) Same as (E–H), but for large cells that consist of segments of twice the diameter.


Table 6.

Cell morphology parameters for the network simulations of memory formation and consolidation with morphological neurons (Sect 3.6).

We investigated each combination of the cell and dendrite sizes. The values are chosen to approximate the effective functional dynamics that arise from the structures of real neurons (essentially, pyramidal cells) in hippocampus or neocortex. See the main text for more details.


Fig 6.

Benchmarking results of runtime and memory use with the synaptic memory consolidation model in Arbor and point-neuron simulators.

Results are shown for the 10 s-recall paradigm in networks of 2000 neurons. The single-compartment simulations in Arbor as well as the point-neuron simulations in Brian 2 (with cpp_standalone device) [4,79] and in the custom stand-alone simulator [60] are conducted as described in Sect 3.5; they are represented by data points on the left-hand side. The Arbor simulations with multi-compartment/morphological neurons of 48 compartments are conducted as described in Sect 3.6 and represented by data points on the right-hand side. Results are given for different hardware systems, HWS1: an older desktop computer (Intel Core i5-6600 CPU @ 3.30GHz, DDR3-RAM, using 1 thread), HWS2: a newer compute server (AMD Ryzen Threadripper PRO 5995WX CPU, DDR4-RAM, using 1 thread, in specified cases with NVIDIA T1000 GPU). For Arbor, results are distinguished between standard CPU execution, CPU with SIMD support, and with GPU support. The respective left bars with blue data points show the total runtime of the simulations (comprising initialization and state propagation phases). Measurements were performed using hyperfine version 1.15. The respective right bars with orange data points show the use of main memory, given by the maximum over time of the number of ‘dirty’ bytes, including private and shared memory, as returned from pmap. Note that the main memory use is not meaningful for the GPU cases, since the GPU has its own additional memory. Data points represent the average over 10 trials; error bars represent the standard deviation. Also cf. Fig L in S1 Appendix.


Fig 7.

Benchmarking of simulation runtime for large-scale networks.

(A–C) Total wallclock time to initialize and execute a simulation with 32768 cells over in Arbor and CoreNEURON. A busyring network of simple-branchy cells with tree depth 2 is used, run on the HWS2 system (AMD Ryzen Threadripper PRO 5995WX CPU with 64 cores, DDR4-RAM) with (A) CoreNEURON, (B) Arbor with SIMD, (C) Arbor with SIMD with STDP mechanisms for the random synapses. The fastest paradigm for each implementation is highlighted by a gray box. (D) Scaling of the fastest results for the total runtime over network size. (E) Sketch of the busyring network consisting of rings of integrate-and-fire neurons (shown as blue disks), connected internally via excitatory synapses, and across the whole network via random synapses of weight zero. One neuron of each ring receives external stimulation. (F) Scaling of the setup and propagation runtime related to the total runtimes in (D). (G) Scaling of the setup and propagation runtime for the fastest total runtime using an additional NVIDIA T1000 8GB GPU. For the case with STDP, the GPU-mediated speedup is indicated by dark gray bars. All values are averaged over 10 trials, with coefficient of variation in all cases. See Tables 8 and 9 for more details.


Table 7.

Parameters for benchmarking large networks.

Used for benchmarking large networks with busyring in Arbor and CoreNEURON (STDP parameters are only used in Arbor). For CPU threads and MPI ranks, all combinations of powers of 2 from the given ranges were considered (the ranges were chosen according to the 64-core CPU of the HWS2 system).
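The sweep over thread/rank combinations described above can be generated as follows; the range bounds here are hypothetical stand-ins for the actual ranges listed in the table:

```python
from itertools import product

# Hypothetical bounds: powers of 2 from 1 to 64 for both quantities
# (the actual ranges are the ones listed in Table 7).
threads = [2 ** k for k in range(7)]  # 1, 2, 4, ..., 64
ranks = [2 ** k for k in range(7)]

# All combinations of powers of 2, as described in the caption.
combos = list(product(threads, ranks))
```

Each combination can then be passed to the benchmark driver; note that on a 64-core CPU such as the HWS2 system, combinations with threads × ranks > 64 oversubscribe the hardware.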


Table 8.

Wallclock time measurements for busyring benchmark.

Total runtime results are provided for networks of simple-branchy cells as reported by Arbor and CoreNEURON (using the fastest paradigm as detailed in Fig 7AC). The shares of the setup and state propagation phases are given in brackets, respectively. Results are collected with the HWS2 system (AMD Ryzen Threadripper PRO 5995WX CPU with 64 cores, DDR4-RAM, GPU not used). All values are averaged over 10 trials, with coefficient of variation in all cases. Arbor was run with SIMD enabled. In CoreNEURON, the most extensive simulation did not finish (d.n.f.) because it exceeded the available memory. See Table 9 for results with GPU.


Table 9.

Wallclock time measurements for busyring benchmark with GPU.

Total runtime results are provided for networks of simple-branchy cells as reported by Arbor (using the fastest paradigm as detailed in Fig 7B,C). The shares of the setup and state propagation phases are given in brackets, respectively. Results are collected with the HWS2 system (AMD Ryzen Threadripper PRO 5995WX CPU with 64 cores, DDR4-RAM, with NVIDIA T1000 8GB GPU). All values are averaged over 10 trials, with coefficient of variation in all cases. Arbor was run with SIMD enabled. In CoreNEURON, none of the simulations finished because they exceeded the available memory. See Table 8 for results without GPU.


Table 10.

Overview of the code used to perform the simulations presented in this article.

The code of the Arbor implementations can also be found at https://doi.org/10.5281/zenodo.18088337.
