
Connectivity concepts in neuronal network modeling

  • Johanna Senk,

    Roles Conceptualization, Data curation, Formal analysis, Funding acquisition, Investigation, Methodology, Project administration, Resources, Software, Validation, Visualization, Writing – original draft, Writing – review & editing

    Affiliation Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA-Institut Brain Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany

  • Birgit Kriener,

    Roles Conceptualization, Data curation, Formal analysis, Investigation, Methodology, Resources, Software, Visualization, Writing – original draft, Writing – review & editing

    Affiliation Institute of Basic Medical Sciences, University of Oslo, Oslo, Norway

  • Mikael Djurfeldt,

    Roles Conceptualization, Data curation, Formal analysis, Investigation, Methodology, Resources, Writing – original draft, Writing – review & editing

    Affiliation PDC Center for High-Performance Computing, KTH Royal Institute of Technology, Stockholm, Sweden

  • Nicole Voges,

    Roles Conceptualization, Data curation, Formal analysis, Investigation, Methodology, Resources, Visualization, Writing – original draft, Writing – review & editing

    Affiliation INT UMR 7289, Aix-Marseille University, Marseille, France

  • Han-Jia Jiang,

    Roles Conceptualization, Data curation, Formal analysis, Investigation, Methodology, Resources, Visualization, Writing – original draft, Writing – review & editing

    Affiliations Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA-Institut Brain Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany, Institute of Zoology, University of Cologne, Cologne, Germany

  • Lisa Schüttler,

    Roles Conceptualization, Data curation, Formal analysis, Investigation, Methodology, Resources, Writing – review & editing

    Affiliation Chair of Theory of Science and Technology, Human Technology Center, RWTH Aachen University, Aachen, Germany

  • Gabriele Gramelsberger,

    Roles Conceptualization, Funding acquisition, Investigation, Methodology, Resources, Supervision, Writing – review & editing

    Affiliation Chair of Theory of Science and Technology, Human Technology Center, RWTH Aachen University, Aachen, Germany

  • Markus Diesmann,

    Roles Conceptualization, Funding acquisition, Investigation, Methodology, Resources, Supervision, Writing – original draft, Writing – review & editing

    Affiliations Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA-Institut Brain Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany, Department of Psychiatry, Psychotherapy and Psychosomatics, School of Medicine, RWTH Aachen University, Aachen, Germany, Department of Physics, Faculty 1, RWTH Aachen University, Aachen, Germany

  • Hans E. Plesser,

    Roles Conceptualization, Funding acquisition, Investigation, Methodology, Resources, Supervision, Writing – original draft, Writing – review & editing

    Affiliations Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA-Institut Brain Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany, Faculty of Science and Technology, Norwegian University of Life Sciences, Ås, Norway

  • Sacha J. van Albada

    Roles Conceptualization, Data curation, Formal analysis, Funding acquisition, Investigation, Methodology, Project administration, Resources, Software, Supervision, Writing – original draft, Writing – review & editing

    Affiliations Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA-Institut Brain Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany, Institute of Zoology, University of Cologne, Cologne, Germany


Sustainable research on computational models of neuronal networks requires published models to be understandable, reproducible, and extendable. Missing details or ambiguities about mathematical concepts and assumptions, algorithmic implementations, or parameterizations hinder progress. Such flaws are unfortunately frequent, and one reason is a lack of readily applicable standards and tools for model description. Our work aims both to advance complete and concise descriptions of network connectivity and to guide the implementation of connection routines in simulation software and neuromorphic hardware systems. We first review models made available by the computational neuroscience community in the repositories ModelDB and Open Source Brain, and investigate the corresponding connectivity structures and their descriptions in both manuscript and code. The review comprises the connectivity of networks with diverse levels of neuroanatomical detail and exposes how connectivity is abstracted in existing description languages and simulator interfaces. We find that a substantial proportion of the published descriptions of connectivity is ambiguous. Based on this review, we derive a set of connectivity concepts for deterministically and probabilistically connected networks and also address networks embedded in metric space. Besides these mathematical and textual guidelines, we propose a unified graphical notation for network diagrams to facilitate an intuitive understanding of network properties. Examples of representative network models demonstrate the practical use of the ideas. We hope that the proposed standardizations will contribute to unambiguous descriptions and reproducible implementations of neuronal network connectivity in computational neuroscience.

Author summary

Neuronal network models are simplified and abstract representations of biological brains that allow researchers to study the influence of network connectivity on the dynamics in a controlled environment. Which neurons in a network are connected is determined by connectivity rules, and even small differences between rules may lead to qualitatively different network dynamics. These rules either specify explicit pairs of source and target neurons or describe the connectivity on a statistical level abstracted from neuroanatomical data. We review articles describing models together with their implementations published in community repositories and find that incomplete and imprecise descriptions of connectivity are common. Our study proposes guidelines for the unambiguous description of network connectivity by formalizing the connectivity concepts already in use in the computational neuroscience community. Further, we propose a graphical notation for network diagrams unifying existing diagram styles. These guidelines serve as a reference for future descriptions of connectivity and facilitate the reproduction of insights obtained with a model as well as its further use.


The connectivity structure of a neuronal network model is sometimes described with a statement such as “N_s source neurons and N_t target neurons are connected randomly with connection probability p”. One interpretation of this statement is an algorithm that considers each possible pair of source and target neurons exactly once and connects each such pair with probability p. Other interpretations of the same statement may allow multiple connections between the same pair of neurons, apply the connection probability non-uniformly across neuron pairs, or include further assumptions on the distribution of incoming and outgoing connections per neuron. These choices do not just affect the network structure, but can have substantial consequences for the network dynamics. To illustrate this point, we simulate two balanced recurrent networks of randomly connected excitatory and inhibitory spiking neurons based on the model of Brunel [1] (see Section “Materials and methods” for model details). Fig 1A shows the dynamics of the original model described in [1], where the number of incoming connections per neuron (in-degree) is fixed to K_in. In contrast, Fig 1B shows the dynamics of a network in which the number of outgoing connections per neuron (out-degree) is fixed to K_out. The total number of connections is the same in both networks, and so, by implication, is one interpretation of the network’s connection probability. The network-averaged spike rate has a similar pattern across time in both instantiations. However, while the rates of individual neurons are alike for the network with fixed in-degree, they are broadly distributed for the network with fixed out-degree. These small and comparatively simple example network simulations already demonstrate that ambiguities in network descriptions can result in networks with statistically different activities.
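The difference between these interpretations can be made concrete in code. The following sketch draws connections for the same nominal connection probability under three rules: pairwise Bernoulli, fixed in-degree, and fixed out-degree. The function names are illustrative and do not come from any particular simulator.

```python
import numpy as np

rng = np.random.default_rng(seed=1234)

def pairwise_bernoulli(n_source, n_target, p):
    """Visit each (source, target) pair exactly once; connect with probability p."""
    mask = rng.random((n_source, n_target)) < p
    return np.argwhere(mask)  # each row is one (source, target) connection

def fixed_in_degree(n_source, n_target, k_in):
    """Each target receives exactly k_in connections from randomly drawn sources;
    multiple connections between the same pair can occur under this rule."""
    targets = np.repeat(np.arange(n_target), k_in)
    sources = rng.integers(0, n_source, size=n_target * k_in)
    return np.column_stack((sources, targets))

def fixed_out_degree(n_source, n_target, k_out):
    """Each source sends exactly k_out connections to randomly drawn targets."""
    sources = np.repeat(np.arange(n_source), k_out)
    targets = rng.integers(0, n_target, size=n_source * k_out)
    return np.column_stack((sources, targets))

# With k_in = k_out = p * N, all three rules give the same expected total
# number of connections, but the degree distributions differ.
assert len(fixed_in_degree(100, 100, 10)) == len(fixed_out_degree(100, 100, 10)) == 1000
```

Under pairwise Bernoulli connectivity, both in- and out-degrees are binomially distributed; the fixed-degree rules make one side of the degree distribution degenerate, which is exactly the difference underlying Fig 1A versus Fig 1B.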

Fig 1. Spiking neuron network simulations of a balanced random network with (A) fixed in-degree and (B) fixed out-degree.

Top left: Raster plots show spike times of 50 out of 10,000 excitatory (E) and 50 out of 2,500 inhibitory (I) neurons. Bottom left: Time-resolved spike rate from spike-count histogram across time with temporal bin width of 5 ms. Top right: Per-neuron spike rate from spike-count histogram for individual neurons. Bottom right: Normalized distribution of per-neuron spike rates with bin width of 2/s. Model details are given in Section “Materials and methods”.

For more complex networks with spatial embedding, hierarchical organization, or higher specificity of connections, the task of fully specifying the connectivity becomes correspondingly more daunting. As researchers are building more complete models of the brain, simultaneously explaining a larger set of its properties, the number of such complex models is steadily increasing. This increase is accelerated by the rise of large-scale scientific projects which carefully assemble rich connectivity graphs. For example, the Allen Institute for Brain Science has published a model of mouse primary visual cortex with a layered structure, multiple cell types, and specific connectivity based on spatial distance and orientation preference [2]. The Blue Brain microcircuit reproduces a small patch of rat somatosensory cortex featuring cell-type-specific connectivity based on paired recordings and morphological neuron reconstructions [3, 4]. The multi-area spiking network model of macaque visual cortex by Schmidt et al. [5] is a multi-scale network with specific connectivity between 32 cortical areas, each composed of interconnected excitatory and inhibitory neurons in four cortical layers. The network structure of these models is typically specified by a combination of explicit connectivity based on neuroanatomical data and connection patterns captured by probabilistic or deterministic rules. Regardless of how connectivity is specified, reproducible research requires unambiguous network descriptions and corresponding algorithmic implementations.

Mathematically defined models of neuronal networks are to be distinguished from their concrete implementation and execution in the form of simulations. Any given model has uniquely defined dynamics apart from potential stochasticity; model variants can obviously exist, but each variant is a model in its own right. The dynamics of all but the simplest models can only be fully explored using simulations, i.e., requiring the instantiation and execution of the model in the form of a computer program. Any abstract model can be implemented in multiple ways. A central challenge in computational neuroscience, as well as other fields relying on simulations, is to define abstract models so precisely that the researcher only needs to decide how to implement the model, but not what to implement. Our focus in this work is on facilitating such precise model descriptions, particularly with regard to network connectivity.

First, we review some terminology. Model neuronal networks generally consist of nodes, which represent individual neurons or neural populations; the latter is common in models describing activity in terms of average firing rates. In a concrete simulation code, network nodes are typically first created with a dedicated command. Network nodes are connected by edges. Connections are typically directed, i.e., signals flow from a source node to a target node. When nodes represent individual neurons, edges represent one or a small number of individual synapses, and when nodes represent groups of neurons, edges represent an average over many synapses. We use the term connection to mean a single, atomic edge between network nodes. Neuronal network simulation software usually provides a command allowing one to create such an edge between any two network nodes.

In many models, nodes are grouped into populations of homologous neurons. Populations can be nested hierarchically: e.g., one may consider an entire brain area as a population, the neurons within a specific layer of that area, or all neurons of a given cell type within the layer. Edges in a network can also be grouped, reflecting anatomical structure (nerve bundles), purpose (inhibitory recurrent connections), or developmental processes. We call such groups of edges projections. They play an important role in specifying and instantiating models: we can specify network connectivity by providing, for each projection between any pair of populations, a connection rule which defines how to create atomic edges (connections) between individual nodes. A projection is thus defined by a triplet of source population, target population, and connection rule, and represents a collection of atomic connections.
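The projection triplet can be sketched as a small data structure that is expanded into atomic connections when the network is instantiated. All names below are illustrative and are not taken from any specific simulator.

```python
from dataclasses import dataclass
from typing import Callable, List, Sequence, Tuple

Edge = Tuple[int, int]  # (source node ID, target node ID)

@dataclass
class Projection:
    """A projection: source population, target population, connection rule."""
    source: Sequence[int]
    target: Sequence[int]
    rule: Callable[[Sequence[int], Sequence[int]], List[Edge]]

def all_to_all(sources, targets):
    """Deterministic rule: one directed edge for every (source, target) pair."""
    return [(s, t) for s in sources for t in targets]

def instantiate(projection):
    """Expand a projection into its collection of atomic connections."""
    return projection.rule(projection.source, projection.target)

proj = Projection(source=[0, 1], target=[2, 3], rule=all_to_all)
assert instantiate(proj) == [(0, 2), (0, 3), (1, 2), (1, 3)]
```

Keeping the rule as a named, swappable component preserves the high-level structure of the network in the specification instead of burying it in construction loops.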

Neuronal network simulation software commonly provides support for connecting populations based on connection rules, which may be deterministic or probabilistic. A key challenge in the field of computational neuroscience, which we address here, is to precisely define connection rules and their properties, so that model descriptions obtain a unique interpretation and can be matched precisely to the implementations of these rules provided by simulation software.

A command to instantiate a single model neuron of a given type and a command to create an atomic edge between any pair of neurons is all that is required to construct a neuronal network model of arbitrary complexity in a computer—the model implementer just has to arrange for the right combination of calls through loops and other control structures. However, this approach has two significant shortcomings. First, most information about the structure of the network is lost. As the network is described on the lowest possible level, terms describing higher-order organizational principles of brain structures such as cell populations, layers, areas, and projections between them do not occur; they are implicitly contained in the algorithms. This limits the readability of the model specification and thereby the ability to verify and reuse the code. It also precludes systematic visualization or exploration of the network with computational tools. Second, a simulation engine reading the code will have little opportunity to parallelize network construction. Network specifications at higher conceptual levels, on the other hand, leave a simulation engine the freedom to use efficient parallelization, for example when connecting two populations of neurons in an all-to-all fashion. With the progress of neuroscience towards larger and more structured networks, the degree of parallelization becomes relevant. In typical simulations, network creation can easily become the dominant component of the total simulation time and may hinder a research project because of the forbidding compute resources it would require [6, 7]. High-level connectivity descriptions can help by exposing organizational principles for the simulator to exploit and giving the neuroscientist access to the expert knowledge encoded in the simulator design and the reliability of code used in many peer-reviewed studies. 
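The contrast between the two levels of description can be sketched as follows. The per-edge primitive `connect` and the high-level routine `connect_all_to_all` are hypothetical stand-ins; real simulators offer analogous functions.

```python
import itertools

edges = []

def connect(source, target):
    """Per-edge primitive: sufficient to build any network, one call at a time."""
    edges.append((source, target))

exc = range(0, 4)   # tiny example populations
inh = range(4, 6)

# Low-level construction: the all-to-all pattern is only implicit in the
# control flow and must be executed serially, one call per edge.
for s in exc:
    for t in inh:
        connect(s, t)

# High-level construction: the same projection as one declarative call,
# exposing the pattern so an engine could create the edges in parallel.
def connect_all_to_all(sources, targets):
    edges.extend(itertools.product(sources, targets))

connect_all_to_all(inh, exc)
```

Both variants produce identical edge lists; the difference lies in what a reader, a visualization tool, or a parallel simulation engine can recover from the specification.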
To be useful to computational neuroscientists, connectivity concepts for neuronal network models should encompass connectivity patterns occurring in real brains. On the one hand, small brains of simple organisms such as C. elegans exhibit highly specific connection patterns [8], which tend to require explicit connectivity matrices for their specification. The brains of more complex organisms such as mammals, on the other hand, have a multi-scale organization that can be captured at different levels of abstraction. Their brains are divided into multiple regions, each of which may contain different neuron types forming populations with statistically similar connectivity patterns. Some regions, such as the cerebellar cortex, have highly stereotyped, repetitive connectivity motifs [9]. Elsewhere, for instance in the cerebral cortex, the neuron-level connectivity appears more random [10, 11]. Nevertheless, the cerebral cortex exhibits a number of organizational principles, including a laminar and columnar architecture. On a larger spatial scale, the cortex is subdivided into different functional areas. Each of these areas is often in itself a complex, hierarchically structured network of substructures. These structural elements may be connected to each other, resulting in connectivity across various spatial scales.

At a basic level of organization, pairs of neurons are connected with a probability that depends on both the source and the target area and population. For instance, neurons within the same cortical layer are generally more likely to be connected to each other than neurons located in different layers [2, 12–14]. Neurons can synapse on themselves [15] and can establish more than one synapse on any given target neuron [16]. Connection probability decays with distance both at the level of areas [17, 18] and at the level of individual neurons. Within local cortical circuits, the length constant for this decay is on the order of 150–300 μm [19, 20]. Typical assumptions for the local connectivity are a Gaussian or exponential decay of the connection probability between pairs of neurons with increasing distance between their cell bodies [21, 22]. Both within and between cortical areas, excitatory neurons form so-called patchy connections consisting of spatially clustered synapses [23–26]. Within areas, this patchiness becomes apparent at the medium distance range of millimeters. Another important organizing principle is that neurons exhibit like-to-like connectivity. For instance, neurons with more similar receptive fields are more likely to be connected [27–30]. In addition, having common neighbors increases the chance for a pair of neurons or areas to be connected, also known as the homophily principle [31]. Such homophily results in the presence of connection motifs of three or more neurons beyond what would be expected based on pairwise connection probabilities alone [32]. At higher levels of organization, the cerebral cortex has a hierarchically modular structure [33]. Sometimes cortex is also described as having small-world properties [34, 35]. In our treatment of connectivity concepts, we focus on the most fundamental properties of network circuitry but also touch upon such more complex organizational aspects.
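The two distance-dependent profiles mentioned above can be written down compactly. In this sketch, the peak probability p0 and the length constants are illustrative parameter values, with the exponential length constant chosen from the reported 150–300 μm range.

```python
import numpy as np

def gaussian_profile(d, p0, sigma):
    """Connection probability with Gaussian decay over intersomatic distance d (um)."""
    return p0 * np.exp(-d**2 / (2.0 * sigma**2))

def exponential_profile(d, p0, lam):
    """Connection probability with exponential decay and length constant lam (um)."""
    return p0 * np.exp(-d / lam)

# At one length constant, the exponential profile has dropped to 1/e
# (about 37%) of its peak value.
d = np.array([0.0, 200.0, 400.0])
p = exponential_profile(d, p0=0.1, lam=200.0)
assert np.isclose(p[0], 0.1)
assert np.isclose(p[1], 0.1 / np.e)
```

Such a profile can be plugged into a pairwise Bernoulli rule, with the probability evaluated per pair from the distance between the cell bodies.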

With on the order of 10^4 incoming connections to each of the 10^10 neurons of human cortex [36, 37], the estimated total number of connections in the full cortical network is 10^14. Only the study of natural-density, full-scale networks gives reliable information about features such as the structure of pairwise correlations in the brain’s neuronal activity [38]. Downscaled networks obtained by reducing neuron and synapse numbers may only preserve some characteristics of the network dynamics, for example the firing rates, if parameters are adjusted for compensation. In the present study, we describe connectivity concepts based on the principles of neuroanatomical organization, abstracted in a way that allows for mathematical formalization and algorithmic implementations in simulators. The concepts target both the connectivity of small proof-of-concept network models with only hundreds or thousands of interconnected neurons and large-scale networks approaching the full size and connection density of biological brains. In this endeavor, we take into account the current practice in the field by considering published models and corresponding open-source code. These resources provide insight into the connectivity types relevant to computational neuroscientists and the way in which these are described and implemented. Our aim is to derive a unified vocabulary, along with mathematical and graphical notations, for describing connectivity in a concise and non-ambiguous way. Besides supporting the reproducibility, sharing, and reuse of neuronal network models, this effort facilitates efficient implementations of high-level connectivity routines in dedicated simulation software and hardware. Here, we use the term “high-level” to refer to the abstraction of network connectivity patterns to mathematical functions of few parameters. It is possible for a network model to be partly described by such high-level connectivity, whereas other aspects of the connectivity are specified in detail. The combined connectivity of such a model can then have many parameters. Abstractions of network organization encode our understanding of the structure of the system and enable more systematic analyses, in some cases direct comparisons with analytical theory, and greater comparability between models.

The concepts we discuss specify the connectivity between network nodes that are most often individual neurons but may equally well be neural populations or brain regions. While the nodes can also be multi-compartment neurons, we are not concerned with detailed connectivity below the neuronal level such as to specific dendritic compartments. In the case of plastic networks, we only consider the initial state, and do not describe the evolution of the connectivity.

We first review published models to identify which network structures are used by the community and how they are described. Next we turn to description languages and simulators to review how connectivity is abstracted in simulation interfaces. Based on this dual review, the following section proposes connectivity concepts for deterministic and probabilistic networks, and also addresses networks embedded in metric space. In addition to these mathematical and textual descriptions of the concepts, we propose a graphical notation for illustrating network structures. Our results conclude with a few examples of how the connectivity of neuronal network models is concisely and unambiguously described and displayed using our notation. Finally we discuss our results in the context of the evolution of the field.

Preliminary work has been published in abstract form [39, 40].


Networks used in the computational neuroscience community

We review network models for which both a manuscript and an implementation have been published. Models in computational neuroscience are often made available via one of a few common repositories. We select the most prominent repositories relevant to the present study, and in the following characterize the models fitting our scope contained in them.

The models entering this study are in the online repositories ModelDB [41, 42] and Open Source Brain (OSB) [43]. Both repositories have been maintained for years (or even decades in the case of ModelDB) and they support the curation, further development, visualization, and simulation of a large variety of models in different ways. ModelDB stores its models using the infrastructure of SenseLab. Implementations on ModelDB generally aim to serve as a static reference for a published article (although some entries link to version-controlled repositories), and no restrictions on programming languages or simulators are made. In contrast, all models indexed by OSB are stored in public version-controlled repositories such as GitHub to foster ongoing and collaborative model development. Models in OSB are standardized in the sense that they are made available in the model description languages NeuroML [44, 45] or PyNN [46], besides potentially further versions.

As this study focuses on network connectivity, we review network models of point neurons, simple multicompartment neurons (without considering connectivity specific to compartments), and neural mass models, but exclude neural field models as well as more detailed models. Therefore, we narrow the broad collection of models in ModelDB down to the MicrocircuitDB section Connectionist Networks.

Spiking, binary, and rate neurons are all accepted as network nodes. Plastic networks in which the connection strengths (e.g., spike-timing dependent plasticity [47]) or even the connectivity itself (structural plasticity [48]) evolve over time are not a priori excluded. However, for plastic networks we only consider structure independent of dynamics, i.e., only the result of the initial network construction. If an article describes multiple different network models, we concentrate on the one most relevant for this study. Only connections between neuronal populations are taken into account; connections with external stimulating and recording devices are ignored. For some of the indexed publications, the full (network) model is not actually available in the repository, and we exclude such incomplete models from this study.

All selected network models are characterized based on five main and several sub-categories and the results are summarized in Figs 2–6. For the main categories, we formulate the following guiding questions:

  1. Metadata (Fig 2) When, where, and by whom were article and code published?
  2. Description (Fig 3) How does the article describe the connectivity and is the description complete?
  3. Implementation (Fig 4) How is the connectivity technically implemented?
  4. Network (Fig 5) How are network nodes and edges characterized?
  5. Concepts (Fig 6) Which connectivity concepts are realized?
Fig 2. Metadata: When, where, and by whom were article and code published?

(A) Pie chart of repositories storing model code. “ModelDB”: section Microcircuit DB Connectionist Networks of ModelDB. “OSB”: Open Source Brain. (B) Abbreviated journal name in stacked, horizontal bar plot. (C) Year of publication in bar plot. (D) Location of all authors’ labs based on affiliations as Venn diagram. Intersections indicate collaborations between labs situated on different continents. Not included in the diagram are two publications of which all authors are affiliated with labs only in Australia and South America, respectively.

Fig 3. Description: How does the article describe the connectivity and is the description complete?

(A) Location of connectivity description. “Main”: in main manuscript; “Reference”: reference to other publication; “Supplement”: in separate file belonging to the same publication. (B) Means used to describe connectivity. Descriptions of the parameterization of connections are only counted if they are crucial for understanding whether connections exist. (C) Reference to model implementation in manuscript. “Software”: name of software given; “URL”: explicit hyperlink or DOI referencing published code; “Version”: software version given; “None”: implementation not mentioned (number of occurrences given in legend). Intersections in panels A–C mean that the connectivity is described in different locations, a combination of different means is used, and different references to the model implementation are given, respectively. (D) Whether connectivity is just specified as “random” or a connection probability is given without defining the connection rule. (E) Whether description is insufficient or inconclusive for implementing the network model.

Fig 4. Implementation: How is the connectivity technically implemented?

(A) Name of software framework (dedicated simulator or general-purpose software). (B) Implementation of connections. “Custom”: hard-coded; “Built-in”: routine from dedicated simulator. The intersection means that a part of the network connectivity is explicitly coded in a general-purpose language and another part uses built-in simulator functionality.

Fig 5. Network: How are network nodes and edges characterized?

(A) Interpretation of network nodes. “Single neuron”: connections exist between single neuronal units; “Population”: connections are established between nodes that represent multiple neurons. (B) Dynamics of the nodes. “Rate”: continuous signal; “Spiking”: spiking mechanism; “Binary”: on-off mechanism. (C) Plasticity. “Static”: identity of connections and weight values fixed; “Plastic”: potential changes of connections and weights during simulation. The intersections in panels A and C refer to models which have both properties in different parts of the networks.

Fig 6. Concepts: Which connectivity concepts are realized?

(A) Whether connections in the model are probabilistic or deterministic. (B) Whether at least some part of the model contains distance-dependent connections. (C) Name of deterministic connectivity rule specifying the connectivity in at least a part of the model network (compare Fig 7A and 7B). (D) Name of probabilistic connectivity rule specifying the connectivity in at least a part of the model network (compare Fig 7C–7F). One network model can use multiple deterministic and probabilistic rules or may use none of the given rules; therefore the numbers of models in panels C and D do not add up to the total number of studies. (E) Whether self-connections are allowed (illustrated in Fig 7G). The intersections in panels A, B, and E refer to models which have different properties in different parts of the networks. (F) Whether multiple connections from a source node to a target node are allowed (illustrated in Fig 7H).

Our model review comprises a total of 42 selected models, with about 80% of the code found in ModelDB and about 20% in OSB (Fig 2A). The corresponding articles are listed in Section “Reviewed network models” in “Materials and methods”. They have appeared in a number of peer-reviewed journals and were published between 1996 and 2020; approximately 70% of the models were published since 2013 (Fig 2B and 2C). Scientists increasingly appreciate the value of reproducible research, which leads to more published code and in particular more use of dedicated repositories [42, 43, 49–52]. Journal policies also play a role, as some journals explicitly encourage or even enforce the publication of code. For instance, with seven occurrences the Journal of Neuroscience is overrepresented in our list of journals (Fig 2B), and a possible explanation is that journal’s recommendation to deposit code of new computational models in suitable repositories such as ModelDB. The analysis of the authors’ affiliations shows that the models under consideration were developed mostly through collaborations spanning a small number of different labs, mainly from Europe and North America (Fig 2D).

Each article studied describes the model connectivity to some degree (Fig 3A), but about a quarter of the models are described partially outside the article proper, namely in referenced publications or supplementary material. One reason for an incomplete description in the main article might be space restrictions by the journal. Another reason is that some models build on previously published ones, and the authors therefore decide to state only the differences from the original model. Without exception, all articles use text to describe the connectivity; mostly the text is combined with other means such as illustrations, equations, and tables (Fig 3B). These other means may be only supportive, as is often the case with illustrations, or necessary to convey the complete connectivity. Although not encountered in the studies considered here, another means of description may be code or pseudo-code. The majority of articles contain some information about the model implementation. By model implementation we mean the executable model description, defined either via an interface to a dedicated neuronal network simulator or via a general-purpose programming language. More than a third of the publications provide a direct link or other reference to the published code (Fig 3C). Since usage and default values of a software may change in the course of development, giving the software name but not the software version with which the model code needs to be run can be insufficient. More than a quarter of the articles considered do not mention a model implementation at all. We find that one reason for this observation is that the authors published the code after the article; another reason is that the published implementation occasionally does not originate from the authors of the article.

Next, we ask whether randomness in the connectivity is underspecified, meaning that either the word “random” is used without further specification, or a connection probability is given without definition (Fig 3D). This underspecification is identified in almost 10% of the articles. We find more than a third of the descriptions ambiguous (Fig 3E) due to missing details or imprecise formulations. We consider a connectivity description to be unambiguous if 1) in the case of deterministic connectivity, it enables reconstructing the identity of all connections in the model; or 2) in the case of probabilistic connectivity, it enables determining either the connectivity distribution, or the exact algorithm by which the connections were drawn. Here, we focus on the identity of the connections, including their directionality, and not on their parameterization (e.g., weights and delays).

Turning from the connectivity description in articles to the model implementations, we find that a wide variety of software is used for implementing the connectivity (Fig 4A). This software is either a general-purpose programming language such as MATLAB, Python, or C/C++, or a dedicated simulator for neuronal networks such as NEURON, NEST, or Brian. The prevalence of code for the commercial closed-source interpreter MATLAB (more than a third) may be explained by the fact that it is widely used in many research labs for analyzing experimental data and therefore has a tradition in neuroscience. Almost 80% of the model codes use custom, ad hoc implementations for defining the connectivity instead of, or in addition to, high-level functions provided by simulators (Fig 4B). Precomputed or loaded adjacency matrices also fall into the category “custom”.

In the following, we characterize the model networks according to their node and edge properties since these affect the interpretation of connectivity. If the connectivity is defined between single neurons, a connection may represent a single synapse or several individual synapses. However, if the connectivity is defined between nodes that represent populations of neurons, a connection is rather understood as an average over multiple synapses, i.e., an effective connection. This type of connectivity exists in one third of the studied models (Fig 5A). About half of the networks use rate neurons with continuous dynamics as nodes (Fig 5B); rate dynamics often coincide with the interpretation of nodes as neural populations. The other half use spiking neurons, i.e., neuron models which integrate their inputs and fire action potentials if a threshold is crossed. We encounter only one study using binary neurons that switch between On and Off states. About 40% of the models included have plastic connections in at least some part of the network (Fig 5C). Since changes in the connection structure or the weights occur during the course of the simulation, we only take the initial connectivity into account when identifying connectivity concepts.

Fig 6 combines the connectivity descriptions in the articles with the available model implementations to reveal which connectivity concepts are actually realized in the studies. Properties that remain underspecified are marked as “Unclear”. The number of occurrences of “Unclear” does not add up to the number of connectivity descriptions identified as ambiguous (Fig 3E). The reasons are that 1) in some cases the ambiguity in the description concerns an aspect not covered by the categories of Fig 6 (e.g., the number of connections is fixed, but the number is not given), and 2) sometimes ambiguity in the description is resolved by clear code. More than half of the models use only deterministic connection rules, and in the other half the connections are created using probabilistic rules (Fig 6A); one model combines both deterministic and probabilistic rules. Fig 7 illustrates connectivity patterns reflecting the most common rules: the deterministic rules “one-to-one” and “all-to-all”, and the probabilistic rules “random, fixed in-degree”, “random, fixed total number”, and “pairwise Bernoulli”. Among the deterministic rules, “all-to-all” dominates in the studies considered here (Fig 6C). About a quarter of the networks included here use spatial connections in at least some part of the model network, meaning that the nodes are embedded in a metric space and the connections depend on the relative positions of source and target neurons (Fig 6B). Connections that could be described as “one-to-all” or “all-to-one” are here subsumed under the more general “all-to-all”. In particular, the plastic network models included tend to use “all-to-all” connectivity for the initial network state and then let the weights evolve. In the networks with population-model nodes, pairs of source and target nodes were connected one at a time; interpreting this as high-level connectivity is only possible by considering the network as a whole, in which case it corresponds to the rule with an explicit adjacency list, and we thus classify these cases as “explicit”. “Nearest-neighbor” connectivity could be seen as a special case of “one-to-one”, but we list it here explicitly. By far the most common probabilistic rule is “pairwise Bernoulli”: for each pair of nodes, at most one connection is created with a given probability (Fig 6D). The second most common rule is “random, fixed in-degree”. Examples of most of the remaining patterns depicted in Fig 7 are also observed, albeit in smaller numbers. Note that matched forward and reverse connections between pairs of neurons occur by construction with deterministic rules such as “all-to-all”, but can also occur by chance with probabilistic rules. In one case, we encounter gap junctions, which are symmetric by definition of the synapse model. Autapses, or self-connections [53], are not allowed or do not occur by construction in about half of the networks (Fig 6E). Multapses, i.e., multiple connections between the same pair of nodes [54, 55], are allowed in only a single study (Fig 6F). We define a multapse as a set of connections sharing the same source node and target node, and therefore also the same directionality. The individual connections of a multapse can, however, use different parameters such as weights and delays. In judging the presence of multapses, a few subtleties are involved. First, cases where modelers capture the effects of multiple biological synapses by single, strong model synapses are not identified as multapses. Second, even if multiple connections between a given source and target node are explicitly generated, their effects may be lumped in the low-level code of a simulator when the model dynamics is linear [56, Section 5.3].
Autapses and multapses are rarely discussed explicitly, but their presence can be inferred from other specifications: The “pairwise Bernoulli” rule, for instance, considers each pair of nodes for connection only once; multapses are thus excluded.

Fig 7. Connectivity patterns reflecting the most common rules.

The ordered set of sources is depicted by the green squares on the left. They are connected to the ordered set of targets, depicted by the orange squares on the right. The respective in- and out-degrees are given next to the nodes. (A) One-to-one. (B) All-to-all. (C) Random, fixed in-degree with Kin connections per target node. (D) Random, fixed out-degree with Kout connections per source node. (E) Random, fixed total number of connections Nsyn. (F) Pairwise Bernoulli with connection probability p. (G) Autapse (self-connection). (H) Multapse (multi-connection).

Description languages and simulators

A neuronal network simulator typically provides an idiosyncratic model description language or supports a pre-existing one, for example a cross-simulator description language like PyNN [46], NeuroML [44], NineML [57], CSA [58], or SONATA [59]. A less common case is where the simulator consists of a library with an API called by a general-purpose language such as is the case for SPLIT [60] and, to some extent, GeNN [61]. We here consider model description languages either tied to a particular simulator or supported by multiple simulators.

The ways in which network connectivity is described in such languages broadly fall into three main categories: procedural descriptions, declarative descriptions at a population-to-population level, and more general declarative descriptions using algebra. Some languages support more than one of these paradigms.

Procedural descriptions.

Most simulators provide a primitive for connecting a source neuron to a target neuron: Connect(source, target).

Typically, source and target above refer to indices in some neuron enumeration scheme. For example, NEST [62–64], NEURON [65, 66], and Arbor [67, 68] all have the concept of a global identifier or GID which associates a unique integer with each neuron. Many simulation environments offer a generic programming language in which algorithms based on Connect can be written to describe network connectivity. For example, the all-to-all connectivity pattern shown in Fig 7B, where each source neuron is connected to every target neuron, could be achieved by the procedural description:

for source in sources:
    for target in targets:
        Connect(source, target)

A common pattern in such algorithms is to loop over all possible source-target pairs, as above, and call Connect only if some condition is fulfilled.

If the condition above is random() < p, where random() returns a uniformly distributed random number r, 0 ≤ r < 1, we obtain the pairwise Bernoulli pattern with probability p as shown in Fig 7F.
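These two loop patterns can be made concrete in a short Python sketch; connect-like calls are replaced here by recording edges, and all names are illustrative rather than any particular simulator's API:

```python
import random

def build_edges(sources, targets, condition=None):
    """Loop over all possible source-target pairs and 'connect'
    (here: record the edge) when the condition holds."""
    edges = []
    for s in sources:
        for t in targets:
            if condition is None or condition(s, t):
                edges.append((s, t))  # stand-in for Connect(s, t)
    return edges

sources, targets = range(3), range(4)

# All-to-all (Fig 7B): no condition, every pair is connected.
all_to_all = build_edges(sources, targets)

# Pairwise Bernoulli: connect each pair with probability p.
random.seed(1)
p = 0.1
bernoulli = build_edges(sources, targets, lambda s, t: random.random() < p)
```

The Bernoulli variant visits each pair exactly once, so no multapses can arise, matching the "pairwise Bernoulli" rule as described in the text.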

Procedural descriptions are the most general form of network specification: Any kind of connectivity pattern can be created by combining suitable Connect calls. Procedural descriptions of connectivity at the neuron-to-neuron level are for instance supported by the simulators NEST [62–64], NEURON [65, 66], Arbor [67, 68], Brian [69], Moose [70], and Nengo [71], as well as by the description language PyNN [46].

Our example of the procedural approach already exposes two shortcomings. First, the explicit loop over all possible combinations is generic, but it is also costly if the condition is fulfilled in only a small fraction of cases. For particular distributions, an expert in statistics may know a more efficient method to create connections according to the desired distribution. Taking the example of a Bernoulli trial for each source-target pair, this knowledge can be encoded in a simulator function pairwise_bernoulli(), so that non-experts, too, can create Bernoulli networks efficiently. Second, the explicit loops describe a serial process down to the primitive Connect() between two nodes. This gives simulators little opportunity to parallelize network construction efficiently.
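One such expert method can be sketched as follows (a toy illustration under our own naming, not a simulator's actual implementation): the gaps between successful Bernoulli trials are drawn from a geometric distribution, so the cost scales with the number of created connections rather than with all Ns·Nt candidate pairs:

```python
import math
import random

def pairwise_bernoulli(n_sources, n_targets, p, rng=random.Random(0)):
    """Return Bernoulli(p) connections over all n_sources*n_targets pairs,
    jumping directly from one created connection to the next."""
    if p <= 0.0:
        return []
    if p >= 1.0:
        return [(s, t) for s in range(n_sources) for t in range(n_targets)]
    edges = []
    n_pairs = n_sources * n_targets
    i = -1
    log_q = math.log(1.0 - p)
    while True:
        # Geometric jump: number of failed trials before the next success.
        i += 1 + int(math.log(1.0 - rng.random()) / log_q)
        if i >= n_pairs:
            break
        edges.append(divmod(i, n_targets))  # flat pair index -> (source, target)
    return edges
```

For small p, this touches only the O(p·Ns·Nt) successful pairs instead of all Ns·Nt candidates, which is exactly the kind of optimization a rule-based simulator function can hide from the user.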

Declarative population-level descriptions.

A declarative description of connectivity specifies the connectivity at a conceptual level: It focuses on the connectivity pattern we want to obtain instead of the individual steps required to create it. Typically, the declarative description names a connectivity rule which is then used in setting up connectivity between two neuronal populations or from a population to itself. A common example is:
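A rule-based call of this kind can be sketched as a toy Python dispatcher; connect and the rule names below are hypothetical stand-ins for, e.g., NEST's nest.Connect or a PyNN Connector, not any real API:

```python
def connect(pre, post, rule="all_to_all", **params):
    """Toy dispatcher mimicking a declarative Connect(pre, post, rule) call.
    The rule names mirror common simulator vocabulary; the API is invented."""
    if rule == "one_to_one":
        if len(pre) != len(post):
            raise ValueError("one_to_one needs equal-sized populations")
        return list(zip(pre, post))
    if rule == "all_to_all":
        return [(s, t) for s in pre for t in post]
    if rule == "explicit":
        # connections given as an explicit list of source-target pairs
        return list(params["edge_list"])
    raise ValueError(f"unknown rule: {rule}")

edges = connect(range(3), range(100, 104), rule="all_to_all")
```

The caller states only which pattern is wanted; how the connections are created (and possibly parallelized) is left to the implementation of each named rule.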

Declarative descriptions operating on populations are expressive, since they specify connectivity in terms of generic rules. Simulator software can be optimized for each rule, especially through parallelization. Rule-based specification of connectivity helps the reader of the model description to understand the structure of the network and also allows visualization and exploration of network structure using suitable tools. Usually the user is limited to a set of pre-defined rules, although some simulation software allows users to add new rules.

Declarative population-level descriptions are for instance supported by the simulators NEST and Moose, and by the description languages PyNN (connectors) and NineML. Commonly provided connectivity rules are one-to-one, all-to-all, and variants of probabilistic rules. The “transforms” in Nengo can also be regarded as declarative descriptions of connectivity. NeuroML supports lists of individual connections. Its associated language NetworkML [44] provides declarative descriptions akin to those of PyNN and NineML, while its associated lower-level declarative language LEMS [45] supports the definition of types of connectivity based on partially procedural constructs (“structure element types”) such as ForEach and If, giving greater flexibility but departing to some extent from the spirit of declarative description.

Algebraic descriptions.

Algebraic descriptions of connectivity rival procedural descriptions in generality and expressiveness. Such descriptions are also declarative, with the advantage of facilitating optimization and parallelization.

The Connection Set Algebra (CSA) [58] is an algebra over sets of connections. It provides a number of elementary connection sets as well as operators on them. In CSA, connectivity can be concisely described using expressions of set algebra. Implementations demonstrate that the corresponding network connectivity can be instantiated in an efficient and parallelizable manner [72].

In CSA, a connection is represented by a pair of indices (i, j) which refer to the entities being connected, usually neurons. A source population of neurons can be enumerated by a bijection to a set of indices selected among the non-negative integers (1), and a target population can be similarly enumerated. A connection pattern can then be described as a set of pairs of indices. For example, if the source population consists of the neurons a and b, the target population of the neurons c and d, and both sets of neurons are enumerated from 0 to 1, a connection pattern consisting of a connection from a to c and a connection from b to d would in CSA be represented by {(0, 0), (1, 1)}.

However, in CSA it turns out to be fruitful to work with infinite sets of indices. E.g., the elementary (predefined) connection set δ = {(0, 0), (1, 1), …} can be used to describe one-to-one connectivity in general, regardless of source and target population size. We can work with CSA operators on infinite connection sets and extract the actual, finite, connection pattern at the end. Given finite source and target index sets S and T as above, we can extract the finite one-to-one connection pattern between S and T through the expression δ ∩ (S × T), where ∩ is the set intersection operator and × is the Cartesian product.

Another example of an elementary connection set is the set of all connections, Ω = {(i, j) | i, j ∈ ℕ0}. (2)

For the case of connections within a population (i.e., source and target populations are identical), it is now possible to create the set of all-to-all connectivity without self-connections as Ω − δ, (3) where − is the set difference operator.

Random pairwise Bernoulli connectivity can be described by the elementary parameterized connection set ρ(p), which contains each connection in Ω with probability p. The random selection operator ρN(n) picks n connections without replacement from the set it operates on, while the operators ρ0(k) and ρ1(k) randomly pick connections to fulfill a given out-degree or in-degree k, respectively.

Multapses are treated by allowing multisets, i.e., multiple instances of the same connection are allowed in the set. The CSA expression for random connectivity with a total number of n connections, without multapses, is ρN(n)(S × T), (4) where the Cartesian product S × T of the source and target index sets constitutes the possible neuron pairs to choose from.

By instead selecting from a multiset, we can allow up to m multapses: ρN(n)((S × T) ⊎ ⋯ ⊎ (S × T)), with m copies of S × T, (5) where ⊎ is the multiset union operator.

The operator M, with M C = C ⊎ C ⊎ ⋯, (6) replaces each connection in a set C with an infinity of the same connection, such that, e.g., ρ0(k)(M C) means picking connections in C to fulfill fan-out k, but now effectively with replacement. Without going into the details, multisets can also be employed to set limits on the number of multapses.
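The flavor of these expressions can be mimicked in a few lines of Python: a connection set is represented lazily as a membership predicate, so infinite elementary sets such as δ and Ω pose no problem, and a finite pattern is extracted by intersecting with a finite Cartesian product. This is a toy sketch, not the actual CSA implementation [58]:

```python
from itertools import product

# A connection set is a predicate over index pairs; this represents the
# infinite elementary sets delta and Omega without enumerating them.
delta = lambda i, j: i == j          # one-to-one
omega = lambda i, j: True            # all connections

def intersect(c1, c2):
    return lambda i, j: c1(i, j) and c2(i, j)

def difference(c1, c2):
    return lambda i, j: c1(i, j) and not c2(i, j)

def extract(cset, sources, targets):
    """Finite pattern: cset intersected with sources x targets."""
    return [(i, j) for i, j in product(sources, targets) if cset(i, j)]

one_to_one = extract(delta, range(3), range(3))                     # δ ∩ (S × T)
no_autapse = extract(difference(omega, delta), range(3), range(3))  # Ω − δ
```

The lazy-predicate representation captures the core idea of operating on infinite sets and deferring enumeration; the real algebra additionally supports multisets, parameterized random sets, and efficient parallel instantiation [72].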

Population-level connectivity rules of languages and simulators.

Most neural network description languages and simulators provide several descriptors or routines that can be used to specify standard connectivity patterns in a concise and reproducible manner. We here give an overview of the corresponding connection rules offered by a number of prominent model description languages and simulators. This brief review supplements the literature review to identify a set of common rules to be described more formally in the next section.

We have studied connectivity rules of the following model specification languages and simulators:

  1. NEST is a simulator which provides a range of pre-defined connection rules supporting network models with and without spatial structure. To create rule-based connections, the user provides source and target population, and the connection rule with applicable parameters and specifications of the synapses to be created, including rules for the parameterization of synapses. The information here pertains to NEST version 3.0.
    In addition to the built-in connectivity rules, NEST provides an interface to external libraries, for example CSA, to specify connectivity.
  2. PyNN is a simulator-independent language. It provides a class of high-level connectivity primitives called Connector. The connector class represents the connectivity rule to use when setting up a Projection between two populations. The information here pertains to PyNN version 0.9.6.
  3. NetPyNE is a Python package to facilitate the development, simulation, parallelization, analysis, and optimization of biological neuronal networks using the NEURON simulator. It provides connectivity rules for explicitly defined populations as well as subsets of neurons matching certain criteria. A connectivity rule is specified using a connParams dictionary containing both parameters defining the set of presynaptic and postsynaptic cells and parameters determining the connectivity pattern. The information here pertains to NetPyNE version 1.0.
  4. NineML is an XML-based cross-simulator model specification language.
  5. Brian is a simulator which has a unique way of setting up connectivity. Connections between a source and a target group of neurons are specified using an expression that combines procedural and algebraic aspects, passed to the connect method of the synapse object S: S.connect(j='EXPR for VAR in RANGE').
    Here, EXPR is an integer-valued expression specifying the targets for a given neuron i. This expression may contain the variable VAR, which obtains its values from RANGE. For example, to specify connections to neighboring neurons, we can say S.connect(j='k for k in range(i-3, i+4)', skip_if_invalid=True),
    where skip_if_invalid tells Brian to ignore invalid values for j such as −1.

The simulators NEURON and Arbor do not support high-level connectivity rules and are therefore not included here.

The population-level connectivity rules shared—under different names—between two or more of the above simulators are the following:

  1. One-to-one connects each source to one corresponding target.
  2. All-to-all connects each source to all targets.
  3. Explicit connections establishes the connections given in an explicit list of source-target pairs.
  4. Pairwise Bernoulli performs a Bernoulli trial for each possible source-target pair. With a certain probability p, the connection is included.
  5. Random, fixed total number establishes exactly Nsyn connections between possible sources and targets.
  6. Random, fixed in-degree connects exactly Kin sources to each target (where the same source may be counted more than once).
  7. Random, fixed out-degree connects each source to exactly Kout targets (where the same target may be counted more than once).
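For the random rules in this list, minimal reference implementations fit in a few lines each. The sketch below uses only Python's standard library and samples with replacement, so the same source or target may be drawn more than once as noted in rules 6 and 7; the function names are ours, not any simulator's:

```python
import random
from collections import Counter

def fixed_total_number(n_s, n_t, n_syn, rng=random.Random(0)):
    """Rule 5: exactly n_syn connections between possible sources and targets."""
    return [(rng.randrange(n_s), rng.randrange(n_t)) for _ in range(n_syn)]

def fixed_indegree(n_s, n_t, k_in, rng=random.Random(0)):
    """Rule 6: exactly k_in sources per target, drawn with replacement."""
    return [(rng.randrange(n_s), t) for t in range(n_t) for _ in range(k_in)]

def fixed_outdegree(n_s, n_t, k_out, rng=random.Random(0)):
    """Rule 7: exactly k_out targets per source, drawn with replacement."""
    return [(s, rng.randrange(n_t)) for s in range(n_s) for _ in range(k_out)]

# By construction every target of a fixed in-degree network has exactly
# k_in incoming connections, whatever the random draws.
in_deg = Counter(t for _, t in fixed_indegree(5, 3, k_in=4))
```

Sampling with replacement means these sketches allow multapses; excluding them, as some simulators optionally do, requires drawing without replacement instead.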

Languages and simulators vary with regard to whether autapses or multapses are created by a connectivity rule and whether it is possible to choose if they are created or not. Table 1 details the extent to which the rules above are implemented in the languages and simulators NEST, PyNN, NetPyNE, and NineML. In addition, PyNN supports the following rules:

  • Pairwise Bernoulli with probability given as a function of either source-target distance, vector, or indices.
  • Small-world connectivity of the Watts-Strogatz type, with and without autapses; out-degree can be specified.
  • Connectivity specified by a CSA connection set provided by a CSA library.
  • Explicit Boolean connection matrix.
  • Connect cells with the same connectivity as the given PyNN projection.
Table 1. Connectivity rules present in a selection of languages and simulators.

X: The rule is supported, A: The rule is supported and it is possible to specify whether autapses are created or not, M: Ditto for multapses.

The pairwise Bernoulli and random, fixed in- and out-degree rules in NEST support connectivity creation based on the relative position of source and target neurons.

Connectivity concepts

We here provide formal definitions of connectivity concepts for neuronal network models. These concepts encompass the basic connectivity rules illustrated in Fig 7 which are already commonly used by the computational neuroscience community (see Fig 6). Beyond that, we discuss concepts to reflect some of the richness of anatomical brain connectivity and complement in particular non-spatial connectivity rules with rules for spatially organized connectivity.

For each high-level connectivity rule, we give both an algorithmic construction rule and the resulting connectivity distribution. Modelers can use these definitions to succinctly specify connection rules in their studies. However, if details differ from our standard definitions, these details should still be specified. Furthermore, we suggest symbols that can be used to indicate the corresponding connectivity types in network diagrams and add the corresponding CSA expressions from [58].

In the specification of connectivity concepts we use the following notations and definitions. Let S be the ordered set of sources of cardinality Ns and T the ordered set of targets of cardinality Nt. Then the set of all possible directed edges between members of S and T is given by the Cartesian product E = S × T of cardinality NsNt.

If the source and target populations are identical (S = T), a source can be its own target. We call such a self-connection an autapse (cf. Fig 7). If autapses are not allowed, the target set for any node i ∈ S is T ∖ {i}, with cardinality Nt = Ns − 1. If there is more than one edge between a source and a target (or from a node to itself), we call this a multapse.

The degree distribution P(k) is the distribution across nodes of the number of edges per node. In a directed network, the distribution of the number of edges going out of (into) a node is called the out-degree (in-degree) distribution. The distributions given below describe the effect of applying a connection rule once to a given pair of source and target populations.

Deterministic connectivity rules.

Deterministic connectivity rules establish precisely defined sets of connections without any variability across network realizations.

  1. One-to-one
    Symbol: δ
    CSA: δ
    Definition: Each node in S is uniquely connected to one node in T.
    S and T must have identical cardinality Ns = Nt, see Fig 7A. Both sources and targets can be permuted independently even if S = T. The in- and out-degree distributions are given by P(K) = δK,1, with Kronecker delta δi,j = 1 if i = j, and zero otherwise.
  2. All-to-all
    Symbol: Ω
    CSA: Ω
    Definition: Each node in S is connected to all nodes in T.
    The resulting edge set is the full edge set E = S × T. The in- and out-degree distributions are P(K) = δK,Ns for nodes in T, and P(K) = δK,Nt for nodes in S, respectively. An example is shown in Fig 7B.
  3. Explicit connections
    Symbol: X
    CSA: Not applicable
    Definition: Connections are established according to an explicit list of source-target pairs.
    Connectivity is defined by an explicit list of sources and targets, also known as an adjacency list, as for instance derived from anatomical measurements. It is, hence, not the result of any specific algorithm. An alternative way of representing a fixed connectivity is by means of the adjacency matrix A, with Aij = 1 if j is connected to i, and zero otherwise. We here adopt the common computational neuroscience practice of letting the first index i denote the target and the second index j the source node.
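This index convention can be made concrete in a small sketch (function and variable names are ours):

```python
def adjacency_matrix(edge_list, n_sources, n_targets):
    """Build A with A[i][j] = 1 if source j connects to target i,
    following the convention: first index target, second index source."""
    A = [[0] * n_sources for _ in range(n_targets)]
    for source, target in edge_list:
        A[target][source] = 1
    return A

# explicit adjacency list of (source, target) pairs,
# e.g. as derived from anatomical measurements
edges = [(0, 1), (2, 0)]
A = adjacency_matrix(edges, n_sources=3, n_targets=2)
```

With this convention, row i of A collects the sources of target i, so the in-degree of target i is the sum of row i.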

Probabilistic connectivity rules.

Probabilistic connectivity rules establish edges according to a probabilistic rule. Consequently, the exact connectivity varies with realizations. Still, such connectivity leads to specific expectation values of network characteristics, such as degree distributions or correlation structure.

  1. Pairwise Bernoulli
    Symbol: p
    CSA: ρ(p)
    Definition: Each pair of nodes, with source in S and target in T, is connected with probability p.
    In its standard form this rule cannot produce multapses since each possible edge is visited only once. If S = T, this concept is similar to Erdős-Rényi graphs of the constant probability p-ensemble G(N, p)—also called binomial ensemble [73]; the only difference being that we here consider directed graphs, whereas the Erdős-Rényi model is undirected. The distribution of both in- and out-degrees is binomial, P(Kin = K) = (Ns choose K) p^K (1 − p)^(Ns−K) (7) and P(Kout = K) = (Nt choose K) p^K (1 − p)^(Nt−K), (8) respectively. The expected total number of edges equals E[Nsyn] = pNtNs.
  2. Random, fixed total number without multapses
    Definition: Nsyn ∈ {0, …, NsNt} edges are randomly drawn from the edge set E = S × T without replacement. For S = T this is a directed graph generalization of Erdős-Rényi graphs of the constant number of edges Nsyn-ensemble G(N, Nsyn) [74]. There are (NsNt choose Nsyn) possible networks for any given number Nsyn ≤ NsNt, which all have the same probability. The resulting joint in-degree distribution is the multivariate hypergeometric distribution P(Kin,1 = K1, …, Kin,Nt = KNt) = [∏j (Ns choose Kj)] / (NsNt choose Nsyn), with ∑j Kj = Nsyn, (9) and analogously with Kout instead of Kin and source and target indices switched.
    The marginal distributions, i.e., the probability distribution for any specific node j to have in-degree Kj, are hypergeometric distributions P(Kin,j = Kj) = (Ns choose Kj)(NsNt − Ns choose Nsyn − Kj) / (NsNt choose Nsyn), (10) with sources and targets switched for P(Kout,j = Kj).
  3. Random, fixed total number with multapses
    Symbol: Nsyn, M
    Definition: Nsyn ∈ {0, …, NsNt} edges are randomly drawn from the edge set E = S × T with replacement.
    If multapses are allowed, there are (NsNt + Nsyn − 1 choose Nsyn) possible networks for any given number Nsyn ≤ NsNt. Because exactly Nsyn connections are distributed across Nt targets with replacement, the joint in-degree distribution is multinomial, P(Kin,1 = K1, …, Kin,Nt = KNt) = Nsyn! / (K1! ⋯ KNt!) p^Nsyn, (11) with p = 1/Nt and ∑j Kj = Nsyn.
    The out-degrees have an analogous multinomial distribution with p = 1/Ns and sources and targets switched. The marginal distributions are the binomial distributions B(Nsyn, 1/Nt) and B(Nsyn, 1/Ns), respectively.
    The M-operator of CSA should not be confused with the “M” indicating that multapses are allowed in our symbolic notation.
    The M-operator of CSA should not be confused with the “M” indicating that multapses are allowed in our symbolic notation.
  4. Random, fixed in-degree without multapses
    Definition: Each target node in T is connected to Kin nodes in S randomly chosen without replacement.
    The in-degree distribution is by definition P(K) = δK,Kin. To obtain the out-degree distribution, observe that after one target node has drawn its Kin sources, the joint probability distribution of out-degrees Kout,j is multivariate hypergeometric such that P(K1, …, KNs) = [∏j (1 choose Kj)] / (Ns choose Kin), with ∑j Kj = Kin, (12) where ∀j Kj ∈ {0, 1}. The marginal distributions are hypergeometric distributions P(Kj = K) = (1 choose K)(Ns − 1 choose Kin − K) / (Ns choose Kin) = Ber(Kin/Ns), (13) with Ber(p) denoting the Bernoulli distribution with parameter p, because K ∈ {0, 1}. The full joint distribution is the sum of Nt independent instances of Eq 12.
  5. Random, fixed out-degree without multapses
    Definition: Each source node in S is connected to Kout nodes in T randomly chosen without replacement.
    The out-degree distribution is by definition P(K) = δK,Kout, while the in-degree distribution is obtained by switching source and target indices, and replacing Kin with Kout in Eq 12.
  6. Random, fixed in-degree with multapses
    Symbol: Kin, M
    Definition: Each target node in T is connected to Kin nodes in S randomly chosen with replacement.
    Ns is the number of source nodes from which exactly Kin connections are drawn with equal probability p = 1/Ns for each of the Nt target nodes. The in-degree distribution is by definition P(K) = δK,Kin. To obtain the out-degree distribution, we observe that because multapses are allowed, drawing Kin,i = Kin sources for each of the Nt target nodes is equivalent to drawing NtKin times with replacement from S. This procedure yields a multinomial distribution of the out-degrees Kout,j of source nodes [75], i.e., P(K1, …, KNs) = (NtKin)! / (K1! ⋯ KNs!) (1/Ns)^(NtKin), with ∑j Kj = NtKin. (14)
    The marginal distributions are binomial distributions P(Kout,j = K) = (NtKin choose K)(1/Ns)^K (1 − 1/Ns)^(NtKin − K). (15)
  7. Random, fixed out-degree with multapses
    Symbol: Kout, M
    Definition: Each source node in S is connected to Kout nodes in T randomly chosen with replacement.
    By definition, the out-degree distribution is P(K) = δK,Kout. The respective in-degree distribution and marginal distributions are obtained by switching source and target indices, and replacing Kin with Kout in Eqs 14 and 15 [75].
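The defining properties of these rules are easy to check on sampled networks. The sketch below draws a "random, fixed total number" network without multapses by sampling edges without replacement (a toy illustration, not an efficient simulator routine; the function name is ours):

```python
import random
from itertools import product

def fixed_total_number_no_multapses(n_s, n_t, n_syn, rng=random.Random(7)):
    """Draw n_syn distinct edges from the full edge set E = S x T without
    replacement, as in 'random, fixed total number' without multapses."""
    all_edges = list(product(range(n_s), range(n_t)))
    return rng.sample(all_edges, n_syn)

edges = fixed_total_number_no_multapses(6, 8, 20)
```

Materializing all NsNt candidate edges is only feasible for small networks, but it makes the defining property explicit: every possible network with exactly Nsyn distinct edges is equally likely.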

Networks embedded in metric spaces.

The previous sections analyze the connectivity between sets of nodes without any notion of space. However, real-world networks are often specified with respect to notions of proximity according to some metric. Prominent examples are spatial distance and path length in terms of the number of intermediate nodes. The exact embedding into the metric space, such as the distribution of nodes in space or the boundary conditions, can have a strong impact on the resulting network structure. In the following, ρ denotes the node density, not to be confused with the CSA operator ρ.

Given a distance-dependent connectivity, degree distributions result from this distance dependence combined with the distribution of distances between pairs of nodes [76]. If nodes are placed on a grid or uniformly at random in space, different asymptotic approximations to the degree distributions can be made [7779]. If the node distribution is (statistically) homogeneous, and the connection probability is isotropic, the average in- or out-degree for connections to or from any node i at a given distance from the node follows 〈K(r)〉 ∼ ρ(r)p(r), which is usually easier to derive than the full joint degree distribution, and can be used to statistically test whether network realizations are correctly generated [75, 80].

Here we specify the properties of spatial networks, which are also relevant for networks with feature-specific connectivity (e.g., based on sensory response tuning). In order to fully specify networks embedded in a metric space and with distance-dependent connectivity, the following quantities need to be listed:

  1. Dimension: Most often the space is one-dimensional (e.g., ring networks), two-dimensional (e.g., a layer of neurons), or three-dimensional (i.e., a volume of neurons).
  2. Layout: The layout specifies how nodes are arranged, for instance on a regular grid (e.g., orthogonal, isometric, or hexagonal) or uniformly at random.
  3. Metric: The metric specifies the concept of distance. On an orthogonal grid, the max-norm metric (ℓ∞) on the grid indices can be the metric of choice, while for a uniformly random distribution of nodes the Euclidean metric (ℓ2) is typically chosen.
  4. Boundary conditions: If nodes are embedded into a space with boundaries, there tend to be inhomogeneities in the connectivity close to these boundaries. To avoid such potential inconsistencies, boundary conditions are often assumed to be periodic, i.e., opposite borderlines are folded back onto each other (e.g., a line into a ring, a layer into a torus, etc.).
  5. Distance dependence of the connectivity profile: The connectivity profile f(r), sometimes called spatial footprint, specifies which nodes j are connected to a node i as a function of their distance rij. Profiles can be deterministic (e.g., a node connects to all other nodes within a certain distance rmax, specified via a boxcar profile f(r) ∼ Θ[rmax − r]) or probabilistic (a node connects to another node at distance r with probability p(r) ∈ [0, 1], e.g., boxcar: p(r) ∼ c Θ[rmax − r], linear: p(r) ∼ max(c1 − c2 r, 0), sigmoidal: p(r) ∼ c / (1 + e^((r − r0)/σ)), exponential: p(r) ∼ c e^(−r/λ), Gaussian: p(r) ∼ c e^(−r²/(2σ²)), or more complex forms such as a non-centered multivariate Gaussian with covariance matrix Σ). These distance-dependent connectivity profiles may be combined with rules for the establishment of multapses and higher-order moments. In the case of feature-specific connectivity as well as other generalized spaces and cases where a metric is difficult to define, it can be useful to generalize f to be a direct function of the sets of sources and targets, like a CSA mask: f = f(i, j) with i ∈ S and j ∈ T. The distance can be treated similarly: rij = r(i, j), corresponding to a CSA value function.
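A minimal sketch combining these ingredients, assuming a one-dimensional ring layout (periodic boundary), distance measured along the ring, and a Gaussian probabilistic profile p(r) = c·exp(−r²/(2σ²)) with illustrative constants:

```python
import math
import random

def ring_distance(i, j, n, spacing=1.0):
    """Distance between nodes i and j on a 1D ring of n evenly spaced
    nodes (periodic boundary: the line is folded back into a ring)."""
    d = abs(i - j) % n
    return spacing * min(d, n - d)

def connect_gaussian(n, c=1.0, sigma=2.0, rng=random.Random(0)):
    """Pairwise Bernoulli with distance-dependent probability
    p(r) = c * exp(-r**2 / (2 * sigma**2)); autapses excluded."""
    edges = []
    for i in range(n):
        for j in range(n):
            if i == j:
                continue  # no self-connections
            r = ring_distance(i, j, n)
            if rng.random() < c * math.exp(-r**2 / (2 * sigma**2)):
                edges.append((i, j))
    return edges
```

Swapping ring_distance for a 2D toroidal distance, or the Gaussian for a boxcar or exponential profile, changes only one line each, which is exactly why the dimension, layout, metric, boundary conditions, and profile are listed as independent specification items.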

Larger-scale and multiscale networks can have more complicated, heterogeneous structures, such as layers, columns, areas, or hierarchically organized modules. Distance dependencies may then have to be specified with respect to the different levels of organization, for example specific to their horizontal (laminar) and vertical (e.g., columnar) dimension (cf. “Introduction”). One example is networks modeling axonal patches, i.e., neurons that have axonal arborization in a certain local range, as well as further axonal sprouting in several distinct long-range patches [24, 27, 8183].

We discuss an explicit example of how to describe such connectivity rules in Section “Examples”.

Proposal for a graphical notation for network models

Network illustrations are a direct expression of how researchers think about a model and they are therefore a common means of network description (Fig 3B). They convey an intuitive understanding of network structures, relationships, and other properties central to the dynamics [84], and may also reflect how a model is implemented. If similar diagram styles are used, diagrams facilitate the reading of an article and allow for comparability of models across publications. However, computational neuroscience publications exhibit a wide variety of network diagram styles. While individual research groups and some sub-communities use similar symbols across publications, a common standard for the whole field has not been established yet.

In contrast, the related field of systems biology has developed the broadly accepted Systems Biology Graphical Notation (SBGN, [85]; see also [86]) over more than two decades. SBGN has an online portal, an exchange and data format (SBGN-ML), a software library, and various further tools and databases.

Building on current practice in the computational neuroscience community, we propose a graphical notation framework for network models in computational neuroscience by defining which network elements to encode and how. We restrict ourselves to the simplest, most commonly used elements and provide a path to flexibly extend and customize the diagrams depending on the model specifics to expose. The notation uses simple standardized graphical components and therefore does not depend on a specific tool.

In the notation, a network is depicted as a graph composed of nodes and edges and enhanced with annotations. The nodes correspond to neuronal units or devices, the edges to connections, and the annotations specify the connections in terms of connection rules, possible constraints, and parameterization. The term “devices” refers to instances which are considered external to the main neuronal network but interact with it: either providing a stimulus or recording neuronal activity. Note that the nodes and edges of the graphical notation can combine multiple nodes and edges of the neuronal network; for instance, a population of network nodes can be indicated with one graphical node. A projection, referring to the set of connections resulting from one connectivity rule applied to a given source and target population, can be indicated with a single edge in the graphical notation.

Here we define diagram nodes and edges as well as annotations for the most common network types and propose a set of graphical elements to use. Thus, in the following, “node” and “edge” refer to the graphical components. A summarizing overview is given in Fig 8 for reference. The section concludes with a discussion on further techniques for creating appealing network diagrams.

Fig 8. Quick reference for the proposed graphical notation for network models in computational neuroscience.

Network node.

A network node in the graphical notation represents one or multiple units. These units are either neuron or neural population models, or devices providing input or output. Network connectivity is defined between these graphically represented nodes. Nodes are drawn as basic shapes. A textual label can be placed inside the node for identification. Nodes are differentiated according to a node class and a node type.

Node class.

The node class determines whether a node represents an individual unit or a population of units; it is expressed by different frames of the node shapes. The distinction is a recommendation for diagrams that contain both kinds of nodes.

  1. Individual unit
    A node representing an individual unit may be depicted as a shape with a thin, single frame. Note that such an individual unit may be a population (e.g., neural mass) model.
  2. Population
    A node representing a population of units may be depicted as a shape with either a thick frame or a double frame. It is in principle possible to represent a group of population models this way.

Node type.

The node type refers to a defining property of a node and is expressed by a unique shape.

  1. Generic node
    A generic node, represented by a square, is used if the specific node types do not apply or are not intended to be emphasized.
  2. Excitatory neural node
    An excitatory neural node, depicted by a triangle, is used if the units represent neurons, and their effect on targets is excitatory.
  3. Inhibitory neural node
    An inhibitory neural node, depicted by a circle, is used if the units represent neurons and their effect on targets is inhibitory.
  4. Stimulating device node
    A stimulating device node, depicted by a hexagon, provides external input to other network nodes. Stimulating devices can be abstract units which for instance supply stochastic input spikes. Nodes with more refined neuron properties can also be considered as stimulating devices if they are external to the main network studied.
  5. Recording device node
    A recording device node, depicted by a parallelogram, contains non-neural units that record activity data from other network nodes.

Network edge.

A network edge represents a connection or projection between two nodes. Edges are depicted as arrows. Both straight and curved lines are possible. Edges are differentiated according to the categories determinism, edge type, and directionality.

Determinism.

The notation distinguishes between deterministic and probabilistic connections via the line style of network edges. Edges between two nodes representing individual units are usually deterministic.

  1. Deterministic
    Deterministic connections, depicted by a solid line edge, define exactly which units belonging to connected nodes are themselves connected.
  2. Probabilistic
    Probabilistic connections, depicted by a dashed-line edge, are constructed by connecting individual neurons from source and target populations according to probabilistic rules.

Edge type.

Analogously to the node type, the edge type emphasizes a defining property of the connection by specific choices of arrowheads. The edge types given here can be used for connections between all node types.

  1. Generic edge
    A generic edge, represented by a classical (or straight barb) arrowhead, is used if the specific edge types do not apply or the corresponding properties are not intended to be emphasized.
  2. Excitatory edge
    An excitatory edge, depicted by a triangle arrowhead, is used if the effect on targets is excitatory.
  3. Inhibitory edge
    An inhibitory edge, depicted by a filled circle tip, is used if the effect on targets is inhibitory.

Directionality.

The directionality indicates the direction of signal flow by the location of one or two arrowheads on the edge.

  1. Unidirectional
    Unidirectional connections are depicted with a tip at the target node’s end of the edge.
  2. Bidirectional
    Bidirectional connections are symmetric in terms of the existence of connections and their parameterization. Such connections are depicted with edges having tips on both ends. If the same units are connected but parameters for forward and backward connections are not identical, two separate unidirectional edges should be used instead.

Edge annotations.

Network edges can be annotated with information about the connection or projection they represent. Details on the rule specifying the existence of connections and their parameterization may be put along the arrow.

Connectivity concept.

The properties in this category further specify the presence or absence of connections between units within the connected nodes.

  1. Concept
    The definitions and symbols given in Section “Connectivity concepts” are the basis for this property.
  2. Constraint
    Specific constraint or exception to the connectivity concept.
    1. Autapses allowed
      Autapses are self-connections. The letter A indicates if they are allowed.
    2. Multapses allowed
      Multapses are multiple connections between the same pair of units and in the same direction. The letter M indicates if they are allowed.
    3. Prohibited
      The symbol of a constraint struck out reverses allowed to prohibited; for example, struck-out A and M indicate that autapses and multapses are prohibited.

Parameterization.

Properties of the parameterization of connections, e.g., of weights w and delays d, can be expressed with mathematical notation.

  1. Constant parameter
    A parameter, e.g., a weight, which takes on the same value for all individual connections is indicated by an overline, e.g., w̄.
  2. Distributed parameter
    A tilde between a parameter (e.g., the weight) and a distribution, as in w ∼ 𝒟, indicates that individual parameter values are sampled from the distribution 𝒟. This example uses 𝒟 for a generic distribution, but specific distributions, such as a normal distribution denoted by 𝒩, are also possible.

Further specification.

Annotations for both the connectivity concept and the parameterization of connections can be specified further.

  1. Functional dependence
    Functional dependence on a parameter is expressed with parentheses, here indicated with a generic function f(·). Common use cases are the dependence on the inter-unit distance r or on time t. Connections drawn with a distance-dependent profile can be indicated with f(r). The exact function f used should be defined close to the diagram; already defined concepts such as a spatially modulated pairwise Bernoulli connection probability can also be used: p(r). Another example for a distance-dependent parameter could be a delay d(r). Plastic networks, in which the weights change with time, can be indicated with w(t).

Customization and extension.

The definitions given above are intended as a reference for illustrating network types that are in the scope of this study. Further graphical techniques may be used that go beyond these fundamental definitions, such as adding meaning to the size of network nodes (e.g., making the area proportional to the population size) or using colors (e.g., to highlight network nodes or edges sharing certain specifics). In the community, two ways of distinguishing excitatory and inhibitory neurons tend to be used: the “water tap” notation in which the excitatory neurons are shown in red and the inhibitory neurons in blue (e.g., [87]), and notations in which the inhibitory neurons are shown in red and the excitatory neurons in either blue or black, which may be thought of as “bank account” notation (blue: [5], black: [14]).

Fig 7 uses the proposed symbols for generic node and edge types to demonstrate basic connectivity patterns; in addition, we employ colors to differentiate source and target nodes and their connections. In Fig 1 we distinguish with blue and red between excitatory and inhibitory neurons, respectively, to give an example for the bank account notation.

Encoding the same feature in multiple ways is also encouraged if it supports intuition; in the proposed graphical notation, we use double encoding for node shapes and arrowheads. For complex or hierarchical networks, multiple diagrams may be created: for instance, one that provides an overview and others that bring out specific details.

The modular structure of our graphical notation framework allows for extension to features that are not yet covered. Symbols for additional network elements may be defined for example in the figure legend and applied as the researcher sees fit. The common classification of neural nodes into excitatory and inhibitory types used in the notation is one such example. On the one hand, a model-specific definition of these types can be formulated. On the other hand, further classification detail can be added to the graph (e.g., in the form of annotations) or additional node types can be introduced if necessary to represent nodes with further biophysical properties which are not covered by the above simple classification.

In the same way as our propositions for node types can be customized, adjustment of the other graphical elements is also encouraged. For example, having so far considered only networks coupled via chemical synapses, another possible extension is to define gap junctions as a novel edge type. One possibility here is to use the common symbol for electrical resistance:

  1. Gap junctions
    Electrical coupling via gap junctions is represented by a zig-zag line connecting the nodes.

Examples

To illustrate the symbolic and graphical notation proposed, we apply it in the following to three concrete example networks.

Two-population balanced random network.

The first example is the random, fixed in-degree variant of the balanced random network model also shown in Fig 1A (for details see Figs 12–15). Fig 9 shows different means for describing the connectivity of the model; the same options are covered in the model review in Fig 3B. The illustration (Fig 9A) uses the elements for nodes, edges, and annotations introduced in Section “Proposal for a graphical notation for network models” to depict the network composed of an excitatory (E, triangle) and an inhibitory (I, circle) neuron population, and a population of external stimulating devices (Eext, hexagon). Recurrent connections between the neurons in the excitatory and inhibitory populations are probabilistic (dashed edges) and follow the “random, fixed in-degree” rule (Kin) with the further constraints that autapses are prohibited (struck-out A) and multapses are allowed (M).

Fig 9. Different means to describe connectivity of a balanced random network.

Example descriptions for the model used in Fig 1A with description means similar to Fig 3B. (A) Network diagram according to the graphical notation introduced in Section “Proposal for a graphical notation for network models”. Symbols in annotations refer to the concepts and not the explicit parameters. (B) Textual description of the model layout. Subscript “E” labels connections from source population E to a target population; the same applies to subscript “I” for source population I. KE and KI represent the explicit values used for the in-degrees. (C) Table according to the guidelines by Nordlie et al. [84]. (D) Equations according to the Connection Set Algebra (CSA) [58] using the index sets E and I. (E) PyNEST source code [63] specifying connections from source (pre) to target (post) populations with a connection dictionary (conn_spec). The use of all-to-all instead of one-to-one connectivity here is due to the specific implementation of the external drive in NEST.

Connections between different, non-intersecting populations by definition cannot have autapses, and therefore it is not required to specify this along the corresponding edges. Neither does the absence of multapses between Eext and the neuronal populations need to be specified, as we here assume one-to-one connectivity (δ). This network diagram not only indicates if connections exist but also shows that their parameters, weights (w) and delays (d), are the same for each connection. However, the diagram does not express the parameter values, just as the numbers of incoming connections are left to be defined elsewhere. In contrast, the textual description (Fig 9B) adds subscripts to the connectivity concept to indicate that the excitatory and inhibitory in-degrees may be different: KE and KI, respectively. The table (Fig 9C) follows the guidelines by Nordlie et al. [84] and structures each connection in terms of a name, the source and target populations, and the connectivity rule. The set of equations (Fig 9D) formulates the connectivity by means of the Connection Set Algebra (CSA) [58]. While panels A–D of Fig 9 are primarily concerned with the conceptual description of connectivity, Fig 9E gives an implementation example using the PyNEST [63] interface of the simulator NEST [62]. The excitatory (E) and inhibitory (I) populations are here represented by NodeCollections, storing the IDs of each neuron. By default, autapses and multapses are allowed; here we set both values explicitly for clarity. EExt in the code stands for a poisson_generator, a stimulating device node in NEST which generates independent sequences of input spikes sampled from the same Poisson process for each of its target neurons. In other words, EExt refers to just one NEST node which acts like the population of external stimulating devices indicated with Eext in Fig 9A–9D.
Due to this specific implementation of the poisson_generator, the default connection rule all_to_all as a generalization of one-to-all connectivity is here applied instead of one_to_one.
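A minimal, simulator-independent sketch of the “random, fixed in-degree” rule with prohibited autapses and allowed multapses might look as follows. The function name and signature are hypothetical; NEST’s built-in implementation differs and is heavily optimized for parallel execution.

```python
import random

def fixed_indegree(sources, targets, k_in, allow_autapses=False,
                   allow_multapses=True, rng=None):
    """Return directed edges (s, t): every target receives exactly k_in
    incoming connections drawn at random from `sources`."""
    rng = rng or random.Random()
    edges = []
    for t in targets:
        drawn = []
        while len(drawn) < k_in:
            s = rng.choice(sources)
            if not allow_autapses and s == t:
                continue  # reject self-connections
            if not allow_multapses and s in drawn:
                continue  # reject duplicate source-target pairs
            drawn.append(s)
        edges.extend((s, t) for s in drawn)
    return edges
```

Because sources are drawn with replacement by default, the same pair may be connected more than once, which is exactly the multapse semantics described above.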

Previous studies preferentially combine different ways of describing connectivity (Fig 3B) and also the example in Fig 9 highlights that one means alone may not be sufficient to exhaustively cover all aspects of the connectivity. For a comprehensive description, we recommend using at least one network diagram and a textual description for rapidly conveying the network structure, and a table for providing details. In addition, default assumptions, e.g., the presence or absence of multapses, should be made explicit; this can be done in the text.

Cortical microcircuit with distinct interneuron types.

The second example, shown in Fig 10, is a cortical microcircuit model [88] adapted from Potjans and Diesmann [89]. Extending the two-population network in Fig 9, this model comprises four cortical layers (L2/3, L4, L5, and L6). With its cell-type and layer-specific connectivity, the Potjans-Diesmann model represents the structure and dynamics of local cortical circuitry which is similar across different areas and species. The model has been used in a number of recent validation and benchmarking studies, and implementations for different simulators exist, including NEST [90], SpiNNaker [90, 91], Brian [92], GeNN and PyGeNN [93, 94], NeuronGPU [95], NetPyNE [96], and PyNN (available as a PyNN example and via Open Source Brain). While the original model by Potjans and Diesmann has only one excitatory (E) and one inhibitory (I) neuron population per layer, the model considered here distinguishes between three different inhibitory neuron types (SOM, VIP, and PV). All neuron populations receive external Poisson input Eext as in Fig 9 and additional input from an external thalamic population Eth. The thalamic targeting of all layers is in contrast to the Potjans-Diesmann model where only L4 and L6 receive thalamic input. Fig 10 shows three different diagrams to emphasize different aspects of the model. Here, the first two panels are used to give an intuitive overview of the network, while the third panel adheres to the proposed graphical notation to unequivocally represent the connectivity rules. Fig 10A uses a colored illustration to convey the overall components without specifying the connection rules. For the general model overview, cortical layers and subnetworks of inhibitory populations are framed by boxes. To avoid clutter, not all connections are shown and the distinction between probabilistic and deterministic connections via dashed and solid lines, as suggested in Fig 8, is not applied.
Instead, only connections above a threshold connection probability are shown with solid lines, and two levels of line thickness help to distinguish between low- and high-probability connections. By taking this freedom we illustrate that customizations remain possible for overview figures, as long as the network is unequivocally described in the remainder. Arrows to or from a box represent the average connection probabilities to or from the network nodes contained in the box. The average connection probability equals the expected total number of connections divided by the maximum number of possible connections while considering all involved pairs of populations. For example, the average connection probability from an excitatory population E to the inhibitory populations is given by: p̄E→I = (ΣY NE NY pE→Y) / (NE ΣY NY), with Y ∈ {SOM, VIP, PV}. (16)
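The average connection probability defined verbally above amounts to a weighted mean of the pairwise probabilities over all involved population pairs. The following sketch illustrates the computation; the population sizes and probabilities are hypothetical placeholders, not the values of the actual model.

```python
def average_connection_probability(n_src, targets):
    """Expected number of connections divided by the number of possible
    pairs, pooled over all target populations.
    `targets` maps a name to (population size, pairwise probability)."""
    expected = sum(n_src * n * p for (n, p) in targets.values())
    possible = sum(n_src * n for (n, _) in targets.values())
    return expected / possible

# Hypothetical sizes and probabilities for illustration only:
p_bar = average_connection_probability(
    400, {"SOM": (30, 0.10), "VIP": (20, 0.02), "PV": (50, 0.12)})
```

Since the source population size is the same for every pair, it cancels, leaving a mean of the pairwise probabilities weighted by target population size.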

Fig 10. Multi-layer microcircuit model with three inhibitory neuron types.

(A) Schematic overview of all neuronal populations, external inputs, and main connections. Inhibitory populations are grouped by boxes. In panels A and B, for probabilistic connections, only those with a probability of at least 4% are shown (thin lines: 4 to 8%, thick lines: ≥8%). (B) Detailed L2/3 connectivity between excitatory population and all three inhibitory populations; in panel A these connections are combined in two arrows (from and to the box). (C) Excitatory-inhibitory subnetwork with external inputs depicted with annotations according to the graphical notation in Fig 8. The connectivity is described with the rules “one-to-one” (δ) and “pairwise Bernoulli” (p), and the constraints autapses allowed (A) and multapses prohibited (struck-out M). The synaptic weights (w) and delays (d) are specified as either constant (indicated by an overline) or sampled from lognormal distributions (indicated by a tilde). Interneuron types: somatostatin expressing (SOM), vasoactive intestinal peptide expressing (VIP), parvalbumin expressing (PV).

Fig 10B zooms into layer L2/3 to highlight the connectivity between the excitatory population and the three inhibitory populations in this single layer, resolving the arrows in and out of the box. In panel A there is only one outgoing arrow from the inhibitory neuron box in L2/3 connecting to the excitatory population, but in panel B it becomes clear that the inhibitory subpopulations SOM and PV both have strong connections to E while VIP does not.

Fig 10C follows the proposed notations, as in Fig 9A, to illustrate the general components and connection rules that apply to the whole network regardless of layer and inhibitory cell type. While the original model by Potjans and Diesmann uses connectivity of the type “Random, fixed total number with multapses”, this model uses “pairwise Bernoulli” connectivity as indicated by the symbol p.

Combining these illustrations helps to understand the structure and characteristics of this model more intuitively. We encourage deviations from and extensions to the proposed notation if they improve the clarity of the diagrams, but such changes should be explained with care.

Spatial network with horizontally inhomogeneous structure.

The third example is a network embedded into two-dimensional space introduced in a paper by Voges & Perrinet [97] to model the dynamics of neocortical networks with realistic horizontal connectivity. The “PB model”, as it is called by the authors, incorporates both local and non-local connections between cells as observed for instance in the laminar structure of the visual cortex of cats [97, 98]. Local connectivity (footprint ≲ 150–300 μm) is observed to be approximately isotropic, with nearby cells being more likely to be connected than cells farther apart. On longer scales (≳ 1 mm) so-called patches can be observed where the axons sprout and form several connections in a confined area (see Introduction).

As mentioned in Section “Connectivity concepts”, in order to define spatially embedded networks, the dimensions of the space, layout of neurons, metric of distances, boundary conditions, and, for distance-dependent connectivity, the form of this distance dependence need to be specified (Fig 11A). Here, NE excitatory and NI = N − NE inhibitory neurons are embedded into a two-dimensional Euclidean space of size [0, L) × [0, L) with periodic boundary conditions (Fig 11A). Excitatory neurons are placed randomly according to a uniform distribution, while inhibitory neurons are distributed on regular grid positions (with a fixed grid constant) jittered by offsets drawn from a uniform distribution with maximal jitter J. Both populations have local and non-local, patchy connections [97] with different parameterizations for excitatory and inhibitory neurons (Fig 11A), based on [23].
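The placement scheme described above can be sketched as follows. The grid construction (and the assumption that NI is a square number) is a simplification of this illustration, not a statement about the original model.

```python
import random

def place_neurons(n_e, n_i, L, max_jitter, rng=None):
    """Uniformly random excitatory positions; inhibitory positions on a
    square grid with uniform jitter, wrapped periodically into [0, L)."""
    rng = rng or random.Random()
    exc = [(rng.uniform(0, L), rng.uniform(0, L)) for _ in range(n_e)]
    side = int(round(n_i ** 0.5))          # assumes n_i is a square number
    a = L / side                           # grid constant
    inh = []
    for gx in range(side):
        for gy in range(side):
            x = (gx * a + rng.uniform(-max_jitter, max_jitter)) % L
            y = (gy * a + rng.uniform(-max_jitter, max_jitter)) % L
            inh.append((x, y))
    return exc, inh
```

The modulo operation implements the periodic boundary conditions, so jittered positions near an edge wrap around to the opposite side.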

Fig 11. Two-dimensional spatial network with patchy long-range connections.

(A) Spatial networks need to be defined in terms of dimension, layout, metric, boundary conditions, and the spatial or distance dependence of the connectivity, where applicable. In this example, neurons have both local and structured long-range connections [97]. Θ(x) = 0 if x < 0, 1 otherwise. (B) Sketch of patchy connectivity and parameters needed to define ppatch. (C) Graphical notation of network connectivity corresponding to Fig 8.

The global connection density (number of realized connections over number of possible connections) ctotal splits up into respective local and non-local parts motivated by anatomy [26, 97] (cf. Introduction), such that each neuron i in a given subpopulation has local and non-local connection densities cloc(i) and cpatch(i), where the latter is distributed over the Npn patches per neuron (see Fig 11A). These underlying biologically motivated numbers then serve as constraints for the choices of the parameters needed in the following definitions [26, 97].

Local connections: In order to satisfy constraints with respect to both the fraction of connections assigned as local and the local spatial footprint, the out-degrees are in a first step drawn from a binomial distribution with a mean that produces the right connectivity fraction cloc. In a second step, random elements (i, j) of the set of potential synapses are drawn, and a connection is established with probability p(rij), until the required number is achieved. Multapses and autapses are excluded. The local connectivity of each neuron i follows a Gaussian connectivity profile centered on the neuron’s position, with maximal connection probability p0 and space constant or footprint σ, indicated by colored circles in Fig 11B.
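The two-step procedure for local connections (fix an out-degree, then accept candidate targets according to the Gaussian profile) can be sketched as follows. For brevity the out-degree is passed in rather than drawn from the binomial distribution, and all names are illustrative.

```python
import math
import random

def local_out_connections(i, positions, out_degree, p0, sigma, L, rng):
    """Draw `out_degree` distinct local targets for neuron i by rejection
    sampling with Gaussian acceptance p(r) = p0 * exp(-r^2 / (2 sigma^2))."""
    chosen = set()
    n = len(positions)
    while len(chosen) < out_degree:
        j = rng.randrange(n)
        if j == i or j in chosen:      # no autapses, no multapses
            continue
        # periodic (torus) distance between neurons i and j
        dx = abs(positions[i][0] - positions[j][0]); dx = min(dx, L - dx)
        dy = abs(positions[i][1] - positions[j][1]); dy = min(dy, L - dy)
        r = math.hypot(dx, dy)
        if rng.random() < p0 * math.exp(-r * r / (2 * sigma * sigma)):
            chosen.add(j)
    return sorted(chosen)
```

Rejection sampling until the target count is reached reproduces the constraint that each neuron realizes exactly its drawn out-degree while the spatial profile still shapes which targets are selected.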

Non-local, patchy connections: Non-local connection patterns in Fig 11B are determined for groups of neighboring neurons, such that all neurons located within a certain region (squares) project to a fixed subset of spatially distributed patches (light gray disks) allowed for this region. Again, first an out-degree is determined, and then the required number of synapses is established probabilistically according to Bernoulli trials, where the connection probability ppatch from a neuron to each cell within one of its target patches is constant within the patch radius. Multapses are excluded.

The basic parameters to characterize patchy projections are then:

  1. Np: the number of patches per group of neurons,
  2. Npn: the number of patches per single neuron (Npn≤Np),
  3. rp: the radius of a patch,
  4. dp: the distance between the center of a group of neurons and a patch center,
  5. ϕ: the angle which characterizes a patch position relative to the group center; see Fig 11A and 11B.

In particular, here the respective Np’s are drawn from uniform distributions with distinct minimum Npmin and maximum Npmax values for each population (E, I) while the Npn’s (Npn≤Np) are drawn from binomial distributions with specified means and the corresponding cutoff values. The distances dp come from normal distributions, while the angles ϕ are sampled uniformly from the interval [0, 2π) (Fig 11A and 11B, [97]). The Npn patches to which a given neuron projects are chosen uniformly at random from the Np patches for the group.
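The sampling of patch parameters described above can be sketched as follows. The binomial draw for Npn is approximated by Bernoulli trials with the appropriate mean, and all parameter values are placeholders.

```python
import math
import random

def sample_patch_geometry(np_min, np_max, npn_mean, dp_mean, dp_sd, rng):
    """Sample patch parameters for one group of neurons, following the
    distributions described in the text (parameter values are placeholders)."""
    n_p = rng.randint(np_min, np_max)                # patches per group, uniform
    # patches per neuron: Bernoulli-trial approximation of a binomial with
    # mean npn_mean, cut off at n_p
    n_pn = min(n_p, sum(rng.random() < npn_mean / n_p for _ in range(n_p)))
    patches = []
    for _ in range(n_p):
        d_p = rng.gauss(dp_mean, dp_sd)              # distance to patch center
        phi = rng.uniform(0.0, 2.0 * math.pi)        # patch angle
        patches.append((d_p * math.cos(phi), d_p * math.sin(phi)))
    own = rng.sample(range(n_p), n_pn)               # patches of one neuron
    return patches, own
```

The final `rng.sample` call implements the uniform choice of a neuron’s Npn patches from the Np patches available to its group.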

The remaining connectivity specifications are shown in the graphic in Fig 11C. Each population receives an external drive modeled as Poisson-process spike input. Moreover, delays are distance-dependent [97]. The exact connectivity parameters for each population, as well as weights and delays would need to be specified for instance in a table, which is beyond the scope of this example.

Discussion

With the aim of supporting reproducibility in neuronal network modeling, we consider high-level connectivity in such models: connectivity that is described by rules applied to populations of nodes. As our main result, we propose a standardized nomenclature for often-used connectivity concepts and a graphical and symbolic notation for network diagrams.

Our proposal is informed by a review of model studies published in two well-known repositories (Open Source Brain and ModelDB), as representative of the wider body of neuronal network models. The network models reviewed are diverse in terms of when, where, and by whom they were published, their level of biological detail, and how their connectivity is defined and implemented (Figs 2–6). We find that the description of the connectivity in published articles is often insufficient for reproducing the connectivity rules, distributions, or concrete patterns used in the accompanying implementations. This is the case even though a large part of the identified connectivity concepts corresponds to rather basic rules. The devil is in the detail: deviations from standard rules, further constraints, or just the lack of rigorous definitions lead to ambiguities. Details sometimes omitted include whether self-connections (autapses) or multiple connections from a given source to a given target (multapses) are allowed.

In our review we further survey the use of high-level connection concepts in model descriptions and implementations and observe that they provide multiple advantages. High-level concepts allow for more concise and informative network specifications than explicit specification of atomic connectivity, in the form of either tables (databases) or algorithms expressed in elementary operations. Furthermore, for most high-level concepts presently used by modelers, simulation software provides dedicated code to efficiently instantiate connections in parallel or generate informative visualizations [54]. A significant obstacle to the systematic use of high-level concepts at present is the lack of a standardized terminology in the field: The same term may describe slightly different connectivity concepts with different authors or simulation codes, especially with respect to underlying assumptions about constraints, such as the presence of autapses.

In contrast to other approaches, we do not propose a new formal language (e.g., NeuroML, NineML) or a software implementation (e.g., PyNN). Instead, we gather terminology already in use in the community, expose interrelationships, and provide precise definitions. The result is a recipe helping neuroscientists to present their modeling work such that it has a higher chance of being reproducible. Furthermore, the user-level documentation of simulation engines can make reference to the presented definitions of connectivity concepts and point out any differences compared to the implementation at hand. A continuing debate and refactoring of individual codes may ultimately lead to a maturation of the field and the convergence of simulation engines.

Historical context

For more than a decade, computational neuroscientists have been aware of the need to gather the notions used to describe the structure of network models and to establish common practices for network definitions [86]. Ideas on systematizing connectivity concepts were discussed at the “NEST Topology Library & LFP Modeling Workshop” in 2008 at the Norwegian University of Life Sciences (NMBU). This resulted in two publications on tabular [84] and graphical [54] network representations. The workshop “Creating, Documenting and Sharing Network Models” held at the University of Edinburgh in 2011 reviewed the situation at the time and resulted in a joint article by the participants [55] which set out the research program for the present work.

Other efforts in the community have focused on implementations in specific tools or sets of tools. Examples include the simulation package NetPyNE [99], the model description languages NeuroML [44] and NineML [57], the SONATA data format for describing large-scale network models [59], and the Open Source Brain repository for network models also used here [43]. To foster the adoption, interoperability, and standardization of description languages and pertaining tools, the INCF Working Group on Standardized Representations of Network Structures was established in 2018.

While earlier work focused on the formal description of single neurons, network structure has gained increasing importance. This was partly driven by the increasing complexity of network models but also by the need to reproduce the network models of others. The latter is highlighted by research on neuromorphic computing systems: verifying such a system requires that the same network model can be instantiated on a conventional computer and on the new system under investigation. Groundbreaking work was carried out by the European FACETS project (2005–2010) in conceiving the meta-simulator language PyNN as a common front end for software and hardware simulation engines [46]. In this way, once a network has been formulated in PyNN, it can be instantiated in a software simulation engine such as NEST or on a neuromorphic hardware system such as SpiNNaker [100]. High-level connectivity concepts such as “all-to-all” in PyNN must come with the guarantee that all back-end engines interpret them in the same way. Expanding a connectivity concept into elementary pairwise connect requests already at the level of the PyNN interpreter is not an option, as this would deprive a simulation engine of any chance of efficient parallelization. Another framework independent of a particular simulation engine is the Connection Set Algebra (CSA), described in Section “Description languages and simulators” [58].
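To make the parallelization argument concrete, the following self-contained Python sketch (a hypothetical helper, not PyNN code) expands an “all-to-all” rule into elementary pairwise connect requests; a back end handed only this flat list can no longer exploit the rule’s regularity:

```python
import itertools

def all_to_all(sources, targets, allow_autapses=True):
    """Expand the high-level 'all-to-all' rule into elementary
    pairwise (source, target) connect requests.  A back end that
    receives only this expanded list loses the regularity of the
    rule and with it the chance for efficient parallelization."""
    return [(s, t) for s, t in itertools.product(sources, targets)
            if allow_autapses or s != t]

pairs = all_to_all(range(3), range(3), allow_autapses=False)
# every ordered pair except self-connections: 3*3 - 3 = 6 requests
print(len(pairs))  # 6
```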

Although fairly complete simulation codes for biological neuronal networks predate these decisive years by at least another decade [101–103], a framework for expressing connectivity that is consistent across simulators and description languages has not been developed to the present day. The primary reason is not that fundamental concepts such as cortical layers, random networks, and spatially organized networks only emerged over time; all of these have been well known for many decades and have been used in network models. In 2008, Erik De Schutter [86] analyzed the situation by comparing the fields of computational neuroscience and systems biology. He placed the emergence of computational neuroscience as a field in the ‘second half of the eighties’ and the emergence of systems biology in the ‘late nineties’ of the last century. Within a few years, systems biology came up with a first version of the community standard SBML [104] for model description, while computational neuroscience, although ten years older, was still struggling to find common ground at the time of De Schutter’s review. He explains the observation by a difference in scientific culture. Systems biology started against the background of large international collaborations for jointly uncovering the genome, like the Human Genome Project (1990–2003); researchers were thus already aware of the need for standards and of the methods to achieve them. Computational neuroscience only began to gain experience in large-scale collaborations with initiatives like the foundation of the Allen Institute for Brain Science in the US in 2003 and the European FACETS project in 2005, the latter eventually leading to the European Human Brain Project. Therefore, when the need for standardized model descriptions became apparent around 2010, the community was still learning how to do big science. This may explain why it has taken so long to explore and discuss standardized model descriptions.

In addition, De Schutter points out that for the young discipline of systems biology, modern software development tools and the idea of open source were part of the culture from the beginning. Therefore, competence in research software engineering and the acceptance of software development as an integral part of scientific methodology may have been more widespread at that time. There is a long-standing awareness that software development in science, including computational neuroscience [105, 106], is subject to special conditions: most scientists are not trained programmers [107], and it is often difficult to receive proper credit for the time invested in developing software [108]. As a result, such tasks are regularly assigned low priority and progress is slow. It is the responsibility of senior scientists and science politics to adapt performance indicators to modern science and improve the conditions for sustainable research software engineering.

Although the present work restricts itself to a compilation of the concepts for describing network structure, we have learned from the history of SBML [104] and related efforts in computational neuroscience such as NeuroML [44] that it is important to integrate the different views of the community in a series of workshops. Chances for acceptance are higher if a proposed framework results from bottom-up experience and a community approach. Thus our results can only constitute a first draft which now needs to be discussed, elaborated, and maintained.

In systems biology it was customary to illustrate biochemical interactions by graphs as a third pillar of communication next to plain English and systems of equations, in order to support the explanation of complex networks. Only a few years after the initial definition of SBML, the idea emerged to also standardize the components of such illustrations as SBGN, the Systems Biology Graphical Notation [85]. In computational neuroscience, too, researchers regularly communicate the structure of a model by illustrations of different styles and levels of detail. While in systems biology the graphical notation expresses functional relations and temporal sequences depending on the diagram type of the standard, in computational neuroscience the primary use at present is the abstract representation of the anatomical connections of the neuronal network. In designing our draft graphical notation for computational neuroscience, we tried to respect the lessons learned while developing SBGN as reported in [85]. In particular, neither position nor color carries inherent meaning, and we started from notations already used in the literature. Le Novère et al. point out the relevance of software tools using the graphical notation for dissemination. In this spirit, the recent release of NEST Desktop [109] already adheres to our proposed graphical notation.


The actual richness of models goes beyond the scope of this work and is still growing, as recent progress in experimental neuroscience makes more comprehensive anatomical and physiological data available to modelers. This data availability fuels the research field of connectomics and has led to the advent of large models with detailed data-driven connectivity [2, 3]. These models may have specific information not only on which neurons are connected but also on the location and other properties of the individual synapses. The models typically combine a bottom-up approach with conceptual assumptions. Abstractions are crucial for generalization and for testing hypotheses on the specifics of the connectivity. While the complexity of such models cannot be fully reduced, they may still benefit from guidelines for concise and reproducible descriptions of their connectivity.

Apart from complex, data-driven models, various high-level connectivity patterns exist which we have not discussed here. The connectivity rules used by modelers so far and considered here mostly yield regular and random graphs. In regular graphs, every node is linked to a fixed number of other nodes according to a standard pattern; in random graphs, all connections are established probabilistically. We have thereby neglected more complex topologies such as small-world networks, which lie between regular and random graphs and are characterized by short path lengths and a large clustering coefficient [110, 111]. Another example is scale-free networks, which are characterized by a power-law degree distribution: a small number of nodes (so-called “hubs”) have a very high degree, while most of the remaining nodes have only a few connections [73, 112]. Just as for data-driven models, future work may consider standards for consistently describing such networks.
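As an illustration of how small-world networks interpolate between the regular and random cases, the following plain-Python sketch implements a Watts–Strogatz-style rewiring of a directed ring lattice; the function name and parameters are illustrative and not part of any simulator API:

```python
import random

def watts_strogatz(n, k, p, seed=42):
    """Directed ring lattice of n nodes, each linked to its k nearest
    neighbors on one side; every edge is rewired with probability p.
    p = 0 yields a regular graph, p = 1 an essentially random one,
    and intermediate p gives small-world networks."""
    rng = random.Random(seed)
    edges = {(i, (i + j) % n) for i in range(n) for j in range(1, k + 1)}
    rewired = set()
    for (i, j) in edges:
        if rng.random() < p:
            new_j = rng.randrange(n)
            # redraw to avoid self-loops and duplicate edges
            while new_j == i or (i, new_j) in edges or (i, new_j) in rewired:
                new_j = rng.randrange(n)
            rewired.add((i, new_j))
        else:
            rewired.add((i, j))
    return rewired

g = watts_strogatz(100, 2, 0.1)
print(len(g))  # number of directed edges stays n*k = 200
```

Because rewiring only moves edge targets, every node keeps its fixed out-degree k while in-degrees become variable, which is precisely the departure from a purely regular graph.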

Furthermore, the brains of many species, including mammals, follow a hierarchical organization, having different properties at different spatial scales. For instance, cerebral cortical areas are composed of layers, which contain populations of excitatory and inhibitory neurons, which may in turn be divided into subpopulations (cf. Fig 10). Many brain networks also have a clustered or modular structure, for example cortical networks consisting of macro- and minicolumns [113, 114]. This hierarchically modular organization suggests a multi-level description, where on the higher level not all details of the lower level are expressed for clarity. Our graphical notation already allows for nested populations, but the consistent description of hierarchical and modular networks requires further work.

Another aspect of biological neural networks we have neglected in this study is their adaptation over time via developmental processes and plasticity. Plastic networks can, for instance, enable the modeling of inter-individual differences, potentially adding a layer of stochasticity beyond that of the initial structure. While the resulting networks are generally not easily captured in simple rules, compact accounts may be achieved by describing the initial state along with the growth or plasticity rules.

Neuronal network simulators should provide efficient high-level connectivity routines relevant for computational neuroscience. Which routines are available may, however, not solely depend on the need of the neuroscientist for a specific connection rule but also on algorithmic efficiency. The rule “random, fixed total number”, for instance, is non-trivial to parallelize [93]. Vice versa, which rules are already implemented in simulators may influence which ones neuroscientists eventually use. This relates to the general question of how instruments shape the development of scientific theories [115–117]. Our literature review on published models shows that explicitly coding connectivity in general-purpose languages instead of using simulators with high-level commands is still quite common (Fig 4). A possible reason for this observation is that the effort to learn a simulator language outweighs that of a custom implementation as long as networks are small and the connectivity simple. The models in our review predominantly do not require a significant amount of computational resources, and the chosen connectivity rules are not complicated to implement from scratch. We predict that the use of generic simulation codes will increase as models become more complex and the requirements for reproducible science and the publication of code become more strict. In turn, this will hopefully trigger an expansion of the simulators’ repertoire of well-described and efficiently implemented connection routines. A challenge in this context is posed by the increasing use of high-performance computing facilities and specialized neuromorphic hardware [118, 119]. On future exascale supercomputers, highly efficient solutions for the parallel implementation of connectivity will be particularly important.
Neuromorphic hardware is often constrained with regard to the neural network connectivity it supports, and the identification of relevant connectivity concepts can help decide which types of connectivity to enable. The concepts already in use form a starting point for thinking about which high-level connectivity patterns future versions of simulation engines should provide.
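A serial version of the “random, fixed total number” rule is itself straightforward; the difficulty noted above lies in partitioning the fixed number of draws consistently across parallel processes. The helper below is a hypothetical plain-Python sketch of the serial case, with multapses allowed:

```python
import random

def fixed_total_number(sources, targets, n_syn, seed=1234):
    """Serial sketch of the 'random, fixed total number' rule:
    draw exactly n_syn source-target pairs uniformly at random,
    with repeated pairs (multapses) allowed.  A parallel version
    must split the n_syn draws over processes such that the total
    and the statistics are preserved."""
    rng = random.Random(seed)
    return [(rng.choice(sources), rng.choice(targets))
            for _ in range(n_syn)]

conns = fixed_total_number(list(range(80)), list(range(20)), 1000)
print(len(conns))  # exactly 1000 connections in every realization
```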


This work constitutes a starting point rather than an end point. Like most existing network models, the concepts we describe are still limited with regard to the connectivity structures observed in neuroanatomy. As models advance in capturing the complex multi-scale organization of the brain, this needs to be reflected in concepts and graphical notation, such that researchers can always communicate on the appropriate level of resolution while having access to all details if needed. It is our hope that the methods laid down here help to structure the debate.

Materials and methods

Reviewed network models

Table 2 lists all articles included in the literature review in Section “Networks used in the computational neuroscience community”.

Table 2. Alphabetical list of articles describing the reviewed network models.

Balanced random network

The balanced random network model used in Figs 1 and 9 is based on the model introduced by Brunel [1]. Our implementation extends the script that is part of the NEST source code by the option to switch between a “fixed in-degree” and a “fixed out-degree” version. Details of the model description are summarized in Figs 12–14 and the parameters are given in Fig 15.

Fig 12. Description of balanced random network models following the guidelines of Nordlie et al. [84].

Distinction between “fixed in-degree” and “fixed out-degree” versions.
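The distinction between the two versions can be sketched in plain Python (hypothetical helpers, not the actual NEST implementation): under “fixed in-degree” each target draws its k sources at random, whereas under “fixed out-degree” each source draws its k targets:

```python
import random

def fixed_in_degree(sources, targets, k, rng):
    """Every target neuron receives exactly k connections; the
    in-degree is fixed while out-degrees vary across sources."""
    return [(rng.choice(sources), t) for t in targets for _ in range(k)]

def fixed_out_degree(sources, targets, k, rng):
    """Every source neuron sends exactly k connections; the
    out-degree is fixed while in-degrees vary across targets."""
    return [(s, rng.choice(targets)) for s in sources for _ in range(k)]

rng = random.Random(0)
c_in = fixed_in_degree(range(100), range(25), 4, rng)
print(len(c_in))  # 25 targets x 4 = 100 connections
```

Both rules fix the same total number of connections here, but they constrain different marginals of the degree distribution, which is what the two model versions contrast.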


The authors would like to thank Daniel Hjertholm for inspiring work on testing connectivity generation schemes, Sebastian Spreizer for immediately adopting the graphical notation in NEST Desktop, Espen Hagen for detailed comments on the manuscript, Hannah Bos for fruitful discussions, and our colleagues in the Simulation and Data Laboratory Neuroscience of the Jülich Supercomputing Centre for continuous collaboration. The authors gratefully acknowledge the computing time granted by the JARA Vergabegremium and provided on the JARA Partition part of the supercomputer JURECA at Forschungszentrum Jülich (computation grant JINB33).


1. Brunel N. Dynamics of sparsely connected networks of excitatory and inhibitory spiking neurons. Journal of Computational Neuroscience. 2000; 8(3): 183–208. pmid:10809012
2. Billeh YN, Cai B, Gratiy SL, Dai K, Iyer R, Gouwens NW, et al. Systematic Integration of Structural and Functional Data into Multi-scale Models of Mouse Primary Visual Cortex. Neuron. 2020; 106(3): 388–403.e18. pmid:32142648
3. Markram H, Muller E, Ramaswamy S, Reimann MW, Abdellah M, Sanchez CA, et al. Reconstruction and simulation of neocortical microcircuitry. Cell. 2015; 163(2): 456–492. pmid:26451489
4. Reimann MW, King JG, Muller EB, Ramaswamy S, Markram H. An algorithm to predict the connectome of neural microcircuits. Frontiers in Computational Neuroscience. 2015; 9. pmid:26500529
5. Schmidt M, Bakker R, Shen K, Bezgin G, Diesmann M, van Albada SJ. A multi-scale layer-resolved spiking network model of resting-state dynamics in macaque visual cortical areas. PLOS Computational Biology. 2018; 14(10): e1006359. pmid:30335761
6. Ippen T, Eppler JM, Plesser HE, Diesmann M. Constructing Neuronal Network Models in Massively Parallel Environments. Frontiers in Neuroinformatics. 2017; 11. pmid:28559808
7. van Albada SJ, Morales-Gregorio A, Dickscheid T, Goulas A, Bakker R, Bludau S, et al. Bringing Anatomical Information into Neuronal Network Models. arXiv preprint. 2020.
8. Cook SJ, Jarrell TA, Brittin CA, Wang Y, Bloniarz AE, Yakovlev MA, et al. Whole-animal connectomes of both Caenorhabditis elegans sexes. Nature. 2019; 571(7763): 63–71. pmid:31270481
9. Roostaei T, Nazeri A, Sahraian MA, Minagar A. The human cerebellum: a review of physiologic neuroanatomy. Neurologic Clinics. 2014; 32(4): 859–869. pmid:25439284
10. Braitenberg V, Schüz A. Cortex: Statistics and Geometry of Neuronal Connectivity. 2nd ed. Springer-Verlag, Berlin; 1998.
11. Schüz A, Sultan F. Brain Connectivity and Brain Size. In: Squire L, Albright T, Bloom F, Gage F, Spitzer N, editors. Encyclopedia of Neuroscience. vol. 2. Amsterdam, Netherlands: Academic Elsevier; 2009. p. 317–326.
12. Binzegger T. A Quantitative Map of the Circuit of Cat Primary Visual Cortex. Journal of Neuroscience. 2004; 24(39): 8441–8453. pmid:15456817
13. Narayanan RT, Egger R, Johnson AS, Mansvelder HD, Sakmann B, De Kock CP, et al. Beyond columnar organization: cell type- and target layer-specific principles of horizontal axon projection patterns in rat vibrissal cortex. Cerebral Cortex. 2015; 25(11): 4450–4468. pmid:25838038
14. Feldmeyer D, Qi G, Emmenegger V, Staiger JF. Inhibitory interneurons and their circuit motifs in the many layers of the barrel cortex. Neuroscience. 2018; 368: 132–151. pmid:28528964
15. Ikeda K, Bekkers JM. Autapses. Current Biology. 2006; 16(9): R308. pmid:16682332
16. Kasthuri N, Hayworth KJ, Berger DR, Schalek RL, Conchello JA, Knowles-Barley S, et al. Saturated reconstruction of a volume of neocortex. Cell. 2015; 162(3): 648–661. pmid:26232230
17. Ercsey-Ravasz M, Markov NT, Lamy C, Van Essen DC, Knoblauch K, Toroczkai Z, et al. A predictive network model of cerebral cortical connectivity based on a distance rule. Neuron. 2013; 80(1): 184–197. pmid:24094111
18. Schmidt M, Bakker R, Hilgetag CC, Diesmann M, van Albada SJ. Multi-scale account of the network structure of macaque visual cortex. Brain Structure and Function. 2018; 223(3): 1409–1435. pmid:29143946
19. Perin R, Berger TK, Markram H. A synaptic organizing principle for cortical neuronal groups. Proceedings of the National Academy of Sciences. 2011; 108(13): 5419–5424. pmid:21383177
20. Packer AM, Yuste R. Dense, Unspecific Connectivity of Neocortical Parvalbumin-Positive Interneurons: A Canonical Microcircuit for Inhibition? Journal of Neuroscience. 2011; 31(37): 13260–13271. pmid:21917809
21. Hellwig B. A quantitative analysis of the local connectivity between pyramidal neurons in layers 2/3 of the rat visual cortex. Biol Cybern. 2000; 82: 111–121. pmid:10664098
22. Stepanyants A, Hirsch J, Martinez LM, Kisvárday ZF, Ferecsko AS, Chklovskii DB. Local potential connectivity in cat primary visual cortex. Cerebral Cortex. 2007; 18(1): 13–28. pmid:17420172
23. Binzegger T, Douglas RJ, Martin KAC. Stereotypical Bouton Clustering of Individual Neurons in Cat Primary Visual Cortex. Journal of Neuroscience. 2007; 27(45): 12242–12254. pmid:17989290
24. Voges N, Schüz A, Aertsen A, Rotter S. A modeler’s view on the spatial structure of intrinsic horizontal connectivity in the neocortex. Progress in Neurobiology. 2010; 92(3): 277–292. pmid:20685378
25. Muir DR, Douglas RJ. From Neural Arbors to Daisies. Cerebral Cortex. 2011; 21: 1118–1133. pmid:20884721
26. Voges N, Guijarro C, Aertsen A, Rotter S. Models of cortical networks with long-range patchy projections. Journal of Computational Neuroscience. 2010; 28(1): 137–154. pmid:19866352
27. Bosking WH, Zhang Y, Schofield B, Fitzpatrick D. Orientation selectivity and the arrangement of horizontal connections in tree shrew striate cortex. Journal of Neuroscience. 1997; 17(6): 2112–2127. pmid:9045738
28. Ko H, Hofer SB, Pichler B, Buchanan KA, Sjöström PJ, Mrsic-Flogel TD. Functional specificity of local synaptic connections in neocortical networks. Nature. 2011; 473(7345): 87–91. pmid:21478872
29. Wertz A, Trenholm S, Yonehara K, Hillier D, Raics Z, Leinweber M, et al. Single-cell–initiated monosynaptic tracing reveals layer-specific cortical network modules. Science. 2015; 349(6243): 70–74. pmid:26138975
30. Lee WCA, Bonin V, Reed M, Graham BJ, Hood G, Glattfelder K, et al. Anatomy and function of an excitatory network in the visual cortex. Nature. 2016; 532(7599): 370–374. pmid:27018655
31. Goulas A, Betzel RF, Hilgetag CC. Spatiotemporal ontogeny of brain wiring. Science Advances. 2019; 5(6): eaav9694. pmid:31206020
32. Song S, Sjöström P, Reigl M, Nelson S, Chklovskii D. Highly nonrandom features of synaptic connectivity in local cortical circuits. PLoS Biol. 2005; 3(3): e68. pmid:15737062
33. Meunier D, Lambiotte R, Bullmore ET. Modular and hierarchically modular organization of brain networks. Frontiers in Neuroscience. 2010; 4: 200. pmid:21151783
34. Bassett DS, Bullmore E. Small-World Brain Networks. The Neuroscientist. 2006; 12(6): 512–523. pmid:17079517
35. Bassett DS, Bullmore ET. Small-world brain networks revisited. The Neuroscientist. 2017; 23(5): 499–516. pmid:27655008
36. Abeles M. Corticonics: Neural Circuits of the Cerebral Cortex. 1st ed. Cambridge: Cambridge University Press; 1991.
37. Stepanyants A, Martinez LM, Ferecsko AS, Kisvárday ZF. The fractions of short- and long-range connections in the visual cortex. PNAS. 2009; 106(9): 3555–3560. pmid:19221032
38. van Albada SJ, Helias M, Diesmann M. Scalability of Asynchronous Networks Is Limited by One-to-One Mapping between Effective Connectivity and Correlations. PLOS Comput Biol. 2015; 11(9): e1004490. pmid:26325661
39. Senk J, Kriener B, Hagen E, Bos H, Plesser HE, Gewaltig MO, et al. Connectivity Concepts for Neuronal Networks. NEST Conference 2019; 2019.
40. Senk J, Kriener B, Djurfeldt M, Voges N, Schüttler L, Gramelsberger G, et al. Systematic textual and graphical description of connectivity. Bernstein Conference 2020 (G-Node); 2020.
41. Peterson BE, Healy MD, Nadkarni PM, Miller PL, Shepherd GM. ModelDB: An Environment for Running and Storing Computational Models and Their Results Applied to Neuroscience. Journal of the American Medical Informatics Association. 1996; 3(6): 389–398. pmid:8930855
42. McDougal RA, Morse TM, Carnevale T, Marenco L, Wang R, Migliore M, et al. Twenty years of ModelDB and beyond: building essential modeling tools for the future of neuroscience. Journal of Computational Neuroscience. 2017; 42(1): 1–10. pmid:27629590
43. Gleeson P, Cantarelli M, Marin B, Quintana A, Earnshaw M, Sadeh S, et al. Open Source Brain: A Collaborative Resource for Visualizing, Analyzing, Simulating, and Developing Standardized Models of Neurons and Circuits. Neuron. 2019; 103(3): 395–411.e5. pmid:31201122
44. Gleeson P, Crook S, Cannon RC, Hines ML, Billings GO, Farinella M, et al. NeuroML: A Language for Describing Data Driven Models of Neurons and Networks with a High Degree of Biological Detail. PLOS Comput Biol. 2010; 6(6): e1000815. pmid:20585541
45. Cannon RC, Gleeson P, Crook S, Ganapathy G, Marin B, Piasini E, et al. LEMS: a language for expressing complex biological models in concise and hierarchical form and its use in underpinning NeuroML 2. Front Neuroinformatics. 2014; 8: 21. pmid:25309419
46. Davison A, Brüderle D, Eppler JM, Kremkow J, Muller E, Pecevski D, et al. PyNN: a common interface for neuronal network simulators. Front Neuroinformatics. 2009; 2(11): 10. pmid:19194529
47. Morrison A, Aertsen A, Diesmann M. Spike-Timing Dependent Plasticity in Balanced Random Networks. Neural Comput. 2007; 19: 1437–1467. pmid:17444756
48. Diaz-Pier S, Naveau M, Butz-Ostendorf M, Morrison A. Automatic Generation of Connectivity for Large-Scale Neuronal Network Models through Structural Plasticity. Front Neuroanatomy. 2016; 10: 57. pmid:27303272
49. Crook SM, Davison AP, Plesser HE. Learning from the past: approaches for reproducibility in computational neuroscience. In: 20 Years of Computational Neuroscience. Springer; 2013. p. 73–102.
50. Rougier NP, Hinsen K, Alexandre F, Arildsen T, Barba LA, Benureau FC, et al. Sustainable computational science: the ReScience initiative. PeerJ Computer Science. 2017; 3: e142. pmid:34722870
51. Gutzen R, von Papen M, Trensch G, Quaglio P, Grün S, Denker M. Reproducible Neural Network Simulations: Statistical Methods for Model Validation on the Level of Network Activity Data. Front Neuroinformatics. 2018; 12: 90. pmid:30618696
52. Pauli R, Weidel P, Kunkel S, Morrison A. Reproducing polychronization: a guide to maximizing the reproducibility of spiking network models. Front Neuroinformatics. 2018; 12(46). pmid:30123121
53. Van der Loos H, Glaser EM. Autapses in neocortex cerebri: synapses between a pyramidal cell’s axon and its own dendrites. Brain Res. 1972; 48: 355–360. pmid:4645210
54. Nordlie E, Plesser HE. Visualizing neuronal network connectivity with connectivity pattern tables. Frontiers in Neuroinformatics. 2010; 3: 39. pmid:20140265
55. Crook SM, Bednar JA, Berger S, Cannon R, Davison AP, Djurfeldt M, et al. Creating, documenting and sharing network models. Network: Computation in Neural Systems. 2012; 23(4): 131–149. pmid:22994683
56. Rotter S, Diesmann M. Exact digital simulation of time-invariant linear systems with applications to neuronal modeling. Biological Cybernetics. 1999; 81(5-6): 381–402. pmid:10592015
57. Raikov I, Cannon R, Clewley R, Cornelis H, Davison A, Schutter ED, et al. NineML: the network interchange for neuroscience modeling language. BMC Neuroscience. 2011; 12: 1–2.
58. Djurfeldt M. The Connection-set Algebra—A Novel Formalism for the Representation of Connectivity Structure in Neuronal Network Models. Neuroinformatics. 2012; 10: 287–304. pmid:22437992
59. Dai K, Hernando J, Billeh YN, Gratiy SL, Planas J, Davison AP, et al. The SONATA data format for efficient description of large-scale network models. PLOS Computational Biology. 2020; 16: 1–24. pmid:32092054
60. Hammarlund P, Ekeberg Ö. Large neural network simulations on multiple hardware platforms. J Comput Neurosci. 1998; 5(4): 443–59. pmid:9877024
61. Yavuz E, Turner J, Nowotny T. GeNN: a code generation framework for accelerated brain simulations. Scientific reports. 2016; 6(1): 1–14. pmid:26740369
62. Gewaltig MO, Diesmann M. NEST (NEural Simulation Tool). Scholarpedia. 2007; 2(4): 1430.
63. Eppler JM. PyNEST: A convenient interface to the NEST simulator. Frontiers in Neuroinformatics. 2008; 2.
64. Fardet T, Vennemo SB, Mitchell J, Mørk H, Graber S, Hahne J, et al. NEST 2.20.1. Zenodo; 2020.
65. Hines M, Carnevale NT. The NEURON Simulation Environment. Neural Comput. 1997; 9: 1179–1209. pmid:9248061
66. Carnevale NT, Hines ML. The NEURON Book. Cambridge: Cambridge University Press; 2006.
67. Abi Akar N, Biddiscombe J, Cumming B, Huber F, Kabic M, Karakasis V, et al. arbor-sim/arbor: Arbor Library v0.5. Zenodo; 2021.
68. Abi Akar N, Cumming B, Karakasis V, Küsters A, Klijn W, Peyser A, et al. Arbor—A Morphologically-Detailed Neural Network Simulation Library for Contemporary High-Performance Computing Architectures. In: 2019 27th Euromicro International Conference on Parallel, Distributed and Network-Based Processing (PDP); 2019.
69. Goodman D, Brette R. Brian: a simulator for spiking neural networks in Python. Front Neuroinformatics. 2008; 2. pmid:19115011
70. Ray S, Bhalla US. PyMOOSE: interoperable scripting in Python for MOOSE. Frontiers Neuroinf. 2008; 2: 6. pmid:19129924
71. Bekolay T, Bergstra J, Hunsberger E, DeWolf T, Stewart TC, Rasmussen D, et al. Nengo: a Python tool for building large-scale functional brain models. Front Neuroinformatics. 2013; 7.
72. Djurfeldt M, Davison AP, Eppler JM. Efficient generation of connectivity in neuronal networks from simulator-independent descriptions. Frontiers in Neuroinformatics. 2014; 8: 43. pmid:24795620
73. Albert R, Barabási AL. Statistical mechanics of complex networks. Rev Mod Phys. 2002; 74: 47–97.
74. Erdős P, Rényi A. On random graphs. Publications Mathematicae. 1959; 6: 290–297.
75. Hjertholm D. Statistical tests for connection algorithms for structured neural networks [master’s thesis]. Norwegian University of Life Sciences. Ås, Norway; 2013.
76. Sheng TK. The distance between two random points in plane regions. Adv Appl Prob. 1985; 17(4): 748–773.
77. Hermann C, Barthelemy M, Provero P. Connectivity distribution of spatial networks. Physical Review E. 2003; 68: 026128.
78. Haenggi M. On distances in uniformly random networks. IEEE Transactions on Information Theory. 2005; 51(10): 3584–3586.
79. Moltchanov D. Distance distributions in random networks. Ad Hoc Networks. 2012; 10(6): 1146–1166.
80. Yger P, El Boustani S, Destexhe A, Fregnac Y. Topologically invariant macroscopic statistics in balanced networks of conductance-based integrate-and-fire neurons. J Comput Neurosci. 2009; 31: 229–245.
81. Gilbert CD, Wiesel TN. Clustered intrinsic connections in cat visual cortex. Journal of Neuroscience. 1983; 5: 1116–1133. pmid:6188819
82. Amir Y, Harel M, Malach R. Cortical hierarchy reflected in the organization of intrinsic connections in macaque monkey visual cortex. Journal of Comparative Neurology. 1993; 334(1): 19–46. pmid:8408757
83. Lund JS, Yoshioka T, Levitt JB. Comparison of intrinsic connectivity in different areas of macaque monkey cerebral cortex. Cerebral Cortex. 1993; 3(2): 148–162. pmid:8490320
84. Nordlie E, Gewaltig MO, Plesser HE. Towards Reproducible Descriptions of Neuronal Network Models. PLoS Computational Biology. 2009; 5(8): e1000456. pmid:19662159
  85. 85. Novère NL, Hucka M, Mi H, Moodie S, Schreiber F, Sorokin A, et al. The Systems Biology Graphical Notation. Nature Biotechnology. 2009; 27(8): 735–741. pmid:19668183
  86. 86. De Schutter E. Why are computational neuroscience and systems biology so separate? PLoS Comput Biol. 2008; 4(5): 78. pmid:18516226
  87. 87. Denève S, Machens CK. Efficient codes and balanced networks. Nature Neuroscience. 2016; 19(3): 375–382. pmid:26906504
  88. 88. Jiang HJ, van Albada SJ. A cortical microcircuit model with three critical interneuron groups. Bernstein Conference 2019 (G-Node); 2019.
  89. 89. Potjans TC, Diesmann M. The Cell-Type Specific Cortical Microcircuit: Relating Structure and Activity in a Full-Scale Spiking Network Model. Cerebral Cortex. 2014; 24(3): 785–806. pmid:23203991
  90. 90. van Albada SJ, Rowley AG, Senk J, Hopkins M, Schmidt M, Stokes AB, et al. Performance Comparison of the Digital Neuromorphic Hardware SpiNNaker and the Neural Network Simulation Software NEST for a Full-Scale Cortical Microcircuit Model. Frontiers in Neuroscience. 2018; 12. pmid:29875620
  91. Rhodes O, Peres L, Rowley AGD, Gait A, Plana LA, Brenninkmeijer C, et al. Real-time cortical simulation on neuromorphic hardware. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences. 2019; 378(2164): 20190160. pmid:31865885
  92. Shimoura RO, Kamiji NL, Pena RFO, Cordeiro VL, Ceballos CC, Cecilia R, et al. [Re] The cell-type specific cortical microcircuit: relating structure and activity in a full-scale spiking network model. ReScience. 2018; 4.
  93. Knight JC, Nowotny T. GPUs Outperform Current HPC and Neuromorphic Solutions in Terms of Speed and Energy When Simulating a Highly-Connected Cortical Model. Frontiers in Neuroscience. 2018; 12. pmid:30618570
  94. Knight JC, Komissarov A, Nowotny T. PyGeNN: A Python Library for GPU-Enhanced Neural Networks. Frontiers in Neuroinformatics. 2021; 15. pmid:33967731
  95. Golosio B, Tiddia G, Luca CD, Pastorelli E, Simula F, Paolucci PS. Fast Simulations of Highly-Connected Spiking Cortical Models Using GPUs. Frontiers in Computational Neuroscience. 2021; 15. pmid:33679358
  96. Romaro C, Najman FA, Lytton WW, Roque AC, Dura-Bernal S. NetPyNE Implementation and Scaling of the Potjans-Diesmann Cortical Microcircuit Model. Neural Computation. 2021; 33(7): 1993–2032. pmid:34411272
  97. Voges N, Perrinet L. Complex dynamics in recurrent cortical networks based on spatially realistic connectivities. Frontiers in Computational Neuroscience. 2012; 6(41): 1–19. pmid:22787446
  98. Kisvárday ZF, Eysel UT. Cellular organization of reciprocal patchy networks in layer III of cat visual cortex (area 17). Neuroscience. 1992; 46(2): 275–286. pmid:1542406
  99. Dura-Bernal S, Suter BA, Gleeson P, Cantarelli M, Quintana A, Rodriguez F, et al. NetPyNE, a tool for data-driven multiscale modeling of brain circuits. eLife. 2019; 8: e44494. pmid:31025934
  100. Furber SB, Galluppi F, Temple S, Plana LA. The SpiNNaker Project. Proc IEEE. 2014; 102(5): 652–665.
  101. Hines ML, Carnevale NT. NEURON: a tool for neuroscientists. Neuroscientist. 2001; 7(2): 123–135. pmid:11496923
  102. Bower JM, Beeman D. The Book of GENESIS: Exploring realistic neural models with the GEneral NEural SImulation System. New York: TELOS, Springer-Verlag; 1995.
  103. Diesmann M, Gewaltig MO, Aertsen A. SYNOD: An Environment for Neural Systems Simulations—Language Interface and Tutorial. 76100 Rehovot, Israel: The Weizmann Institute of Science; 1995. Technical Report GC-AA/95-3.
  104. Hucka M, Finney A, Sauro HM, Bolouri H, Doyle JC, Kitano H, et al. The systems biology markup language (SBML): a medium for representation and exchange of biochemical network models. Bioinformatics. 2003; 19(4): 524–531. pmid:12611808
  105. Diesmann M, Gewaltig MO. NEST: An Environment for Neural Systems Simulations. In: Plesser T, Macho V, editors. Beiträge zum Heinz-Billing-Preis 2001. vol. 58 of Forschung und wissenschaftliches Rechnen. Göttingen: Gesellschaft für wissenschaftliche Datenverarbeitung mbH; 2003. p. 43–70.
  106. Muller E, Bednar JA, Diesmann M, Gewaltig MO, Hines M, Davison AP. Python in neuroscience. Frontiers in Neuroinformatics. 2015; 9: 11. pmid:25926788
  107. Baxter S, Day S, Fetrow J, Reisinger S. Scientific Software Development Is Not an Oxymoron. PLOS Comput Biol. 2006; 2(9): e87. pmid:16965174
  108. Akhmerov A, Cruz M, Drost N, Hof C, Knapen T, Kuzak M, et al. Raising the Profile of Research Software. Zenodo; 2019.
  109. Spreizer S, Senk J, Rotter S, Diesmann M, Weyers B. NEST Desktop, an Educational Application for Neuroscience. eNeuro. 2021; 8(6): ENEURO.0274–21.2021. pmid:34764188
  110. Strogatz SH. Exploring complex networks. Nature. 2001; 410: 268–276. pmid:11258382
  111. Newman ME. The structure and function of complex networks. SIAM review. 2003; 45(2): 167–256.
  112. Barabási AL. Scale-free networks: a decade and beyond. Science. 2009; 325(5939): 412–413. pmid:19628854
  113. Buxhoeveden DP, Casanova MF. The minicolumn hypothesis in neuroscience. Brain. 2002; 125(5): 935–951. pmid:11960884
  114. Molnár Z, Rockland KS. Cortical columns. In: Neural Circuit and Cognitive Development. Elsevier; 2020. p. 103–126.
  115. Gramelsberger G, editor. From Science to Computational Sciences. Studies in the History of Computing and its Influence on Today’s Sciences. diaphanes/The University of Chicago Press, Zürich/Berlin; 2015.
  116. Fischer P, Gramelsberger G, Hoffmann C, Hofmann H, Rickli H, Rheinberger HJ, editors. Natures of Data. A Discussion between Biology, History and Philosophy of Science and Art. diaphanes/The University of Chicago Press, Zürich/Berlin; 2020.
  117. Gramelsberger G. Operative Epistemologie. (Re-)Organisation von Anschauung und Erfahrung durch die Formkraft der Mathematik. Meiner, Hamburg; 2020.
  118. Nawrocki RA, Voyles RM, Shaheen SE. A mini review of neuromorphic architectures and implementations. IEEE Transactions on Electron Devices. 2016; 63(10): 3819–3829.
  119. Young AR, Dean ME, Plank JS, Rose GS. A review of spiking neuromorphic hardware communication systems. IEEE Access. 2019; 7: 135606–135620.
  120. Bartos M, Vida I, Frotscher M, Meyer A, Monyer H, Geiger JRP, et al. Fast synaptic inhibition promotes synchronized gamma oscillations in hippocampal interneuron networks. Proceedings of the National Academy of Sciences. 2002; 99(20): 13222–13227. pmid:12235359
  121. Naze S, Bernard C, Jirsa V. Computational Modeling of Seizure Dynamics Using Coupled Neuronal Networks: Factors Shaping Epileptiform Activity. PLOS Computational Biology. 2015; 11(5): e1004209. pmid:25970348
  122. Nicola W, Clopath C. Supervised learning in spiking neural networks with FORCE training. Nature Communications. 2017; 8(1). pmid:29263361
  123. Chauhan T, Masquelier T, Montlibert A, Cottereau BR. Emergence of Binocular Disparity Selectivity through Hebbian Learning. The Journal of Neuroscience. 2018; 38(44): 9563–9578. pmid:30242050
  124. Pilly PK, Grossberg S. Spiking Neurons in a Hierarchical Self-Organizing Map Model Can Learn to Develop Spatial and Temporal Properties of Entorhinal Grid Cells and Hippocampal Place Cells. PLoS One. 2013; 8(4): e60599. pmid:23577130
  125. Cohen MX. Fluctuations in Oscillation Frequency Control Spike Timing and Coordinate Neural Networks. Journal of Neuroscience. 2014; 34(27): 8988–8998. pmid:24990919
  126. Cutsuridis V. Does Abnormal Spinal Reciprocal Inhibition Lead To Co-Contraction Of Antagonist Motor Units? A Modeling Study. International Journal of Neural Systems. 2007; 17(04): 319–327. pmid:17696295
  127. Ramirez-Mahaluf JP, Roxin A, Mayberg HS, Compte A. A Computational Model of Major Depression: the Role of Glutamate Dysfunction on Cingulo-Frontal Network Dynamics. Cerebral Cortex. 2017; p. bhv249.
  128. del Molino LCG, Yang GR, Mejias JF, Wang XJ. Paradoxical response reversal of top-down modulation in cortical circuits with three interneuron types. eLife. 2017; 6.
  129. Raudies F, Zilli EA, Hasselmo ME. Deep Belief Networks Learn Context Dependent Behavior. PLoS ONE. 2014; 9(3): e93250. pmid:24671178
  130. Destexhe A. Self-sustained asynchronous irregular states and Up–Down states in thalamic, cortical and thalamocortical networks of nonlinear integrate-and-fire neurons. Journal of Computational Neuroscience. 2009; 27(3): 493–506. pmid:19499317
  131. Rennó-Costa C, Tort ABL. Place and Grid Cells in a Loop: Implications for Memory Function and Spatial Coding. Journal of Neuroscience. 2017; 37(34): 8062–8076. pmid:28701481
  132. Gunn BG, Cox CD, Chen Y, Frotscher M, Gall CM, Baram TZ, et al. The Endogenous Stress Hormone CRH Modulates Excitatory Transmission and Network Physiology in Hippocampus. Cerebral Cortex. 2017; 27(8): 4182–4198. pmid:28460009
  133. Sadeh S, Silver RA, Mrsic-Flogel TD, Muir DR. Assessing the Role of Inhibition in Stabilizing Neocortical Networks Requires Large-Scale Perturbation of the Inhibitory Population. The Journal of Neuroscience. 2017; 37(49): 12050–12067. pmid:29074575
  134. Hu B, Niebur E. A recurrent neural model for proto-object based contour integration and figure-ground segregation. Journal of Computational Neuroscience. 2017; 43(3): 227–242. pmid:28924628
  135. Stevens JLR, Law JS, Antolik J, Bednar JA. Mechanisms for Stable, Robust, and Adaptive Development of Orientation Maps in the Primary Visual Cortex. Journal of Neuroscience. 2013; 33(40): 15747–15766. pmid:24089483
  136. Huang CW, Tsai JJ, Huang CC, Wu SN. Experimental and simulation studies on the mechanisms of levetiracetam-mediated inhibition of delayed-rectifier potassium current (KV3.1): contribution to the firing of action potentials. Journal of Physiology and Pharmacology. 2009; 60(4): 37–47. pmid:20065495
  137. Stroud JP, Porter MA, Hennequin G, Vogels TP. Motor primitives in space and time via targeted gain modulation in cortical networks. Nature Neuroscience. 2018; 21(12): 1774–1783. pmid:30482949
  138. Humphries MD, Gurney KN. The role of intra-thalamic and thalamocortical circuits in action selection. Network: Computation in Neural Systems. 2002; 13(1): 131–156. pmid:11873842
  139. Strüber M, Sauer JF, Jonas P, Bartos M. Distance-dependent inhibition facilitates focality of gamma oscillations in the dentate gyrus. Nature Communications. 2017; 8(1). pmid:28970502
  140. Kazanovich Y, Borisyuk R. An Oscillatory Neural Model of Multiple Object Tracking. Neural Computation. 2006; 18(6): 1413–1440. pmid:16764509
  141. Tikidji-Hamburyan RA, Canavier CC. Shunting Inhibition Improves Synchronization in Heterogeneous Inhibitory Interneuronal Networks with Type 1 Excitability Whereas Hyperpolarizing Inhibition Is Better for Type 2 Excitability. eNeuro. 2020; 7(3): ENEURO.0464–19.2020. pmid:32198159
  142. Kuchibhotla KV, Gill JV, Lindsay GW, Papadoyannis ES, Field RE, Sten TAH, et al. Parallel processing by cortical inhibition enables context-dependent behavior. Nature Neuroscience. 2016; 20(1): 62–71. pmid:27798631
  143. Topalidou M, Rougier NP. [Re] Interaction Between Cognitive And Motor Cortico-Basal Ganglia Loops During Decision Making: A Computational Study. ReScience. 2015.
  144. Kulvicius T, Tamosiunaite M, Ainge J, Dudchenko P, Wörgötter F. Odor supported place cell model and goal navigation in rodents. Journal of Computational Neuroscience. 2008; 25(3): 481–500. pmid:18431616
  145. Ursino M, Baston C. Aberrant learning in Parkinson’s disease: A neurocomputational study on bradykinesia. European Journal of Neuroscience. 2018; 47(12): 1563–1582. pmid:29786160
  146. Leblois A. Competition between Feedback Loops Underlies Normal and Pathological Dynamics in the Basal Ganglia. Journal of Neuroscience. 2006; 26(13): 3567–3583. pmid:16571765
  147. Vertechi P, Brendel W, Machens CK. Unsupervised Learning of an Efficient Short-Term Memory Network. In: Proceedings of the 27th International Conference on Neural Information Processing Systems—Volume 2. NIPS’14. Cambridge, MA, USA: MIT Press; 2014.
  148. Lian Y, Grayden DB, Kameneva T, Meffin H, Burkitt AN. Toward a Biologically Plausible Model of LGN-V1 Pathways Based on Efficient Coding. Frontiers in Neural Circuits. 2019; 13. pmid:30930752
  149. Vogels TP, Sprekeler H, Zenke F, Clopath C, Gerstner W. Inhibitory Plasticity Balances Excitation and Inhibition in Sensory Pathways and Memory Networks. Science. 2011; 334(6062): 1569–1573. pmid:22075724
  150. Machens CK, Romo R, Brody CD. Flexible Control of Mutual Inhibition: A Neural Model of Two-Interval Discrimination. Science. 2005; 307(5712): 1121–1124. pmid:15718474
  151. Wang XJ, Buzsáki G. Gamma Oscillation by Synaptic Inhibition in a Hippocampal Interneuronal Network Model. The Journal of Neuroscience. 1996; 16(20): 6402–6413. pmid:8815919
  152. Masquelier T, Kheradpisheh SR. Optimal Localist and Distributed Coding of Spatiotemporal Spike Patterns Through STDP and Coincidence Detection. Frontiers in Computational Neuroscience. 2018; 12. pmid:30279653
  153. Weber C, Wermter S, Elshaw M. A hybrid generative and predictive model of the motor cortex. Neural Networks. 2006; 19(4): 339–353. pmid:16352416
  154. Masse NY, Grant GD, Freedman DJ. Alleviating catastrophic forgetting using context-dependent gating and synaptic stabilization. Proceedings of the National Academy of Sciences. 2018; 115(44): E10467–E10475. pmid:30315147
  155. Wystrach A, Lagogiannis K, Webb B. Continuous lateral oscillations as a core mechanism for taxis in Drosophila larvae. eLife. 2016; 5. pmid:27751233
  156. Mejias JF, Murray JD, Kennedy H, Wang XJ. Feedforward and feedback frequency-dependent interactions in a large-scale laminar network of the primate cortex. Science Advances. 2016; 2(11). pmid:28138530
  157. Yamazaki T, Nagao S, Lennon W, Tanaka S. Modeling memory consolidation during posttraining periods in cerebellovestibular learning. Proceedings of the National Academy of Sciences. 2015; 112(11): 3541–3546. pmid:25737547
  158. Morén J, Shibata T, Doya K. The Mechanism of Saccade Motor Pattern Generation Investigated by a Large-Scale Spiking Neuron Model of the Superior Colliculus. PLoS ONE. 2013; 8(2): e57134. pmid:23431402
  159. Yang GR, Murray JD, Wang XJ. A dendritic disinhibitory circuit mechanism for pathway-specific gating. Nature Communications. 2016; 7(1). pmid:27649374