Supplementary Materials to : The complexity of dynamics in small neural circuits

In this article we present the Supplementary Materials to [1]. In order to make the mathematical part of this text as linear as possible, in some sections we repeat the derivations already introduced in [1], and then analyze them in more detail. On the other hand, the remaining sections are completely new and self-contained. This work is organized as follows. First, in Sec. (S2) we partially calculate the codimension one bifurcation diagram of the network by evaluating the equilibrium points on the primary branch. This calculation is based on an asymptotic perturbative expansion of the neural equations, so it provides a good description of the membrane potentials in the stationary regime only for sufficiently large values of I_E (see Eqs. (5) and (10) in [1]), where the branching-point bifurcations do not occur. Then, as in [1], we divide the second part of the Supplementary Materials into two main sections, devoted to the study of weak (Sec. (S3)) and strong inhibition (Sec. (S4)). In both cases we present detailed analytical calculations of the eigenvalues of the Jacobian matrix and of the codimension two bifurcation diagram for most of the local bifurcations of the network. Since the perturbative expansion introduced in Sec. (S2) cannot be extended to derive the secondary branches, a complete picture of the codimension one bifurcation diagram cannot be derived analytically. For this reason we conclude the Supplementary Materials by showing, in Sec. (S5), some panels of the codimension one bifurcations obtained by means of numerical tools, in order to further clarify the dynamics the model exhibits. As usual, all the results are obtained for the values of the parameters shown in Tab. (1).


Introduction
The brain is a complex system organized at multiple spatial scales, and the concerted interactions between these multiple scales of organization are probably crucial for the emergence of its computational power [24]. At the coarsest level, the brain can be divided into macroscopic circuits spanning several areas and containing millions of neurons. These macroscopic circuits are believed to accomplish very complex functions, ranging from motor control to cognition. At the finest level of organization, basic computations are performed at the single-cell level by neurons, which can be seen as elementary computational units [20]. However, there is an intermediate, mesoscopic level of organization between the macroscopic and microscopic ones: neurons are organized in microcircuits, whose size can vary from several thousand cells, as in cortical columns, to a few tens of cells, as in micro-columns [45][46][47]. This mesoscopic level of investigation has received considerable attention in recent years, both from the theoretical [25][26][27][28][29] and experimental [21,22] point of view, and it is often seen as a middle ground that is fundamental to link single-neuron activity to behavior [23]. For this reason, finding an appropriate mathematical description of the brain at the mesoscopic scale is of fundamental importance for unveiling its emergent properties.
At the mesoscopic scale, the brain is often described as a collection of neural masses, i.e. homogeneous neuronal populations within a cortical column [30]. Usually, these groups of neurons are described by the so-called neural-mass models [31]. A typical example is the well-known Jansen-Rit model [32][33][34], which describes local cortical circuits as a population of excitatory pyramidal cells, which receive inhibitory and excitatory feedback from local interneurons, and excitatory input from other regions such as the thalamus. This class of models may be studied using mean-field theory [35]. Mean-field theory is a mathematical tool that approximates the behavior of large networks [36][37][38]; it is useful to study the mass activity of a few thousand neurons, which constitutes the upper limit of mesoscopic descriptions of neural circuits, and it is well suited to describe, for example, measures based on LFP/MEG/EEG recordings. This approximation becomes exact in the limit of networks with an infinite number of neurons, the so-called thermodynamic limit. For finite-size networks, however, the mean-field theory provides only an approximation of the real behavior of the system, and therefore may neglect important phenomena, such as qualitative differences in the transitions between static regimes and chaos [39], or in the degree or nature of correlations among neurons [40]. Clearly, these macroscopic differences in the dynamical and statistical behavior of finite- and infinite-size networks may have important consequences on the information-processing capability of the system, as potential oversimplifications in the mean-field approximation may hide neural processes that are fundamental for the comprehension of neural circuits of finite size.
To go beyond the mean field limit, several mathematical techniques have been developed to quantify finite-size effects. These finite-size methods (such as the linear noise approximation [41], the density functional approach [42], large-deviations theory [43] and path-integral methods [44]) typically can only be applied to mesoscopic circuits composed of a finite but large number of neurons.
On the other hand, most of the literature about small neural circuits deals with networks composed of 2-4 neurons (e.g. [1][2][3]), while methods for the analysis of networks made of a few tens of neurons, which represent the lower bound of the mesoscopic scale, are still limited. The purpose of our work is to make progress in the mathematical methodology to study the dynamics of such networks. The analysis of the dynamics of small neural networks was pioneered by Beer, who studied the bifurcations of networks of arbitrary size with highly symmetric assumptions on the strength of the synaptic weights [48], and through asymptotic approximations of the bifurcation manifolds [4]. In our article we extend his analysis to a more biologically plausible network of arbitrary size by deriving exact expressions of the bifurcation manifolds, and with less rigid constraints on the synaptic weights. In more detail, we consider a deterministic finite-size firing-rate network model with excitatory and inhibitory populations, each composed of an arbitrary number of homogeneous neurons. Moreover, the model is characterized by homogeneous and arbitrarily strong weights between the populations. We then perform a numerical analysis of the global bifurcations that emerge by varying the external input currents (i.e. the stimuli) to the network and the strength of inhibition, and we introduce a mathematical theory that describes local bifurcations analytically. We find qualitative differences with the mean-field approximation when the system has strong inhibitory synaptic weights. In this case, through a phenomenon of spontaneous symmetry-breaking, the neural network undergoes a special bifurcation known as a pitchfork or branching point [49], from which multiple solutions of the neural equations emerge. On the new branches, new bifurcations can occur, considerably enriching the complexity of the bifurcation diagram of the neural network.
This dynamics is not revealed by the mean-field approximation.
This article is organized as follows. In Materials and Methods we describe the neural model we use. We then explain intuitively in Results the formation of new branches of solutions through spontaneous symmetry-breaking, depending on the strength of inhibition. This is followed by a numerical and analytical study of the bifurcations of the network in weak and strong-inhibition regimes for different sizes of the inhibitory population. In Discussion we conclude by examining the biological implications of our results and by comparing our approach with previous work on homogeneous networks.

Materials and Methods
Here we describe the assumptions we made about the model of finite-size neural circuits, whose structure is schematized in Fig 1, and whose dynamics we would like to investigate mathematically. These assumptions represent the best compromise we could find between biological detail, biological plausibility and mathematical tractability. We perform a numerical and analytical study considering the case of two neural populations, of excitatory and inhibitory neurons respectively. The populations contain an arbitrary finite number of neurons, which are connected with each other through non-symmetric synaptic connections with arbitrarily strong weights. In order to make the network analytically tractable, we make some simplifying assumptions. In particular, we assume (as is often done when considering local cortical circuits [30,31]) that the neurons in each population have homogeneous properties and parameters, that all neurons in the network are connected to each other (fully-connected topology), and that the axonal delays are negligible. Moreover, we study the deterministic neural equations, as is common practice in the analysis of bifurcations. Therefore we do not consider typical sources of randomness such as stochastic fluctuations in the firing rates and random distributions of the strength of the synaptic weights. However, we note that some of these assumptions can be overcome by extensions of this formalism (see Discussion).
In more detail, we consider a widely used rate model to describe the dynamics of single neurons [30,35,38,40,48,53]:

τ_i dV_i(t)/dt = −V_i(t) + (1/M_i) Σ_{j=0}^{N−1} J_ij A_j(V_j(t)) + I_i,   i = 0, …, N−1,   (1)

where N ≥ 4 represents the number of neurons in the network. The function V_i(t) is the membrane potential of the ith neuron, while τ_i is its membrane time constant. M_i, the total number of connections to the ith neuron, is a normalization factor that has been introduced to avoid the explosion of the total synaptic input Σ_{j=0}^{N−1} J_ij A_j(V_j(t)) in Eq (1) in the thermodynamic limit N → ∞. The synaptic connectivity matrix J specifies the strength of the synaptic connections in the network. J_ij is the synaptic weight from the jth to the ith neuron, and for simplicity it is assumed to be deterministic and constant in time, for every pair of neurons. A_j(·) is the activation function of the jth neuron and converts its membrane potential into the corresponding firing rate ν_j = A(V_j). Typically, piecewise-linear activation functions are required in order to obtain analytical results [11,55,56]. However, here we will show how to obtain explicit expressions for the equilibrium points and the local bifurcations of the network when A(V) is a smooth (i.e. infinitely differentiable) function. In more detail, we will consider S-shaped (i.e. sigmoidal) activation functions, since they are biologically realistic [6,54]. Particularly convenient from the mathematical point of view is the so-called algebraic activation function of Eq (2).

[Fig 1 caption] Neurons are fully connected and grouped into one excitatory and one inhibitory population. J_αβ represents the set of synaptic weights from the population β to the population α, while I_α is the external current (i.e. the stimulus) to the population α.
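As a concrete illustration of how a rate model of this form can be simulated, the following minimal sketch integrates the network equations with an explicit Euler scheme. The unit algebraic sigmoid and all parameter values below are illustrative assumptions, not the values of Table 1.

```python
import math

def algebraic_sigmoid(v, nu_max=1.0):
    # Smooth S-shaped activation, bounded in (0, nu_max); an illustrative
    # stand-in for the algebraic activation function A(V)
    return 0.5 * nu_max * (1.0 + v / math.sqrt(1.0 + v * v))

def euler_step(V, J, I, tau, M, dt, A=algebraic_sigmoid):
    # One explicit-Euler step of
    #   tau_i dV_i/dt = -V_i + (1/M_i) sum_j J_ij A(V_j) + I_i
    N = len(V)
    rates = [A(v) for v in V]
    return [V[i] + (dt / tau[i]) * (-V[i]
            + sum(J[i][j] * rates[j] for j in range(N)) / M[i] + I[i])
            for i in range(N)]

def simulate(V0, J, I, tau, M, dt=0.01, steps=5000):
    # Relax the network toward an attractor (here assumed to be a fixed point)
    V = list(V0)
    for _ in range(steps):
        V = euler_step(V, J, I, tau, M, dt)
    return V
```

For a stationary input, a trajectory of this kind converges to an equilibrium, i.e. to a solution of the fixed-point system discussed below.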
Due to the high symmetry of Eq (3), the eigenvalues of the Jacobian matrix can have multiplicity larger than one, leading standard numerical toolboxes to fail in detecting all bifurcations [78]. Nevertheless, by assuming N_I = 2 throughout the article, the eigenvalues that generate the bifurcation points always have multiplicity one, thus ensuring correct functioning of the continuation toolboxes. Furthermore, the choice N_I = 2 allows us to provide a thorough bifurcation study that would have become prohibitive, in terms of clarity, for N_I > 2. We also implement the analytical formulas in a standalone Python script (see S1 File), which performs the analysis of local bifurcations in a few seconds, without the need for further, more complex tools. This script is based on the analytical bifurcation formulas presented hereafter, which assume a sigmoidal activation function as in Eq (2). We verified, however, that different sigmoidal functions, such as the logistic function, provide qualitatively similar bifurcation diagrams [54].
Finally, it is important to underline that bifurcations are defined by several conditions. Nonetheless, in our analytical study we checked only the conditions on the eigenvalues of the network, since they proved sufficient to reproduce the numerical results. Due to the high variety of the bifurcations the system exhibits, a full check of all the remaining (non-degeneracy) conditions is beyond the scope of this article.

Intuitive interpretation of the branching points
In mathematics, branching-point bifurcations are described by the so-called equivariant bifurcation theory [50], namely the study of bifurcations in symmetric systems. The latter being rather technical, here we prefer to follow a more intuitive approach to the problem. First of all, we observe that, according to bifurcation theory, local bifurcations are calculated by means of the eigenvalues of the Jacobian matrix of the network evaluated at the equilibrium points. Therefore, we set dV_i(t)/dt = 0 ∀i in Eq (3) to compute the stationary solutions of the system. For the subsequent analysis it is useful to introduce the parameter ψ defined in Eq (4), and we will show that, as long as inhibition is weak (ψ < 1), the equilibrium points in each population are always homogeneous (Eq (5)). Specifically, μ_E and μ_I are the solutions of the system of algebraic non-linear equations (6), obtained from Eq (3). The curves defined by the equations F(x, y) = 0 and G(x, y) = 0, ∀(x, y) ∈ R², are the so-called nullclines of the network [17,18]. Fig 2 (top) shows an example obtained for J_II = −10, while the remaining parameters are chosen as in Table 1. For completeness, in S1 Text we show how to get approximate analytical solutions for μ_E,I, even though our bifurcation analysis can be performed without knowing them explicitly.
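The equilibrium (μ_E, μ_I) at the intersection of the nullclines can be located numerically. The sketch below is a generic two-dimensional Newton iteration with a finite-difference Jacobian; the functions F and G must be supplied by the user, since their explicit form in Eq (6) depends on the chosen activation function and parameters.

```python
def newton2d(F, G, x, y, tol=1e-12, h=1e-7, max_iter=100):
    # Newton iteration for the system F(x, y) = 0, G(x, y) = 0,
    # i.e. for an intersection point of the two nullclines
    for _ in range(max_iter):
        f, g = F(x, y), G(x, y)
        if abs(f) < tol and abs(g) < tol:
            break
        # finite-difference Jacobian of (F, G)
        fx = (F(x + h, y) - f) / h
        fy = (F(x, y + h) - f) / h
        gx = (G(x + h, y) - g) / h
        gy = (G(x, y + h) - g) / h
        det = fx * gy - fy * gx
        # Newton update: subtract J^{-1} (f, g)
        x -= (f * gy - fy * g) / det
        y -= (fx * g - f * gx) / det
    return x, y
```

The same scheme extends directly to the three nullsurfaces of the strong-inhibition regime by adding a third variable.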
From Eqs (3) and (5), the Jacobian matrix J of the linearized system, evaluated at the stationary solution on the primary branch Eq (5), is given by Eq (7). From this we can prove (see S1 Text) that the eigenvalues of J are λ_E, λ_I and λ^R_{0,1} (Eq (8)).

For example, for N_I = 2, Eq (11) can be written more explicitly as the system F(μ_E, μ_I,0, μ_I,1) = 0, G(μ_E, μ_I,0, μ_I,1) = 0, H(μ_E, μ_I,0, μ_I,1) = 0. The surfaces F(x, y, z) = 0, G(x, y, z) = 0, H(x, y, z) = 0, ∀(x, y, z) ∈ R³, are a higher-dimensional extension of the nullclines F(x, y) = 0 and G(x, y) = 0, ∀(x, y) ∈ R², that we encountered in the weak-inhibition regime. These surfaces have been called nullsurfaces in [48]. Fig 4 (top) shows an example obtained for J_II = −100, while the remaining parameters are chosen as in Table 1.
For the sake of clarity, here we treat in detail only the case N_I = 2, and we will briefly discuss some results in the strong-inhibition regime for N_I > 2. For N_I = 2, from Eq (12) the Jacobian matrix on the secondary branches Eq (10) is given by Eq (13), where the blocks (whose explicit expressions involve the transpose operator t applied to a matrix) are derived in more detail in S1 Text. Intuitively, on the primary branch the Jacobian matrix was a 2 × 2 block matrix (see Eq (7)), because we had only one excitatory and one inhibitory membrane potential (μ_E and μ_I respectively), while on the secondary branches it is a 3 × 3 block matrix, since now the two inhibitory neurons have different potentials (μ_I,0 and μ_I,1). However, the Jacobian matrix on the new branches can be calculated for a network with an arbitrary number of inhibitory neurons (see S1 Text for more details).

As long as λ_I (see Eq (8)) is negative, the system (3) has only one stationary solution for a fixed set of parameters. This solution is stable and symmetric, but if a network parameter (e.g. I_E) is allowed to vary continuously, λ_I may change sign. If this occurs, the solution becomes unstable and the network chooses randomly between two new alternative states, breaking the symmetry. The new states can be stable (e.g. for N_I = 2) or unstable. In the former case the phenomenon of spontaneous symmetry-breaking may be understood intuitively as a ball that rolls in a double-well potential and reaches a state of minimum energy, which corresponds to a stable stationary solution of Eq (3). In the weak-inhibition regime λ_I is always negative; therefore strong inhibition is a necessary condition for spontaneous symmetry-breaking.

Finally, we observe that for any N_I it is possible to find a relation between any pair (i, j) of inhibitory membrane potentials in the strong-inhibition regime, which proves very useful when we calculate analytically the local bifurcations of the system on the secondary branches.
Indeed, from the second equation of the system (11), we obtain Eq (14). After some algebra, Eq (14) can be converted into the fourth-order polynomial equation (15), whose coefficients depend on μ_I,i.

Weak-inhibition regime (ψ < 1)

Even though in the article we focus on the case N_E = 8 and N_I = 2, the analytical results derived in this subsection are valid for arbitrary N_E,I. As we said, we want to understand how the network's dynamics changes when we vary the external input currents I_E,I and the strength of the synaptic weight J_II. In this subsection we show the results that we obtain when we vary these parameters one by one, because this allows us to introduce the concepts of codimension one and codimension two bifurcation diagrams.
For N_I = 2, we first fix I_I to −10 and the inhibitory strength J_II to −10. As illustrated in Fig 2 (top), Eq (6) admits multiple solutions depending on the specific value chosen for the current I_E. Thus, we vary I_E continuously while keeping I_I and J_II fixed. In this way we can plot how the solutions μ_E,I of the system (6) change as a function of I_E. As we anticipated in the previous subsection, the curve that we obtain (Fig 2, bottom) is called the primary branch of the network. This figure represents the codimension one bifurcation diagram of the network, from which we identify two special points, called local saddle-node bifurcations (shortened to LP hereafter). They identify the values of I_E at which the number of solutions of Eq (6) changes (notice the correspondence with the top panels of Fig 2). These are the first examples of (local) bifurcation points that the neural network exhibits, and they lead to the formation of hysteresis (see left panel in Fig 6). Briefly, hysteresis was suggested to describe the reverberation and persistence of neural activity sustained endogenously after an external input is removed, which is thought to be crucial for phenomena such as working memory [62][63][64]. It is important to observe that even if reverberation requires bistability, the latter can be present without hysteresis; very often, however, they coexist, as in our model.
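The hysteresis generated by the two LP points can be reproduced with a quasi-static sweep of the input current, in which the attractor reached for one value of the current is reused as the initial condition for the next value. The one-dimensional toy system below, with an assumed self-excitation J = 10 and a unit algebraic sigmoid (not the network of Table 1), is the minimal setting displaying the same jump-and-hysteresis scenario.

```python
import math

def A(v):
    # Unit algebraic sigmoid (illustrative assumption)
    return 0.5 * (1.0 + v / math.sqrt(1.0 + v * v))

def relax(I, v0, J=10.0, dt=0.05, steps=4000):
    # Relax  dv/dt = -v + J*A(v) + I  to its attractor, starting from v0
    v = v0
    for _ in range(steps):
        v += dt * (-v + J * A(v) + I)
    return v

def sweep(I_values):
    # Quasi-static continuation: reuse the previous attractor as the initial
    # condition, so the sweep tracks one stable branch until it disappears
    # at a fold (LP), where the state jumps to the other branch
    branch, v = [], 0.0
    for I in I_values:
        v = relax(I, v)
        branch.append(v)
    return branch
```

Sweeping the current upwards and then downwards yields two different stable branches inside the bistable window, i.e. hysteresis; the two jumps occur at the two LP values of the current.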
We then continuously vary both I_E and I_I, while keeping J_II fixed, and we plot the solutions of Eq (6) as a function of both the external currents, so that μ_E,I = μ_E,I(I_E, I_I) now define three-dimensional surfaces.

[Figure caption] The panels show the solutions of Eq (15) (see S1 Text for their analytical calculation) as a function of μ_I,i, and prove the formation of three branches of solutions of the stationary membrane potentials at the branching-point bifurcations. For example, in the case N_I = 2 (left panel), the bisector of the first and third quadrants, μ_I,j = μ_I,i, represents the primary branch of the network, while the other two solutions that bifurcate from the branching points represent the secondary branches. The coordinates of the branching points are given by Eq (23). For the sake of clarity, we do not show the stability of the solutions, which is examined in the text. Moreover, the figure shows that for increasing N these bifurcations disappear, as we discuss in more detail in S1 Text.

[Fig 6 caption] On the left, we show an example of hysteresis displayed by the system. The plain lines describe stable equilibria, while the dashed lines describe the unstable ones. On the right, we show an example of a catastrophe manifold. The panel highlights three different behaviors of the network for increasing values of |I_I|: leaky integrator, perfect integrator and switch (red curves). In particular, the perfect integrator corresponds to a cusp bifurcation (see Fig 7).

[Fig 7 caption] Three- and two-dimensional bifurcation diagrams in the weak-inhibition regime. Left, μ_E is shown on the z-axis as a function of I_E−I_I. Here we plot only the local bifurcations (blue: saddle-node curves, red: Andronov-Hopf curves) that bound the plain/dashed regions representing the stable/unstable equilibrium point areas. Right, complete set of codimension two bifurcations. The Andronov-Hopf bifurcation curves (red lines) are divided into supercritical (plain) and subcritical (dashed) portions.
The supercritical/subcritical portions are bounded by a generalized Hopf bifurcation, GH, and Bogdanov-Takens bifurcations, BT. The latter are the contact points among saddle-node bifurcation curves (blue lines), Andronov-Hopf bifurcation curves (red lines), and homoclinic bifurcation curves (hyperbolic-saddle/saddle-node homoclinic bifurcations are described by plain/dashed orange curves). SNIC bifurcations identify the contact point between the saddle-node curve and the homoclinic one. From GH originate two limit point of cycles curves (dark green lines) that collapse into the homoclinic curves. Before this, they present a cusp bifurcation, CPC. Each saddle-node curve shows, in addition to BT, a cusp bifurcation, CP.
doi:10.1371/journal.pcbi.1004992.g007

The LP points are not the only bifurcations we have in our network. For some values of the pair I_E−I_I, the system can also undergo a local Andronov-Hopf bifurcation, or H for short. These bifurcations are represented by the red curves in Fig 7, and correspond to the emergence of neural oscillations, which are thought to play a key role in many cognitive processes [65].
Hereafter, we list all the bifurcations the system undergoes, dividing them into groups depending on the codimension of the bifurcation, which is defined as the number of parameters (in our case I_E,I) that must be varied for the bifurcation to occur. Although only a few of them are represented in Fig 2 (bottom), the complete set of codimension one bifurcations our system undergoes is:

• local saddle-node bifurcations (LP), for which new equilibria arise, or collide and annihilate each other;

• local Andronov-Hopf bifurcations (H), where stable or unstable self-sustained oscillations, described by limit cycles, arise or disappear;

• global homoclinic bifurcations, where limit cycles vanish in a special equilibrium point (i.e. a neutral saddle, see the next subsection, or an LP bifurcation point), giving rise to an orbit with infinite period;

• global limit point of cycles bifurcations (LPC), at which new limit cycles arise, or collide and annihilate each other.
The codimension one diagrams collecting all these bifurcations are shown in S1 Text. Moreover, on the curves defined by these bifurcations, which are obtained by varying both I_E and I_I, the following codimension two bifurcations appear (see Fig 7, right):

• local cusp bifurcations (CP), on the LP curves;

• local generalized Hopf bifurcations (GH), which divide subcritical H curves from supercritical ones;

• local Bogdanov-Takens bifurcations (BT), which represent the contact point between the LP, H (ending here) and homoclinic curves;

• global cusp bifurcations of cycles (CPC), on the limit point of cycles curves;

• global saddle-node on invariant circle (SNIC) bifurcations, where an LP bifurcation occurs simultaneously with a homoclinic bifurcation.
It is worth remarking that the bifurcation diagram in Fig 7 (right), obtained from the voltage-based model (3) in the weak-inhibition regime, is qualitatively similar to that of the activity-based (mean-field) Wilson-Cowan model (see Fig. 2.12 in [66]). This strong similarity results from the fact that, under the hypotheses of stationary inputs, an invertible connectivity matrix J and homogeneous membrane time constants, the two models are mathematically equivalent (apart from a rescaling by τ of the external input). Indeed, it is possible to prove that the membrane potentials V_i(t) and the neural activities A_i(t) are formally related by a linear expression [66]. However, in the next subsection we will show that things may change significantly in the strong-inhibition regime if we take into account finite-size effects.
Interestingly, Fig 7 presents two of the so-called catastrophe manifolds [67], one of which is shown in the right panel of Fig 6. This figure emphasizes the ability of the model to describe three different behaviors: leaky integrator, perfect integrator and switch. This triad represents the main ingredient of a mechanism that was proposed to explain interval timing by neural integration [68,69]. According to this theory, by changing some parameters of the network it is possible to modify the shape of the equilibrium curve on the catastrophe manifold, generating a ramping activity that can explain Weber's law of time perception [70]. This phenomenon can easily occur in our model, where the shape of the equilibrium curve can be changed dynamically by varying the input currents.

Now we prove some of our previous results on local bifurcations from the analytical point of view (global bifurcations are often harder to study analytically, therefore they are not considered here). We base our analysis on [5], where the authors developed a technique for deriving analytical expressions of local codimension one bifurcations. Furthermore, for the first time we extend their method to the analysis of spontaneous symmetry-breaking in systems with arbitrary size, and to some local bifurcations with codimension larger than one. We start by observing that, according to bifurcation theory [49,61], an LP is defined by the condition that one real eigenvalue of the Jacobian matrix becomes equal to zero. Therefore, from Eq (8), we conclude that for our network this bifurcation occurs whenever λ^R_0 = 0 or λ^R_1 = 0, which, combined with Eq (9), provides a relation in terms of the parameter ν ≝ μ_E.
We now invert A′_I(μ_I) (more details are provided in S1 Text) and, from Eq (6), we obtain a set of parametric equations in ν that define analytically the blue curves in Fig 7. As we said, this is not sufficient to prove that Eqs (16)-(19) describe LP curves, since we should also check the corresponding non-degeneracy conditions. Nevertheless, we observe a perfect agreement between these analytical curves and those obtained numerically by Cl_MatCont and XPPAUT; therefore, for simplicity, we do not check the remaining conditions and we leave them to the most technical readers. We adopt the same approach for the remaining bifurcations we are about to describe.

Now we focus on the H bifurcations. According to [49,61], they appear whenever the network has complex-conjugate purely imaginary eigenvalues. Since λ_E,I are always real, this condition can be satisfied only by λ^R_{0,1}, by setting Y + Z = 0 and (Y − Z)² + 4X < 0. In particular, from the equation Y + Z = 0 we get Eq (20), where ν ≝ μ_E as before. Following the same procedure introduced above for the LP curves, we obtain a set of parametric equations for the pairs I_E−I_I that generate the H curves.

As described above, BT represents the point where the LP and H curves meet each other, and it also identifies the end of the H curve. From the condition λ^R_0 = 0 or λ^R_1 = 0 that defines the LP curves, and the condition λ^R_{0,1} = ±ιω (where ι represents the imaginary unit) that defines the H curves, we get λ^R_0 = λ^R_1 = 0. This is the condition that defines analytically the BT points, or equivalently ν_BT = ν_{e,f} as given by Eq (21), from which the coordinates of the BT points in the I_E−I_I plane can easily be obtained through Eq (6). The remaining local bifurcations (i.e. CP and GH) are analytically intractable.
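For a two-variable reduction, the eigenvalue conditions used in this subsection can be checked directly from the trace and determinant of the Jacobian: one zero real eigenvalue (LP) corresponds to det J = 0, while a purely imaginary complex-conjugate pair (H) corresponds to tr J = 0 with det J > 0. The sketch below illustrates this generic test; it is not the network-specific conditions of Eqs (16)-(21), and, as in the text, the non-degeneracy conditions are not checked.

```python
import cmath

def eigenvalues_2x2(a, b, c, d):
    # Eigenvalues of [[a, b], [c, d]] from the characteristic polynomial
    # lambda^2 - T*lambda + D = 0, with T = trace and D = determinant
    T, D = a + d, a * d - b * c
    s = cmath.sqrt(T * T - 4.0 * D)
    return (T + s) / 2.0, (T - s) / 2.0

def local_bifurcation_test(a, b, c, d, tol=1e-9):
    # Eigenvalue-only test for local codimension one bifurcations:
    #   LP: one real eigenvalue equal to zero        <=> det = 0
    #   H : purely imaginary complex-conjugate pair  <=> trace = 0, det > 0
    T, D = a + d, a * d - b * c
    if abs(D) < tol:
        return "LP"
    if abs(T) < tol and D > tol:
        return "H"
    return "none"
```

Scanning such a test along a parameter sweep locates the bifurcation points as the parameter values where the returned label changes.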
To conclude this subsection, we briefly describe the effect of varying the remaining synaptic weights on the codimension two bifurcation diagram, considering the weights in Table 1 as the reference point. Given that their variation does not generate interesting phenomena such as the branching-point bifurcation obtained by varying J_II, we describe the effects of these parameters succinctly. The reader may verify the following results using the supplemental Python code (see S1 File).
For J_EE ≫ 10, the two LP curves become larger and larger on the I_E axis (i.e. the distance between their vertical asymptotes increases). Moreover, the curves get closer and closer to each other, shifting along the I_E axis, until they intersect and their oblique parts (i.e. those between the BT points) overlap. If we increase J_EE further, the LP curves split again into two disjoint parts, each one presenting two BT and two CP bifurcations (thus the total number of CP points increases from two to four). Between each pair of BT points (on the same LP curve) there is an H curve. These curves are very close to the corresponding LP curves, and if we increase J_EE further they disappear, together with the BT bifurcations. Thus for very large J_EE we get only two disconnected LP curves; in other terms, for very strong excitation the oscillatory activity cannot be sustained anymore. On the opposite side, namely for weak excitation (i.e. J_EE → 0), the H curves also disappear. More precisely, according to Eq (21), these curves vanish whenever a specific condition, which follows from Eq (21), is satisfied. Moreover, for weak excitation the width on the I_E axis of the LP curves decreases, i.e. the distance between their vertical asymptotes becomes smaller and smaller, until the asymptotes collapse onto each other and the LP curves disappear as well. In particular, the curves vanish whenever the corresponding condition is satisfied (see Eq (19)).

For |J_EI| ≫ 70, the width of the two LP curves remains almost constant, while the distance between them (and therefore also the length of the H curves) increases continuously. On the other hand, for |J_EI| → 0, the two LP curves get closer and closer to each other, until they intersect and their oblique parts between the BT points overlap. If we decrease |J_EI| further, similarly to the case with large J_EE, the two LP curves split into two disjoint parts, each one presenting two BT and two CP bifurcations.
For even smaller values of |J_EI|, the BT, CP and H bifurcations disappear, while the LP curves disappear for |J_EI| = 0.
For J_IE ≫ 70, the LP curves are stretched vertically and shifted downwards along the I_I axis. Clearly, in the opposite direction (i.e. for J_IE → 0), they are compressed; while the I_E coordinates of the vertical asymptotes remain almost unchanged with J_IE, the two CP points get closer and closer to each other. At the same time, the two H curves tend to overlap. At some value of J_IE, the two CP bifurcations touch each other and disappear, so the two LP curves become tangent. If we decrease J_IE further, the two LP curves split again, one over the other, and the BT and H bifurcations disappear.
All the phenomena that we have just described are qualitatively similar for different values of J II , thus they occur also in the strong-inhibition regime for the primary branch.

Strong-inhibition regime (ψ ≥ 1)
In the strong-inhibition regime (in particular, here we consider the cases J_II = −34 and J_II = −100), most of the features of the weak-inhibition bifurcation diagram are preserved. However, besides the bifurcations explained in the previous subsection, from Figs 8, 9 and 10 we can see that the system also undergoes the following codimension one bifurcations:

• local branching-point bifurcations (BP), at which two or more equilibrium curves emanate or collapse;

• local torus bifurcations (TR), at which the network exhibits quasi-periodic oscillations; this is a local bifurcation of the Poincaré map of a limit cycle of the network [49], whose change of stability determines the formation of solutions of Eq (3) containing two incommensurable frequencies;

and the following codimension two bifurcations:

• local zero-Hopf (neutral saddle) bifurcations (ZH), at which the H curves and the BP curves intersect each other. This point identifies a change in stability of the H bifurcation curve.
In particular, the BP bifurcations lead our model to show multiple branches of stationary solutions for suitable current values. This is a finite-size effect due to the finite number of neurons in each population, which leads to a richer set of eigenvalues than that obtained by using methods based on the reduction of the number of equations, such as the mean-field approximation (see S1 Text for more details). In order to thoroughly investigate the bifurcations the system undergoes in the presence of strong inhibition, we start by analyzing the codimension one bifurcation diagram for J_II = −34, shown in Fig 8. On the primary branch, we find two LP bifurcations and a subcritical H bifurcation, whose unstable limit cycles vanish in a homoclinic orbit.
[Fig 8 caption: In the top panels, the stable/unstable primary equilibrium curve is described by plain/dashed black curves, while the secondary ones are described by plain/dashed violet curves. H bifurcations appear on both the primary (red) and secondary (purple) equilibrium curves, giving rise to unstable and stable limit cycles respectively (the maxima and minima of the oscillations are described by dashed and plain brown curves respectively). In particular, the unstable cycles collapse into a homoclinic bifurcation, described by an orange line. The shaded colored boxes emphasize sampled areas of the codimension one bifurcation diagram, whose corresponding temporal dynamics is shown in the bottom panels. In particular, the figure shows the correspondence between the BP, H and LP bifurcations and the split of the inhibitory membrane potentials, the emergence of oscillations and the sudden jump of the neural activity, respectively. We induced transitions between these different kinds of dynamics by increasing linearly the external input current I_E while keeping I_I fixed. doi:10.1371/journal.pcbi.1004992.g008]
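For readers who wish to reproduce this ramped-input protocol qualitatively, the following minimal sketch integrates a toy rate network of the form dμ_i/dt = −μ_i/τ + Σ_j J_ij A(μ_j) + I_i(t) with a logistic activation while I_E grows linearly in time. All names and parameter values here (simulate_ramp, the 3×3 weight matrix, the ramp interval) are illustrative choices of this sketch, not the model of Eq (3) or the values of Table 1.

```python
import numpy as np

def simulate_ramp(J, tau, I_I, IE_range, T=200.0, dt=0.01, slope=2.0):
    """Euler-integrate dmu_i/dt = -mu_i/tau + sum_j J_ij A(mu_j) + I_i(t)
    while I_E grows linearly from IE_range[0] to IE_range[1]."""
    A = lambda u: 1.0 / (1.0 + np.exp(-slope * u))   # logistic activation
    n = J.shape[0]
    mu = np.zeros(n)
    steps = int(T / dt)
    traj = np.empty((steps, n))
    for k in range(steps):
        I_E = IE_range[0] + (IE_range[1] - IE_range[0]) * (k * dt) / T
        I = np.array([I_E, I_I, I_I])   # neuron 0 excitatory, neurons 1-2 inhibitory
        mu = mu + dt * (-mu / tau + J @ A(mu) + I)
        traj[k] = mu
    return traj

# Toy weights: one excitatory unit and two identical inhibitory units.
J = np.array([[ 1.0, -2.0, -2.0],
              [ 2.0, -1.5, -1.5],
              [ 2.0, -1.5, -1.5]])
traj = simulate_ramp(J, tau=1.0, I_I=-1.0, IE_range=(-2.0, 4.0))
print(traj[-1])   # membrane potentials at the end of the ramp
```

Plotting traj against time reproduces the kind of transition sequence shown in the bottom panels, with the caveat that the bifurcation values of I_E depend entirely on the chosen toy parameters.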
We also observe that for N_I = 2 the two inhibitory neurons have the same bifurcation diagram, as do the excitatory neurons (see Fig 8). However, differently from the excitatory case, this does not mean that the inhibitory membrane potentials are homogeneous. Indeed, when one inhibitory neuron is on the upper secondary branch (see the violet curve above the primary equilibrium curve in Fig 8, right), the other one is on the lower secondary branch, thus they are heterogeneous. For J_II = −100, secondary branches of equilibrium points are still present, see Fig 9. Together with a subcritical H bifurcation, they also reveal LP bifurcations. In particular, the former generates unstable limit cycles that become stable after having crossed the limit point of cycles bifurcation (dark green loop). For increasing values of I_E, the stable limit cycles collapse into the unstable limit cycles originated from the subcritical H bifurcation belonging to the primary equilibrium curve. Before collapsing, the stable limit cycles undergo a torus bifurcation (gray loop).
By varying also I_I, we obtain the codimension two bifurcation diagrams displayed in Fig 10. It is worth noting that the branching points the system undergoes generate two bifurcation curves in the I_E−I_I plane (see [66]). Furthermore, the LP and H bifurcations that belong to the secondary branches give rise to further bifurcation curves (light blue and purple lines, respectively) in the I_E−I_I domain, as shown in Fig 10. Interestingly, since the BP bifurcation increases the dimension of the network from 2 (for λ_I < 0, see Eq (6)) to 3 (for λ_I > 0, see Eq (12)), the network can exhibit more complex dynamics, such as quasi-periodic motions originated from the torus bifurcations. The biological importance of quasi-periodic oscillations in neural communication was discussed in [71]. Now we want to study the local bifurcations from the analytical point of view. On the primary branch, the LP, H and BT bifurcations have the same expressions obtained in the weak-inhibition regime, and therefore they are valid for arbitrary N_{E,I}, as are the formulas that we derive below for the BP and ZH bifurcations. On the contrary, unlike the weak-inhibition regime, the formulas of the LP, H and BT bifurcations on the secondary branches in the strong-inhibition regime are valid only for N_I = 2 (while N_E can still be arbitrary), even though our formalism can be extended to the case N_I > 2.
We start by considering the BP bifurcations, which are defined by the condition λ_I = 0, as we saw before. From Eq (8) this condition yields an equation for the inhibitory membrane potential, whose solutions μ_I(BP) are reported in Eq (23). Eq (23) shows that μ_I(BP) is defined only for ψ ≥ 1, and therefore that the BP bifurcations occur only in the strong-inhibition regime. Now, from the second equation of the system (6) (we can also use Eq (12), since for λ_I = 0 they are equivalent) we obtain μ_E(BP), given in Eq (24), while from the first equation of Eq (6) we obtain the corresponding expression of the current, where μ_I(BP) and μ_E(BP) are given by Eqs (23) and (24) (more details can be found in S1 Text).
[Fig 10 caption (fragment; see also Figs (8) and (9)): The bifurcations originated on the secondary branches are differentiated from those originated on the primary one. Specifically, we show H and LP curves (purple and light blue lines, respectively). In addition, we display the torus bifurcation curves (gray lines). doi:10.1371/journal.pcbi.1004992.g010]
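The logic of this condition can be illustrated on a toy version of the eigenvalue λ_I. Assuming, for the sake of this sketch only, a rate network with A = tanh and no self-connections, a branching-point condition of the form λ_I = −1 − (J_II/N) A′(μ_I) = 0 can be solved in closed form, and it admits solutions only when inhibition is strong enough, mirroring the role of the threshold ψ ≥ 1 (the function mu_I_BP and all parameter values below are hypothetical):

```python
import numpy as np

# Toy branching-point condition lambda_I = -1 - (J_II / N) A'(mu_I) = 0,
# with A = tanh, so A'(mu) = sech^2(mu). This is an illustrative stand-in,
# not Eq (8) of the article.
def mu_I_BP(J_II, N):
    """Solutions of the toy BP condition, when they exist."""
    c = -N / J_II                    # required value of A'(mu_I) = sech^2(mu_I)
    if not 0 < c <= 1:               # sech^2 has range (0, 1]: no solution otherwise
        return None
    m = np.arccosh(1.0 / np.sqrt(c))
    return (-m, m)                   # two symmetric candidate BP potentials

N = 10
print(mu_I_BP(-5.0, N))    # weak inhibition: condition unsolvable -> None
print(mu_I_BP(-40.0, N))   # strong inhibition: two BP potentials
```

The qualitative message matches the text: below a critical inhibition strength (here |J_II| < N, an artifact of the toy normalization) the BP condition has no real solution, while above it two symmetric solutions appear.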
It is important to observe that for N_I > 2 the network may undergo special branching points that are not determined by the condition λ_I = 0, but rather by the fact that one of the eigenvalues of the reduced Jacobian matrix introduced in S1 Text tends to zero. These branching points can be studied analytically through our approach, but since the complexity of the corresponding eigenvalues strongly depends on N_I and on the degree of heterogeneity of the inhibitory population, we do not analyze them in detail.
The points where the H and BP curves meet each other define the ZH bifurcations. From this definition, we see that they can be calculated analytically from the conditions λ^R_{0,1} = ±iω and λ_I = 0, which in turn yield the membrane potentials at the ZH points. As usual, if we substitute these expressions of the membrane potentials into Eqs (6) or (12), we obtain the coordinates of the ZH points in the I_E−I_I plane.
On the secondary branches generated by the branching points, new bifurcations can occur (in the case N_I = 2, see for example the LP and H bifurcations in Figs (8) and (9), and the corresponding light blue and purple curves in Fig 10), as well as new branching points (for N_I > 2), from which tertiary branches emerge, and so on. To study them, according to bifurcation theory, we need the Jacobian matrix of the network on the secondary (tertiary, and so on) branches, as we will explain more clearly in the next subsection. Here we focus again on the case N_I = 2, therefore we can determine the local bifurcations on the secondary branches by means of Eq (13). We start with the LP bifurcations. They are defined by the condition that one of the eigenvalues of Eq (13) is equal to zero. From this condition, as explained in S1 Text, we obtain Eq (26). Therefore, if we invert Eq (26) and use the solution of Eq (15), we obtain the expression of μ_E as a function of μ_{I,0}. If we replace the solutions μ_E and μ_{I,1} in the system (12), we get parametric equations for I_{E,I} as functions of a single parameter, now defined as ν ≝ μ_{I,0}. These equations are an analytical description of the light blue curves shown in Fig 10 (right) for J_II = −100. Similarly, for the H bifurcations we obtain an analogous condition, therefore again it is possible to describe these bifurcations analytically, obtaining the same results we got numerically in Fig 10 for J_II = −34 and J_II = −100 (see the purple curves in both panels). However, unlike the primary branch, our theory does not allow us to calculate the range of the parameter ν = μ_{I,0} on the secondary branches, since the resulting equations that define the range are analytically intractable. In the same way, it is not possible to calculate explicitly the coordinates of the new BT bifurcations, where the LP and H curves that emanate from the secondary branches meet each other.
Therefore they can be evaluated only through analytical approximations (which are beyond the purpose of the article), or through a numerical approach (as in S1 File).
For an arbitrary N_I, if spontaneous symmetry-breaking has not occurred, and if the initial conditions are homogeneous, then all the neurons in each population are described by the same equation. Therefore the network can be interpreted as a two-dimensional system, and by the Poincaré–Bendixson theorem it cannot show chaotic behavior. Nonetheless, when a symmetry-breaking occurs, the inhibitory neurons split into subgroups, and the network becomes at least three-dimensional, depending on the number of inhibitory subpopulations that have arisen. In this case, it is possible to check numerically for the presence of chaos by computing the Lyapunov exponents [62]. In particular, when the symmetry-breaking occurred in the simplest case N_I = 2, we excluded the presence of chaos, since the Lyapunov exponents were always negative. However, the emergence of chaos is still possible for N_I > 2, as we will discuss in the next subsection.
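The Lyapunov test mentioned above can be sketched with the standard two-trajectory (Benettin) method: integrate two nearby copies of the system and periodically renormalize their separation, accumulating the logarithmic growth rate. The vector field, weights and inputs below are illustrative stand-ins for Eq (3) after a split with N_I = 2, not the actual model parameters.

```python
import numpy as np

def largest_lyapunov(f, x0, dt=0.01, steps=100_000, d0=1e-8, renorm=10):
    """Benettin two-trajectory estimate of the largest Lyapunov exponent of dx/dt = f(x)."""
    rng = np.random.default_rng(0)
    x = np.array(x0, dtype=float)
    v = rng.standard_normal(x.size)
    y = x + d0 * v / np.linalg.norm(v)      # companion trajectory at distance d0
    acc = 0.0
    for k in range(1, steps + 1):
        x += dt * f(x)                      # simple Euler step for both copies
        y += dt * f(y)
        if k % renorm == 0:
            d = np.linalg.norm(y - x)
            acc += np.log(d / d0)
            y = x + (d0 / d) * (y - x)      # rescale the separation back to d0
    return acc / (steps * dt)

# Illustrative 3-unit rate network (one E unit plus a split inhibitory pair).
J = np.array([[ 1.2, -1.0, -0.8],
              [ 1.5, -2.0, -0.5],
              [ 1.5, -0.5, -2.0]])
I = np.array([0.5, -0.2, -0.2])
f = lambda mu: -mu + J @ np.tanh(mu) + I
lam = largest_lyapunov(f, x0=[0.1, 0.0, -0.1])
print(f"largest Lyapunov exponent ~ {lam:.3f}")   # negative here: no chaos
```

For this particular toy system the trajectory settles onto a stable equilibrium, so the estimate is negative; a positive value on a bounded attractor would instead be the numerical signature of chaos.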
We conclude by briefly describing the effect of the variation of the remaining synaptic weights. When we discussed the weak-inhibition regime, we showed that the LP and H curves on the primary branch change in a qualitatively similar way for different values of J_II, and the same conclusions apply to the strong-inhibition regime. On the other hand, now we want to analyze the behavior of the BP curves and of the bifurcations on the secondary branches. For J_II = −100 and increasing J_EE, the most notable phenomenon is the overlap between the oblique parts of the BP and the LP curves of the primary branch. The latter finally collapse on each other and split into two disjoint parts, as in the case J_II = −10 discussed for the weak-inhibition regime, while the bifurcations on the secondary branches do not show any interesting variation. Furthermore, when J_EE → 0, we observe first of all the disappearance of the ZH bifurcations. This occurs because the H curves on both the primary and the secondary branches do not meet the BP curve anymore. If we decrease J_EE further, the two CP bifurcations on the LP curve of the secondary branches get closer and closer until they annihilate each other and the curve disappears. This phenomenon implies also the disappearance of the BT bifurcations on the secondary branches. For smaller values of J_EE, the H curves on both the primary and secondary branches disappear, and finally so do the LP curve on the primary branch (see the weak-inhibition regime) and the BP curve. To conclude, for large |J_EI| or large J_IE, the LP curve on the secondary branches disappears again through the annihilation of its CP points (as explained above), while, on the other hand, when at least one of the two synaptic weights is small, we do not observe any interesting variation of the bifurcations on the secondary branches.

The study of networks with arbitrary N_I
The same analysis can be performed on networks with an arbitrary number N_I of inhibitory neurons. When λ_I in Eq (8) goes to zero, we observe the formation of secondary branches from the primary one. This means that the inhibitory neurons split into two subsets [52,78], and all the neurons in a given subset behave identically. For this reason we can reinterpret the system as a network with one excitatory population of N_E neurons and two inhibitory populations. Furthermore, when we change the current I_E (while keeping λ_I > 0) with I_I fixed, at some point one of the eigenvalues of the Jacobian matrix of the system (11) (see S1 Text for their analytical calculation, supposing that the sizes of the clusters are known) may go to zero, generating a new branching point on the secondary branches. In this case we observe the formation of tertiary branches, and so on.
Whenever a group of K inhibitory neurons splits into two clusters with sizes P and Q = K − P and membrane potentials μ_I^(P) and μ_I^(Q) respectively, the Jacobian matrix of the network has eigenvalues λ_I^(P) and λ_I^(Q), with multiplicities P − 1 and Q − 1 respectively. Immediately beyond the BP bifurcation that originated the two clusters, one of λ_I^(P) and λ_I^(Q) is positive. Thus if P and/or Q is larger than one, the new branches are unstable near the bifurcation point. These eigenvalues do not affect the stability of the new branches only in the case P = Q = 1 (having multiplicity equal to zero), which is compatible with the fact that for N_I = 2 the secondary branches are stable near the branching point (see Figs 8 and 9). In the case P = Q = 1 the stability of the new branches depends on the eigenvalues of the reduced Jacobian matrix introduced in S1 Text. These eigenvalues are strongly affected by the value of the parameter N_I and by the degree of heterogeneity of the inhibitory population. For this reason it is not easy to draw general conclusions about the stability of the new branches in the case P = Q = 1.
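The multiplicity argument can be checked numerically on a toy Jacobian. The sketch below assumes a rate network with A = tanh, weights that are constant within each population pair, normalization by N, and no self-connections (all assumptions of this sketch, not necessarily the exact form of Eq (3)); it builds a clustered state with P = 3 and Q = 1 and counts how many times the cluster eigenvalue appears:

```python
import numpy as np

def clustered_jacobian(NE, P, Q, J, mu_E, mu_a, mu_b):
    """Jacobian of dmu_i/dt = -mu_i + (1/N) sum_{j != i} J_ij A(mu_j) + I_i,
    with A = tanh and no self-connections (assumptions of this sketch)."""
    N = NE + P + Q
    Ap = lambda u: 1.0 / np.cosh(u) ** 2            # A'(u) for A = tanh
    mu = np.array([mu_E] * NE + [mu_a] * P + [mu_b] * Q)
    kind = np.array([0] * NE + [1] * (P + Q))       # 0 = excitatory, 1 = inhibitory
    W = J[np.ix_(kind, kind)].astype(float)         # block-constant weight matrix
    np.fill_diagonal(W, 0.0)                        # remove self-connections
    return -np.eye(N) + (W / N) * Ap(mu)[None, :]

J = np.array([[10.0, -70.0],                        # [[J_EE, J_EI],
              [70.0, -34.0]])                       #  [J_IE, J_II]]
NE, P, Q = 4, 3, 1
D = clustered_jacobian(NE, P, Q, J, mu_E=0.3, mu_a=0.8, mu_b=-0.5)
eigs = np.linalg.eigvals(D)

# Eigenvalue carried by the P-cluster in this toy model, multiplicity P - 1:
lamP = -1.0 - (J[1, 1] / (NE + P + Q)) / np.cosh(0.8) ** 2
print(int(np.sum(np.isclose(eigs, lamP))))          # → 2
```

In this example the cluster eigenvalue is positive (≈1.38), consistent with the statement that the new branches are unstable near the branching point whenever P > 1, while the Q = 1 cluster contributes no degenerate eigenvalue at all.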
In Fig 11 we show an example of the formation of secondary branches in the case N_I = 4, obtained as usual for the values of the parameters reported in Table 1. In this case K = 4, P = 3 and Q = 1, therefore the secondary branches are unstable near the branching points.
It is apparent that there is an important difference compared to the case N_I = 2. For a network with only two inhibitory neurons, Eq (12) implies that they both have the same codimension one bifurcation diagram (see the right panels of Figs 8 and 9). This is just a special case, because in general, for N_I > 2, the inhibitory neurons have different codimension one bifurcation diagrams. Although Fig 11 is useful to get an idea of how a bifurcation diagram for N_I = 4 would look, it is important to underline that this diagram may not be entirely complete. Indeed, since in this case λ_I in Eq (8) has multiplicity larger than one, the Cl_MatCont toolbox may fail in providing us with the complete picture [78]. Nonetheless, in principle it is possible to check the completeness of the diagram since, for an arbitrary N_I, local bifurcations can still be calculated analytically through our approach. In fact, from Eqs (14) and (15), it is possible to express N_I − 1 inhibitory membrane potentials as functions of the remaining one, which can be used as a parameter for the parametric equations in the codimension two bifurcation diagram.
To conclude, we observe that for N_I > 2 new kinds of bifurcations may appear which do not occur in the case N_I = 2. For example, for N_I = 3, if the network has three different inhibitory potentials, the characteristic equation of the Jacobian matrix has the form p(λ) = (λ − λ_E)^(N_E−1) p_R(λ), where p_R(λ) (the characteristic polynomial of the reduced Jacobian matrix introduced in S1 Text) is a fourth-order polynomial. This means that in principle, for some values of the parameters, the network may have two pairs of purely imaginary eigenvalues. This condition corresponds to the formation of a double-Hopf bifurcation, which in turn may imply a local birth of chaos (see for example [72,73]).

Discussion
We proved the emergence of complex dynamics in small neural circuits, characterized by strong finite-size effects, which cannot be accounted for by the mean-field approximation and which are much less likely to occur in large fully-connected neural systems. We showed, through a detailed numerical and analytical study of the bifurcations, that small homogeneous neural networks undergo spontaneous symmetry-breaking through branching-point bifurcations, which leads to the formation of strongly heterogeneous activity in the inhibitory population. This phenomenon occurs when we increase the strength of the synaptic weights in the inhibitory population, and shows the dangers of applying mean-field hypotheses to small populations of homogeneous neurons. This finding is interesting because it shows that it is possible to obtain a heterogeneous distribution of activity across neurons, as observed in real networks, even starting from the simple assumption of neurons with homogeneous characteristics. Heterogeneity of neural responses is thought to be an important feature of neural population codes, as it helps increase the diversity of the response profiles of different neurons [14,15] and reduce the detrimental effect of noise correlations on information processing [16]. Both such effects help reduce the redundancy of the information carried by different neurons. It is natural to speculate that avoiding massive redundancy is particularly important for small neural systems, as they must code as much information as possible with a small number of units. Thus, the features of neural dynamics specific to small neural systems that we reported here may be helpful for their function. The analysis we performed can be used to understand not only the dynamics of circuits at the mesoscopic scale in mammals, but also the behavior of tiny brains such as those of rotifers and nematodes [19].
However, it is important to observe that the finite-size phenomena that we described here may also in principle arise in networks of large size, for example when |J_II| is at least of the order of N_I in fully-connected networks, or when the connectivity is sufficiently sparse (see S1 Text). It is possible to extend our reasoning to argue that also in such cases the formation of multiple branching-point bifurcations and a stronger heterogeneity of the neural activity may arise from populations of neurons with homogeneous characteristics. This leaves open the possibility that such effects may also occur in circuits of not-so-small size.
The model we introduced is largely analytically tractable, and therefore it allowed us to derive exact formulas for the local bifurcation points. We implemented these formulas in a standalone Python script, which performs the analysis of local bifurcations in a few seconds, without the need of complex tools for the study of bifurcations, such as Cl_MatCont and XPPAUT, which make use of the continuation of equilibria.
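The strategy behind such a continuation-free script can be illustrated on a generic two-variable reduced system: parametrize the equilibria by the membrane potentials, invert the equilibrium condition to obtain the currents explicitly, and flag the points where the Jacobian satisfies the saddle-node (zero determinant) or Andronov-Hopf (zero trace, positive determinant) conditions. The equations and weights below are illustrative, not the formulas of the article:

```python
import numpy as np

# Illustrative reduced system dmu/dt = -mu + J A(mu) + I with A = tanh.
# The weights only mimic the E-I structure of the article; they are not
# the values of Table 1, and the equations are not those of the paper.
A  = np.tanh
Ap = lambda u: 1.0 / np.cosh(u) ** 2
J  = np.array([[10.0, -7.0],
               [ 7.0, -3.4]])

# Parametrize the equilibria by (mu_E, mu_I) and invert: I = mu - J A(mu).
mE, mI = np.meshgrid(np.linspace(-3, 3, 601), np.linspace(-3, 3, 601), indexing="ij")
I_E = mE - (J[0, 0] * A(mE) + J[0, 1] * A(mI))
I_I = mI - (J[1, 0] * A(mE) + J[1, 1] * A(mI))

# Trace and determinant of the 2x2 Jacobian on the equilibrium manifold.
aE, aI = Ap(mE), Ap(mI)
tr  = (-1 + J[0, 0] * aE) + (-1 + J[1, 1] * aI)
det = (-1 + J[0, 0] * aE) * (-1 + J[1, 1] * aI) - (J[0, 1] * aI) * (J[1, 0] * aE)

lp = np.abs(det) < 0.05                # saddle-node (LP) condition: det = 0
h  = (np.abs(tr) < 0.05) & (det > 0)   # Andronov-Hopf condition: tr = 0, det > 0
print(lp.sum() > 0, h.sum() > 0)       # (I_E[lp], I_I[lp]) etc. trace the curves
```

Because the currents are obtained in closed form from the potentials, the whole codimension two diagram is produced by a single vectorized sweep, which is why this kind of script runs in seconds compared with continuation-based tools.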
To conclude, we discuss the differences and advances that our model presents with respect to previous work on neural networks, and we discuss how it can be significantly extended and generalized through the introduction of new features that improve the biological plausibility of the model, without losing the possibility to investigate analytically its dynamics.

Comparison with previous work
Bifurcations in networks of arbitrary size. In his pioneering work [48], Beer provided a fairly complete description of bifurcations for 1- and 2-neuron circuits, and added a few results for networks of larger size that are valid under very specific, and not always biologically plausible, assumptions about synaptic connectivity. Our work builds on this knowledge and extends it. Specifically, we performed a detailed analytical and numerical analysis of the possible dynamical behavior of a network composed of two neural populations. Due to the high dynamical complexity of the network, we studied extensively the case of an excitatory population composed of an arbitrary number of neurons N_E and an inhibitory population with N_I = 2, and then we described how to extend our results to the case N_I > 2.
The neurons in each population were identical, and this made the equations of our model symmetric under permutations of the neural indexes within a population. More precisely, our equations are invariant under the action of the group S_{N_E} × S_{N_I}, where S_{N_α} is the permutation group (also known as the symmetric group) on N_α items. To the best of our knowledge, previous studies of homogeneous neural systems with symmetry have been restricted to networks of neural oscillators composed of one population with symmetry group S_N [78]. Networks composed of one excitatory and one inhibitory population have been considered (e.g. [13,66]), but only for large populations or in the thermodynamic limit N → ∞. In our work we performed a detailed bifurcation analysis for a two-population network in the more complex case of populations with arbitrary finite size. The presence in our finite-size model of a second population (and the corresponding S_{N_E} × S_{N_I} symmetry) increased the complexity of the model with respect to these previous studies, and was a key factor in creating a surprisingly rich network dynamics even under the hypothesis of homogeneous neurons in each population.
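This invariance can be checked directly in a few lines: for a block-constant weight matrix and block-constant inputs, the vector field F commutes with any permutation acting within the populations, i.e. F(Pμ) = PF(μ), while a permutation mixing the two populations breaks the property. The rate equation below is an illustrative stand-in for Eq (3):

```python
import numpy as np

rng = np.random.default_rng(1)
NE, NI = 3, 2
N = NE + NI
# Block-constant weights and inputs: all neurons within a population are identical.
J = np.block([[10.0 * np.ones((NE, NE)), -70.0 * np.ones((NE, NI))],
              [70.0 * np.ones((NI, NE)), -34.0 * np.ones((NI, NI))]])
I = np.concatenate([np.full(NE, 2.0), np.full(NI, -1.0)])

def F(mu):
    """Illustrative rate vector field; only its block structure matters here."""
    return -mu + (J @ np.tanh(mu)) / N + I

# A random element of S_NE x S_NI: permute indices within each population only.
perm = np.concatenate([rng.permutation(NE), NE + rng.permutation(NI)])
mu = rng.standard_normal(N)
print(np.allclose(F(mu[perm]), F(mu)[perm]))   # → True (equivariance)

swap = np.arange(N); swap[[0, NE]] = [NE, 0]   # a swap that mixes E and I
print(np.allclose(F(mu[swap]), F(mu)[swap]))   # → False (symmetry broken)
```

The same check extends verbatim to more populations: equivariance holds exactly for any permutation that preserves the block structure of the weights and inputs.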
As in [4], we based our analytical study of bifurcations on the method introduced in [5] for local codimension one bifurcations. Moreover, for the first time we extended this approach to the analysis of spontaneous symmetry-breaking and to some codimension two bifurcations. Our approach can also be easily applied to local bifurcations with codimension larger than two in neural networks composed of several populations (see S1 Text). This may become useful to study more complex and biologically plausible networks, especially when the complexity of the numerical simulations becomes prohibitive. In general we may consider a network with P populations, whose equations are invariant under the action of the group S_{N_0} × S_{N_1} × ⋯ × S_{N_{P−1}}. In this network, a codimension C bifurcation is described by a (P − C)-dimensional manifold, whose parametric equations contain P − C independent parameters. The only exception, as in the case with two populations considered in this article, is the BP manifold, whose equation can be written by expressing any current as an explicit function of all the others (see Eqs (23), (24) and (25)). For example, in a network with three populations A, B, C, in the codimension three diagram spanned by the currents I_A, I_B, I_C we get that:
• the bifurcations LP, H etc. are described by surfaces with two parameters;
• BT, CP etc. by lines with one parameter;
• codimension three bifurcations by points;
• the BP bifurcations are described by surfaces with explicit formulas.
However, a complete classification of bifurcations with C ≥ 3 is still missing in the literature. Nevertheless, the method introduced in this article can in principle be used to extend the approach introduced in [5] to local bifurcations with large codimension.
Spontaneous symmetry-breaking in neural fields. Since we consider identical neurons within each population, our model can be cast in the context of the bifurcation theory of dynamical systems with symmetry, which is known as equivariant bifurcation theory [50]. Symmetry-breaking and branching-point bifurcations have already been studied in neuroscience through equivariant bifurcation theory, but in a conceptually different way. In [51], Ermentrout and Cowan used symmetry-breaking, as well as arguments developed in the context of reaction-diffusion equations, to study hallucinations in primary visual cortex under the effect of drugs. They idealized the cortex as a plane and described the local activity of a population of neurons by means of neural-field equations, so that each population was uniquely identified by its position in the continuous space of the plane. Their theory exploits the symmetry the neural-field equations inherit from the geometrically regular structure of the anatomical connections in primary visual cortex. In equivariant bifurcation theory, this symmetry is described by the invariance of the equations under the action of the Euclidean group E(2), which is the group of rigid motions in the plane, generated by translations, rotations, and reflections. However, the network analyzed in our work is described by equations that are invariant under the action of the group S_{N_E} × S_{N_I}. This is a completely different kind of symmetry compared to that of the Euclidean group E(2), and it allows us to study in an analytically simple way networks made of a finite number of neurons. Indeed, this is conceptually different from the theory of Ermentrout and Cowan, which relies on infinite-dimensional neural-field equations. Thus, their theory does not describe finite-size effects, and does not use the underlying symmetry for this purpose.
From a biological point of view, the strong inhibition that leads to spontaneous symmetry-breaking may loosely correspond to the effect of anesthesia, as some kinds of anesthetics, such as propofol, thiopental and isoflurane, act on γ-Aminobutyric acid (GABA), the primary inhibitory neurotransmitter [74]. Our results prove that the dynamical repertoire of small neural populations can be completely different depending on the level of inhibition. In particular, in our model symmetry is broken by an increase of the inhibitory synaptic weights. This differs from the theory of hallucinations of Ermentrout and Cowan, in which spontaneous symmetry-breaking requires enhanced excitatory modulation as well as decreased inhibition to occur [51].

Future directions
Comparison with spiking models. Rate models of the type used in our study have been shown to generate dynamics similar to that of the firing rate in spiking neuron networks under a wide variety of conditions. These conditions include slow dynamics [80], slow synapses [81], and asynchronous states [82]. An important topic for future research is to understand when the dynamics of the rate model we used is close to that of spiking neuron models and when it is not. While addressing this issue goes far beyond the remit of the present study, it is useful to speculate on the conditions in which we expect our rate model to correspond more closely to spiking neuron models. This correspondence may depend on the details of the spiking neuron model chosen for comparison [83] as well as on the specific set of considered model parameters and the dynamics it generates [84]. In particular, it is likely that the correspondence between our model and a spiking neuron model holds better when the network is in an asynchronous state [82]. Networks with all-to-all coupling, like ours, are relatively easy to synchronize [85] since they have the shortest path length possible (L = 1). However, asynchrony is not incompatible with an all-to-all connectivity assumption. Asynchronous states may be generated in several ways, e.g. with both inhibitory and excitatory neurons [86] or with purely excitatory networks [83]. Because of this, it is tempting to speculate that the rate equations we used describe reasonably well the firing rate of an underlying spiking model in a wide range of the bifurcation parameters I_E, I_I. In light of the results of [83,86], strong synchronization is expected to arise mainly with strongly hyperpolarizing currents I_E, since these strong currents deactivate the excitatory population, thereby effectively making the network purely inhibitory and thus more likely to strongly synchronize.
Possible extensions of the model. In this article, for simplicity, we decided to lay down the foundations for the mathematical study of small neural systems by making a number of simplifying assumptions in the model. However, it is important to note that many of these assumptions can be relaxed in future work, while still keeping the model analytically tractable. For example, it is possible to introduce some heterogeneity in the network. A first element of heterogeneity that we discussed in Results was the heterogeneity generated in the inhibitory population by spontaneous symmetry-breaking. However, another, and perhaps more trivial, way to introduce heterogeneity in both the neural populations is through an explicit symmetry-breaking, namely the breaking of the permutation symmetry by parameters in the neural Eq (3) that do not respect the symmetry. Clearly, in order to break the permutation symmetry explicitly, these parameters must introduce differences between the neurons in a given population, so that they are not identical anymore under exchange of the neural indexes. Any parameter of the network can serve this purpose, but since in this article we focused mainly on the importance of the synaptic weights, here we suppose that the symmetry is explicitly broken by the terms J_αβ. For this reason, we can suppose for example that the weights between two populations are independent and identically distributed as J_αβ ∼ N(J̄_αβ, σ²_{J_αβ}), where J̄_αβ and σ_{J_αβ} are respectively the mean and the standard deviation of the normal distribution. σ_{J_αβ} measures the degree of heterogeneity of the network, since for large σ_{J_αβ} the N_α × N_β weights from population β to population α are more likely to be different. The main effect of this heterogeneity is to remove the degeneracy of the eigenvalues λ_{E,I}, see S11 Fig. The spectrum of matrices with distributed weights is studied in the field of random matrix theory [75].
However, random asymmetric block-matrices have been considered only recently [76]. Usually these matrices are supposed to have zero-mean random entries, also on the main diagonal, while the entries of our Jacobian matrix have non-zero mean and are deterministic on the main diagonal. For this reason some heuristics must be introduced to adapt the results of random matrix theory to our neural network. For example, it is possible to prove that in the weak-inhibition regime, given sufficiently small standard deviations σ_{J_αβ}, the eigenvalues λ_E and λ_I split into N_E − 1 and N_I − 1 eigenvalues respectively, which are uniformly distributed in the complex plane within circles of radius (√N_α / N) σ_{J_αα} A′_α(μ̄_α), for α = E, I. Here μ̄_{E,I} are the solutions obtained from Eq (6) after replacing J_αβ with J̄_αβ. This distribution is a generalization of Girko's circular law [77], and it clearly shows that heterogeneity in the neural equations removes the eigenvalue degeneracy, increasing the complexity of the bifurcation structure.
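The removal of the degeneracy can be reproduced numerically. The sketch below builds a toy Jacobian (A = tanh, no self-connections, an illustrative evaluation point, and weak-inhibition weights — all assumptions of this sketch, not the exact model of Eq (3)), and counts the eigenvalues that remain at the degenerate value λ_E when σ = 0 and when σ > 0:

```python
import numpy as np

rng = np.random.default_rng(0)
NE = NI = 50
N = NE + NI
kind = np.array([0] * NE + [1] * NI)
Jbar = np.array([[10.0, -70.0],
                 [70.0, -10.0]])[np.ix_(kind, kind)]   # block-constant mean weights

def jacobian(sigma):
    """-I + (J/N) A'(mu) with A = tanh, no self-connections, mu fixed at an
    illustrative point (all modelling choices are assumptions of this sketch)."""
    J = Jbar + sigma * rng.standard_normal((N, N))
    np.fill_diagonal(J, 0.0)
    a = 1.0 / np.cosh(0.5) ** 2                        # common value of A'(mu)
    return -np.eye(N) + (J / N) * a

lamE = -1.0 - (10.0 / N) / np.cosh(0.5) ** 2   # degenerate eigenvalue for sigma = 0
e0 = np.linalg.eigvals(jacobian(0.0))
e1 = np.linalg.eigvals(jacobian(0.5))
print(np.sum(np.abs(e0 - lamE) < 1e-6))   # → 49  (multiplicity N_E - 1)
print(np.sum(np.abs(e1 - lamE) < 1e-6))   # → 0   (degeneracy lifted into a disc)
```

Plotting e1 in the complex plane shows the former point eigenvalue smeared into a small disc around lamE, which is the qualitative content of the circular-law prediction discussed above.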
It is interesting to observe that, similarly to the heterogeneity caused by spontaneous symmetry-breaking, also the heterogeneity determined by explicit symmetry-breaking can be regulated dynamically through the stimuli I_{E,I}. Indeed, the degree of heterogeneity caused by the random weights in population α is (√N_α / N) σ_{J_αα} A′_α(μ̄_α), and therefore it depends on the input currents through the activation function. This suggests that in real networks the effective degree of heterogeneity is determined by a superposition of both kinds of symmetry-breaking.
Our work can also be extended to spatially organized interacting populations of neurons. As is often the case in numerical simulations of neural-field dynamics, the populations may be arranged on a two-dimensional regular lattice with periodic boundary conditions. In topological terms, the two-dimensional lattice can be thought of as being mapped onto a torus. The torus does not have boundaries, and the neurons on it would still be identical, since they would all have the same number of connections to other neurons. Therefore we expect that spontaneous symmetry-breaking would play an important role also in this kind of network.
To conclude, it is possible to study how the bifurcations affect the behavior of the network under periodic inputs, as in [57], and the relationship between bifurcations and random noise. For example, it is well-known that saddle-node bifurcations in stochastic systems lead to critical slowing down [12], while random fluctuations may perturb a stable equilibrium state close to an Andronov-Hopf bifurcation, generating sustained oscillations known as quasi-cycles [13].
Recent studies of neural models of the whole cortex at the edge of criticality have proposed stimulus- and noise-driven bifurcations as the neurophysiological mechanism underlying the rich dynamical behavior of the brain [7,8]. In principle our formalism may be applied to perform a systematic analysis of bifurcations in these models.
Supporting Information
S1 Text. Supplementary calculations and numerical simulations. This text contains additional information about the analytical derivation of the equilibrium points and the codimension two bifurcation diagram. By means of numerical simulations we also provide a complete picture of the codimension one bifurcation diagrams for weak and strong inhibition. Moreover, we show how the network size and sparse connections influence the branching-point bifurcations of the system. (PDF)
[S Fig caption:] Saddle-node (LP), as well as Andronov-Hopf (H) bifurcations, lie on the equilibrium curve. Supercritical/subcritical H bifurcations give rise to stable/unstable limit cycles, whose maxima and minima are described by plain/dashed brown curves. Homoclinic orbits, which are characterized by large-amplitude limit cycles with infinite period, are described by orange loops. The dark green loops identify the values of the current at which the limit cycles cross the limit point of cycles bifurcations. Here, limit cycles change stability. For I_E = 2 and I_E = 7 the solutions converge to stable foci, giving rise to damped oscillations (red and blue curve, respectively). For I_E = 13 and I_E = 15 the solutions converge to stable nodes (black and purple curve, respectively). Right, we fix I_I = −13.3 (area G in S5 Fig). For I_E = 0.216 we find low-amplitude oscillations of about 35 Hz (red curve). For I_E = 5.564 the amplitude of the oscillations is larger than in the previous case, and the frequency increases up to 160 Hz (blue curve). High-amplitude oscillations occur for I_E = 11.85; since this current is close to the homoclinic bifurcation, the frequency decays to 19 Hz (black curve). Finally, for I_E = 12.5 the system reaches a stable node (green curve).
[S11 Fig caption:] The panels in the left column show the whole set of eigenvalues (blue dots) of the Jacobian matrix for increasing network size (N = 10, 100 and 1,000), while the panels in the right column show a zoom of the eigenvalues close to the x-axis in the complex plane. The eigenvalues have been calculated numerically for J̄_EE = 10, J̄_EI = −70, J̄_IE = 70, J̄_II = −10, σ_{J_EE} = σ_{J_IE} = 1, σ_{J_EI} = σ_{J_II} = 0.1, and I_E = I_I = 0, while the remaining parameters have been chosen according to Table 1 in the main text. The panels on the right show that the eigenvalues λ_E and λ_I split into N_E − 1 and N_I − 1 eigenvalues respectively, which are distributed according to a circular law. The red circles represent the theoretical prediction of the support of the distribution according to random matrix theory, see text. (TIF)
S1 Table. Domain of the LP parametric curve on the secondary branch. This table reports the range of the parameter ν = μ_{I,0} obtained numerically from the system of inequalities (S57) in S1 Text, for different values of J_II. We observe that for large |J_II| the parameter ν_min reaches a constant value, while ν_max increases linearly. This result suggests that simple asymptotic expressions of ν_min and ν_max can be derived analytically. Nevertheless this calculation is beyond the purpose of the article, and is left to the interested reader.
(TIF)
S1 File. Python script. This code generates the codimension two bifurcation diagram of the network by using the analytical formulas derived in the article. (PY)