
Irregular Dynamics in Up and Down Cortical States

  • Jorge F. Mejias ,

    Affiliations Centre for Neural Dynamics, University of Ottawa, Ottawa, Ontario, Canada, Department of Electromagnetism and Matter Physics, University of Granada, Granada, Spain

  • Hilbert J. Kappen,

    Affiliation Donders Institute for Brain, Cognition and Behaviour, Radboud University of Nijmegen, Nijmegen, The Netherlands

  • Joaquin J. Torres

    Affiliation Department of Electromagnetism and Matter Physics, University of Granada, Granada, Spain


Complex coherent dynamics is present in a wide variety of neural systems. A typical example is the voltage transitions between up and down states observed in cortical areas of the brain. In this work, we study this phenomenon via a biologically motivated stochastic model of up and down transitions. The model consists of a simple bistable rate dynamics in which the synaptic current is modulated by short-term synaptic processes that introduce stochasticity and temporal correlations. A complete analysis of our model, using both mean-field approaches and numerical simulations, shows the appearance of complex transitions between high (up) and low (down) neural activity states, driven by synaptic noise, with permanence times in the up state distributed according to a power law. We show that the experimentally observed large fluctuations in up and down permanence times can be explained as the result of sufficiently noisy dynamical synapses with sufficiently large recovery times. Static synapses cannot account for this behavior, nor can dynamical synapses in the absence of noise.


Neural systems, even in the absence of external stimuli, can exhibit a wide variety of coherent collective behaviors, as in vivo and in vitro experiments show [1]-[3]. One of the most prominent examples is the spontaneous transition between two different voltage states, namely up and down states, observed in single-neuron recordings as well as in local field potential measurements. Such behavior, which is generated within the cortex, may provide a framework for neural computations [4], and could also coordinate some sleep rhythms into a coherent rhythmic sequence of recurring cortical and thalamocortical activities [3], [5], [6]. The phenomenon of up and down transitions has been measured in a number of situations, such as in the primary visual cortex of anesthetized animals [7], [8], during slow-wave sleep [1], [5], [6], in the somatosensory cortex of awake animals [9], or in slice preparations under different experimental protocols [3], [10], [11], to name a few. The origin of such structured neuronal activity is still unclear, although several studies have shown that both intrinsic cell properties [12]-[14] and the high level of recurrence present in actual neural circuits [3], [15], [16] may contribute to the generation of up and down transitions. In particular, the contribution of reverberations in recurrent neural networks to the appearance of these transitions could depend strongly on synaptic properties. It is known, for instance, that excitatory synapses with slow dynamics (such as synapses mediated by NMDA receptors) may play a relevant role in the generation of persistent activity or up cortical states [17]. On the other hand, several modeling studies indicate that activity-dependent synaptic mechanisms, such as short-term synaptic depression and facilitation, can induce voltage transitions between up and down neural states as well [16], [18]-[20].

Many crucial points in the understanding of up and down transitions are, however, still missing. For instance, in vivo experiments in the cat visual cortex show that the permanence times in the depolarized (up) state present a high variability, ranging from milliseconds to seconds [7]. A similar level of irregularity has also recently been found in in vivo recordings of up-down transitions in the rat auditory cortex [21], as well as in sleep-wake transitions [15], [22], [23], where power-law distributions of the durations of wake states have been measured. Such complexity in the time series of the neuronal membrane potential remains far from being explained; it could reflect scale invariance in permanence times, which could in turn be a preliminary indication of criticality. In fact, many recent studies have demonstrated criticality in different contexts in the brain [24], [25], as well as in neural network models presenting self-organization and criticality properties [26]-[28], and criticality has even been reported in sleep-wake transitions in in vivo conditions [22], [23]. Although the irregularity of up and down dynamics is not by itself a sufficient condition for criticality, a concrete characterization of such irregularity is a convenient starting point for future work on this topic.

To study in detail the relevant issue of irregular up and down cortical dynamics, we propose in this work a minimal model for up and down transitions in neural media. We consider a simple bistable rate model whose stable solutions represent two possible voltage states of the mean membrane potential of the network. More precisely, these states correspond, respectively, to high and low levels of activity in the network (that is, the up and down cortical states). In addition, we consider that the synaptic connections between neurons of the network present short-term synaptic depression (STD) mechanisms, which introduce temporal correlations, as well as synaptic stochasticity, into the dynamics of the system [29]-[32]. A complete analysis of this simple mathematical model shows, both numerically and within a theoretical probabilistic approach, the appearance of power-law dependences in the distribution of permanence times in the up state. Our results show that such scale-free distributions arise from the complex interplay between several factors, including synaptic stochasticity and the temporal correlations introduced by STD. The emergence of power-law dependences could, indeed, explain the high variability in up-state permanence times suggested by experiments [7], [21].


Our starting point is a bistable rate model that mimics the dynamics of the electrical activity of a population of interconnected excitatory neurons (although it can easily be extended to other situations) with two stable levels of activity. The model has the form of equation (1) [33], whose quantities are the mean firing rate of the (homogeneous) neural population, the maximum level of activity that the population can reach (in the absence of noise), the synaptic coupling strength in the absence of STD, and the firing threshold of the neurons in the population. The noise term is a Gaussian white noise of zero mean and fixed standard deviation, which takes into account the inner stochasticity of the neural population (caused by other, uncontrolled sources of noise in the system). The population time constant may be assumed to be of the order of the duration of the synaptic current pulse [34], [35]. For generality, we set it to unity, so that time and frequency are given in units of the time constant and its inverse, respectively. The transduction function gives the nonlinear effect that the mean postsynaptic current (coming from the recurrent connections of the neural population) induces in the network mean firing rate. With this form of the transduction function, the up and down stable levels of activity correspond to the high and low firing-rate solutions, respectively.

On the other hand, the depression variable in equation (1) takes into account the dynamical modification of the strength of the synaptic connections on short time scales due to high network activity, usually termed short-term synaptic plasticity. Based on the model proposed in [29], [36] for short-term depression, and following previous studies of the dynamics of neural populations [16], we assume that it evolves according to equation (2), whose parameters are the characteristic time scale of the STD mechanism and a parameter related to the reliability of synaptic transmission. According to experimental measurements of these parameters in the somatosensory cortex of the rat [36], we fix their values unless specified otherwise. Assuming, for instance, a population time constant that approximately corresponds to the duration of a fast synaptic current pulse mediated by AMPA receptors, the resulting depression time scale is within the physiological range measured in [36]. The last term on the right-hand side of equation (2) is added to the original model of [36] to include some level of stochasticity in this otherwise deterministic description of synaptic transmission. Including such a term is a simple way of accounting for the stochasticity due, for instance, to the unreliability of synaptic transmission [31], [32], the stochastic properties of receptor-transmitter interactions [37], the sparse connectivity of cortical circuits [38], [39], or other sources of noise not yet considered (see the Discussion section for more details). A noise-strength parameter controls the amplitude of this fluctuating term, which is driven by a Gaussian white noise with zero mean and unit variance.
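Because the published notation was lost in text extraction, the following sketch integrates equations (1)-(2) in symbols of our own choosing: `nu` is the mean firing rate, `x` the depression variable, and all parameter values (`J`, `theta`, `tau_d`, `U`, and the two noise amplitudes) are illustrative guesses rather than the authors' values. A tanh transduction is assumed, as described in the text, and both stochastic terms are handled with a plain Euler-Maruyama scheme:

```python
import numpy as np

def simulate(T=2000.0, dt=0.01, nu_max=1.0, J=4.0, theta=1.0,
             tau=1.0, tau_d=50.0, U=0.5, delta=0.05, Delta=0.05, seed=0):
    """Euler-Maruyama integration of a bistable rate model (cf. eq. (1))
    coupled to stochastic short-term depression (cf. eq. (2)).
    All symbol names and parameter values are illustrative: the
    published notation did not survive text extraction."""
    rng = np.random.default_rng(seed)
    n = int(T / dt)
    nu = np.empty(n)
    x = np.empty(n)
    nu[0], x[0] = 0.0, 1.0
    sdt = np.sqrt(dt)
    for i in range(n - 1):
        # sigmoidal (tanh) transduction of the depressed recurrent current
        f = 0.5 * nu_max * (1.0 + np.tanh(J * x[i] * nu[i] - theta))
        nu[i + 1] = (nu[i] + dt * (-nu[i] + f) / tau
                     + delta * sdt * rng.standard_normal())
        # depression: recovery toward 1, usage-dependent decay, additive noise
        x[i + 1] = (x[i] + dt * ((1.0 - x[i]) / tau_d - U * x[i] * nu[i])
                    + Delta * sdt * rng.standard_normal())
        x[i + 1] = min(max(x[i + 1], 0.0), 1.0)  # keep x in [0, 1]
    return nu, x
```

The clipping of `x` to [0, 1] implements the ad hoc restriction mentioned later in the text (the depression variable is a fraction of available neurotransmitters).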

Equations (1) and (2) constitute our minimal model of an excitatory neural network with stochastic depressing synapses. The simplifications assumed in this model allow us to obtain analytical expressions for the quantities of interest, and concretely for the probability distribution of permanence times in the up state. Bistable systems in the presence of different sources of noise have been studied theoretically in detail in many works [40]-[44]. Here, however, we employ a probabilistic approach that is particularly well suited to computing the distribution of permanence times. In the following, we derive an approximate expression for this distribution. First, we obtain the potential function and the conditions under which the dynamics of the system is driven by the slow depression variable. After that, we compute the probability distribution of the ruin times of that variable, which, as we will see, leads to the probability distribution of permanence times in the up state.

A. The potential function

In order to compute the potential function of the dynamics (1,2), one can observe that, for realistic values of the depression time constant, the dynamics of the depression variable is very slow compared to that of the firing rate. We can therefore write equation (1) in the form of equation (3), where the depression variable has been adiabatically eliminated from the rate dynamics. The extrema of the potential are given by the solutions of equation (4). In the following, we choose the firing threshold as a linear function of the depression variable, with two constant coefficients. With this choice, one can easily check from equation (3) that the potential becomes symmetric in the firing rate for a particular value of the depression variable.

Equation (4) may have one or three solutions, depending on the slope of the hyperbolic tangent and on the value of the threshold term. In order to obtain three solutions of (4) (that is, the bistable regime), the maximal slope of the hyperbolic tangent must be large enough to exceed the slope of the identity line. In addition, the threshold term must be neither too small nor too large, so that the hyperbolic tangent has three crossing points with the straight line rather than one. As a first approach, this last condition can be written by requiring the threshold to lie between the two points at which the curvature of the hyperbolic tangent is maximal and minimal, respectively. These points can be computed from the third derivative of the transduction function, equation (5); setting it to zero yields equation (6). Using these values, the bounds on the threshold can be written as equation (7), which implies that, in order to have one maximum and two minima in the potential, the depression variable must lie in the range given by equation (8). From equation (8), one can see that the range of synaptic couplings allowing three extrema in the potential is given by equation (9); this is therefore a sufficient condition to obtain a double-well potential for some value of the depression variable. One finds, however, a small discrepancy between this approximate prediction and the actual properties of the potential. The discrepancy appears because we have assumed that bracketing the threshold between the two curvature extrema suffices for the existence of the three fixed-point solutions of equation (4), and this assumption is only approximately correct. Plotting the potential directly as a function of the firing rate reveals that the actual condition to obtain a double well is slightly more restrictive than the approximate one.
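The one-or-three-solutions argument can be checked numerically. The sketch below counts the fixed points of a tanh transduction of the form assumed here, `nu = (nu_max/2) * (1 + tanh(J*x*nu - theta))`; the symbol names and the parameter values (`J = 10`, `theta = 4`) are our own choices for illustration, since the published values were lost in extraction:

```python
import numpy as np

def n_fixed_points(x, J=10.0, theta=4.0, nu_max=1.0, grid=4001):
    """Count solutions of the fixed-point condition (cf. eq. (4)),
    nu = (nu_max / 2) * (1 + tanh(J * x * nu - theta)),
    by locating sign changes of the mismatch on a dense grid.
    Notation and parameter values are illustrative reconstructions."""
    nu = np.linspace(0.0, nu_max, grid)
    g = 0.5 * nu_max * (1.0 + np.tanh(J * x * nu - theta)) - nu
    return int(np.count_nonzero(np.diff(np.sign(g)) != 0))
```

With these illustrative values, fully recovered synapses (`x = 1.0`) yield three fixed points (the bistable, double-well regime), while strongly depressed synapses (`x = 0.2`) leave a single crossing, in line with the regimes described above.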

Assuming that the above condition is satisfied, three different shapes of the potential function can be found, as figure 1A illustrates. For small values of the depression variable, the potential presents only one minimum, located near the down state. Similarly, for large values the potential also presents a single minimum, but now located near the up state. Finally, for intermediate values the potential takes a double-well shape, with the maximum located between the two minima corresponding to the down and up states, respectively.

Figure 1. Considerations for the mean-field approach.

(A) Potential function as a function of the mean firing rate, for different values of the depression variable. One can appreciate the different regimes explained in the main text. (B) An Ornstein-Uhlenbeck (OU) process (see equation (10)). A typical return event (with its return time) and a first-passage event (with its first-passage time) are indicated for illustrative purposes. For the first-passage time, the threshold (depicted as a blue dashed line) was fixed at 0.15.

It is worth noting that the bistable range of the depression variable is located around its mean value. Because of this, if that range is small compared with the fluctuations of the depression variable, the potential function spends most of the time in the two single-minimum regimes, with the double-well regime appearing only when the system jumps from one of these regimes to the other. A direct consequence is that the mean firing rate essentially switches between the up and down states, and that this switching is driven by the dynamics of the depression variable, as figure 2 illustrates. One therefore expects the distribution of permanence times of the rate in the up (down) state to become approximately equal to the distribution of permanence times of the depression variable in the corresponding regime, as long as this separation of scales is satisfied. Owing to this equivalence, in order to compute the distribution of up-state permanence times we only need to compute the distribution of permanence times of the depression variable in the corresponding regime.

Figure 2. Time series showing the dynamics of our system.

(A) Time series of the mean firing rate of the neural population for deterministic depressing synapses. The temporal evolution of the depression variable is also plotted for illustration purposes. (B) Histogram of the mean firing rate, which shows the existence of two well-defined states of activity, corresponding to the down and up states respectively. (C) Same as (A), but with a certain level of intrinsic stochasticity in the dynamics of the synapses. The two-headed arrow shows a typical interval of permanence in the up state. (D) Same as (B), but for the stochastic case shown in (C). The other parameters take the same values as in (A) and (B).

On the other hand, it should be noted that, since the depression variable is a fraction of available neurotransmitters, its value should be kept within the range [0, 1]. In practice, this means that the synaptic noise amplitude must not be too large. In the results presented here, the depression variable remains within this realistic range of values, and imposing ad hoc restrictions to keep it within [0, 1] does not affect the results obtained here.

B. Distribution of permanence times

In order to compute the distribution of permanence times of the depression variable in a given regime, one can assume that the firing rate takes its mean value in equation (2). This is a reasonable approximation, since the depression dynamics is much slower than the rate dynamics for realistic values of the parameters. Under this approximation, and after an appropriate rescaling, equation (2) can be written as equation (10), which is the equation of an Ornstein-Uhlenbeck (OU) process (see [45] for details), with a correlation time set by the depression time scale. Therefore, computing the distribution of permanence times in the up state for our system is equivalent to obtaining the distribution of the so-called ruin times of the OU process [46], [47], which may be defined as follows: for a stochastic process starting at the origin at time zero, the ruin time is the interval until the process returns to the origin for the first time. Since the process is stochastic, the ruin times are random quantities that follow a certain probability distribution.

The strategy employed here to calculate the distribution of ruin times is based on the relation between the ruin time and the first-passage time, which is the typical time a stochastic process needs to reach a certain threshold value when starting from a given initial condition [47]. Because of the symmetry of the OU process, the distributions of ruin times for excursions into the positive and negative regions are equivalent. Considering excursions into the positive region, we can set a small positive threshold near zero, such that the typical ruin time is approximately equal to the corresponding first-passage time, as figure 1B illustrates. Excursions that fail to reach the threshold typically lead to very short times, which we exclude from our calculations by considering only sufficiently large ruin times.

The first-passage-time distribution of the OU process with a small threshold can be computed using the relation in equation (11), where the conditional probability distribution of the OU process appears convolved with the first-passage-time distribution. This equation can be solved by taking into account the convolution property of the Laplace transform, equation (12). By solving the Fokker-Planck equation associated with equation (10), one obtains the conditional probability of the OU process, equation (13), in which the stationary standard deviation of the process appears. From expression (13), and assuming that the correlation time is large enough (more precisely, that the times of interest are much smaller than the correlation time, a valid hypothesis since most permanence times in the up state are much shorter than it), one arrives at equation (14). Taking the Laplace transform of these quantities yields the expressions in equation (15). Using property (12) in equation (11), the Laplace transform of the first-passage-time distribution is given by equation (16). Finally, for small thresholds one can linearize the exponential; with this approximation, the inverse Laplace transform of equation (16) is easily performed, giving the distribution of first-passage times of the OU process, equation (17). A similar expression may be obtained from more classical derivations of the first-passage time of the OU process (see, for instance, [48]). To obtain the distribution of ruin times of the OU process, one then takes the limit of a small (but positive) threshold. The distribution of ruin times of the depression variable, and therefore the distribution of permanence times in the up state of our system, is then given by equation (18), which corresponds to a power-law probability distribution.
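The ruin-time construction can be illustrated numerically. The sketch below simulates the OU process of equation (10) and records excursion lengths, restarting each excursion just above the origin at a small threshold, as in the first-passage argument of figure 1B. The correlation time, stationary standard deviation, and threshold are made-up values, since the originals were lost in extraction; histogramming the result on logarithmic bins reproduces a heavy tail up to a cut-off of order the correlation time:

```python
import numpy as np

def ou_ruin_times(tau_c=200.0, sigma=1.0, eps=0.05, dt=0.05,
                  n_events=1000, seed=1):
    """Sample approximate ruin times of an OU process (cf. eq. (10)):
    each excursion starts just above the origin (at the small
    threshold eps) and ends at the first return to the origin.
    All parameter values are illustrative."""
    rng = np.random.default_rng(seed)
    amp = sigma * np.sqrt(2.0 * dt / tau_c)  # stationary std equals sigma
    times = []
    y, t = eps, 0.0
    while len(times) < n_events:
        y += -y * dt / tau_c + amp * rng.standard_normal()
        t += dt
        if y <= 0.0:            # first return to the origin: one ruin event
            times.append(t)
            y, t = eps, 0.0     # restart the next excursion
    return np.array(times)
```

Most excursions are very short, but the tail extends over several orders of magnitude, which is the behavior the derivation above captures analytically.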

Summarizing, the following three conditions must be fulfilled to obtain a power-law dependence in the distribution of permanence times in the up state:

  • A large enough depression time scale. This ensures that the dynamics of the depression variable is much slower than that of the firing rate.
  • A large enough synaptic noise amplitude compared with the width of the bistable range. This condition can be achieved even with very small noise levels, since the bistable range can be made arbitrarily small (by increasing the synaptic coupling, for instance).
  • The bistability condition derived above must hold, to ensure the existence of two well-defined (up and down) states.

All these conditions can be met (up to a point) with realistic values of the model parameters, indicating that power-law distributions of the permanence times in the up state may plausibly be found in actual cortical media.


As stated in the previous sections, equations (1)-(2) govern the dynamics of our simplified neural system. A typical time series of the model dynamics, for the case of deterministic synapses (zero synaptic noise), is depicted in figure 2A. In this case, the mean firing rate of the population is characterized by periodic switching between up and down states. This type of periodic behavior was found and analyzed in previous theoretical studies [14], [16], [18] and yields bimodal histograms of the mean firing rate of the neural population (see figure 2B), as experiments indicate [3]. However, these approaches ignore the stochastic nature of synaptic transmission, and other forms of stochasticity at the synaptic level, which seem to be crucial for information processing in neural systems [31], [32], [49]. Considering a certain level of synaptic stochasticity in addition to STD in our model, one obtains qualitatively different emergent behavior, as shown in figure 2C. The mean firing rate then presents complex switching between up and down states, with, in particular, a high variability of the permanence times in the up state.

When deterministic synapses are considered, the dynamics of the mean firing rate becomes quasi-periodic, as reported in [16], [18], [19], for instance. This type of dynamics naturally leads to exponential distributions of the permanence times. More precisely, in this limit our model is similar to the one analyzed in [16] (except for the intrinsic noise term in the rate equation), which shows periodic oscillations of the network mean firing rate. In our case, however, the intrinsic noise term introduces a certain level of stochasticity that turns these periodic oscillations into quasi-periodic ones, leading to the exponential distributions of up-state permanence times. When the synaptic noise is increased, on the other hand, the stochasticity of the synapses leads to the appearance of power-law distributions of the up-state durations. This behavior is shown in figure 3A, where low noise levels correspond to exponential distributions, while larger noise levels give power laws, as predicted by our theoretical calculations. Such power-law distributions may explain the high variability of up-state permanence times observed in a number of in vivo experiments, such as in the cat visual cortex [7] and the rat auditory cortex [21], to name a few. Interestingly, similar power-law dependences have been observed during sleep-wake transitions in vivo when measuring the distribution of permanence times in the wake state [22], [23]. Exponential-like distributions, obtained in the deterministic-synapse case, cannot explain this variability of up-state durations.

Figure 3. Probability distributions of permanence times in the up state.

(A) Probability distribution , obtained with numerical simulations, for different values of the noise strength . One can see that high values of lead to the appearance of power-law distributions with , as the mean-field solution predicts. For numerical simulations, we employed time series of duration and averaged over trials. The values of the other parameters were and . To compute , we have considered that the up state has been reached during a period (with ) if during this period. We set . (B) Probability distributions of permanence times in the up state, for different values of and fixed and . In order to fix and , we have conveniently modified and , respectively, for each value of . We employed time series of duration and averaged over trials. Other parameters are and .
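The detection rule quoted in the caption (the up state is a maximal stretch during which the rate stays above a threshold) can be written compactly. The function below is a sketch with our own names; the published threshold value did not survive extraction:

```python
import numpy as np

def up_state_durations(nu, threshold, dt):
    """Permanence times in the up state: lengths (in time units) of the
    maximal runs where the rate trace stays above the threshold."""
    above = np.concatenate(([False], nu > threshold, [False]))
    d = np.diff(above.astype(np.int8))
    starts = np.flatnonzero(d == 1)   # first sample of each up period
    ends = np.flatnonzero(d == -1)    # one past the last sample
    return (ends - starts) * dt
```

For example, a trace spending two samples and then three samples above threshold yields durations of `2*dt` and `3*dt`; applying this to a long simulated trace and log-binning the output gives histograms like those in figure 3.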

Looking at the data in figure 3A, one can observe a small deviation of the numerical results (blue points) from the theoretically predicted slope (solid line) at very large permanence times. This deviation is due to the fact that the separation of timescales between the rate and depression dynamics (a necessary condition to obtain power-law dependences) is only approximate for realistic values of the parameters. More precisely, the approximation fails during the occasional periods of very long permanence in the up state (that is, for permanence times comparable to the depression time scale). In order to study the effect of the separation of timescales, we computed the permanence-time distribution for increasing values of the depression time scale while keeping the effective drift and noise of the depression dynamics fixed (which can be done by properly rescaling the other synaptic parameters). As a consequence, the only effect of increasing the depression time scale is a clearer separation of timescales. The results are shown in figure 3B, where one can see that a clearer separation of timescales displaces the effective cut-off towards larger permanence times, as expected, and a clearer power-law distribution emerges.

It is worth noting that the appearance of an effective cut-off under realistic conditions does not represent an unrealistic feature of the model; rather, it constitutes a prediction of the effective range of permanence times expected to occur in actual neural systems. Indeed, for realistic values of the parameters, our results predict permanence times in the up state up to the maximum permanence time observed in experimental realizations [7]. Larger permanence times in the up state (of the order of seconds, for instance) should be expected to appear only as a consequence of input-driven mechanisms (such as persistent activity associated with working-memory tasks [50], [51]), and not of the spontaneous transitions between different voltage levels that are the focus of this work.

For a better characterization of the dynamics of the system, one can use other statistical quantities, such as the autocorrelation function of the mean firing rate, defined in equation (19), where the angular brackets indicate a temporal average. The autocorrelation function is depicted in figure 4A for the cases of deterministic and stochastic depressing synapses. For deterministic synapses, it presents two well-defined peaks, indicating a strong periodicity of the time series (as can be seen in figure 2A). On the contrary, the inclusion of a certain level of intrinsic stochasticity in the synaptic dynamics introduces more pronounced temporal correlations in the dynamics of the system. This reflects the existence of long stays in the up state, which occur with higher probability for sufficiently strong synaptic noise, as already discussed.
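Equation (19), a temporal average of the mean-subtracted rate against a lagged copy of itself, can be estimated directly from a simulated trace. The normalization below is one common convention and may differ from the authors':

```python
import numpy as np

def autocorrelation(nu, max_lag):
    """Normalized autocorrelation of a rate trace (cf. eq. (19)),
    estimated by a temporal average of the mean-subtracted signal."""
    z = nu - nu.mean()
    n = len(z)
    var = np.dot(z, z) / n
    return np.array([np.dot(z[:n - k], z[k:]) / ((n - k) * var)
                     for k in range(max_lag + 1)])
```

A strongly periodic trace, as in the deterministic-synapse case of figure 4A, shows repeated peaks at multiples of the oscillation period, while long up-state stays produce slowly decaying correlations.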

Figure 4. Autocorrelation and power spectra.

(A) Autocorrelation function of the mean firing rate for deterministic () and stochastic () synapses, in the presence of STD. (B) Power spectra of the mean firing rate for the two cases illustrated in (A). For both panels, we have averaged over time series of duration each, and we have fixed and .

The spectral properties of the dynamics can be analyzed as well, via the power spectrum defined in equation (20).

As one might expect, the power spectrum in the deterministic case presents a pronounced peak around a certain frequency, visible in figure 4B. The power spectrum for higher synaptic noise, however, shows different properties. For instance, figure 4B indicates an approximate power-law behavior of the power spectrum in the stochastic case. This scale-free dependence can be understood by considering that, if the permanence-time distribution is algebraic, the corresponding power spectrum is also algebraic, with the two exponents related by a known linear equation [44]. In our particular case, this yields a theoretical prediction for the exponent of the power spectrum. The theoretical relation between the two exponents, however, is only valid under the so-called single-interval approximation, which requires the integration variable in equation (20) to be smaller than the permanence time (see [44] for details). This condition does not strictly hold for our system (where permanence times range over several scales), and may therefore introduce deviations of the theoretically predicted exponent with respect to the value found in simulations.
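Equation (20) is a standard power spectrum; a minimal periodogram estimator (a single segment, no windowing, whereas the authors average the spectra of many segments) can be sketched as follows:

```python
import numpy as np

def power_spectrum(nu, dt):
    """One-sided periodogram of a rate trace (cf. eq. (20)).
    Averaging the spectra of several segments, as done for
    figure 4B, reduces the estimator's variance."""
    z = nu - nu.mean()
    S = (np.abs(np.fft.rfft(z)) ** 2) * dt / len(z)
    f = np.fft.rfftfreq(len(z), d=dt)
    return f, S
```

A periodic trace concentrates its power at the oscillation frequency, matching the pronounced peak of the deterministic case; a trace with power-law permanence times instead spreads its power algebraically over frequency.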

Besides the level of synaptic stochasticity, other parameters of the model can have an important effect on the dynamics as well. The intrinsic noise in the rate equation, for instance, controls the stochasticity of the firing-rate dynamics, and one should therefore expect that increasing it could strongly influence the permanence-time distribution. This is shown in figure 5A, where an increase of the intrinsic noise disrupts the appearance of power-law dependences, and exponential distributions appear instead. This change is due to the fact that high levels of additive noise make the system jump more frequently from one state to the other, so that long stays in the up state (and thus distributions with long power-law tails) rarely occur.

Figure 5. Influence of other parameters of the model.

(A) Probability distributions of permanence times in the up state, for different values of the intrinsic noise strength. (B) Same as in (A), but for different values of the synaptic-release parameter. The other parameters take the same values as in (A). (C) Probability distribution of permanence times as a function of the depression time scale. The three different regimes are shown with different colors (see main text for details). For all panels, we have averaged over time series of fixed duration.

The parameters governing the dynamics of the depression variable also affect the permanence-time distributions. The synaptic-release parameter, for instance, is responsible for the modulation of the depression variable by the mean firing rate (see equation (2)), and can therefore influence the dynamics of both variables. As one may see in figure 5B, when this parameter takes low values a bump emerges in the distribution at long permanence times. Such a deviation from the power-law dependence indicates that long stays in the up state occur more frequently than in the power-law case. Looking at equation (2), one can see that an increase of the mean firing rate decreases the depression variable at a rate set by this parameter. Therefore, for lower values of the parameter the decrement of the depression variable is smaller. As a consequence, the stays of the depression variable in the high regime (see the Methods section) last longer, and so do the stays of the system in the up state, causing the observed deviation from the power-law tendency. It should be noted, however, that the values of this parameter that allow power-law dependences in our model agree with the values measured in actual cortical media where up and down transitions are observed [36].

We have also analyzed in detail the effect of varying the depression time scale on the probability distribution of permanence times. Note that, contrary to the study presented above, we now vary this parameter while keeping all the others fixed. This implies that its modification affects not only the separation of timescales between the rate and depression dynamics, but also the width of the bistable range and the amplitude of the noisy term of equation (2). The results are shown in figure 5C, where one can distinguish three different regimes as a function of the depression time scale. For low values (red region in the figure), the probability distributions show an exponential decay at large permanence times. The reason for this decay is that, for low values, the depression variable does not perform long excursions into the high regime (see the Methods section), so the probability of long stays decreases and the power-law behavior is not obtained. As the time scale is increased, long excursions begin to occur, and we obtain power-law behavior (green region in the figure). Finally, one can appreciate that, for even larger values (blue region in the figure), the distribution of permanence times in the up state presents a power-law dependence whose exponent increases with the time scale. Such a dependence cannot be explained by our previous theoretical predictions, which are based on the assumption that the system is in the bistable regime, and deserves the detailed analysis presented in the next section.

Further analysis

In the Methods Section, we established several conditions that must be fulfilled in order to obtain power-law dependences for the distribution of permanence times. In particular, our previous analysis indicates that a certain condition on the model parameters must hold in order to have a potential function with three extrema (bistable regime). However, as we show in the following, power-law expressions may appear even if the potential function has only one extremum (concretely, one minimum), although the origin of such power-law distributions is different from the one considered in the previous sections.

When the bistability condition is not fulfilled (as occurs for certain parameter choices), the potential function has only one minimum, whose location strongly depends on the synaptic variable. An approximate expression for the location of this minimum can be obtained by expanding the hyperbolic tangent in the fixed-point expression (see equation (4)) around zero, since its argument is small in this limit, yielding equation (21), which gives the firing rate at the minimum of the potential function. Therefore, as the synaptic variable fluctuates, the location of the minimum of the potential varies with it. As an example, time series of both the firing rate and the synaptic variable are shown in figure 6A for a set of parameters satisfying the monostability condition. In these time series, the firing rate fluctuates around a value fully determined by the synaptic variable (that is, the firing rate becomes a slave variable of the synaptic dynamics). The predictions of equation (21) agree well with simulations and with the numerical evaluation of the fixed points of equation (1), as figure 6B shows.
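The linearization behind equation (21) can be checked numerically. The sketch below compares the exact fixed point of an illustrative transduction equation, nu = (1/2)(1 + tanh(w*x*nu - theta)), obtained by bisection, against the first-order expansion tanh(a) ≈ a. Both the functional form and the parameter values are assumptions made for illustration, not the paper's equation (4).

```python
import numpy as np

def fixed_point(x, w=0.8, theta=0.4, tol=1e-10):
    """Exact fixed point of nu = 0.5*(1 + tanh(w*x*nu - theta)), by bisection."""
    g = lambda nu: nu - 0.5 * (1.0 + np.tanh(w * x * nu - theta))
    lo, hi = 0.0, 1.0   # g(0) < 0 < g(1) for these parameter values
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if g(lo) * g(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

def fixed_point_linear(x, w=0.8, theta=0.4):
    """First-order expansion tanh(a) ~ a for small argument:
    nu = 0.5*(1 + w*x*nu - theta)  =>  nu = (1 - theta) / (2 - w*x)."""
    return (1.0 - theta) / (2.0 - w * x)
```

As the synaptic variable `x` moves, the fixed point tracks it, which is the slave-variable behavior illustrated in figure 6B.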

Figure 6. Behavior of the system in the monostable regime.

(A) Time series of the firing rate and the synaptic variable. (B) The same time series represented on the phase plane, illustrating that the firing rate is a slave variable of the synaptic dynamics (although some level of intrinsic stochasticity is still present). The green line corresponds to the approximate expression (21), while the blue line is the numerical evaluation of the fixed-point solutions of equation (1) (see equation (4)). The inset shows the situation in which the system displays bistable dynamics, analyzed in the previous section. (C) The potential function for different values of the synaptic variable. One can appreciate the existence of only one minimum, whose location is controlled by the synaptic variable. (D) Histograms of the mean firing rate of the system for different parameter values; the monostability condition is satisfied in only one of the cases shown. For all panels, parameter values are as in the main text unless otherwise specified.

Since the firing rate now behaves as a stochastic variable that does not present a clear bistable dynamics, the numerical computation of the distribution of permanence times depends on the exact value of the firing rate above which the system is considered to be in the up state. As we have seen before, this threshold interpolates between the down-state and up-state levels (see caption of figure 3), with an interpolation factor usually taking values between zero and one. While the results presented for the bistable regime are quite robust for different choices of this factor, in the monostable regime the factor does have some effect on the distribution, which indicates the difficulty of accurately analyzing the up and down dynamics in this case.
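The threshold criterion described here can be made concrete with a short routine: given a rate trace and reference down and up levels, place the threshold a fraction `alpha` of the way between them and measure the durations of contiguous supra-threshold stretches, discarding excursions cut off at the borders of the recording. The function and its argument names are an illustrative sketch of this procedure, not code from the study.

```python
import numpy as np

def up_state_durations(nu, dt, nu_down, nu_up, alpha=0.5):
    """Durations of contiguous stretches with nu above the threshold
    nu_th = nu_down + alpha * (nu_up - nu_down)."""
    th = nu_down + alpha * (nu_up - nu_down)
    above = nu > th
    # +1 marks an up-state onset, -1 an offset
    edges = np.diff(above.astype(int))
    onsets = np.where(edges == 1)[0] + 1
    offsets = np.where(edges == -1)[0] + 1
    # discard incomplete excursions at the borders of the recording
    if above[0]:
        offsets = offsets[1:]
    if above[-1]:
        onsets = onsets[:-1]
    return (offsets - onsets) * dt
```

Sweeping `alpha` in such a routine reproduces the robustness check discussed above: in a cleanly bistable trace the durations barely change with `alpha`, whereas in the monostable regime they do.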

In figure 7A, one observes that the distribution also shows a power-law behavior for different parameter values satisfying the monostability condition (that is, in the monostable regime). The concrete value of the exponent depends strongly on the neurotransmitter recovery time, with a weaker dependence on the remaining parameters, as figure 7B illustrates. This type of power-law behavior appearing in the monostable regime corresponds to the blue region in figure 5C as well.

Figure 7. Statistics of permanence times in the up state in the monostable regime.

(A) Probability distribution of permanence times in the up state in the monostable regime, for several parameter values. One can see that power-law relations appear. (B) Dependence of the power-law exponent on the neurotransmitter recovery time for the conditions presented in (A). The inset shows the dependence of the exponent on a second parameter in one of the cases. Distributions were averaged over long time series. Other parameters are as in the main text.

It is worth noting that actual recordings of up and down transitions do not present a clear distinction between up and down states, and several nontrivial methods are commonly employed to discriminate between the two states [52]. Therefore, the results found for the monostable regime could indeed reflect the behavior of actual cortical up-down transitions, with power-law dependences in the permanence times indicating that the concrete nature of the transitions is a synaptically driven monostable dynamics.

For a complete characterization of the model, one can summarize all the observed behaviors in a phase plot such as the one presented in figure 8A. A total of four different behaviors can be found in the parameter space. The first one corresponds to dynamics for which the permanence times in the up state follow an exponential distribution (labeled "E" in the figure). If the noise amplitude is sufficiently high, one can increase the neurotransmitter recovery time to reach regime "C", in which a power-law distribution of up-state permanence times is obtained. By increasing it even more, the probability distribution takes a power-law form with a larger exponent (regime denoted by "S"), as we have already seen in figure 6. Finally, we also observe that when the depression time scale is not large enough, a regime of quasi-periodic time series is obtained, with a well-defined duration of up states (regime denoted by "P"). The lines between the different regimes have been obtained by visual inspection of the distributions for different parameter values. In particular, regime "P" is characterized by the appearance of a bump in the probability distribution at some value of the permanence time (which reflects a preferred duration of the up state), and the existence of such a bump has been used as the criterion to distinguish between regimes "P" and "E". Similarly, we assumed that regimes "C" and "S" correspond to the situation in which a power-law behavior extending over two decades or more is found. This criterion, together with an estimation of the slope of the power law via standard Levenberg-Marquardt fitting algorithms, allows us to distinguish between regimes "E", "C" and "S".
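The slope estimation mentioned above can be sketched as follows: build a histogram of permanence times on logarithmic bins and fit a straight line to log P versus log T. This sketch uses a plain least-squares fit in log-log space rather than the Levenberg-Marquardt algorithm the study employs; for a clean power law both recover the same exponent. The synthetic sample, with P(T) ∝ T^(-2), is generated by inverse-CDF sampling purely as a self-check of the routine.

```python
import numpy as np

def powerlaw_slope(samples, bins=30):
    """Least-squares slope of log P(T) vs log T on logarithmic bins."""
    edges = np.logspace(np.log10(samples.min()),
                        np.log10(samples.max()), bins)
    hist, edges = np.histogram(samples, bins=edges, density=True)
    centers = np.sqrt(edges[:-1] * edges[1:])   # geometric bin centers
    mask = hist > 0                              # skip empty bins
    slope, _intercept = np.polyfit(np.log(centers[mask]),
                                   np.log(hist[mask]), 1)
    return slope

# self-check: if u ~ U(0,1), then T = 1/u has density T^(-2) on [1, inf)
rng = np.random.default_rng(1)
T = 1.0 / rng.random(200_000)
T = T[T < 1e3]   # truncate the sparse far tail before binning
```

Applied to measured up-state durations, the sign and magnitude of the fitted slope is what separates regimes "C" and "S" once a power law spanning two decades or more has been established.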

Figure 8. The different dynamical regimes of the model.

(A) Phase plot showing the different behaviors found in our system. These behaviors correspond to time series for which the permanence times in the up state follow an exponential distribution (E), a power-law distribution (C), or a power-law distribution with a larger exponent (S). In addition, a phase with a well-defined duration of the up state is found (P). Panel (B) depicts some of these behaviors; from top to bottom one can see situations P, E and C. Other parameters are as in the main text.

It must be clarified, however, that actual up and down cortical transitions most likely present a richer repertoire of dynamical regimes than the one obtained with our simplified model. It is known, for instance, that attractor neural networks with dynamic synapses may exhibit different dynamics corresponding to memory, non-memory and switching regimes [18], [19]. In this work, we have extensively explored different regimes of switching behavior and their implications for the up and down dynamics observed in the cortex. The memory and non-memory regimes, however, can also be found in our simplified model in the appropriate limits: after taking these limits, the system will be in the memory regime if the potential function is bistable, or in the non-memory regime if it is monostable.


We have shown that the experimentally observed large fluctuations in up and down permanence times can be explained as the result of sufficiently noisy dynamical synapses with sufficiently large recovery times. Our study suggests that a power-law distribution for these permanence times may emerge as a consequence of these two ingredients. Static synapses cannot account for this behavior, nor can dynamical synapses in the absence of noise.

The origin of up and down cortical transitions is still unclear, although different factors that may influence their occurrence have been recently reported. It is known, for instance, that inhibitory GABAergic currents strongly contribute to the temporal coding and spike timing precision of cortical networks during up states of activity [3], [53], [54]. Several modeling studies also show the relevance of inhibitory interneurons in the generation of many types of oscillations in the brain (see for instance [55]). However, other studies indicate that most of the main features of up and down transitions depend strongly on synaptic plasticity mechanisms, both long-term and short-term ones [16], [56], and that the transitions appear even in the absence of inhibition [16]. In this work we have made the common assumption that the effects of inhibition can be treated as additive and can be incorporated into the threshold of the neuron. This is known to be a valid approximation in mean-field neural network analysis, but may fail when precise timing and details of the dynamical aspects of the neuron affect the inhibition [57], [58].

Regarding synaptic characteristics, recent works show that synaptic fluctuations could play an important role in the generation of transitions between up and down states [14], [59], [60]. Since our model introduces stochasticity in the synaptic dynamics in a highly simplified manner, however, the last term in equation (2) should not be associated only with unreliability in synaptic transmission. Indeed, we have assumed that other sources of stochasticity may contribute to this fluctuating term in the mean-field quantities. For instance, it is widely known that connectivity in actual cortical media is highly sparse. This feature implies that the mean-field average must be performed over a finite number of synapses, of the order of thousands of connections per neuron [38]. In this situation, the fluctuations of the mean-field quantity would be of order one over the square root of the number of connections, which yields values of a few percent for the numbers given above. As we have seen, our results state that a noise amplitude within this range is enough to obtain power-law distributions (see figure 3). Therefore, topology-induced fluctuations constitute an important source of stochasticity which could be responsible for the appearance of power-law distributions. Other sources of stochasticity at the synaptic level, such as the stochastic properties of receptor-transmitter interactions, may also contribute to the last term of equation (2). Moreover, the low activity rates typical of cortical media lead to poor time-averaging of the incoming input, and therefore the fluctuations at the postsynaptic level will be large at short time scales (of the order of the typical synaptic integration time constant).
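The order-of-magnitude argument above can be made explicit. Averaging a bounded synaptic quantity over N independent connections leaves fluctuations of order 1/sqrt(N); the sketch below checks this scaling empirically for i.i.d. resource values drawn uniformly from [0, 1] (a distribution assumed purely for illustration) and prints the prediction for the thousands of connections per neuron quoted in the text.

```python
import numpy as np

rng = np.random.default_rng(0)
for N in (1_000, 10_000):
    # 400 independent realizations of the mean over N synaptic resources
    means = rng.random((400, N)).mean(axis=1)
    # for x_j ~ U(0, 1), std(x) = 1/sqrt(12), so std(mean) = 1/sqrt(12*N)
    predicted = 1.0 / np.sqrt(12.0 * N)
    print(f"N={N}: empirical std {means.std():.5f}, "
          f"predicted {predicted:.5f}, 1/sqrt(N) = {1/np.sqrt(N):.4f}")
```

The 1/sqrt(N) column falls in the percent range for N between one and ten thousand, which is the level of topology-induced stochasticity invoked in the discussion.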

On the other hand, the amplitude of the noisy term does not need to be very high to induce the appearance of power-law distributions. As we have stated above, sparse connectivity already induces a level of stochasticity which is within the desired range, for instance. Furthermore, the noisy term could even be arbitrarily small: according to our theoretical predictions, a necessary condition for power-law distributions is that the fluctuations of the synaptic variable be much larger than a certain deterministic scale (see Methods Section). Since that scale may be lowered to arbitrary levels (by increasing the neurotransmitter recovery time, for instance), even a small noisy term in the synaptic dynamics may induce power-law distributions.

It is also known that short-term synaptic mechanisms, such as short-term depression and facilitation, usually play a role in the efficient processing of information. In particular, they may be relevant in many tasks, such as signal detection and coding [29], [61][63] or switching between different activity patterns previously stored [19], [64]. However, their role in the transitions between cortical states has been pointed out by only a few studies [15], [16], [65], and their possible effects on the statistics of the transitions, which is the focus of our work, have been ignored. To the best of our knowledge, the present study is the first to analyze, even in a simplified manner, the strong effect of synaptic stochasticity (in a general sense) and dynamic synapses on the statistics of up and down transitions. The possible role of other short-term synaptic mechanisms, such as short-term facilitation (STF), has not been addressed yet and constitutes an interesting open issue.

In our analysis we assumed that the dynamics is symmetric in the up and down states. This is in contradiction with experimental evidence [66], which shows that power-law distributions are obtained for permanence times in the up state, while permanence times in the down state are exponentially distributed. However, this discrepancy disappears when one considers a more realistic transduction function which gives an asymmetric potential for the dynamics, so that the up-down symmetry is broken. More detailed studies considering, for instance, some of the biologically realistic aspects discussed above should be performed to test our predictions. In particular, a more elaborate study considering realistic neuron models (such as the Hodgkin-Huxley model [67]) and stochastic STD models (see [29], [49], for instance) is necessary, as well as more detailed experimental studies which may confirm our predictions.

From a general point of view, evidence of criticality has recently been found in an increasing number of neural systems, such as in the functional connectivity of the living human brain [24], in critical avalanches of neuronal activity [25], or in sleep-wake transitions [23], to name a few. According to the results presented in this work, transitions between up and down cortical states could also present some relevant properties typical of systems at criticality. Some of these properties have already been measured in experiments, such as a high sensitivity of the system to external stimuli [8], or the presence of power-law dependences in the power spectra of the neural dynamics [53].

It is worth noting that other kinds of probability distributions for the permanence times, such as a log-normal distribution, could also satisfactorily explain the irregularity of the up states found in experiments. Our study shows the importance of some biophysical factors, such as the neurotransmitter recovery time and the inherent synaptic stochasticity, and predicts a power-law dependence of the permanence times as a consequence of such factors. However, further study is needed to investigate other mechanisms, not taken into account in this work, which could influence the permanence times in the up state. In a more general sense, our results may provide a new perspective on the phenomenon of up and down transitions (and a theoretical framework) that could serve to reconcile the main experimental findings, and that could contribute to a deeper understanding of these complex dynamics of brain activity.


We thank Sebastiano de Franciscis and Samuel Johnson for useful discussions.

Author Contributions

Conceived and designed the experiments: HJK JJT. Performed the experiments: JFM. Analyzed the data: JFM. Contributed reagents/materials/analysis tools: JFM HJK JJT. Wrote the paper: JFM HJK JJT.


  1. Steriade M, McCormick DA, Sejnowski TJ (1993) Thalamocortical oscillations in the sleeping and aroused brain. Science 262: 679–685.
  2. Arieli A, Sterkin A, Grinvald A, Aertsen A (1996) Dynamics of ongoing activity: explanation of the large variability in evoked cortical responses. Science 273: 1868–1871.
  3. Sanchez-Vives MV, McCormick DA (2000) Cellular and network mechanisms of rhythmic recurrent activity in neocortex. Nat Neurosci 3: 1027–1034.
  4. McCormick DA (2005) Neuronal networks: flip-flops in the brain. Curr Biol 15: R294–R296.
  5. Steriade M, Nunez A, Amzica F (1993) A novel slow (<1 Hz) oscillation of neocortical neurons in vivo: depolarizing and hyperpolarizing components. J Neurosci 13: 3252–3265.
  6. Steriade M, Nunez A, Amzica F (1993) Intracellular analysis of relations between the slow (<1 Hz) neocortical oscillation and other sleep rhythms of the electroencephalogram. J Neurosci 13: 3266–3283.
  7. Lampl I, Reichova I, Ferster D (1999) Synchronous membrane potential fluctuations in neurons of the cat visual cortex. Neuron 22: 361–374.
  8. Anderson J, Lampl I, Reichova I, Carandini M, Ferster D (2000) Stimulus dependence of two-state fluctuations of membrane potential in cat visual cortex. Nat Neurosci 3: 617–621.
  9. Petersen CCH, Hahn TTG, Mehta M, Grinvald A, Sakmann B (2003) Interaction of sensory responses with spontaneous depolarization in layer 2/3 barrel cortex. Proc Natl Acad Sci USA 100: 13638–13643.
  10. Cossart R, Aronov D, Yuste R (2003) Attractor dynamics of network up states in the neocortex. Nature 423: 283–288.
  11. Shu Y, Hasenstaub A, McCormick DA (2003) Turning on and off recurrent balanced cortical activity. Nature 423: 288–293.
  12. Major G, Tank D (2004) Persistent neural activity: prevalence and mechanisms. Curr Opin Neurobiol 14: 675–684.
  13. Loewenstein Y, Mahon S, Chadderton P, Kitamura K, Sompolinsky H, et al. (2005) Bistability of cerebellar Purkinje cells modulated by sensory stimulation. Nat Neurosci 8: 202–211.
  14. Parga N, Abbott LF (2007) Network model of spontaneous activity exhibiting synchronous transitions between up and down states. Front Neurosci 1: 57–66.
  15. Steriade M, Timofeev I, Grenier F (2001) Natural waking and sleep states: a view from inside neocortical neurons. J Neurophysiol 85: 1969–1985.
  16. Holcman D, Tsodyks M (2006) The emergence of up and down states in cortical networks. PLoS Comput Biol 2: 174–181.
  17. Wang XJ (1999) Synaptic basis of cortical persistent activity: the importance of NMDA receptors to working memory. J Neurosci 19: 9587–9603.
  18. Pantic L, Torres JJ, Kappen HJ, Gielen SCAM (2002) Associative memory with dynamic synapses. Neural Comput 14: 2903–2923.
  19. Torres JJ, Cortes JM, Marro J, Kappen HJ (2007) Competition between synaptic depression and facilitation in attractor neural networks. Neural Comput 19: 2739–2755.
  20. Melamed O, Barak O, Silberberg G, Markram H, Tsodyks M (2008) Slow oscillations in neural networks with facilitating synapses. J Comput Neurosci 25: 308–316.
  21. Deco G, Marti D, Ledberg A, Reig R, Sanchez-Vives MV (2009) Effective reduced diffusion-models: a data driven approach to the analysis of neuronal dynamics. PLoS Comput Biol 5: e1000587.
  22. Pearlmutter BA, Houghton CJ (2009) A new hypothesis for sleep: tuning for criticality. Neural Comput 21: 1622–1641.
  23. Lo CC, Chou T, Penzel T, Scammell TE, Strecker RE, et al. (2004) Common scale-invariant patterns of sleep-wake transitions across mammalian species. Proc Natl Acad Sci USA 101: 17545–17548.
  24. Eguiluz VM, Chialvo DR, Cecchi GA, Baliki M, Apkarian AV (2005) Scale-free brain functional networks. Phys Rev Lett 94: 018102.
  25. Beggs JM, Plenz D (2003) Neuronal avalanches in neocortical circuits. J Neurosci 23: 11167–11177.
  26. Lazar A, Pipa G, Triesch J (2007) Fading memory and time series prediction in recurrent networks with different forms of plasticity. Neural Networks 20: 312–322.
  27. Gomez V, Kaltenbrunner A, Lopez V, Kappen HJ (2009) Self-organization using synaptic plasticity. pp. 513–520.
  28. Lazar A, Pipa G, Triesch J (2009) SORN: a self-organizing recurrent neural network. Front Comput Neurosci 3:
  29. Abbott LF, Varela JA, Sen K, Nelson SB (1997) Synaptic depression and cortical gain control. Science 275: 220–224.
  30. Zador AM, Dobrunz LE (1997) Dynamic synapses in the cortex. Neuron 19: 1–4.
  31. Dobrunz LE, Stevens CF (1997) Heterogeneity of release probability, facilitation, and depletion at central synapses. Neuron 18: 995–1008.
  32. Zador A (1998) Impact of synaptic unreliability on the information transmitted by spiking neurons. J Neurophysiol 79: 1219–1229.
  33. Wilson HR, Cowan JD (1972) Excitatory and inhibitory interactions in localized populations of model neurons. Biophys J 12: 1–24.
  34. Gerstner W (2000) Population dynamics of spiking networks: fast transients, asynchronous states, and locking. Neural Comput 12: 43–89.
  35. Gerstner W, Kistler W (2002) Spiking Neuron Models: Single Neurons, Populations, Plasticity. Cambridge Univ. Press.
  36. Tsodyks MV, Markram H (1997) The neural code between neocortical pyramidal neurons depends on neurotransmitter release probability. Proc Natl Acad Sci USA 94: 719–723.
  37. Faber DS, Young WS, Legendre P, Korn H (1992) Intrinsic quantal variability due to stochastic properties of receptor-transmitter interactions. Science 258: 1494–1498.
  38. Kandel ER, Schwartz JH, Jessell TM (2000) Principles of Neural Science, 4th ed. McGraw-Hill, New York.
  39. Scannell JW, Blakemore C, Young MP (1995) Analysis of connectivity in the cat cerebral cortex. J Neurosci 15: 1463–1483.
  40. Pechukas P, Hanggi P (1994) Rates of activated processes with fluctuating barriers. Phys Rev Lett 73: 2772–2775.
  41. Madureira AJR, Hanggi P, Buonomano V, Rodrigues WA Jr (1995) Escape from a fluctuating double well. Phys Rev E 51: 3849–3861.
  42. Ping Z (2006) Dynamical properties of a bistable system driven by cross-correlated additive and multiplicative colored noises. Chinese J Phys 44: 117–126.
  43. Dong-Xi L, Wei X, Yong-Feng G, Gao-Jie L (2008) Transient properties of a bistable system with delay time driven by non-Gaussian and Gaussian noises: mean first-passage time. Commun Theor Phys 50: 669–673.
  44. Tu Y, Grinstein G (2005) How white noise generates power-law switching in bacterial flagellar motors. Phys Rev Lett 94: 208101.
  45. van Kampen NG (1990) Stochastic Processes in Physics and Chemistry. North-Holland.
  46. Chandrasekhar S (1943) Stochastic problems in physics and astronomy. Rev Mod Phys 15: 1–89.
  47. Newman MEJ (2005) Power laws, Pareto distributions and Zipf's law. Contemporary Phys 46: 323–351.
  48. Alili L, Patie P, Pedersen JL (2005) Representations of the first hitting time density of an Ornstein-Uhlenbeck process. Stochastic Models 21: 967–980.
  49. de la Rocha J, Parga N (2005) Short-term synaptic depression causes a non-monotonic response to correlated stimuli. J Neurosci 25: 8416–8431.
  50. Fuster JM, Alexander GE (1971) Neuron activity related to short-term memory. Science 173: 652–654.
  51. Goldman-Rakic PS (1995) Cellular basis of working memory. Neuron 14: 477–485.
  52. Seamari Y, Narvaez JA, Vico FJ, Lobo D, Sanchez-Vives MV (2007) Robust off- and online separation of intracellularly recorded up and down cortical states. PLoS One 2: e888.
  53. Hasenstaub A, Shu Y, Haider B, Kraushaar U, Duque A, et al. (2005) Inhibitory postsynaptic potentials carry synchronized frequency information in active cortical networks. Neuron 47: 423–435.
  54. Compte A (2006) Computational and in vitro studies of persistent activity: edging towards cellular and synaptic mechanisms of working memory. Neuroscience 139: 135–151.
  55. Brunel N (2000) Dynamics of sparsely connected networks of excitatory and inhibitory spiking neurons. J Comput Neurosci 8: 183–208.
  56. Kang S, Kitano K, Fukai T (2008) Structure of spontaneous up and down transitions self-organizing in a cortical network model. PLoS Comput Biol 4: e1000022.
  57. Rotstein HG, Pervouchine DD, Acker CD, Gillies MJ, White JA, et al. (2005) Slow and fast inhibition and an h-current interact to create a theta rhythm in a model of CA1 interneuron network. J Neurophysiol 94: 1509–1518.
  58. Vierling-Claassen D, Siekmeier P, Stufflebeam S, Kopell N (2008) Modeling GABA alterations in schizophrenia: a link between impaired inhibition and altered gamma and beta range auditory entrainment. J Neurophysiol 99: 2656–2671.
  59. Cortes JM, Torres JJ, Marro J, Garrido PL, Kappen HJ (2006) Effects of fast presynaptic noise in attractor neural networks. Neural Comput 18: 614–633.
  60. Johnson S, Marro J, Torres JJ (2008) Functional optimization in complex excitable networks. Europhys Lett 83: 46006.
  61. Puccini GD, Sanchez-Vives MV, Compte A (2006) Selective detection of abrupt input changes by integration of spike-frequency adaptation and synaptic depression in a computational network model. J Physiol-Paris 100: 1–15.
  62. Mejias JF, Torres JJ (2008) The role of synaptic facilitation in spike coincidence detection. J Comput Neurosci 24: 222–234.
  63. Mejias JF, Torres JJ (2010) Emergence of resonances in neural systems: the interplay between adaptive threshold and short-term synaptic plasticity. Submitted (arXiv:09060756).
  64. Mejias JF, Torres JJ (2009) Maximum memory capacity on neural networks with short-term synaptic depression and facilitation. Neural Comput 21: 851–871.
  65. Timofeev I, Grenier F, Steriade M (2000) Impact of intrinsic properties and synaptic factors on the activity of neocortical networks in vivo. J Physiol-Paris 94: 343–355.
  66. de Franciscis S, Torres JJ, Sanchez-Vives MV (2010) In preparation.
  67. Hodgkin AL, Huxley AF (1952) A quantitative description of membrane current and its application to conduction and excitation in nerve. J Physiol 117: 500–544.