Critical behaviour of the stochastic Wilson-Cowan model

Spontaneous brain activity is characterized by bursts and avalanche-like dynamics, with scale-free features typical of critical behaviour. The stochastic version of the celebrated Wilson-Cowan model has been widely studied as a system of spiking neurons reproducing non-trivial features of the neural activity, from avalanche dynamics to oscillatory behaviours. However, to what extent such phenomena are related to the presence of a genuine critical point remains elusive. Here we address this central issue, providing analytical results in the linear approximation and extensive numerical analysis. In particular, we present results supporting the existence of a bona fide critical point, where a second-order-like phase transition occurs, characterized by scale-free avalanche dynamics, scaling with the system size and a diverging relaxation time-scale. Moreover, our study shows that the observed critical behaviour falls within the universality class of the mean-field branching process, where the exponents of the avalanche size and duration distributions are, respectively, 3/2 and 2. We also provide an accurate analysis of the system behaviour as a function of the total number of neurons, focusing on the time correlation functions of the firing rate in a wide range of the parameter space.

The gain β is also an independent neuronal parameter, so we do not have a single critical point but a critical line β_c w_c = α (a hyperbola) in the w vs β parameter plane. Of course, mathematically, the important variable for the transition is x = βw (as x = βJ in the Ising model). Indeed, in the Ising model β is chosen as the control parameter and J is fixed. However, we must remember that w is a synaptic parameter while the gain β is a cell-body excitability parameter (in the axonal initial segment (AIS)?), and the two are located differently in the neuron. That is, we have not only fine tuning but non-local fine tuning, since to reach criticality the synapses w in the presynaptic cells need information about the postsynaptic β. Would this merit a comment in the paper? Or is all of this irrelevant?
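This point can be made concrete with a short numerical sketch (the value of α and the range of β below are illustrative choices of mine, not values from the paper): for each postsynaptic gain β, the presynaptic weight must sit exactly on w_c = α/β for the product βw to hit the critical value, tracing out the hyperbola.

```python
import numpy as np

alpha = 0.1                          # decay rate (illustrative value)
betas = np.linspace(0.5, 5.0, 10)    # range of neuronal gains
w_c = alpha / betas                  # critical synaptic weight for each gain

# every point on the critical line satisfies beta * w_c = alpha (a hyperbola)
assert np.allclose(betas * w_c, alpha)
```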
4) The (Cowan?) notation Σ_0, w_0, R_0 and s_0 is somewhat confusing to me because it suggests that these quantities refer to the zero-activity (absorbing) phase. Also, the fixed points Σ_0 = 0 and the nonzero Σ_0 > 0 receive the same name in the Methods section (and also in lines 125-126), which is confusing because both exist above w_{0c}: one is Σ_0 = 0, which is a true fixed point but unstable, and the other is Σ_0 > 0, which is stable, as usual in transcritical bifurcations. Although not used by Cowan [17], I suggest, to aid the readers, the notation Σ_+, R_+ and s_+ (or some other notation) for the stable nonzero solution, reserving Σ_0 = 0 for the absorbing phase (which also exists above w_c as an unstable fixed point). I also do not understand why w_0 is used in the definition of line 107, since w_0 is not a fixed point but a parameter: we could use w = w_E − w_I and Σ* = (w − w_c)/w, which is much cleaner. Or has the single letter w been used before? Please, could you clarify the choice of this notation? Is it only to preserve the notation of reference [17]?

Minor concerns
6) There is a typo in Eq. (1): it reads j instead of ij. 7) A doubt: in Eq. (2), is the correct form β tanh(s) or tanh(βs)? I ask because Eq. (4) from [17] has no β, and it would admit a β inside the tanh(), as usual in other stochastic models, for example the Gerstner model [Wulfram Gerstner and J. Leo van Hemmen (1992), Associative memory in a network of 'spiking' neurons, Network: Computation in Neural Systems, 3(2), 139-164] and Boltzmann machines. Or is this irrelevant due to the linear approximation?
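On point 7: in the linear regime the two forms are indistinguishable, since both β tanh(s) and tanh(βs) have slope β at s = 0, so the linearized theory is the same either way. A quick numerical check (β and the range of s are illustrative values I chose):

```python
import numpy as np

beta = 1.7
s = np.linspace(-1e-3, 1e-3, 101)   # small activities, linear regime

f1 = beta * np.tanh(s)    # form used in Eq. (2)
f2 = np.tanh(beta * s)    # alternative form with beta inside the tanh

# both reduce to beta * s to first order, so they agree near the fixed point
assert np.allclose(f1, beta * s, atol=1e-8)
assert np.allclose(f2, beta * s, atol=1e-8)
assert np.allclose(f1, f2, atol=1e-8)
```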
8) It is not very clear where the authors change from the single-neuron s_i representation to the population representation s. Could the authors point this out more clearly to the reader? 9) Only a suggestion: in statistical physics it seems more common to define avalanche critical exponents as positive numbers, that is, P(S) ∝ S^{−τ_S} instead of S^{τ_S}, so that τ_S = 3/2 is positive. The minus sign also immediately signals that P(S) is a decreasing function of S, so the reader need not think much about it. But if the authors opt for negative exponents, then this must be corrected in Fig. 4e, where the exponents are given as positive, and also in the legend of Fig. 5. 10) In line 16, the three citations are from the same group and about the same self-organizing mechanism. Perhaps a very recent review of self-organizing mechanisms could be useful here [Kinouchi2020]. Of course, this is only a suggestion.
Observation about the limit h → 0+: The fine tuning of h = 0 is a welcome emphasis of this paper, because it is poorly discussed in the neuronal-avalanche literature. So, in the model, we must fine-tune (or self-organize) two independent parameters, w and h. Although h = 0 is natural for magnetic spin systems, it is not so natural for neuronal networks, where neurons are always bombarded with inputs external to the network. However, there exists an idea to obtain h = 0 (this explanation need not be included in the paper; it is only a clarification for the authors). The general form of Eq. (1) is s_i = Σ_j w_ij a_j + I_i − θ_i, where the I_i are external inputs and the θ_i are the neuron firing thresholds (or biases). Now, suppose that the thresholds are adaptive, so that their dynamics θ_i(t) tend to cancel the inputs I_i, as occurs in perfect sensory adaptation and perfect firing-rate adaptation. Then, although the biological external inputs do not vanish, the field h = I − θ can be very close to zero, where I = I_i and θ = θ_i. This self-organization of θ_i(t) toward a fixed point h* ≈ 0 has been proposed recently in [Girardi2020,Kinouchi2020], and a level h = 10^{−6} or less can easily be achieved in these models.
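A minimal sketch of this mechanism (a plain exponential-relaxation rule chosen purely for illustration, not the specific adaptation dynamics of [Girardi2020,Kinouchi2020]): a threshold that chases a constant input drives h = I − θ to zero geometrically, so levels of 10^{−6} or below are reached quickly.

```python
# Illustrative threshold adaptation: theta relaxes toward the input I,
# so the effective field h = I - theta decays as (1 - eta)^t toward zero.
I = 0.8        # constant external input (arbitrary value)
theta = 0.0    # initial threshold
eta = 0.05     # adaptation rate (hypothetical)

for _ in range(500):
    h = I - theta
    theta += eta * h   # threshold chases the input, cancelling it

# after a few hundred steps the effective field is far below 1e-6
assert abs(I - theta) < 1e-6
```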
Suggestion for future work, not for this paper: I think that if you use the function f(s) = βs/(1 + βs) instead of f(s) = β tanh(s), many exact, instead of approximate, results can be obtained. In particular, an equation similar to Eq. (11) is exact (up to some factors of 2): 2βwΣ*² + (α − βw + 2βh)Σ* − βh = 0. Since this equation is exact, Σ_+ = (w − w_c)/w is valid for all (even large) w and is not a first-order approximation for this particular f(s).
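Taking this quadratic at face value (with illustrative parameter values I chose, and leaving the factors of 2 aside), one can check numerically that its positive root behaves as a transcritical order parameter: as h → 0+ it vanishes below w_c = α/β and stays finite above it.

```python
import numpy as np

def sigma_plus(w, beta=1.0, alpha=0.5, h=1e-8):
    """Positive root of 2*b*w*S^2 + (a - b*w + 2*b*h)*S - b*h = 0 (the exact
    quadratic quoted in the report), via the standard quadratic formula."""
    a = 2 * beta * w
    b = alpha - beta * w + 2 * beta * h
    c = -beta * h
    return (-b + np.sqrt(b * b - 4 * a * c)) / (2 * a)

w_c = 0.5  # alpha / beta for the illustrative values above

# subcritical: the stable activity vanishes with h; supercritical: it is finite
assert sigma_plus(0.3) < 1e-3
assert sigma_plus(1.0) > 0.1
```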