
Adaptive scales of integration and response latencies in a critically-balanced model of the primary visual cortex

  • Keith Hayton ,

    Roles Conceptualization, Formal analysis, Investigation, Methodology, Software, Validation, Visualization, Writing – original draft, Writing – review & editing

    Affiliation Center for Studies in Physics and Biology, The Rockefeller University, New York, NY, United States of America

  • Dimitrios Moirogiannis,

    Roles Formal analysis, Investigation, Methodology, Validation, Visualization, Writing – review & editing

    Affiliation Center for Studies in Physics and Biology, The Rockefeller University, New York, NY, United States of America

  • Marcelo Magnasco

    Roles Conceptualization, Writing – review & editing

    Affiliation Center for Studies in Physics and Biology, The Rockefeller University, New York, NY, United States of America


The primary visual cortex (V1) integrates information over scales in visual space that have been shown to vary, in an input-dependent manner, as a function of contrast and other visual parameters. Which algorithms the brain uses to achieve this feat is largely unknown and remains an open problem in visual neuroscience. We demonstrate that a simple dynamical mechanism can account for this contrast-dependent scale of integration in visuotopic space, and can connect this property to two other stimulus-dependent features of V1: the extent of lateral integration on the cortical surface and response latencies.


Stimuli in the natural world have quantitative characteristics that vary over staggering ranges. Our nervous system evolved to parse such widely-ranging stimuli, and research into how the nervous system copes with such ranges has led to considerable advances in our understanding of neural circuitry. For example, at the sensory transduction level, the physical magnitudes encoded by primary sensors, such as light intensity, sound pressure level, and olfactant concentration, vary over exponentially large ranges, leading to the Weber-Fechner law [1]. As neuronal firing rates cannot vary over such large ranges, the encoding process must compress physical stimuli into the far more limited ranges of neural activity that represent them. These observations have stimulated a large amount of research into the mechanisms underlying the nonlinear compression of physical stimuli in the nervous system. Of relevance to our later discussion is the nonlinear compression of sound intensity in the early auditory pathways [2–4], where it has been shown that poising the active cochlear elements on a Hopf bifurcation leads to cubic-root compression.
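The cubic-root compression of a critical Hopf element can be illustrated in a few lines. The sketch below assumes a single critical Hopf oscillator driven exactly at resonance and reduced to its rotating frame, where the forced dynamics become du/dt = -|u|^2 u + F; the step size and integration time are arbitrary choices of ours, not values from the text.

```python
import numpy as np

def steady_amplitude(F, dt=0.01, steps=200_000):
    """Steady-state response of a critical Hopf oscillator driven at resonance.

    In the rotating frame the forced dynamics reduce to du/dt = -|u|^2 u + F,
    whose stable fixed point satisfies |u|^3 = F, i.e. cubic-root compression."""
    u = 0.0 + 0.0j
    for _ in range(steps):  # forward Euler; dt is small relative to the relaxation rate
        u += dt * (-abs(u) ** 2 * u + F)
    return abs(u)

# An 8-fold change in forcing yields only a 2-fold change in response.
for F in (1e-3, 8e-3):
    print(F, steady_amplitude(F))
```

Because the response grows as F^(1/3), an exponentially wide range of forcings maps onto a comparatively narrow range of amplitudes, which is the compressive behavior discussed above.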

But other characteristics besides the raw physical magnitude still vary hugely. The wide range of spatial extents and correlated linear structures present in visual scenery [5–7] leads to a more subtle problem if we think of the visual areas as fundamentally limited by their anatomical connectivity. Research into this problem has focused on elucidating the nature of receptive fields of neurons in the primary visual cortex (V1) [8–13]. Studies have found that as the contrast of a stimulus is decreased, the receptive field [14, 15] size, or area of spatial summation in visual space, increases (Fig 1) [12, 13, 16, 17]. As an example of contextual modulation of neuronal responses, this phenomenon has naturally received theoretical attention [18–20]. However, the current literature does not describe this phenomenon as structurally integral to the neural architecture: existing models either highlight a different set of features or write the contextual modulations explicitly in an ad hoc fashion. Our aim is to develop a model which displays this phenomenon structurally, as a direct consequence of the neural architecture. In our proposed models, multiple length scales emerge naturally without any fine tuning of the system’s parameters. This leads to length-tuning curves similar to the ones measured in Kapadia et al. over the entire range (Fig 1) [12].

Fig 1. Reprinted from [12] under a CC BY license, with permission from National Academy of Sciences, U.S.A., original copyright (1999). Measurements of single-neuron responses in the V1 area of monkeys to optimally oriented bars of light of different lengths and contrasts.

Panels a and b are measurements from two distinct neurons. The units of length along the horizontal axis are in minutes of arc. The solid, dotted, and dashed curves represent bars of light of 50% contrast, 15% contrast, and 50% contrast embedded in a textured background, respectively. The dashed curves are irrelevant to the focus of this paper.

The findings of Kapadia et al. demonstrate that receptive fields in V1 are not constant but instead grow and shrink, seemingly beyond naive anatomical parameters, according to stimulus contrast. The “computation” being carried out is not fixed but is itself a function of the input. Let us examine this distinction carefully. There are numerous operations in image processing, such as Gaussian blurs or other convolutional kernels, whose spatial range is fixed. It is very natural to imagine neural circuitry having actual physical connections corresponding to the nonzero elements of a convolutional kernel, and in fact a fair amount of effort has been expended trying to identify actual synapses corresponding to such elements [21, 22]. There are, however, other image-processing operations, such as floodfill (the “paint bucket”), whose spatial extent is entirely dependent on the input; the problem of “binding” of perceptual elements is usually thought about in this way, and mechanisms posited to underlie such propagation dynamics include synchronization of oscillations acting in a vaguely paint-bucket-like way [23–25]. This dichotomy is artificial because these are only the two extremes of a potentially continuous range. While the responses of neurons in V1 superficially appear to be convolutional kernels, their strong dependence on input characteristics, particularly the size of the receptive field, demonstrates a more complex logic in which spatial extent is determined by specific characteristics of the input. What is the circuitry underlying this logic?

Neurons in the primary visual cortex are laterally connected to other neurons on the cortical surface and derive input from them. Experiments have shown that the spatial extent on the cortical surface from which neurons derive input through such lateral interactions varies with the contrast of the stimulus [26]. In the absence of stimulus contrast, spike-triggered traveling waves of activity propagate over large areas of cortex. As contrast is increased, the waves become weaker in amplitude and travel over increasingly small distances. These experiments suggest that the change in spatial summation area with increasing stimulus contrast may be consistent with the change in the decay constants of the traveling wave activity. However, no extant experiment directly links changes in summation in visual space to changes in integration on the cortical surface, and no explicit model of neural architecture has been shown to simultaneously account for, and thus connect, the input-dependence of spatial summation and lateral integration in V1. The latter is our aim here, and a crucial clue will come from the input-dependence of latencies.

Recently, a critically-balanced network model of cortex was proposed to explain the contrast dependence of functional connectivity [27]. It was shown that in the absence of input, the model exhibits wave-like activity with an infinitely long-ranged susceptibility, while in the presence of input, perturbed network activity decays exponentially with an attenuation constant that increases with the strength of the input. These results are in direct agreement with Nauhaus et al. [26].

We will now demonstrate that a similar model also leads to adaptive scales of spatial integration in visual space. Our model makes two key assumptions. The first is a local, not just global, balance of excitation and inhibition across the entire network; all eigenmodes of the network are associated with purely imaginary eigenvalues. It has been shown that such a critically-balanced configuration can be achieved by simulating a network of neurons with connections evolving under an anti-Hebbian rule [28]. The second key assumption is that all interactions in the network are described by the connectivity matrix; nonlinearities do not couple distinct neurons in the network.

In dynamical systems theory, the existence of purely imaginary eigenvalues implies the existence of an invariant subspace of the activity known as a center manifold [29]. In contrast to hyperbolic fixed points [29], where the linearization of the system fully describes the topological structure of the local solution, dynamics on center manifolds are not dominated by the linearization. This leads to complex nonlinear behavior where nonlinear terms and input parameters play a crucial role in determining the properties of the system such as relaxation timescales and correlation lengths [27]. In the model presented in this paper, the center manifold is full dimensional, and thus the rich, complex behavior we will be discussing is not surprising. We postulate that, in general, neural systems utilize center manifolds in order to flexibly integrate sensory input. Our aim in this paper is not to provide a detailed neuroanatomical and physiological model of V1, but rather to construct a toy model which provides an existence proof that center manifold dynamics can account for and connect three input-dependent computational properties of V1. This approach of constructing a minimalist toy model to explain how a given mechanism can lead to a particular set of properties is common practice in theoretical physics and is the underlying philosophical approach of several well known theoretical neuroscience models, e.g. Wilson-Cowan equations, Hopfield networks, and Kuramoto models [30, 31].

There are a number of examples of dynamical criticality in neuroscience, including experimental studies in motor cortex [32], theoretical [33] and experimental studies [34] of line attractors in oculomotor control, line attractors in decision making [35], Hopf bifurcation in the auditory periphery [2–4, 36, 37], the olfactory system [38], and theoretical work on regulated criticality [39]. More recently, Solovey et al. [40] performed stability analysis of high-density electrocorticography recordings covering an entire cerebral hemisphere in monkeys during reversible loss of consciousness. Performing a moving vector autoregressive analysis of the activity, they observed that the eigenvalues crowd near the critical line. During loss of consciousness, the number of eigenmodes at the edge of instability decreases smoothly, drifting back to the critical line during recovery of consciousness.

Dynamical criticality is distinct from statistical criticality [41], which is related to the statistical mechanics of second-order phase transitions. It has been proposed that neural systems [42], and more generally biological systems [43], are statistically critical in the sense that they are poised near the critical point of a phase transition [44, 45]. Statistical criticality is characterized by power-law behavior such as avalanches [46–48] and long-range spatiotemporal correlations [49, 50]. While both dynamical criticality and statistical criticality have had success in neuroscience, their relation is still far from clear [28, 43, 51].

We also examine the dynamics of the system and show that its activity exponentially decays to a limit cycle over multiple timescales, which depend on the strength of the input. Specifically, we find that the temporal exponential decay constants increase with increasing input strength. This result agrees with single-neuron studies which have found that response latencies in V1 decrease with increasing stimulus contrast [12, 52–54]. We now turn to describing our model.


Let \(x \in \mathbb{R}^N\) be the activity vector for a network of \(N\) neurons which evolve in time according to the normal form equation:
\[\dot{x}_i = \sum_j A_{ij} x_j - |x_i|^2 x_i + I_i(t). \tag{1}\]
In this model, originally proposed by Yan and Magnasco [27], neurons interact with one another through a skew-symmetric connectivity matrix A. The cubic-nonlinear term in the model is purely local and does not couple the activity states of distinct neurons, while the external input to the system may depend on time and have a complex spatial pattern.

The original model considered a 2-D checkerboard topology of excitatory and inhibitory neurons. For theoretical simplicity and computational ease, we will instead consider a 1-D checkerboard layout of excitatory and inhibitory neurons which interact through equal strength, nearest neighbor connections (Fig 2). In this case, Aij = (−1)j s(δi, j+1 + δi, j−1), where i, j = 0, 1, …, N − 1 and s is the synaptic strength. Boundary conditions are such that the activity terminates to 0 outside of the finite network.
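The 1-D checkerboard connectivity is easy to build and sanity-check numerically. The sketch below (function name and parameter defaults are ours) constructs A and verifies the two properties the model relies on: skew-symmetry and a purely imaginary spectrum.

```python
import numpy as np

def checkerboard_matrix(N=64, s=1.0):
    """1-D checkerboard connectivity: A_ij = (-1)^j s (delta_{i,j+1} + delta_{i,j-1}).

    Zero boundary conditions are implicit: the matrix is simply truncated
    at the edges of the finite network."""
    A = np.zeros((N, N))
    for i in range(N):
        for j in range(N):
            if abs(i - j) == 1:
                A[i, j] = (-1) ** j * s
    return A

A = checkerboard_matrix()
assert np.allclose(A, -A.T)                       # skew-symmetric
assert np.allclose(np.linalg.eigvals(A).real, 0)  # purely imaginary spectrum
```

Skew-symmetry alone places every eigenmode exactly on the imaginary axis, so the critically-balanced condition holds with no parameter tuning, consistent with the anti-Hebbian self-tuning result cited above [28].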

Fig 2. Simplest connectivity matrix A.

A finite line of excitatory and inhibitory neurons. White nodes represent excitatory neurons. Black nodes are inhibitory. All connections have strength of the same magnitude.

We are specifically interested in the time-asymptotic response of the system, but explicitly integrating the stiff, high-dimensional ODE in (1) is difficult. Fortunately, we can bypass numerical integration methods by assuming periodic input of the form \(I(t) = F e^{i\omega t}\), where \(F \in \mathbb{C}^N\), and looking for solutions \(X(t) = Z e^{i\omega t}\), where \(Z \in \mathbb{C}^N\). Substituting these into (1), we find that:
\[0 = A Z - |Z|^2 Z - i\omega Z + F, \tag{2}\]
and we define g(Z) to be equal to the right-hand side of (2).

The solution of (2) can be found numerically by using the multivariable Newton-Raphson method in \(\mathbb{R}^{2N}\):
\[\hat{Z}_{n+1} = \hat{Z}_n - J^{-1}\,\hat{g}(\hat{Z}_n), \tag{3}\]
where \(\hat{Z}\) and \(\hat{g}\) are the concatenations of the real and imaginary parts of Z and g, respectively, and J is the Jacobian of \(\hat{g}\) with respect to \(\hat{Z}\).
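A numpy sketch of this Newton-Raphson scheme follows, assuming the componentwise cubic \(|Z_k|^2 Z_k\); the block structure of the Jacobian comes from writing Z = X + iY. The function name and starting-guess convention are ours.

```python
import numpy as np

def solve_limit_cycle(A, F, omega, Z0=None, tol=1e-12, max_iter=100):
    """Newton-Raphson in R^{2N} for the amplitude Z of the limit cycle
    X(t) = Z e^{i omega t}, i.e. a root of
        g(Z) = A Z - |Z|^2 Z - i omega Z + F
    with the cubic acting componentwise.  Z0 is an optional starting guess;
    when omega is an eigenfrequency of A the Jacobian at Z = 0 is singular,
    so a small nonzero guess is advisable."""
    N = A.shape[0]
    Z0 = np.zeros(N, dtype=complex) if Z0 is None else Z0
    X, Y = Z0.real.copy(), Z0.imag.copy()
    I = np.eye(N)
    for _ in range(max_iter):
        r2 = X**2 + Y**2  # |Z_k|^2, componentwise
        g = np.concatenate([A @ X - r2 * X + omega * Y + F.real,
                            A @ Y - r2 * Y - omega * X + F.imag])
        if np.linalg.norm(g) < tol:
            break
        # Jacobian blocks: d(g_real)/dX, d(g_real)/dY, d(g_imag)/dX, d(g_imag)/dY
        J = np.block([[A - np.diag(3 * X**2 + Y**2), omega * I - np.diag(2 * X * Y)],
                      [-omega * I - np.diag(2 * X * Y), A - np.diag(X**2 + 3 * Y**2)]])
        dX, dY = np.split(np.linalg.solve(J, g), 2)
        X, Y = X - dX, Y - dY
    return X + 1j * Y
```

Because only the residual g and its Jacobian are needed, this replaces a long stiff integration with a handful of linear solves per input condition.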


Following the lead of previous work [18] and experimental studies [55], we assume the input strength from the lateral geniculate nucleus to V1 to be a linear function of the stimulus contrast. To then test how the response of a single neuron in our network varies with both the contrast and length of the stimulus, we select a center neuron at index c and then calculate, for a range of input strengths, the response of the neuron as a function of input length around it. Formally, for each input strength level \(C\), we solve (3) for:
\[F_k = \begin{cases} C\, v_k, & |k - c| \le l \\ 0, & \text{otherwise,} \end{cases} \tag{4}\]
where \(k = 0, \ldots, N - 1\), \(v\) describes the spatial shape of the input, and \(2l + 1\) is the length of the input in number of neurons. The response of the center neuron is taken as the modulus of \(Z_c\), and we focus on the case where ω is an eigenfrequency of A and v the corresponding eigenvector.
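Constructing the input of Eq (4) amounts to windowing the eigenvector around the center neuron; a minimal sketch (the function name is ours):

```python
import numpy as np

def windowed_input(v, c, l, C):
    """Eq (4): F_k = C v_k for |k - c| <= l and 0 otherwise, so the stimulus
    has strength C, spatial shape v, and length 2l + 1 neurons."""
    k = np.arange(len(v))
    return np.where(np.abs(k - c) <= l, C * v, 0.0)
```

Sweeping l at fixed C then traces out one length-response curve; sweeping C reproduces the family of curves discussed next.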

The results for a 1-D checkerboard network of 64 neurons are shown in Fig 3. Here we fix a center neuron and sweep across a small range of eigenfrequencies ω of A. The curves from bottom to top correspond to an ascending order of base-2 exponentially distributed input strengths \(C = 2^i\). For all eigenfrequencies, the peaks of the response curves shift towards larger input lengths as the input strength decreases. In fact, for very weak input, the response curves rise monotonically over the entire range of input lengths without ever reaching a maximum in this finite network. This is in contrast to the response curves corresponding to strong input, which always reach a maximum but, depending on the eigenfrequency, exhibit varying degrees of response suppression beyond the maximum. This is consistent with the variability of response suppression in primary visual cortex studies [12, 13]. In Fig 3, eigenfrequencies ω = 1.92, 1.96, 1.99 show the greatest amount of suppression while the others display little to none.

Fig 3. Length-response curves for different eigenfrequencies.

In each panel, which corresponds to a different eigenfrequency of A, we plot the response of a neuron (with index N/2) as a function of input length for a group of exponentially distributed input strengths. The blue arrow in the first plot indicates the direction of increasing input strength. The length of the input is recorded in number of neurons and the response is taken as the modulus of the amplitude of the time-asymptotic stable limit cycle.

To understand why certain eigenfrequencies lead to suppression, we fix the eigenfrequency to be ω = 1.92 and examine the response curves of different center neurons. The responses of four center neurons (labeled by network position) and the modulus of the eigenfrequency’s corresponding eigenvector are plotted in Fig 4. The center neurons closest to the zeros of the eigenvector experience the strongest suppression for long line lengths. Neuron 38, closer to the peak of the eigenvector’s modulus, experiences almost zero suppression. This generally holds for all eigenvectors and neurons in the network, as all eigenvectors are periodic in their components with an eigenvalue-dependent spatial frequency. The periodicity of the eigenvectors arises from the fact that A2, which shares the same eigenvectors as A, is a circulant matrix.

Fig 4. Length-response curves of different neurons.

The top plot depicts the modulus of the eigenvector corresponding to eigenfrequency ω = 1.92. The 4 panels below are plots of the length-response curves for 4 different neurons in the network. The position of the neurons relative to the shape of the eigenvector are noted with the gray bars and arrows.

To strengthen the connection between model and neurophysiology, one can consider a critically-balanced network with an odd number of neurons, so that 0 is now an eigenfrequency of the system. In our model, input associated with the 0-eigenmode represents direct-current input to the system, which is what neurophysiologists utilize in experiments; the visual input is not flashed [12, 13]. Contrary to the even case, long-range connections must be added on top of the nearest-neighbor connectivity in order to recover periodic eigenvectors and hence suppression past the response-curve maxima.

Next, we show that the network not only selectively integrates input as a function of input strength but also operates on multiple time scales which flexibly adapt to the input. This behavior is not surprising given that, in the case of a single critical Hopf oscillator, the half-width of the resonance, the frequency range over which the oscillator’s response falls by half, scales as \(\Gamma \propto F^{2/3}\), where Γ is the half-width and F the input strength [2]. Thus, decay constants in the case of a single critical oscillator should grow with the input forcing strength as \(F^{2/3}\).

Assuming input \(F e^{i\omega t}\), as described above, the network activity x(t), given by (1), decays exponentially in time to a stable limit cycle, \(X(t) = Z e^{i\omega t}\). This implies that for any neuron i in the network, \(|x_i(t)| = e^{bt} f(t) + |Z_i|\), with b < 0, during the approach to the limit cycle. We therefore plot \(\log(||x_i(t)| - |Z_i||)\) over the transient decay period and estimate the slope of the linear regimes. We do this for a nearly network-size input length (input length = 29, N = 32) and a range of exponentially distributed input strengths. In Fig 5, we plot representative transient periods of a single neuron corresponding to 3 input strengths: \(2^{-10}\), \(2^{-4}\), and \(2^{2}\). For weak input there is a single fast exponential decay regime (red) that determines the system’s approach to the stable limit cycle. As we increase the input, however, the transient period displays two exponential decay regimes: the fast decay regime (red) observed in the presence of weak input, and a new slow decay regime (blue) immediately preceding the stable limit cycle. For very large input strength, the slow decay regime becomes dominant. The multiple decay regimes are a surprising result that does not appear in the case of a single critical Hopf oscillator.

Fig 5. Input-strength-dependent timescales.

For neuron i in the network, we plot log(||xi(t)| − |Zi||) as a function of time for 3 different input strengths. Linear regions correspond to exponential decay. In the presence of weak input, a fast decay regime (red) guides the dynamics towards a stable limit cycle. For the intermediate input strength, a new, distinct slow decay regime appears (blue), which becomes dominant for strong input.

We estimate the exponential decay constants as a function of input strength and plot them on a log-log scale in Fig 6. The red circles correspond to the fast decay regime, while the blue circles correspond to the slow decay regime, which becomes prominent for large forcings. We separately fit both the slow and fast decay regimes with a best fit line. Unsurprisingly, the slopes of the lines are equal and approximately 2/3. Thus, the decay constants grow with the input as \(\propto F^{2/3}\), where F is the input strength. This implies that the system operates on multiple timescales, dynamically switching from one to another depending on the magnitude of the forcing. Larger forcings lead to faster network responses.
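The 2/3 power law can be checked in miniature on the single-oscillator stand-in mentioned above: numerically estimate the decay rate of a small perturbation about the forced state for several forcings, then fit the log-log slope. This is a sketch under our own simplifying assumptions (rotating-frame amplitude mode only, forward Euler, arbitrary step sizes), not the full network computation.

```python
import numpy as np

def decay_rate(F, dt=1e-4):
    """Estimated exponential decay rate toward the forced state of
    du/dt = -u^3 + F (rotating-frame critical Hopf, amplitude mode only)."""
    u_star = F ** (1.0 / 3.0)
    u = 0.99 * u_star                          # small perturbation: linear transient
    d0 = abs(u - u_star)
    steps = int(1.0 / (3.0 * u_star**2 * dt))  # integrate one characteristic time
    for _ in range(steps):
        u += dt * (-u**3 + F)
    return np.log(d0 / abs(u - u_star)) / (steps * dt)

F = 2.0 ** np.arange(-6, 3, dtype=float)
rates = [decay_rate(f) for f in F]
slope, _ = np.polyfit(np.log(F), np.log(rates), 1)
print(slope)  # close to 2/3
```

The same log-log regression applied to measured decay constants is what Fig 6 depicts for the full network, where both the fast and slow regimes share this exponent.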

Fig 6. Input-strength dependence of exponential decay constants.

The temporal exponential decay constants for a range of input strengths are depicted above. A fast decay regime (red circles) is accompanied by a slow decay regime (blue circles) at large input strengths. Each decay regime is separately approximated by a least squares line (dashed lines) in log-log space.

In this paper, we consider a line of excitatory and inhibitory neurons, but our results hold equally well for a ring of neurons with periodic boundary conditions and appropriately chosen long-range connections. Ring networks have been studied extensively as a model of orientation selectivity in V1 [56–62]. In agreement with recent findings [63], the critically-balanced ring network exhibits surround suppression in orientation space when long-range connections are added on top of nearest-neighbor connectivity.


We have shown that a simple dynamical system poised at the onset of instability exhibits an input-strength-dependent scale of integration of the system’s input and input-strength-dependent response latencies. This finding strongly complements our previous results showing that a similar nonlinear process with fixed, nearest-neighbor network connectivity leads to input-dependent functional connectivity. This system is thus the first proposed mechanism that can account for contrast dependence of spatial summation, functional connectivity, and response latencies. In this framework, these three characteristic properties of signal processing in V1 are intrinsically linked to one another. As our model is just a toy model of center manifold dynamics, we do not suggest that a ring or 1-D line topology is necessarily present in V1 anatomy. However, if the brain does indeed utilize center manifolds in the processing of sensory input, the full high-dimensional phase space of cortical dynamics might reduce to simple, low-dimensional structures on the center manifold.

The theory of V1 dynamics presented in this paper makes two testable predictions. The first is the specific form of the relationship between spatial and temporal frequencies of neural activity in V1. In physics, the relationship between the spatial and temporal frequencies in a system is known as the dispersion relation. Yan and Magnasco [27] have shown that the dispersion relation of the system considered in this paper and described by Eq (1) is elliptical, \(c^2 k^2 + \omega^2 = 1\), where k and ω are the spatial and temporal frequencies, respectively, and c is a constant. If our theory is correct, multielectrode array recordings in V1 should reveal elliptic dispersion relations. Unfortunately, we are unaware of any studies that have examined dispersion relations in V1. Our theory also makes testable predictions regarding temporal response latencies in V1. In particular, our theory implies that temporal decay constants in V1 should increase as a power law with the contrast level. The predicted exponent for the power law is 2/3, which could be tested for in single or multielectrode recordings. Experiments could also test for the presence of multiple relaxation timescales, which our model predicts.


  1. Fechner G. Elements of Psychophysics. Howes DH, Boring EC, Adler HE, editors. New York: Holt, Rinehart and Winston; 1966. (Translated from German; originally published 1860.)
  2. Eguíluz VM, Ospeck M, Choe Y, Hudspeth AJ, Magnasco MO. Essential nonlinearities in hearing. Physical Review Letters. 2000;84(22):5232. pmid:10990910
  3. Camalet S, Duke T, Jülicher F, Prost J. Auditory sensitivity provided by self-tuned critical oscillations of hair cells. Proceedings of the National Academy of Sciences. 2000;97(7):3183–8.
  4. Kern A, Stoop R. Essential role of couplings between hearing nonlinearities. Physical Review Letters. 2003;91(12):128101. pmid:14525401
  5. Field DJ. Relations between the statistics of natural images and the response properties of cortical cells. JOSA A. 1987;4(12):2379–94.
  6. Ruderman DL, Bialek W. Statistics of natural images: Scaling in the woods. Physical Review Letters. 1994;73(6):814–817. pmid:10057546
  7. Sigman M, Cecchi GA, Gilbert CD, Magnasco MO. On a common circle: natural scenes and Gestalt rules. Proceedings of the National Academy of Sciences. 2001;98(4):1935–40.
  8. Kapadia MK, Ito M, Gilbert CD, Westheimer G. Improvement in visual sensitivity by changes in local context: parallel studies in human observers and in V1 of alert monkeys. Neuron. 1995;15(4):843–56. pmid:7576633
  9. Zipser K, Lamme VA, Schiller PH. Contextual modulation in primary visual cortex. Journal of Neuroscience. 1996;16(22):7376–89. pmid:8929444
  10. Levitt JB, Lund JS. Contrast dependence of contextual effects in primate visual cortex. Nature. 1997;387(6628):73. pmid:9139823
  11. Polat U, Mizobe K, Pettet MW, Kasamatsu T, Norcia AM. Collinear stimuli regulate visual responses depending on cell’s contrast threshold. Nature. 1998;391(6667):580–4. pmid:9468134
  12. Kapadia MK, Westheimer G, Gilbert CD. Dynamics of spatial summation in primary visual cortex of alert monkeys. Proceedings of the National Academy of Sciences. 1999;96(21):12073–8.
  13. Sceniak MP, Ringach DL, Hawken MJ, Shapley R. Contrast’s effect on spatial summation by macaque V1 neurons. Nature Neuroscience. 1999;2(8):733–9. pmid:10412063
  14. Hubel DH, Wiesel TN. Receptive fields, binocular interaction and functional architecture in the cat’s visual cortex. The Journal of Physiology. 1962;160(1):106–54. pmid:14449617
  15. Kuffler SW. Discharge patterns and functional organization of mammalian retina. Journal of Neurophysiology. 1953;16(1):37–68. pmid:13035466
  16. DeAngelis GC, Robson JG, Ohzawa I, Freeman RD. Organization of suppression in receptive fields of neurons in cat visual cortex. Journal of Neurophysiology. 1992;68(1):144–63. pmid:1517820
  17. DeAngelis GC, Freeman RD, Ohzawa I. Length and width tuning of neurons in the cat’s primary visual cortex. Journal of Neurophysiology. 1994;71(1):347–74. pmid:8158236
  18. Schwabe L, Obermayer K, Angelucci A, Bressloff PC. The role of feedback in shaping the extra-classical receptive field of cortical neurons: a recurrent network model. Journal of Neuroscience. 2006;26(36):9117–29. pmid:16957068
  19. Lochmann T, Ernst UA, Deneve S. Perceptual inference predicts contextual modulations of sensory responses. Journal of Neuroscience. 2012;32(12):4179–95. pmid:22442081
  20. Zhu M, Rozell CJ. Visual nonclassical receptive field effects emerge from sparse coding in a dynamical system. PLoS Computational Biology. 2013;9(8):e1003191. pmid:24009491
  21. Olshausen BA, Field DJ. Emergence of simple-cell receptive field properties by learning a sparse code for natural images. Nature. 1996;381(6583):607. pmid:8637596
  22. Reid RC, Alonso JM. Specificity of monosynaptic connections from thalamus to visual cortex. Nature. 1995;378(6554):281. pmid:7477347
  23. Rosenblatt F. Principles of Neurodynamics: Perceptrons and the Theory of Brain Mechanisms. Buffalo, NY: Cornell Aeronautical Laboratory; 1961.
  24. Von der Malsburg C. The what and why of binding: the modeler’s perspective. Neuron. 1999;24(1):95–104. pmid:10677030
  25. Lee TS, Mumford D. Hierarchical Bayesian inference in the visual cortex. JOSA A. 2003;20(7):1434–48. pmid:12868647
  26. Nauhaus I, Busse L, Carandini M, Ringach DL. Stimulus contrast modulates functional connectivity in visual cortex. Nature Neuroscience. 2009;12(1):70–6. pmid:19029885
  27. Yan XH, Magnasco MO. Input-dependent wave attenuation in a critically-balanced model of cortex. PLoS ONE. 2012;7(7):e41419. pmid:22848489
  28. Magnasco MO, Piro O, Cecchi GA. Self-tuned critical anti-Hebbian networks. Physical Review Letters. 2009;102(25):258102. pmid:19659122
  29. Wiggins S. Introduction to Applied Nonlinear Dynamical Systems and Chaos. Springer Science and Business Media; 2003.
  30. Ermentrout GB, Terman DH. Mathematical Foundations of Neuroscience. Springer Science and Business Media; 2010.
  31. Hoppensteadt FC, Izhikevich EM. Weakly Connected Neural Networks. Springer Science and Business Media; 2012.
  32. Churchland MM, Cunningham JP, Kaufman MT, Foster JD, Nuyujukian P, Ryu SI, Shenoy KV. Neural population dynamics during reaching. Nature. 2012;487(7405):51. pmid:22722855
  33. Seung HS. Continuous attractors and oculomotor control. Neural Networks. 1998;11(7):1253–8.
  34. Seung HS, Lee DD, Reis BY, Tank DW. Stability of the memory of eye position in a recurrent network of conductance-based model neurons. Neuron. 2000;26(1):259–71. pmid:10798409
  35. Machens CK, Romo R, Brody CD. Flexible control of mutual inhibition: a neural model of two-interval discrimination. Science. 2005;307(5712):1121–4. pmid:15718474
  36. Choe Y, Magnasco MO, Hudspeth AJ. A model for amplification of hair-bundle motion by cyclical binding of Ca2+ to mechanoelectrical-transduction channels. Proceedings of the National Academy of Sciences. 1998;95(26):15321–6.
  37. Kanders K, Lorimer T, Gomez F, Stoop R. Frequency sensitivity in mammalian hearing from a fundamental nonlinear physics model of the inner ear. Scientific Reports. 2017;7(1):9931. pmid:28855554
  38. Freeman WJ, Holmes MD. Metastability, instability, and state transition in neocortex. Neural Networks. 2005;18(5):497–504. pmid:16095879
  39. Bienenstock E, Lehmann D. Regulated criticality in the brain? Advances in Complex Systems. 1998;1(04):361–84.
  40. Solovey G, Alonso LM, Yanagawa T, Fujii N, Magnasco MO, Cecchi GA, Proekt A. Loss of consciousness is associated with stabilization of cortical activity. Journal of Neuroscience. 2015;35(30):10866–77. pmid:26224868
  41. Beggs JM, Timme N. Being critical of criticality in the brain. Frontiers in Physiology. 2012;3:163. pmid:22701101
  42. Chialvo DR. Emergent complex neural dynamics. Nature Physics. 2010;6(10):744.
  43. Mora T, Bialek W. Are biological systems poised at criticality? Journal of Statistical Physics. 2011;144(2):268–302.
  44. da Silva L, Papa AR, de Souza AC. Criticality in a simple model for brain functioning. Physics Letters A. 1998;242(6):343–8.
  45. Fraiman D, Balenzuela P, Foss J, Chialvo DR. Ising-like dynamics in large-scale functional brain networks. Physical Review E. 2009;79(6):061922.
  46. Beggs JM, Plenz D. Neuronal avalanches in neocortical circuits. Journal of Neuroscience. 2003;23(35):11167–77. pmid:14657176
  47. Levina A, Herrmann JM, Geisel T. Dynamical synapses causing self-organized criticality in neural networks. Nature Physics. 2007;3(12):857.
  48. Gireesh ED, Plenz D. Neuronal avalanches organize as nested theta- and beta/gamma-oscillations during development of cortical layer 2/3. Proceedings of the National Academy of Sciences. 2008;105(21):7576–81.
  49. Eguiluz VM, Chialvo DR, Cecchi GA, Baliki M, Apkarian AV. Scale-free brain functional networks. Physical Review Letters. 2005;94(1):018102. pmid:15698136
  50. Kitzbichler MG, Smith ML, Christensen SR, Bullmore E. Broadband criticality of human brain network synchronization. PLoS Computational Biology. 2009;5(3):e1000314. pmid:19300473
  51. Kanders K, Lorimer T, Stoop R. Avalanche and edge-of-chaos criticality do not necessarily co-occur in neural networks. Chaos: An Interdisciplinary Journal of Nonlinear Science. 2017;27(4):047408.
  52. Carandini M, Heeger DJ. Summation and division by neurons in primate visual cortex. Science. 1994;264(5163):1333–5.
  53. Gawne TJ, Kjaer TW, Richmond BJ. Latency: another potential code for feature binding in striate cortex. Journal of Neurophysiology. 1996;76(2):1356–60. pmid:8871243
  54. Albrecht DG, Geisler WS, Frazor RA, Crane AM. Visual cortex neurons of monkeys and cats: temporal dynamics of the contrast response function. Journal of Neurophysiology. 2002;88(2):888–913. pmid:12163540
  55. Bauer U, Scholz M, Levitt JB, Obermayer K, Lund JS. A model for the depth-dependence of receptive field size and contrast sensitivity of cells in layer 4C of macaque striate cortex. Vision Research. 1999;39(3):613–29. pmid:10341989
  56. Ben-Yishai R, Bar-Or RL, Sompolinsky H. Theory of orientation tuning in visual cortex. Proceedings of the National Academy of Sciences. 1995;92(9):3844–8.
  57. Hansel D, Sompolinsky H. Modeling feature selectivity in local cortical circuits.
  58. Ermentrout B. Neural networks as spatio-temporal pattern-forming systems. Reports on Progress in Physics. 1998;61(4):353.
  59. Bressloff PC, Bressloff NW, Cowan JD. Dynamical mechanism for sharp orientation tuning in an integrate-and-fire model of a cortical hypercolumn. Neural Computation. 2000;12(11):2473–511. pmid:11110123
  60. Bressloff PC, Cowan JD, Golubitsky M, Thomas PJ, Wiener MC. Geometric visual hallucinations, Euclidean symmetry and the functional architecture of striate cortex. Philosophical Transactions of the Royal Society of London B: Biological Sciences. 2001;356(1407):299–330. pmid:11316482
  61. Dayan P, Abbott LF. Theoretical Neuroscience. Cambridge, MA: MIT Press; 2001.
  62. Shriki O, Hansel D, Sompolinsky H. Rate models for conductance-based cortical neuronal networks. Neural Computation. 2003;15(8):1809–41. pmid:14511514
  63. Rubin DB, Van Hooser SD, Miller KD. The stabilized supralinear network: a unifying circuit motif underlying multi-input integration in sensory cortex. Neuron. 2015;85(2):402–17. pmid:25611511