
Selective consistency of recurrent neural networks induced by plasticity as a mechanism of unsupervised perceptual learning

  • Yujin Goto,

    Roles Conceptualization, Data curation, Formal analysis, Funding acquisition, Investigation, Methodology, Resources, Supervision, Validation, Visualization, Writing – original draft, Writing – review & editing

    Affiliations Division of Neural Dynamics, Department of System Neuroscience, National Institute for Physiological Sciences, National Institutes of Natural Sciences, Okazaki, Aichi, Japan, Department of Physiological Sciences, School of Life Science, The Graduate University for Advanced Studies (SOKENDAI), Okazaki, Aichi, Japan

  • Keiichi Kitajo

    Roles Conceptualization, Data curation, Funding acquisition, Methodology, Project administration, Resources, Supervision, Validation, Writing – original draft, Writing – review & editing

    kkitajo@nips.ac.jp

    Affiliations Division of Neural Dynamics, Department of System Neuroscience, National Institute for Physiological Sciences, National Institutes of Natural Sciences, Okazaki, Aichi, Japan, Department of Physiological Sciences, School of Life Science, The Graduate University for Advanced Studies (SOKENDAI), Okazaki, Aichi, Japan

Abstract

Understanding the mechanism by which the brain achieves relatively consistent information processing despite its inherently inconsistent activity is one of the major challenges in neuroscience. Recently, it has been reported that the consistency of neural responses to stimuli that are presented repeatedly is enhanced implicitly in an unsupervised way, resulting in improved perceptual consistency. Here, we propose the term "selective consistency" to describe this input-dependent consistency and hypothesize that it is acquired in a self-organizing manner by plasticity within the neural system. To test this, we investigated whether a reservoir-based plastic model could acquire selective consistency to repeated stimuli. We used white noise sequences randomly generated in each trial and referenced white noise sequences presented multiple times. The results showed that the plastic network was capable of acquiring selective consistency rapidly, with as few as five exposures to stimuli, even for white noise. The acquisition of selective consistency could occur independently of performance optimization, as the network's time-series prediction accuracy for referenced stimuli did not improve with repeated exposure and optimization. Furthermore, the network could only achieve selective consistency when in the region between order and chaos. These findings suggest that the neural system can acquire selective consistency in a self-organizing manner and that this may serve as a mechanism for certain types of learning.

Author summary

This study explores how the brain can achieve stable information processing despite its inherent variability. Here we introduce the concept of "selective consistency", the brain's ability to enhance response consistency to stimuli experienced multiple times, and investigate how it is achieved in an unsupervised and self-organizing manner. Using a reservoir-based neural network model with Hebbian plasticity, we tested how the network acquires selective consistency by exposing it to the same white noise stimuli multiple times. Remarkably, our results demonstrate that the neural network can acquire this consistency quickly, with just five exposures to the stimuli. Additionally, the plastic neural network only achieved selective consistency when operating near the edge of chaos, a critical state maintaining a delicate balance between order and chaos. These findings suggest that a brain near criticality can self-organize its structure through plasticity to improve perceptual consistency without explicit supervision. This study underscores the potential for neural plasticity to drive selective consistency and suggests that such mechanisms may be fundamental to how the brain develops consistent information processing from experience.

1. Introduction

Variability in perception and action is a prominent topic in neuroscience [1–4] and can be observed even when the external conditions, such as the sensory input or task goal, are consistent across experimental trials. It is known that trial-to-trial variability in motor function is inevitable even for well-trained experts, and similar arguments have been made regarding perception [5]. Variability can be attributed to a variety of factors, of which the most fundamental is the variability of the nervous system's response. Therefore, the effects of neuronal variability on behavioral variability have been widely discussed in the context of neuroscience.

Neuronal variability is evident at the neuronal level from the observation of single-cell spikes [6] to electroencephalograms (EEG) [7–9] and can have various causes [10,11]. For instance, random fluctuations in voltage- or ligand-gated ion channels can cause membrane potential changes, and the resulting electrical noise can affect neuronal responses even in the absence of synaptic inputs [10,12]. Such stochastic noise is one of the factors that cause variability in nervous system responses, not only at rest but also during stimulus-evoked activity. Furthermore, deterministic fluctuations due to the nonlinearity of the nervous system also cause variability in neural activity. From a nonlinear dynamical systems theory viewpoint, it is natural that the neural system behaves inconsistently due to its complex connections. Nonlinear dynamical systems often exhibit complex and sensitive dependencies on initial conditions, meaning that even small differences in the starting state of a system can lead to significant differences in its behavior over time. This property holds for both the entire network and the microscopic level of the brain; neurons are known to perform nonlinear operations, and even small biochemical and electrochemical fluctuations can significantly alter whole-cell responses [10].

Nevertheless, despite these inherent inconsistencies in the neural system, our perceptual and motor experiences are relatively consistent. The same object is perceived as such across time and space, and we can control our body as we did yesterday. This raises questions about how the brain, a system with both deterministic and stochastic fluctuations, achieves consistent experiences, and how variability affects neuronal information processing [11].

Recent related studies have suggested that it is important for a healthy brain to show high variability in the resting state (with no input) and high consistency in its response to stimuli [7,13,14]. For instance, it has been reported that younger and healthier individuals exhibit a greater degree of variability in brain activity and that increased variability correlates with better performance in cognitive tasks [7]. Other studies have reported that neural variability quenching, i.e., large fluctuations in brain activity during the resting state and small trial-to-trial variability during specific tasks, is important for task performance [8,13–16]. It has also been found that individuals with disabilities, such as those in a vegetative state, have a weaker degree of variability quenching than healthy individuals [14]. Additionally, several fMRI and EEG studies have reported a higher degree of neural variability in patients with psychiatric disorders (e.g., autism spectrum disorder and attention deficit hyperactivity disorder) and during certain perceptual (visual, auditory, or tactile) tasks compared with controls [17–21]. Indeed, recent studies have reported that heterogeneity in cell types and activity, facilitated by within-cell-type diversity, serves as a mechanism of homeostatic control, dynamic stability, and resilience against external fluctuations in population responses; conversely, reduced cellular heterogeneity has been linked to pathological brain states, such as epileptic seizures [22,23].

In nonlinear dynamical systems, the property of identical responses to the same time-varying input regardless of differences in the initial state of the system is called "consistency" and is widely investigated in various nonlinear systems, such as laser systems [24], the Lorenz model [25], and the human brain [26]. These converging results, together with the perspective of nonlinear dynamical systems theory, suggest that the brain benefits from a delicate balance between more complex dynamics in the absence of input and more consistent responses in the presence of input.

Neural consistency of the response to stimuli is not a fixed characteristic of the system but depends on the pattern of input received. In a pioneering study of trial-to-trial variability in neural activity to the same input, neurons in rat neocortical slices showed different consistencies depending on the temporal pattern of input at the level of single-cell responses [6]. It has also been reported in macroscopic experiments that the brain shows selective, highly consistent responses to specific time-varying input patterns relative to other patterns [27–29]. Additionally, it has been reported that the consistency of the nervous system is related to the consistency of perception and other abilities [14,21,29,30]. For instance, Schurger et al. reported that cortical activity is more consistent across and within trials when sensory stimuli are consciously perceived in humans [14]. In addition, a previous study of our group showed that spatio-temporal consistency of EEG responses to repeatedly presented television commercials accounted for population preferences associated with the engagement and memory of viewers [31]. These studies suggest that the degree of neural response consistency depends on the characteristics of time-varying stimuli and affects cognitive and perceptual processes.

Here, we propose the term "selective consistency" to describe the phenomenon whereby neural activity exhibits high consistency and low variability in response to specific stimuli or time-varying patterns, even if those stimuli are not cognitively distinguishable. Neurophysiological studies have shown that highly variable neural activity augments the learning process and diminishes once a pattern is well learned in the late learning phase [5]. Additionally, it has been reported that the trial-to-trial consistency of neural activity improves rapidly for sensory stimuli that are repeatedly presented [29], regardless of the subject's consciousness, understanding of the experimental purpose, or attention [27,28,32]. Furthermore, a recent study found that the precision with which single neurons fire did not improve with learning, suggesting that learning-induced changes in neural consistency occur not at the level of single neurons but at the network level [33]. These results suggest that improved selective consistency is implicitly achieved at the network level; however, the exact mechanism of this change is not clear.

Thus, we hypothesized that selective consistency is acquired in a self-organizing manner by plasticity within the nervous system. Self-organization is a phenomenon in which a system creates a structure with some kind of order as a result of individual autonomous behaviors and their interactions, even if the system cannot view itself as a whole or adjust to fulfill a specific purpose through supervised optimization. Changes in brain network connections also represent typical self-organization, which is mainly carried out by plasticity-induced renewal of synaptic connection strength between cells [34]. In the present study, we propose a mechanism by which changes in selective consistency are achieved in a self-organizing manner by the plasticity of neural networks.

2. Noise repetition-detection task

In this study, we modeled the experimental conditions of a human behavioral task known as the noise repetition-detection (NRD) task [35] using a neural network to elucidate the underlying mechanism of selective consistency. This task was chosen for three reasons. First, it uses specialized stimuli that are not encountered in everyday life, implying that the learning effect is confined to the experimental setting [35–37]. Second, changes in selective consistency in this task have been observed not only in awake humans but also in anesthetized mice and sleeping humans, suggesting that the effect is independent of cognitive functions [27–29,32]. Third, this learning effect is not modality-specific but widely observed in the nervous system [38].

2.1. Details of the NRD task

In the NRD task, listeners are presented with a white noise stimulus and asked to report whether the stimulus contains a repetition of noise tokens or not [35]. In basic NRD tasks, subjects were presented with either a 1 s sample of white noise (noise condition, N) or two identical and seamlessly concatenated 0.5 s white noise tokens (repeated noise condition, RN). For both N and RN stimuli, the white noise realizations differed from trial to trial. Sporadically, one particular exemplar of each of the RN and N sounds reoccurred, interspersed throughout each experimental block; these are referred to as the referenced repeated noise (RefRN) and referenced noise (RefN), respectively (Fig 1). That is, for RefRN stimuli, not only was the same token of noise repeated (twice) within a trial, but the same realization also reoccurred over multiple trials. Listeners were asked to report whether the presented noise time series had a repetition of noise tokens (RN) or not (N) for each trial. Since the subjects were only required to describe the stimulus type (RN or N) and were not made aware of the presence of Ref stimuli in each block, the learning was implicit and unsupervised [32,35,37–39].

Fig 1. The noise repetition–detection task proposed by Agus et al. (2010).

(a) A conceptual diagram of the task and stimulus types. Participants listen to a 1 s white noise stimulus and answer whether the first and second 0.5 s segments are the same. There are four stimulus types, divided by the presence or absence of repetition within the stimulus (RN/N) and the presence or absence of repetition throughout the session (Referenced). As RN and RefRN have a repetitive structure, the correct answer is "Yes", whereas for N and RefN, the correct answer is "No". (b) A conceptual illustration of the repetition-detection score time course in the NRD task. The detection score for RefRN improves to a nearly perfect score immediately, whereas the score for RN remains around the chance level. As the participants are not informed of the presence of the referenced stimuli and receive no feedback on their answers, this task can be regarded as implicit unsupervised perceptual learning.

https://doi.org/10.1371/journal.pcbi.1012378.g001

Previous studies have shown that listeners can perceive time-series repetitions in white noise for stimuli that were previously encountered many times (typically five to ten) but cannot perceive repetitions of unfamiliar stimuli. The detection sensitivity and reaction time of listeners to RefRN gradually improved throughout a block, whereas RN performance remained at a chance level [29,35,39,40]. This gradual improvement in RefRN performance can be observed with a variety of sensory modalities and stimulus types, e.g., random visual and tactile pulse sequences [38].

2.2. Neural correlates of NRD task

Neurophysiological studies have suggested a relationship between NRD performance and the inter-trial consistency of neural activity. For instance, a study demonstrated that as learning proceeds, the auditory evoked potential of the EEG in areas responsive to auditory stimuli becomes stronger and more aligned [32,41]. Event-related potentials (ERPs) are calculated by averaging hundreds of EEG waveforms for a specific event (in this case, white noise audio); therefore, strong ERP components indicate that neural activity at that time is consistent across trials. Another study reported that the inter-trial phase coherence of the 3–8 Hz theta range of the magnetoencephalogram after repeated exposure to stimuli was significantly higher than that for RN in the auditory cortex, even though the two did not differ in early trials, when learning was not sufficiently established [29]. These changes in neural activity were also correlated with changes in behavioral performance [32]. Furthermore, the enhancement of selective consistency can also be observed in the neural activity of sleeping humans and anesthetized mice simply through stimulus presentation [27,28]. Therefore, it has been suggested that perceptual learning in the NRD task is associated with enhanced selective consistency, which occurs implicitly and distinctly from any cognitive function.

3. Methods

The model we used is based on the echo state network (ESN) [42] and has a reservoir that remains plastic throughout learning, updated by an unsupervised plasticity rule (Fig 2). ESNs are artificial recurrent neural network models, characterized by feedback ("recurrent") loops in their synaptic connection pathways. They can maintain ongoing activation even in the absence of input and thus exhibit spontaneous activity and dynamic memory. Biological neural networks are also typically recurrent networks. Therefore, ESNs have attracted attention as computational models of the information processing patterns of the brain [42–44]. The classic ESN model has an input layer, a hidden recurrent layer called a reservoir, and an output layer. The output of the reservoir is adjusted through the readout weights to match the target. This approach makes the ESN unique and computationally lightweight, as it involves no direct training of the reservoir itself. However, for ESNs to function effectively, the reservoir is required to possess the echo state property (ESP), which can be regarded as a particular form of consistency [42,45,46]. When a drive subsystem and a response subsystem are represented by the states u and x, respectively, the ESP means that the response subsystem, given the time-series input from the drive subsystem, converges to a state where x is represented by x = Φ(u) regardless of the initial state of the reservoir [42]. In simple terms, the ESP is a condition of asymptotic state convergence of the reservoir network under the influence of a driving input.
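To make the ESP concrete, it can be probed numerically: drive two copies of the same reservoir from different random initial states with an identical input and check whether their state trajectories converge. The following is a minimal sketch under assumed illustrative parameters (100 tanh units, spectral radius 0.9), not the exact configuration used in this study.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100                                          # reservoir size (illustrative)
W = rng.uniform(-1, 1, (N, N))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # rescale spectral radius to 0.9
w_in = rng.uniform(-1, 1, N)

def run(x0, u):
    """Iterate x(k+1) = tanh(W x(k) + w_in u(k+1)) from initial state x0."""
    x, traj = x0.copy(), []
    for uk in u:
        x = np.tanh(W @ x + w_in * uk)
        traj.append(x.copy())
    return np.array(traj)

u = rng.standard_normal(500)                     # shared white-noise drive
xa = run(rng.standard_normal(N), u)
xb = run(rng.standard_normal(N), u)
dist = np.linalg.norm(xa - xb, axis=1)
print(dist[0], dist[-1])   # distance shrinks toward 0 over time if the ESP holds
```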

Fig 2. The model descriptions.

(a) The process of input signal formation. There are four stimulus types: N and RefN stimuli consist of 1 s of white noise, whereas RN and RefRN stimuli consist of 0.5 s repetitions of white noise. The sampling frequency is 44 kHz. Subsequently, each time series is passed through an A-weighting filter, which reflects human auditory characteristics, peaking around 3,000 Hz, with high frequencies attenuated. The middle figure shows the power spectra before (gray) and after (black) the A-weighting filter used in the simulation. After filtering, each stimulus was resampled at 2,000 Hz to reduce computational costs. (b) An overview of the model. The resampled time series are presented to the neuron in the input layer as stimuli. W, the reservoir weight matrix, is dynamic and updated by Oja's Hebbian plasticity rule. Wout, the weights between the reservoir and the output layer neuron, are optimized using the gradient descent method. The model's output target is one-step-ahead prediction of the input time series.

https://doi.org/10.1371/journal.pcbi.1012378.g002

In this study, we tested whether the ESN can acquire selective consistency (i.e., ESP depending on the input pattern) to a particular white noise realization by introducing plasticity into the reservoir. The ESP is connected to the algebraic properties of the reservoir weight matrix [42]. Consequently, alterations to the reservoir weight matrix affect the reservoir's consistency (ESP). To explore whether synaptic plasticity in the reservoir enables it to obtain selective consistency for an identical input, we incorporated Oja's Hebbian rule into the reservoir [34]. Note that changes to the reservoir weight matrix are implemented solely via internal activity-dependent and unsupervised plasticity, rather than through processes such as prediction error minimization. This is a crucial point for modeling the acquisition of selective consistency in the brain. Because NRD learning can be observed even in the absence of task demands or awareness, the learning mechanism must operate without any supervised optimization. The primary simulation concept is as follows: if ESNs with plasticity exhibit greater consistency to the Ref stimuli than to non-Ref stimuli after exposure to the stimulus set in the NRD task, then self-organizing changes in the network may be a mechanism to acquire selective consistency in the neural system. Generally, the brain is considered a predictive machine, and its learning mechanisms are often explained by improvements in prediction accuracy. However, in the case of NRD learning, because the stimuli are random time series, such as white noise, learning is fundamentally difficult: past inputs contain no information about future inputs, ignoring the possible effect of the filter characteristics of the auditory system. This issue has been a challenge in understanding the mechanisms of NRD learning [47]. Here, we try to resolve the problem by introducing the perspective of the consistency of neural activity. In other words, even if the predictions remain inaccurate, that is, even if traditional "optimization" does not work, as long as the ways of making inaccurate predictions become consistent, participants should be able to perceive a repetition in the presented inputs.

3.1. Input signals

As our aim in this study was to simulate neural dynamics in the NRD task [35], the stimuli were generated in the same way (Fig 2). There were four stimulus types: N, RN, RefRN, and RefN. The N and RefN stimuli consisted of 1 s of white noise at a sample rate of 44 kHz. The RN and RefRN stimuli were repeated noise, consisting of an identical noise segment concatenated twice. N and RN stimuli were generated anew for each trial. RefRN and RefN were generated in the same way as RN and N; however, their realizations were the same for all trials within each condition (i.e., RefRN and RefN) throughout an experimental block. The auditory signals were generated as follows. First, each stimulus type was generated for 1 s (44,000 points). For RN and RefRN, the first and second segments (22,000 points each) were set as the same time series. Second, all stimuli were filtered through an A-weighting filter, which is the most commonly used simple human auditory filter [48]. Finally, the sound signals were resampled to 2,000 Hz to reduce computational costs. For model training, five realizations of each stimulus type (20 stimuli in total) were presented as training data. The stimulus order was pseudorandom, except for the final presentation, which was set as one of the N stimuli to prevent a RefRN stimulus from being the last.
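As an illustration, the stimulus construction can be sketched as follows, assuming NumPy/SciPy; the A-weighting filter design, which follows [48], is abbreviated to a placeholder comment, and the variable names are ours.

```python
import numpy as np
from scipy.signal import resample_poly

rng = np.random.default_rng(1)
FS, FS_OUT = 44_000, 2_000      # generation and analysis sampling rates (Hz)

def make_stimulus(repeated, token=None):
    """1 s stimulus: fresh white noise (N) or a 0.5 s token played twice (RN)."""
    if repeated:
        token = rng.standard_normal(FS // 2) if token is None else token
        x = np.concatenate([token, token])
    else:
        x = rng.standard_normal(FS)
    # An A-weighting filter would be applied here (design per [48], omitted).
    return resample_poly(x, 1, FS // FS_OUT)    # downsample 44 kHz -> 2 kHz

ref_token = rng.standard_normal(FS // 2)        # frozen token reused across trials
rn = make_stimulus(True)                        # new token each trial (RN)
refrn = make_stimulus(True, token=ref_token)    # same realization every trial (RefRN)
n = make_stimulus(False)                        # fresh 1 s noise (N)
```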

3.2. Neural network descriptions

The input signal, u(k), is presented to the reservoir through the input weight matrix Win from the input layer neuron as follows:

$$x_i(k+1) = f\left(\sum_{j=1}^{N} W_{ij}\, x_j(k) + W^{\mathrm{in}}_{i}\, u(k+1) + \varepsilon_i(k)\right), \tag{1}$$

where xi(k), i = 1,…,N are the neural activations at time point k. ε represents internal Gaussian noise, modeling the fluctuations of each neuron. f is the activation function of the neurons and is defined as a hyperbolic tangent. W ∈ ℝ^{N×N} is the synaptic weight matrix of the reservoir, which is fixed in the general ESN model; in this study, however, we made it dynamic and thus replaced it with W(k). Several studies have demonstrated that introducing a variable called the "leaking rate" can enhance the computational performance of ESNs [42]. However, our research aims to examine changes in the system's behavior induced by plasticity, rather than its computational performance; therefore, to simplify the problem, we did not introduce a leaking rate. The input weight matrix, denoted by Win ∈ ℝ^{N×1}, connects the input neuron to the neurons in the reservoir; it is generated as a set of uniformly distributed random numbers ranging from −1 to 1 and fixed throughout the simulation.

The number of reservoir neurons was set to 500, and their activation function was a hyperbolic tangent. The network was constructed as a sparse random network with a coupling density of d = 0.1. Non-zero elements of the weight matrix were defined as random numbers following a uniform distribution in the interval [−1,1]. To evaluate the impact of plasticity on reservoirs with varying degrees of consistency, we adjusted the spectral radius ρ(W) of the reservoirs using the following modification:

$$W = \rho_{\mathrm{desired}}\, \frac{W_{\mathrm{random}}}{\rho(W_{\mathrm{random}})} \tag{2}$$

Wrandom is the weight matrix that was initialized randomly without regard to the spectral radius, and ρdesired is the desired spectral radius. The spectral radius is an important hyperparameter that controls the connection strength in the reservoir. Specifically, it refers to the maximum absolute value of the eigenvalues of W. In general, if the activation function f of the reservoir is tanh,

$$\rho(W) < 1 \tag{3}$$

is a necessary condition for a reservoir to have the ESP. Therefore, by adjusting the value of the spectral radius, we prepared various reservoirs near the edge of chaos [49] (the edge between having and lacking the ESP).
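A minimal sketch of this initialization (500 units, coupling density 0.1, uniform weights rescaled to a desired spectral radius per Eq 2), assuming NumPy:

```python
import numpy as np

def init_reservoir(n=500, density=0.1, rho_desired=1.4, seed=0):
    """Sparse uniform random reservoir rescaled to the desired spectral radius."""
    rng = np.random.default_rng(seed)
    W = rng.uniform(-1.0, 1.0, (n, n))
    W[rng.random((n, n)) >= density] = 0.0      # keep ~10% of connections
    rho = np.max(np.abs(np.linalg.eigvals(W)))  # current spectral radius
    return rho_desired * W / rho                # Eq 2

W = init_reservoir(rho_desired=1.4)             # near the edge of chaos
```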

To simulate adaptation to the input signals, Oja's Hebbian rule was applied to the reservoir. This rule can be derived as a simple Hebbian plasticity rule that includes a forgetting factor to prevent unbounded growth of the weights [34]:

$$W_{ij}(k+1) = W_{ij}(k) + \alpha\, x_i(k)\big(x_j(k) - x_i(k)\, W_{ij}(k)\big) \tag{4}$$

The synaptic learning rate parameter α was set to $10^{-7}$.
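In code, one Oja update of the recurrent weights might look as follows. This is a sketch under our assumed indexing convention (postsynaptic i, presynaptic j); whether absent (zero) connections stay clamped at zero is not specified in the text, and the sketch updates all entries.

```python
import numpy as np

def oja_step(W, x, alpha=1e-7):
    """Oja's rule (Eq 4): dW_ij = alpha * x_i * (x_j - x_i * W_ij)."""
    post, pre = x[:, None], x[None, :]   # x_i (postsynaptic), x_j (presynaptic)
    return W + alpha * post * (pre - post * W)
```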

The hypothesis of this study is that selective consistency is acquired through the plasticity embedded in the RNN. Consequently, in principle, an output layer is not necessarily required. However, it is not self-evident whether selective consistency is acquired across all neurons in the reservoir, or limited to a few elements. In the latter case, merely assessing the consistency of the reservoir’s overall average or randomly selected neurons would not suffice to observe the phenomenon, and the optimization process may help detect the consistent neurons. Thus, we considered the possibility of achieving selective consistency by combining weight optimization in the output layer, as is common in reservoir computing, alongside plasticity.

The output layer has a neuron connected to the reservoir neurons through a readout weight matrix Wout ∈ ℝ^N, and the output y of this model is calculated as a linear sum of the reservoir neurons' dynamics:

$$y(k) = W_{\mathrm{out}}^{\top}\, x(k) \tag{5}$$

The computational task assigned to the model was one-step-ahead prediction. Note that this study aimed to model how selective consistency is acquired implicitly. Neural selective consistency also occurs during sleep and in anesthetized mice, and thus we did not set the detection of noise repetition as the output task of the network in this simulation. The task assigned to the model was future prediction. This is based on the view of predictive coding, which assumes that the nervous system, especially the sensory system, works in a predictive manner [47,50]. Therefore, we regarded the output signal as the neural dynamics in the network, which can be the basis of repetition detection, but not perception itself.

To maximize the performance of the time-series prediction, Wout was optimized through minibatch-based gradient descent (MBGD). Gradient descent is an iterative first-order optimization algorithm used to find a local minimum/maximum of a given function. The error function that the gradient descent algorithm minimizes is written as follows:

$$E(W_{\mathrm{out}}) = \frac{1}{K} \sum_{k=1}^{K} \big(d(k) - y(k)\big)^2, \tag{6}$$

where d(k) is the training data at time point k, and K is the total number of time points. The goal of the learning is to find the following:

$$\hat{W}_{\mathrm{out}} = \operatorname*{arg\,min}_{W_{\mathrm{out}}} E(W_{\mathrm{out}}) \tag{7}$$

To solve the above problem, the gradient ∇E is computed. The algorithm then changes Wout in the direction of the negative gradient −∇E, proportionally to the learning rate η:

$$W_{\mathrm{out}} \leftarrow W_{\mathrm{out}} - \eta\, \nabla E \tag{8}$$

The hyperparameter η controls how rapidly the descent progresses and is generally set between 0.00001 and 0.01. For our model, 20 statistically independent noisy inputs were given to reproduce the perceptual learning of humans, and we repeatedly applied the gradient descent method for each trial. This kind of pseudo-online gradient descent is called "minibatch gradient descent" because the training data are divided into "minibatches". The error for minibatch number n, En, is calculated as follows:

$$E_n = \frac{1}{K_n} \sum_{k \in D_n} \big(d(k) - y(k)\big)^2, \tag{9}$$

where En denotes the error of minibatch number n, and Kn is the sample size of the minibatch data Dn. For normalization, the summed error over time points is divided by the sample size. The learning rate η was set to 0.01.
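Because the readout (Eq 5) is linear, the gradient of En (Eq 9) has a closed form, and one MBGD step can be sketched as follows, assuming the reservoir states of one trial are stacked row-wise in X (variable names are ours):

```python
import numpy as np

def mbgd_step(w_out, X, d, eta=0.01):
    """One minibatch gradient-descent step on the readout weights (Eqs 8-9).

    X : (K_n, N) reservoir states of one trial (minibatch)
    d : (K_n,)   one-step-ahead targets
    """
    y = X @ w_out                          # Eq 5: linear readout
    grad = -2.0 * X.T @ (d - y) / len(d)   # gradient of Eq 9 w.r.t. w_out
    return w_out - eta * grad              # Eq 8
```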

3.3. Simulation procedure

To investigate the effects of Hebbian plasticity, we trained multiple reservoirs with distinct spectral radii both with and without plasticity. The consistency of reservoir networks depends on their algebraic properties, particularly the spectral radius: a smaller spectral radius leads to consistent network behavior, whereas a larger one leads to chaos. We hypothesized that weak plasticity in the reservoir can fine-tune it to exhibit selective consistency to an identical input when the network is close to the edge of chaos. Therefore, we constructed reservoirs with spectral radii of 0.1–2.0 (see S1 Table for details). Subsequently, we exposed each reservoir to the stimuli of the NRD task and trained the reservoirs, both with and without plasticity, independently of each other. The training dataset and its order were kept the same throughout the simulation.

3.4. Evaluation

We evaluated the changes in the output time series, the degree of selective consistency, and the prediction error for each trained reservoir over 200 test runs. In these test runs, the N and RN stimuli were new time series, different from those in the training runs, whereas the RefN and RefRN stimuli were the same time series as in the training runs. During the test runs, we halted the Hebbian rule and the gradient descent optimization of the output weights, and each trial was started from different initial values. Because of the network's nonlinearity and ε, the internal Gaussian noise representing each neuron's fluctuations, the network's response follows a different trajectory from a different initial state in each trial. The reproducibility of these responses across trials can serve as the measure of consistency, the main topic of the present study.

3.4.1. Consistency of outputs.

Selective consistency was evaluated by Pearson correlations between 200 time points from the first and second halves of each output time series, i.e., the first 100 ms of each of the 500 ms segments that compose the stimulus. We did not use the entire time series of both halves because a network with the ESP will eventually show a consistent response over time, whereas the duration of the transient period varies. The length of the evaluation window is based on the average duration of the transient period exhibited by the networks evaluated in this study; we set it at 200 points, one-fifth the length of the repetitive segment (see Fig 3).
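In code, this inter-segment measure reduces to a Pearson correlation over the first 200 points of each half; a minimal sketch assuming the 2,000 Hz output, in which each 500 ms segment spans 1,000 points:

```python
import numpy as np

def inter_segment_consistency(y, seg_len=1000, window=200):
    """Pearson r between the first `window` points of the two output halves."""
    return np.corrcoef(y[:window], y[seg_len:seg_len + window])[0, 1]
```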

Fig 3. Representative examples of output time–series data.

These are outputs for RefRN of Hebbian networks, although a similar tendency was observed for other network types and stimulus types. The results for three distinct spectral radii (ρ = 0.1, 1.2, 2.0) are plotted separately. Each graph overlays the output values on the vertical axis against the first 200 time points on the horizontal axis. The different colors correspond to three different output trials. Time is not plotted in its entirety; rather, the first 200 points are magnified. When ρ is low, the lines converge following the transient period, indicating an identical response trajectory regardless of variations in the initial states. As ρ increases, the network no longer has the ESP and behaves completely differently across distinct trials.

https://doi.org/10.1371/journal.pcbi.1012378.g003

3.4.2. Prediction error.

We evaluated the accuracy of the time-series predictions learned by the ESN. Prediction accuracy was assessed using the root mean square error (RMSE) and normalized root mean square error (NRMSE) between the system output y(k) and the desired output d(k). The RMSE and NRMSE time series at a spectral radius ρ, denoted RMSEρ(k) and NRMSEρ(k), are defined as follows:

$$\mathrm{RMSE}_{\rho}(k) = \sqrt{\frac{1}{N} \sum_{n=1}^{N} \big(d_n(k) - y_n(k)\big)^2}, \tag{10}$$

$$\mathrm{NRMSE}_{\rho}(k) = \frac{\mathrm{RMSE}_{\rho}(k)}{\frac{1}{4} \sum_{c} \frac{1}{K} \sum_{k'=1}^{K} \mathrm{RMSE}_{\rho}^{c}(k')}, \tag{11}$$

where K represents the total number of time points, n indexes trials, N represents the total number of trials, and c indexes the four conditions (Hebbian vs. non-Hebbian and RefRN vs. RN). The definition of the NRMSE differs slightly from the general normalization of the RMSE, which is done by simply dividing the RMSE by the mean of y(k). This is because the output signal of the network tends to increase with larger spectral radii; thus, the normalization scale must be changed for each spectral radius so that this tendency does not affect the results. In addition, since the output time series is a white noise-like signal with zero mean, it was difficult to normalize by a simple mean, so we used the RMSE for normalization. In this equation, normalization is performed by dividing by the averaged RMSE of the four conditions: Hebbian vs. non-Hebbian and RefRN vs. RN stimuli.
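A sketch of these error measures, assuming outputs and targets are stacked as (trials x time) arrays and taking the NRMSE divisor to be the time- and trial-averaged RMSE over the four conditions, as described above:

```python
import numpy as np

def rmse_series(Y, D):
    """Eq 10: RMSE across trials at each time point. Y, D: (n_trials, K)."""
    return np.sqrt(np.mean((D - Y) ** 2, axis=0))

def nrmse_series(Y, D, rmse_by_condition):
    """Eq 11 (sketch): normalize by the mean RMSE of the four conditions
    (Hebbian/non-Hebbian x RefRN/RN) at the same spectral radius."""
    scale = np.mean([r.mean() for r in rmse_by_condition])
    return rmse_series(Y, D) / scale
```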

3.4.3. Statistical analysis.

The statistical significance of selective consistency and the prediction error was tested with a nonparametric rank-order test based on the surrogate data technique [51]. First, the differences between the paired Hebbian and non-Hebbian distributions of the evaluated selective consistency and prediction error were calculated for each spectral radius and stimulus type. Next, we randomly shuffled these two paired distributions and evaluated the difference between them to obtain a surrogate difference value. This was done 5000 times to obtain a surrogate difference distribution. The statistical significance of the real data was determined by the percentile rank (PR) of the difference obtained from the original data within this surrogate difference distribution. The results of statistical tests were adjusted for multiple comparisons using the Bonferroni method, considering the number of spectral radius levels and the combinations of stimulus types. The notation for percentile ranks (PR) used in this paper is as follows: (*; PR < 5%, p < 0.05, **; PR < 1%, p < 0.01, ***; PR < 0.5%, p < 0.005, ****; PR < 0.1%, p < 0.001).
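The rank-order surrogate test amounts to a paired permutation test. A minimal sketch, assuming paired per-run values a (Hebbian) and b (non-Hebbian) for one spectral radius and stimulus type:

```python
import numpy as np

def surrogate_rank_test(a, b, n_surr=5000, seed=0):
    """One-sided percentile rank of mean(a - b) within a label-shuffled
    surrogate distribution built from random pairwise swaps."""
    rng = np.random.default_rng(seed)
    observed = np.mean(a - b)
    swap = rng.random((n_surr, len(a))) < 0.5
    diffs = np.where(swap, a - b, b - a).mean(axis=1)
    return np.mean(diffs >= observed)      # small values indicate significance
```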

4. Results

4.1. Output signals

We first investigated whether the output signal of the network exhibited the correct behavior as a predictive signal for a white noise time series. We plotted three representative, randomly selected time series of the responses (Fig 3). The behavior of the output signals was noise-like for both the Hebbian and non-Hebbian networks. We confirmed that plasticity during the training session changed the spectral radii slightly but not significantly (S2 Table). Additionally, we found that even for the same network, the responses differed between distinct trials at the beginning of the output time series, which can be considered a transient period. This is attributed to the sensitivity of the network's response to the initial state. This tendency became stronger as the spectral radius increased, which is a typical characteristic of reservoir computing. The consistency of the system can be characterized by the length of the transient period, during which initial-state differences strongly influence the system outputs. If the system has strong consistency, its response will settle onto the same trajectory after a short transient period. Therefore, we can say that both networks, with and without plasticity, show consistency in a range of relatively low spectral radii.

4.2. Selective consistency for RefRN

The inter-segment correlation analysis revealed that the plastic model obtained selective consistency for the RefRN stimulus, whereas the consistency for RN did not change. To evaluate the selective consistency for RefRN and RN, we compared the correlation between the first and second segments of output time series for RN and RefRN stimulus of both plastic and non-plastic models (Fig 4A) and time series of five randomly selected nodes of the reservoir (Fig 4B). Significant differences in consistency were observed between the non-plastic and plastic models for RefRN, whereas no differences were observed for RN between both models. Additionally, these tendencies were also confirmed at the level of each node of the recurrent network, indicating that the selective consistency was acquired at the level of the reservoir, not the output layer.

Fig 4. The evaluation of selective consistency.

The consistency was evaluated by correlation between the first and second segment time series for each test run with repeated noise (RN; left) and referenced repeated noise (RefRN; right). (a) The evaluation at the level of the output neuron. The violin plots show probability density distributions and interquartile ranges of the Hebbian (right side; magenta and brown) and non-Hebbian (left side; green and cyan) models, respectively (****; PR < 0.1%, p < 0.001). The colored line plots connect the mean values for each condition. The black lines in the bottom windows show the difference between the Hebbian and non-Hebbian models. The horizontal axis represents the spectral radius of the evaluated networks. (b) The evaluation of the five randomly selected reservoir neurons. Each dotted line represents one of five distinct neurons. The solid lines represent the mean value for these five neurons.

https://doi.org/10.1371/journal.pcbi.1012378.g004

Correlations decreased once the spectral radius exceeded approximately 1, regardless of the presence of plasticity. Generally, reservoir consistency is known to decrease as the spectral radius increases.

However, the consistency for RefRN was enhanced by the presence of plasticity, especially in the regime between chaos and order. Conversely, the results for RN showed no differences between the plastic and non-plastic models. These results indicate that the plasticity induced in the reservoir allowed the network to obtain selective consistency for the repeatedly presented input signals. Furthermore, the prominence of selective consistency varied depending on the spectral radius. Selective consistency was not acquired in less complex or more complex reservoirs, i.e., those with a spectral radius smaller or larger than the near-critical dynamical regime around 1.4 (Fig 5). The comparisons of the RefN and N conditions (S1 Fig) and a similar figure comparing RN and RefRN for the Hebbian and non-Hebbian networks separately (S2 Fig) can be found in S1 and S2 Figs. Since the segments in the RefN and N conditions consist of different time series, the correlation coefficients were consistently near zero, regardless of the presence of plasticity or the spectral radius, showing no significant difference between the conditions.

Fig 5. The edge of chaos and selective consistency.

Each bin of the histograms shows the averaged inter-segment correlation for four conditions (magenta: RN of the Hebbian network; green: RN of the non-Hebbian network; brown: RefRN of the Hebbian network; cyan: RefRN of the non-Hebbian network). The histograms represent results from networks with spectral radii of 0.9, 1.4, and 1.9, from left to right, which can be described as stable, edge of chaos, and chaotic, respectively (****; PR < 0.1%, p < 0.001). The error bars represent 95% confidence intervals. Notably, the edge-of-chaos value was chosen because selective consistency was observed most strongly there (see Fig 4). Statistical significance was tested with the nonparametric rank-order test based on the surrogate data technique. In networks that were either stable or, conversely, chaotic, there was no difference between conditions; differences were observed only in the edge-of-chaos region.

https://doi.org/10.1371/journal.pcbi.1012378.g005

4.3. Selective consistency without optimization

As the consistency of the reservoir's response depends on the spectral radius and input scale, the optimization of readout weights for prediction accuracy may not play a crucial role in acquiring selective consistency. To test this, we conducted the same analysis for the reservoir without optimizing the readout weights Wout. Fig 6 shows the selective consistency without the optimization of readout weights. Similar to the results with the optimization process, significant differences in consistency were observed between the non-plastic and plastic models for RefRN, whereas no differences were observed for RN.

Fig 6. The evaluation of selective consistency without optimizing the readout weights.

The figure style is the same as in Fig 4. The consistency was evaluated by the correlation between the first and second segment time series for each test run for repeated noise (RN; left) and referenced repeated noise (RefRN; right). The violin plots show probability density distributions and interquartile ranges of the Hebbian (right side; magenta and brown) and non-Hebbian (left side; green and cyan) models, respectively (****; PR < 0.1%, p < 0.001). The colored line plots connect the mean values for each condition. The black lines in the bottom windows show the difference between the Hebbian and non-Hebbian models. The horizontal axis represents the spectral radius of the evaluated networks.

https://doi.org/10.1371/journal.pcbi.1012378.g006

4.4. Prediction error

The prediction error did not differ between stimulus types, regardless of the presence or absence of plasticity or the spectral radius. As we verified above, the changes in the output layer's connections due to optimization do not affect selective consistency. However, it remained unclear how the model's temporal prediction accuracy changes with different spectral radii and the presence or absence of plasticity. Therefore, we finally examined the accuracy of the time-series prediction. Fig 7 shows the prediction errors for RN and RefRN over varying spectral radii with and without plasticity. Although higher spectral radii resulted in higher prediction errors for both RN and RefRN, there were no differences in prediction error between RN and RefRN, regardless of the spectral radius or plasticity (Fig 7A). This indicates that the prediction accuracy for repeatedly exposed stimuli (RefRN) was not selectively improved by introducing plasticity. Additionally, comparisons of the NRMSE at each spectral radius did not show prominent selective minimization or maximization of the prediction error for RefRN (Fig 7B).

Fig 7. Prediction error with varying spectral radii.

(a) Root mean squared error (RMSE) series and (b) normalized root mean square error (NRMSE) series. The plots represent the averaged RMSE or NRMSE between network prediction and the correct future time series. Non–plastic and plastic models are shown on the left and right, respectively. The results of RN and RefRN are represented by cyan and brown lines, respectively. The 95% confidence intervals are depicted as light cyan (RN) and pink (RefRN) filled areas.

https://doi.org/10.1371/journal.pcbi.1012378.g007

5. Discussion

We have demonstrated that a network between order and disorder changes its connectivity in a self-organizing manner when a signal is given and that it acquires selective consistency for stimuli presented multiple times. In this study, we propose plasticity-driven self-organization of network connectivity as a mechanism by which neural networks enhance their response consistency to repeatedly exposed signals. A recent experimental study reported that the consistency of neural responses to the same input varies depending on the stimulus and is enhanced for repeated input stimuli in an implicit and unsupervised manner [29]. We referred to this property as "selective consistency" and aimed to reveal its acquisition mechanism. It is not clear how neural networks achieve selective consistency; however, previous studies hypothesized that selective consistency could be acquired without any cognitive functions, such as attention and prediction [32], as it could be obtained in anesthetized mice [28] and sleeping human brains [27]. Therefore, we validated whether a plastic network could obtain selective consistency using a reservoir-based plastic model. Several studies have previously attempted to introduce plasticity into reservoir computing [52–55]. However, these studies were primarily based on a machine learning perspective, examining how the computational performance or memory capacity of the reservoir changes with the incorporation of plasticity, despite the crucial role of consistency in computation for recurrent networks. To the best of our knowledge, this study is the first attempt to investigate how the selective consistency exhibited by the reservoir changes due to self-organization.

Our simulation results reveal that the network could acquire selective consistency through plastic changes of recurrent networks at both inter-trial (S3 Fig) and inter-segment levels. Prediction errors increased as the spectral radius increased; however, this trend did not differ between stimulus conditions and did not depend on plasticity.

Furthermore, the changes in prediction error were linear within the range of observed spectral radii, whereas the change in selective consistency for RefRN was obtained only in the edge-of-chaos region. This indicates that the acquisition of selective consistency occurred independently of the improvement in time-series prediction. One possible explanation for this dissociation is that the acquisition of selective consistency is achieved at the level of the reservoir neurons without the optimization of readout weights. In fact, the evaluation of selective consistency for a model without the optimization of readout weights yielded results similar to those of a model with optimization. This is in line with the hypothesis from previous experimental results that the acquisition of selective consistency occurs without the need for task presence or any top-down adjustments [27,32].

Although output optimization was not directly important for the acquisition of selective consistency in the model proposed in the current study, this does not mean that selective consistency is irrelevant to information processing; rather, it plays an important role. As we argued above, the optimization process for the output weights does not play a crucial role in the acquisition of selective consistency. However, how the presence of selective consistency affects the optimization process warrants further discussion. In general, the computational performance of a reservoir-based model is known to depend on its consistency, and the network is required to have the ESP [42]. This indicates that the improvement in the computational performance of the network (e.g., prediction accuracy) depends on the level of consistency. Therefore, in a situation in which the network has selective consistency for some input patterns, the optimization process is more strongly affected by those reproducible patterns than by others with no reproducibility. This seems plausible for adapting to the environment and for efficient information processing, because the agent is not significantly affected by non-repeated noise that is likely to be ignorable. Thus, selective consistency may help the system balance the impact of statistically reproducible and non-reproducible information on model optimization based on its experience and consequently enable efficient information processing.

In addition, the prominence of selective consistency varied depending on the spectral radius and was strongest at the edge of chaos. The spectral radius adjusts the nonlinearity of the system; in the case of an extremely small spectral radius, the system becomes effectively linear and is not affected by the initial value. Therefore, a reservoir with a low spectral radius shows the same behavior for the same input regardless of its experience. This is not in line with the behavioral experiments, which show consistency only for learned stimuli [35]. Furthermore, considering biological facts related to neural network structure [56,57] and numerous recent studies reporting the importance of variability in brain activity [7,58,59], it becomes clear that overly linear responses are not suitable for neural system models. Conversely, if the spectral radius is extremely large, the system becomes chaotic and does not show consistency for learned or unlearned stimuli. Again, such a system cannot adapt to the environment and is not suited as a model of neuronal activity. From the results of the current study, it can be concluded that selective consistency through plasticity is not acquired when the spectral radius is too small or too large. This raises the question of why selective consistency is obtained only at a moderate spectral radius.

Generally, it is known that there is a critical point at which the behavior of the response changes significantly near a spectral radius of around 1.0 [42]. Below this regime, the reservoir responds linearly, whereas values above it result in chaotic behavior. Therefore, the computational performance of reservoir computing is maximized at a critical point, i.e., the edge of chaos. Recent research reported that certain recurrent neural networks at the edge of chaos behave chaotically in the absence of input signals but show order in the presence of input signals, and that the computational performance of such networks is maximized [60,61]. Our simulation also showed the best selective consistency for learned patterns near the edge of chaos. Therefore, our results may be interpreted as follows: a network that behaves chaotically in the absence of input can obtain ordered behavior selectively for a specific pattern. It is possible that consistency was obtained only for certain patterns because of the very weak level of plasticity introduced into the reservoir. This may have resulted in a very small change in the network, which, however, did not change the general consistency of the system (S2 Table).

In various contexts of different studies, it has been suggested that the brain exists at the boundary of order and disorder, the edge of chaos [62,63]. In addition to the brain, many other complex systems also exhibit a phenomenon called self-organized criticality, in which interactions between elements cause the entire system to transition to criticality [64,65]. It has been suggested that the brain is a system that maintains a self-organized critical state, which it utilizes for efficient information processing [66]. In particular, the discussion of neuronal avalanches, which argues that the distribution of neural network activity follows a power law, serves as strong evidence that the nervous system exists in a state of self-organized criticality [67]. In recent years, a model study using neural networks that reflect the physiological characteristics of the brain has reported conditions that realize both the edge of chaos and neuronal avalanches [68]. As discussed above, there has been active research on how the brain achieves a critical state and utilizes it for information processing. In our simulation results, self-organization was achieved through plasticity in the reservoirs, and the learning effect was confirmed to be strongest near criticality. Furthermore, it is noteworthy that this near-critical dynamical regime not only realizes the strongest selectivity but is also the only regime that reproduces the experimental results of previous behavioral and neural activity measurements [29,35]. Therefore, our results support the hypothesis that the brain maintains a state close to criticality to adapt to the environment and preserve a high information-processing capability.

Finally, the results we discussed are not limited to a specific sensory modality but reflect a phenomenon that can occur in the nervous system in general. In this study, we developed simulations based on the human auditory behavioral task known as the NRD task. However, in practice, we simply gave white noise waveforms representing auditory stimuli to a network that mimics the nervous system and observed the changes that occurred due to the presence of internal synaptic plasticity. The same argument can be made for other sensory modalities, as they, too, are input to and processed by the nervous system as electrical waveforms. Rapid perceptual learning similar to that in NRD tasks has also been observed with visual and tactile stimuli [38,69]. In addition, Kang and colleagues have postulated that the performance in one modality can be predicted from the performance in other modalities [38]. These considerations indicate that selective consistency itself, and the ability to acquire it, are common and fundamental features of the entire brain.

It is important to consider the limitations of our research to properly contextualize our results. Despite its valuable findings, this study has certain limitations. First, our model does not fully elucidate the neural mechanisms of the NRD task. The aim of this study was to investigate the mechanism by which selective consistency is implicitly reinforced by experience, not the mechanism by which the brain detects repetitions in auditory signals. Therefore, we did not make the model perform the NRD task explicitly. Researchers interested in the mechanism of repetition detection may need to investigate how the brain uses its selective consistency for selectively accurate information processing. Although several studies are relevant to the mechanism of the NRD task [32,70], the exact algorithm by which the brain solves these tasks is not yet specified, e.g., how "identity" is evaluated and how the memory is maintained. Consequently, incorporating the exact NRD task into the computational model would inherently increase its arbitrariness. To explain the mechanisms of the NRD task, one would first need to devise a physiologically plausible algorithm for repetition detection through psychophysiological experimental studies. Again, although this study utilized the NRD task, its aim was not to assess the task but to demonstrate that the consistency of network responses to the same input can be enhanced selectively, rapidly, and in a self-organizing manner.

Second, there are several possible neural network models and algorithms for reproducing plasticity in the nervous system other than the Hebbian rule, such as spike-timing dependent plasticity (STDP) in spiking neural networks (SNNs) [70–72]. SNNs use integrate-and-fire units that, like biological neurons, emit spikes and transmit information when the membrane potential, altered by inputs from other cells, exceeds a threshold. They also possess excitatory and inhibitory neurons similar to those in biological systems, allowing the manipulation of network behavior through adjustment of the E/I balance. Moreover, STDP can be considered a more refined version of the Hebbian rule used in this study. Therefore, models utilizing SNNs and STDP can be said to be more biologically plausible than the model used in this research, and different results may be obtained with another model and plasticity algorithm. However, a recent STDP-based learning model at the level of a single postsynaptic neuron showed that neurons progressively became selective to a repeating pattern in an unsupervised manner under certain STDP parameters, even if the pattern was embedded in strong background noise [70,73]. Therefore, it can be said that the selective consistency presented in this study does not depend narrowly on the particular plasticity model. Is it then possible to perform similar simulations using SNNs as well as RNNs and to compare the results? Unfortunately, there is a significant challenge: defining and comparing the "near-critical dynamical regime" across multiple models. In our current study, we utilized a simple model employing the tanh activation function, allowing us to assess the system's properties based on the well-known condition for the echo state property (ESP) using the spectral radius (Eq 3). However, this criterion is valid only for networks in which each neuron's activation function is tanh. Although research has been conducted on general ESP conditions and metrics applicable to different networks, to our knowledge, no universally applicable methodologies have yet been established [74–76].

Third, we confirmed that the implicit enhancement of neural selective consistency can occur in theory, as suggested by previous neural activity measurements [29]. However, changes in consistency could also be produced by other mechanisms, so we cannot claim that every change in consistency observed in the nervous system arises in the manner we have suggested. Finally, because this study relied on model simulations, whether such control of consistency exists in the actual nervous system remains an open question. In a previous study that measured brain activity during the NRD task, the improved inter-trial similarity of brain activity was evaluated in terms of phase synchronization [29,77]. Our simulations suggest that not only inter-trial but also inter-segment consistency for RefRN would selectively improve. Given that previous behavioral studies showed improved repetition-detection performance, we can hypothesize that inter-segment selective consistency for RefRN also increases in the brain and is related to improved behavioral accuracy. To reveal the learning mechanism of the NRD task, it will be necessary to test whether biological neural activity likewise acquires inter-segment selective consistency. In this study, we regarded the constructed neural network as a particular form of nonlinear dynamical system and investigated its selective consistency. Because the brain can be regarded as a similar nonlinear dynamical system, the analytical approach used here can be extended directly to the brain, enabling a direct comparison between the present investigation and empirical neural activity.
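As a hypothetical illustration of the inter-segment measure discussed above, the following Python sketch computes the mean correlation between a network's responses to the first and second presentations of a stimulus. The function name and array layout are our own assumptions; the published analysis may differ in detail:

    import numpy as np

    def inter_segment_consistency(response, seg_len):
        """Mean Pearson correlation between the first- and second-segment
        activity of each unit; response has shape (time, units)."""
        seg1 = response[:seg_len]              # response to the first segment
        seg2 = response[seg_len:2 * seg_len]   # response to the repeated segment
        r = [np.corrcoef(seg1[:, i], seg2[:, i])[0, 1]
             for i in range(response.shape[1])]
        return np.nanmean(r)

The same function applied to responses from two different trials yields an inter-trial measure, so the two levels of consistency can be compared on an equal footing.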

6. Conclusion

In the current study, we have demonstrated that a recurrent neural network with weak internal plasticity can acquire selective consistency, i.e., input-dependent consistency in its responses. Although many reports suggest that the brain overcomes its inherent inconsistency to achieve selectively consistent activity and information processing, the neural basis of selective consistency has remained unclear. Our computational simulations revealed that a reservoir-based plastic model can self-organize to acquire input-dependent consistency for frequently presented stimuli. These findings indicate that a neural network near the edge of chaos can self-organize through weak plasticity to make its responses to repeatedly presented stimuli selectively more consistent. Considering network models that dynamically change their properties, such as the consistency of their responses to each input, rather than focusing solely on "optimization" that maximizes prediction or decoding accuracy, may help us understand the brain's information-processing mechanisms and build brain-like computers.
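As one concrete instance of the weak plasticity invoked above, a normalized Hebbian update in the style of Oja's rule [34] can be written as follows. This is a sketch under our own naming assumptions, not the exact update used in our simulations:

    import numpy as np

    def hebbian_update(W, x, eta=1e-4):
        """Oja-style update: Delta W_ij = eta * (x_i * x_j - x_i**2 * W_ij).
        The Hebbian term strengthens co-active connections, while the decay
        term keeps the weights bounded; a small eta makes the plasticity weak."""
        return W + eta * (np.outer(x, x) - (x ** 2)[:, None] * W)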

Supporting information

S1 Table. Experimental settings. This table lists the common parameters used in all simulations for network construction, MBGD, and Hebbian learning.

https://doi.org/10.1371/journal.pcbi.1012378.s001

(PDF)

S2 Table. The spectral radii of the Hebbian and non-Hebbian networks after the training session, averaged over 200 runs.

https://doi.org/10.1371/journal.pcbi.1012378.s002

(PDF)

S1 Fig.

The evaluation of selective consistency for RefN (right panel) and N (left panel) stimuli. The figure style is the same as in Fig 4A. Consistency was evaluated by the correlation between the first- and second-segment time series of each test run for repeated noise (N; left) and referenced repeated noise (RefN; right). The violin plots show the probability density distributions and interquartile ranges of the Hebbian (right side; magenta and brown) and non-Hebbian (left side; green and cyan) models. The colored line plots connect the mean values for each condition. The black lines in the bottom windows show the difference between the Hebbian and non-Hebbian models. The horizontal axis represents the spectral radius of the evaluated networks.

https://doi.org/10.1371/journal.pcbi.1012378.s003

(TIF)

S2 Fig.

The evaluation of selective consistency for RN and RefRN stimuli in the non-Hebbian (left panel) and Hebbian (right panel) networks. The figure style is the same as in Figs 4A and S1. Colors for each condition are as follows: non-Hebbian RN, green; non-Hebbian RefRN, cyan; Hebbian RN, magenta; Hebbian RefRN, brown.

https://doi.org/10.1371/journal.pcbi.1012378.s004

(TIF)

S3 Fig.

The inter-trial level selective consistency for RefRN (right panel) and RN (left panel). Consistency was evaluated as the mean correlation across all pairs of trial time series. The violin plots show the probability density distributions and interquartile ranges of the Hebbian (right side; magenta and brown) and non-Hebbian (left side; green and cyan) models (****: PR < 0.01%, p < 0.001). The colored line plots connect the mean values for each condition. The black lines in the bottom windows show the difference between the Hebbian and non-Hebbian models. The horizontal axis represents the spectral radius of the evaluated networks. Weaker but still significant differences between conditions can be seen, as in the inter-segment level comparison.

https://doi.org/10.1371/journal.pcbi.1012378.s005

(TIF)

References

  1. Leopold DA, Wilke M, Maier A, Logothetis NK. Stable perception of visually ambiguous patterns. Nat Neurosci. 2002;5:605–609. pmid:11992115
  2. Schmidt RA, Zelaznik H, Hawkins B, Frank JS, Quinn JT Jr. Motor-output variability: A theory for the accuracy of rapid motor acts. Psychol Rev. 1979;86:415–451. pmid:504536
  3. Lumer ED, Friston KJ, Rees G. Neural correlates of perceptual rivalry in the human brain. Science. 1998;280:1930–1934. pmid:9632390
  4. Ditzinger T, Haken H. Oscillations in the perception of ambiguous patterns. Biol Cybern. 1989;61:279–287.
  5. Dhawale AK, Smith MA, Ölveczky BP. The role of variability in motor learning. Annu Rev Neurosci. 2017;40:479–498. pmid:28489490
  6. Mainen ZF, Sejnowski TJ. Reliability of spike timing in neocortical neurons. Science. 1995;268:1503–1506. pmid:7770778
  7. Garrett DD, Kovacevic N, McIntosh AR, Grady CL. The importance of being variable. J Neurosci. 2011;31:4496–4503. pmid:21430150
  8. Arazi A, Gonen-Yaacovi G, Dinstein I. The magnitude of trial-by-trial neural variability is reproducible over time and across tasks in humans. eNeuro. 2017;4:ENEURO.0292-17.2017. pmid:29279861
  9. Arieli A, Sterkin A, Grinvald A, Aertsen A. Dynamics of ongoing activity: Explanation of the large variability in evoked cortical responses. Science. 1996;273:1868–1871. pmid:8791593
  10. Faisal AA, Selen LPJ, Wolpert DM. Noise in the nervous system. Nat Rev Neurosci. 2008;9:292–303. pmid:18319728
  11. Dinstein I, Heeger DJ, Behrmann M. Neural variability: Friend or foe? Trends Cogn Sci. 2015;19:322–328. pmid:25979849
  12. White JA, Rubinstein JT, Kay AR. Channel noise in neurons. Trends Neurosci. 2000;23:131–137. pmid:10675918
  13. Arazi A, Censor N, Dinstein I. Neural variability quenching predicts individual perceptual abilities. J Neurosci. 2017;37:97–109. pmid:28053033
  14. Schurger A, Sarigiannidis I, Naccache L, Sitt JD, Dehaene S. Cortical activity is more stable when sensory stimuli are consciously perceived. Proc Natl Acad Sci U S A. 2015;112:E2083–E2092. pmid:25847997
  15. Daniel E, Dinstein I. Individual magnitudes of neural variability quenching are associated with motion perception abilities. J Neurophysiol. 2021;125:1111–1120. pmid:33534654
  16. He BJ. Spontaneous and task-evoked brain activity negatively interact. J Neurosci. 2013;33:4672–4682. pmid:23486941
  17. Dinstein I, Heeger DJ, Lorenzi L, Minshew NJ, Malach R, Behrmann M. Unreliable evoked responses in autism. Neuron. 2012;75:981–991. pmid:22998867
  18. Weinger PM, Zemon V, Soorya L, Gordon J. Low-contrast response deficits and increased neural noise in children with autism spectrum disorder. Neuropsychologia. 2014;63:10–18. pmid:25107679
  19. Haigh SM, Heeger DJ, Dinstein I, Minshew N, Behrmann M. Cortical variability in the sensory-evoked response in autism. J Autism Dev Disord. 2015;45:1176–1190. pmid:25326820
  20. Gonen-Yaacovi G, Arazi A, Shahar N, Karmon A, Haar S, Meiran N, et al. Increased ongoing neural variability in ADHD. Cortex. 2016;81:50–63. pmid:27179150
  21. Hornickel J, Kraus N. Unstable representation of sound: A biological marker of dyslexia. J Neurosci. 2013;33:3500–3504. pmid:23426677
  22. Hutt A, Rich SI, Valiante TA, Lefebvre J. Intrinsic neural diversity quenches the dynamic volatility of neural networks. Proc Natl Acad Sci U S A. 2023;120:e2218841120. pmid:37399421
  23. Rich S, Moradi Chameh H, Lefebvre J, Valiante TA. Loss of neuronal heterogeneity in epileptogenic human tissue impairs network resilience to sudden changes in synchrony. Cell Rep. 2022;39:110863. pmid:35613586
  24. Uchida A, McAllister R, Roy R. Consistency of nonlinear system response to complex drive signals. Phys Rev Lett. 2004;93:244102. pmid:15697817
  25. Uchida A, Yoshimura K, Davis P, Yoshimori S, Roy R. Local conditional Lyapunov exponent characterization of consistency of dynamical response of the driven Lorenz system. Phys Rev E Stat Nonlin Soft Matter Phys. 2008;78:036203. pmid:18851117
  26. Kitajo K, Sase T, Mizuno Y, Suetani H. Consistency in macroscopic human brain responses to noisy time-varying visual inputs. bioRxiv. 2020;645499.
  27. Andrillon T, Pressnitzer D, Léger D, Kouider S. Formation and suppression of acoustic memories during human sleep. Nat Commun. 2017;8:179. pmid:28790302
  28. Kang H, Auksztulewicz R, An H, Abi Chacra N, Sutter ML, Schnupp JWH. Neural correlates of auditory pattern learning in the auditory cortex. Front Neurosci. 2021;15:610978. pmid:33790730
  29. Luo H, Tian X, Song K, Zhou K, Poeppel D. Neural response phase tracks how listeners learn new acoustic representations. Curr Biol. 2013;23:968–974. pmid:23664974
  30. Shishikura M, Tamura H, Sakai K. Correlation between neural responses and human perception in figure-ground segregation. Front Syst Neurosci. 2023;16:999575. pmid:36713684
  31. Hoshi A, Hirayama Y, Saito F, Ishiguro T, Suetani H, Kitajo K. Spatiotemporal consistency of neural responses to repeatedly presented video stimuli accounts for population preferences. Sci Rep. 2023;13:5532. pmid:37015982
  32. Andrillon T, Kouider S, Agus T, Pressnitzer D. Perceptual learning of acoustic noise generates memory-evoked potentials. Curr Biol. 2015;25:2823–2829. pmid:26455302
  33. Costa RM, Ganguly K, Costa RM, Carmena JM. Emergence of coordinated neural dynamics underlies neuroprosthetic learning and skillful control. Neuron. 2017;93:955–970.e5. pmid:28190641
  34. Oja E. A simplified neuron model as a principal component analyzer. J Math Biol. 1982;15:267–273. pmid:7153672
  35. Agus TR, Thorpe SJ, Pressnitzer D. Rapid formation of robust auditory memories: Insights from noise. Neuron. 2010;66:610–618. pmid:20510864
  36. Agus TR, Pressnitzer D. Repetition detection and rapid auditory learning for stochastic tone clouds. J Acoust Soc Am. 2021;150:1735–1749. pmid:34598638
  37. Kang H, Agus TR, Pressnitzer D. Auditory memory for random time patterns. J Acoust Soc Am. 2017;142:2219–2232. pmid:29092589
  38. Kang H, Lancelin D, Pressnitzer D. Memory for random time patterns in audition, touch, and vision. Neuroscience. 2018;389:118–132. pmid:29577997
  39. Kumar S, Bonnici HM, Teki S, Agus TR, Pressnitzer D, Maguire EA, et al. Representations of specific acoustic patterns in the auditory cortex and hippocampus. Proc R Soc B. 2014;281:20141000. pmid:25100695
  40. Agus TR, Pressnitzer D. The detection of repetitions in noise before and after perceptual learning. J Acoust Soc Am. 2013;134:464–473. pmid:23862821
  41. Ringer H, Schröger E, Grimm S. Perceptual learning of random acoustic patterns: Impact of temporal regularity and attention. Eur J Neurosci. 2023;57:2112–2135. pmid:37095717
  42. Jaeger H. The "echo state" approach to analysing and training recurrent neural networks - with an erratum note. GMD Report 148. German National Research Center for Information Technology; 2001.
  43. Lukoševičius M, Jaeger H. Reservoir computing approaches to recurrent neural network training. Comput Sci Rev. 2009;3:127–149.
  44. Enel P, Procyk E, Quilodran R, Dominey PF. Reservoir computing properties of neural dynamics in prefrontal cortex. PLoS Comput Biol. 2016;12:e1004967. pmid:27286251
  45. Boedecker J, Obst O, Lizier JT, Mayer NM, Asada M. Information processing in echo state networks at the edge of chaos. Theory Biosci. 2012;131:205–213. pmid:22147532
  46. Gallicchio C. Chasing the echo state property. arXiv:1811.10892. 2018.
  47. Denham SL, Winkler I. Predictive coding in auditory perception: challenges and unresolved questions. Eur J Neurosci. 2020;51:1151–1160. pmid:29250827
  48. Fletcher H, Munson WA. Loudness, its definition, measurement and calculation. J Acoust Soc Am. 1933;5:82–108.
  49. Legenstein R, Maass W. Edge of chaos and prediction of computational performance for neural circuit models. Neural Netw. 2007;20:323–334. pmid:17517489
  50. Rao RPN, Ballard DH. Predictive coding in the visual cortex: a functional interpretation of some extra-classical receptive-field effects. Nat Neurosci. 1999;2:79–87. pmid:10195184
  51. Lancaster G, Iatsenko D, Pidde A, Ticcinelli V, Stefanovska A. Surrogate data for hypothesis testing of physical systems. Phys Rep. 2018;748:1–60.
  52. Morales GB, Mirasso CR, Soriano MC. Unveiling the role of plasticity rules in reservoir computing. Neurocomputing. 2021;461:705–715.
  53. Schrauwen B, Wardermann M, Verstraeten D, Steil JJ, Stroobandt D. Improving reservoirs using intrinsic plasticity. Neurocomputing. 2008;71:1159–1171.
  54. Wardermann M, Steil J. Intrinsic plasticity for reservoir learning algorithms. In: Proceedings of the 15th European Symposium on Artificial Neural Networks (ESANN); 2007. pp. 513–518.
  55. Yusoff MH, Chrol-Cannon J, Jin Y. Modeling neural plasticity in echo state networks for classification and regression. Inf Sci. 2016;364–365:184–196.
  56. Friston KJ. Functional and effective connectivity in neuroimaging: A synthesis. Hum Brain Mapp. 1994;2:56–78.
  57. Allen EA, Damaraju E, Plis SM, Erhardt EB, Eichele T, Calhoun VD. Tracking whole-brain connectivity dynamics in the resting state. Cereb Cortex. 2014;24:663–676. pmid:23146964
  58. Lefebvre J, Hutt A, Frohlich F. Stochastic resonance mediates the state-dependent effect of periodic stimulation on cortical alpha oscillations. Elife. 2017;6:e32054. pmid:29280733
  59. Hutt A, Lefebvre J, Hight D, Sleigh J. Suppression of underlying neuronal fluctuations mediates EEG slowing during general anaesthesia. Neuroimage. 2018;179:414–428. pmid:29920378
  60. Sussillo D, Abbott LF. Generating coherent patterns of activity from chaotic neural networks. Neuron. 2009;63:544–557. pmid:19709635
  61. Haruna T, Nakajima K. Optimal short-term memory before the edge of chaos in driven random recurrent networks. Phys Rev E. 2019;100:062312. pmid:31962477
  62. Chua L, Sbitnev V, Kim H. Neurons are poised near the edge of chaos. Int J Bifurcat Chaos. 2012;22:1250098.
  63. Kumar S, Strachan JP, Williams RS. Chaotic dynamics in nanoscale NbO2 Mott memristors for analogue computing. Nature. 2017;548:318–321. pmid:28792931
  64. Bak P, Tang C, Wiesenfeld K. Self-organized criticality: An explanation of 1/f noise. Phys Rev Lett. 1987;59:381–384. pmid:10035754
  65. Dorogovtsev SN, Goltsev AV, Mendes JFF. Critical phenomena in complex networks. Rev Mod Phys. 2008;80:1275–1335.
  66. Plenz D, Ribeiro TL, Miller SR, Kells PA, Vakili A, Capek EL. Self-organized criticality in the brain. Front Phys. 2021;9:639389.
  67. Beggs JM, Plenz D. Neuronal avalanches in neocortical circuits. J Neurosci. 2003;23:11167–11177. pmid:14657176
  68. Kuśmierz Ł, Ogawa S, Toyoizumi T. Edge of chaos and avalanches in neural networks with heavy-tailed synaptic weight distribution. Phys Rev Lett. 2020;125:028101. pmid:32701351
  69. Gold JM, Aizenman A, Bond SM, Sekuler R. Memory and incidental learning for visual frozen noise sequences. Vision Res. 2014;99:19–36. pmid:24075900
  70. Masquelier T. STDP allows close-to-optimal spatiotemporal spike pattern detection by single coincidence detector neurons. Neuroscience. 2018;389:133–140. pmid:28668487
  71. Klampfl S, Maass W. Emergence of dynamic memory traces in cortical microcircuit models through STDP. J Neurosci. 2013;33:11515–11529. pmid:23843522
  72. Gilson M, Masquelier T, Hugues E. STDP allows fast rate-modulated coding with Poisson-like spike trains. PLoS Comput Biol. 2011;7:e1002231. pmid:22046113
  73. Masquelier T, Guyonneau R, Thorpe SJ. Spike timing dependent plasticity finds the start of repeating patterns in continuous spike trains. PLoS One. 2008;3:e1377. pmid:18167538
  74. Buehner M, Young P. A tighter bound for the echo state property. IEEE Trans Neural Netw. 2006;17:820–824. pmid:16722187
  75. Yildiz IB, Jaeger H. Re-visiting the echo state property. Neural Netw. 2012;35:1–9. pmid:22885243
  76. Manjunath G, Jaeger H. Echo state property linked to an input: Exploring a fundamental characteristic of recurrent neural networks. Neural Comput. 2013;25:671–696. pmid:23272918
  77. Song K, Luo H. Temporal organization of sound information in auditory memory. Front Psychol. 2017;8:999. pmid:28674512