## Abstract

Sensory deprivation has long been known to cause hallucinations or “phantom” sensations, the most common of which is tinnitus induced by hearing loss, affecting 10–20% of the population. An observable hearing loss, causing auditory sensory deprivation over a band of frequencies, is present in over 90% of people with tinnitus. Existing plasticity-based computational models for tinnitus are usually driven by homeostatic mechanisms, modeled to fit phenomenological findings. Here, we use an objective-driven learning algorithm to model an early auditory processing neuronal network, e.g., in the dorsal cochlear nucleus. The learning algorithm maximizes the network’s output entropy by learning the feed-forward and recurrent interactions in the model. We show that the connectivity patterns and responses learned by the model display several hallmarks of early auditory neuronal networks. We further demonstrate that attenuation of peripheral inputs drives the recurrent network towards its critical point and transition into a tinnitus-like state. In this state, the network activity resembles responses to genuine inputs even in the absence of external stimulation, namely, it “hallucinates” auditory responses. These findings demonstrate how objective-driven plasticity mechanisms that normally act to optimize the network’s input representation can also elicit pathologies such as tinnitus as a result of sensory deprivation.

## Author summary

Tinnitus or “ringing in the ears” is a common pathology. It may result from mechanical damage in the inner ear, as well as from certain drugs such as salicylate (aspirin). A common approach toward a computational model for tinnitus is to use a neural network model with inherent plasticity applied to early auditory processing, where the input layer models the auditory nerve and the output layer models a nucleus in the brain stem. However, most of the existing computational models are phenomenological in nature, driven by a homeostatic principle. Here, we use an objective-driven learning algorithm based on information theory to learn the feed-forward interactions between the layers, as well as the recurrent interactions within the output layer. Through numerical simulations of the learning process, we show that attenuation of peripheral inputs drives the network into a tinnitus-like state, where the network activity resembles responses to genuine inputs even in the absence of external stimulation; namely, it “hallucinates” auditory responses. These findings demonstrate how plasticity mechanisms that normally act to optimize network performance can also lead to undesired outcomes, such as tinnitus, as a result of reduced peripheral hearing.

**Citation:** Dotan A, Shriki O (2021) Tinnitus-like “hallucinations” elicited by sensory deprivation in an entropy maximization recurrent neural network. PLoS Comput Biol 17(12): e1008664. https://doi.org/10.1371/journal.pcbi.1008664

**Editor:** Roland Schaette, University College London, UNITED KINGDOM

**Received:** December 31, 2020; **Accepted:** November 24, 2021; **Published:** December 8, 2021

**Copyright:** © 2021 Dotan, Shriki. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

**Data Availability:** There are no primary data in the paper; all code is available at https://github.com/bci4cpl/Tinnitus-like-hallucinations-in-an-entropy-maximization-recurrent-neural-network.

**Funding:** OS received financial support from the Israel Science Foundation (https://www.isf.org.il; Grant No. 504/17). The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

**Competing interests:** The authors have declared that no competing interests exist.

## Introduction

Tinnitus is a common form of auditory hallucinations, affecting the quality of life of many people (≈ 10–20% of the population, [1–6]). It can manifest as a “ringing” or hissing sound across a certain frequency range, typically with a distinct spectral peak [7, 8]. An observable hearing loss, causing sensory deprivation over a band of frequencies, is present in >90% of people with tinnitus [1–4], and the remaining people with tinnitus are believed to suffer some damage in higher auditory processing pathways [5, 9] or have some cochlear damage that does not affect the audiogram [10].

From a neural processing point of view, hallucinations correspond to brain activity in sensory networks, which occurs in the absence of an objective external input. Hallucinations can occur in all sensory modalities, and can be induced by drugs, certain brain disorders, and sensory deprivation. For example, it is well known that visual deprivation (e.g., being in darkness for an extended period) elicits visual hallucinations, and, similarly, auditory deprivation elicits auditory hallucinations [11–13].

Although the causes of tinnitus can sometimes be mechanical (“objective tinnitus” [2, 14]), this is not the case in >95% of patients [6, 14]. This so-called “subjective tinnitus” is commonly associated with plasticity of feedback and recurrent neuronal circuits [2, 5, 10, 15–18].

The dorsal cochlear nucleus (DCN) is known to display tinnitus-related plastic reorganization following cochlear damage [19–22], and is thought to be a key player in the generation of tinnitus [23–26]. It is stimulated directly by the auditory nerve with a tonotopic mapping. Each output unit, composed of a group of different cells, receives inputs from a small number of input fibers and inhibits units of similar tuning [27, 28]. This connectivity pattern results in a sharp detection of specific notches [28]. As the DCN is the foremost anatomical structure in the auditory pathway in which tinnitus-related activity has been observed [19, 20], it is the structure most associated with the generation of tinnitus [23–26]. This choice is also supported by DCN hyperactivity following artificial induction of tinnitus [21, 22]. Interestingly, this induced hyperactivity seems to persist even if the DCN is later isolated from inputs other than the auditory nerve [29]. This suggests that tinnitus-related hyperactivity in the DCN is self-sustained and does not depend on feedback from higher order auditory networks.

The DCN also receives non-auditory inputs, such as somatosensory and vestibular projections [30–33]. The somatosensory projections, in particular, are known to be upregulated in tinnitus [22, 34–38]. Furthermore, somatosensory stimulation is known to affect the perceived tinnitus in >60% of the cases [37, 39, 40]. In light of these observations, the somatosensory projections are considered to play a major role in tinnitus [37]. A recent study used a bimodal auditory-sensory stimulation as a treatment paradigm in both guinea pigs and humans, successfully modulating the percept of tinnitus and reducing its loudness, though the effect did not last after terminating the treatment [41].

While existing computational models successfully account for some of the characteristics of tinnitus [42], many of them are based on lateral inhibition [43–45] or gain adaptation [46], and do not take into account long-term neural plasticity. Plasticity-based models for tinnitus are usually phenomenological models, where plasticity is described as a homeostatic process [47–53] or an amplification of central noise [54], and not as a process which serves a computational goal. Another computational model for tinnitus is based on stochastic resonance and suggests that tinnitus arises from an adaptive optimal noise level [55, 56]. This model successfully accounts for various aspects of tinnitus and other auditory phenomena related to sensory deprivation, but it is focused on a single auditory frequency and has yet to be further explored in a broader context.

In this work, we try to gain new insights into tinnitus by using plasticity driven by information theory. We implemented the entropy maximization (EM) approach in a recurrent neural network [57] to model the connection between the raw sensory input and its downstream representation. This approach was previously applied to model the feed-forward connectivity in the primary visual cortex, giving rise to orientation-selective Gabor-like receptive fields [58]. A later generalization of the algorithm to learning recurrent connectivity [57] was used to show that EM drives early visual processing networks toward critical behavior [59]. Furthermore, the evolved recurrent connectivity profile has a Mexican-hat shape; namely, neurons with similar preferred orientations tend to excite one another, while neurons with distant preferred orientations tend to inhibit one another, consistent with empirical data. While the aforementioned studies focused on the normal function of the visual system, EM-based neural networks have rarely been used to model abnormalities or to study the effect of changes in input statistics [60]. The relationship between EM-based adaptation and the emergence of tinnitus from sensory deprivation was previously discussed in the context of single neurons [61], yet it has never been explored in a large-scale recurrent network.

Here, we trained a recurrent EM neural network to represent auditory stimuli, so it can stand as a simplified model for early auditory processing. Subsequently, to test the effect of sensory deprivation on the network’s output representation, we modified the input statistics by attenuating a certain range of frequencies. Our findings show that tinnitus-like hallucinations naturally arise in this model following sensory deprivation. Specifically, the recurrent interactions act to compensate for the attenuated input by increasing their gain, causing the network to cross a critical point into a regime of hallucinations. These findings suggest that hallucinations following sensory deprivation can stem from general long-term plasticity mechanisms that act to optimize the representation of sensory information.

## Results

To model the early stages of auditory processing (e.g., DCN), we used an EM approach to train a recurrent neural network (see Methods). The neurons obey first-order rate dynamics, and it is assumed that the network reaches a steady state following the presentation of each stimulus. The learning algorithm for the feed-forward and recurrent connectivity was based on the gradient-descent algorithm described in [57], with the addition of regularization. The network was trained in an unsupervised manner to represent simulated auditory stimuli (see Methods for more details). Figs 1 and 2A depict the network’s architecture and typical stimuli, respectively.
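The steady-state computation described above can be sketched in a few lines. The following is a minimal illustration, not the paper's actual code: the toy network sizes and random weights are ours, and the sigmoidal activation 1/(1 + exp(−x)) is the one stated later in the Results.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def steady_state(W, K, x, dt=0.1, tol=1e-6, max_steps=10000):
    """Euler-integrate the first-order rate dynamics
        dr/dt = -r + sigmoid(W @ x + K @ r)
    until the per-step update falls below `tol`. Returns the fixed
    point and the number of steps taken (a proxy for convergence time)."""
    r = np.zeros(K.shape[0])
    for step in range(1, max_steps + 1):
        r_new = r + dt * (-r + sigmoid(W @ x + K @ r))
        if np.max(np.abs(r_new - r)) < tol:
            return r_new, step
        r = r_new
    return r, max_steps

# Illustrative toy network: 40 inputs, 400 outputs, weak random weights.
rng = np.random.default_rng(0)
M, N = 40, 400
W = 0.1 * rng.standard_normal((N, M))
K = 0.01 * rng.standard_normal((N, N))
x = rng.random(M)
r, steps = steady_state(W, K, x)
```

With such weak recurrent weights the dynamics are far subcritical and converge quickly; stronger recurrent gain lengthens the convergence time, which is the "critical slowing down" measure used below.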

**Fig 1.** The architecture of an overcomplete recurrent neural network with *M* input neurons and *N* output neurons, where *N* > *M*.

**Fig 2.** A: Three examples of typical simulated stimuli, representing the activity of the input neurons as a function of their preferred frequency. B–F: The stimuli presented in A, after attenuation of different frequency ranges using different attenuation profiles. Attenuation was achieved by multiplying the original input vector by an attenuation profile, depicted in gray. The attenuation profiles in B–D and F were inverted sigmoidal functions with parameters *k*_{0} = 20, *β* = 10 for B, *k*_{0} = 30, *β* = 10 for C, *k*_{0} = 20, *β* = 1 for D and *k*_{0} = 20, *β* = −10 for F, where *k*_{0} represents the transition frequency (in the input neurons domain, between 1 and *M* = 40) and *β* represents the sharpness of the transition. The attenuation profile in E was composed of two sigmoidal functions with parameters *k*_{1} = 10, *k*_{2} = 30, *β* = 1. For further details, see Methods.
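The attenuation profiles described in the caption can be reconstructed as follows. The exact parameterization below is our reading of the caption (transition frequency *k*₀, sharpness *β*, negative *β* flipping the attenuated side), not code from the paper.

```python
import numpy as np

def _sig(x):
    # Numerically safe logistic function.
    return 1.0 / (1.0 + np.exp(np.clip(-x, -60.0, 60.0)))

def attenuation_profile(M=40, k0=20.0, beta=10.0):
    """Inverted sigmoidal attenuation over input-neuron indices
    k = 1..M: close to 1 below the transition frequency k0 and close
    to 0 above it. beta sets the sharpness of the transition, and a
    negative beta attenuates the low-frequency side instead."""
    k = np.arange(1, M + 1)
    return _sig(-beta * (k - k0))

def notch_profile(M=40, k1=10.0, k2=30.0, beta=1.0):
    """Two-sigmoid profile attenuating the band between k1 and k2
    (as in panel E)."""
    k = np.arange(1, M + 1)
    return 1.0 - (_sig(beta * (k - k1)) - _sig(beta * (k - k2)))

# Deprivation: elementwise multiplication of the stimulus by the profile.
rng = np.random.default_rng(1)
stimulus = rng.random(40)
deprived = stimulus * attenuation_profile(k0=20, beta=10)
```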

In all simulations described here, we used a network of 40 input neurons and 400 output neurons (an overcomplete representation). Regularization was achieved using a cost on the norm of the weights and was applied to both feed-forward (using *ℓ*_{1} norm) and recurrent (using *ℓ*_{2} norm) sets of connections (see Methods). The coefficients of the regularization terms were set to λ_{W} = 0.001 for the feed-forward connections and λ_{K} = 0.226 for the recurrent connections (for details regarding these choices, see below the subsection on the Regularization effect).
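The regularization scheme can be sketched as an additive penalty on the weight norms entering the gradient update. This is an illustration under stated assumptions: whether the ℓ₂ term is squared, and exactly how the penalty gradients enter the learning rule of [57], are our guesses; only the norms and coefficients come from the text.

```python
import numpy as np

lambda_W, lambda_K = 0.001, 0.226  # coefficients used in the paper

def regularization_penalty(W, K):
    """l1 penalty on the feed-forward weights W, l2 penalty on the
    recurrent weights K. (Using the squared l2 norm is our assumption.)"""
    return lambda_W * np.sum(np.abs(W)) + lambda_K * np.sum(K ** 2)

def regularized_step(W, K, grad_W, grad_K, lr=1e-3):
    """One gradient-ascent step on the EM objective, with the
    (sub)gradients of the regularization terms subtracted."""
    W_new = W + lr * (grad_W - lambda_W * np.sign(W))
    K_new = K + lr * (grad_K - 2.0 * lambda_K * K)
    return W_new, K_new

# With a zero EM gradient, the step acts purely as weight decay.
rng = np.random.default_rng(0)
W = rng.standard_normal((8, 4))
K = rng.standard_normal((8, 8))
W2, K2 = regularized_step(W, K, np.zeros_like(W), np.zeros_like(K))
```

The ℓ₁ subgradient shrinks every feed-forward weight by a fixed amount (driving small weights to zero, hence sparsity), while the ℓ₂ gradient shrinks recurrent weights proportionally to their size, which is why it acts as the gain-limiting, homeostasis-like term discussed later.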

### Training using typical stimuli

First, we trained the network using typical auditory inputs, simulated as a combination of multiple narrow Gaussians in the log-scaled frequency domain with additional noise (see Methods and Fig 2A). After the convergence of the learning process, each output neuron had a specific and unique preferred frequency, as manifested in the feed-forward connectivity profiles (Fig 3A and 3B). The recurrent connections converged to a “Mexican-hat” profile with short-range excitation and longer-range inhibition (Fig 3C and 3D). This profile of connectivity causes neurons with adjacent frequencies to excite one another, while neurons with slightly more distant frequencies inhibit each other. The significance of this profile lies in its ability to reduce the width of the output response profile for a Gaussian input, thus, effectively reducing the noise. Similarly shaped spectral receptive fields were observed in various primary auditory networks [27, 28, 62, 63] including the DCN, suggesting similar connectivity patterns.
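The "Mexican-hat" shape can be illustrated with a difference-of-Gaussians profile over the preferred-frequency difference: narrow excitation minus broader inhibition. The amplitudes and widths below are arbitrary illustrative values, not the learned ones.

```python
import numpy as np

def mexican_hat(delta, a_exc=1.0, s_exc=1.0, a_inh=0.5, s_inh=3.0):
    """Difference-of-Gaussians connectivity profile as a function of
    the (log-scaled) preferred-frequency difference `delta`: short-range
    excitation minus longer-range inhibition. Parameters are
    illustrative, not fitted to the trained network."""
    exc = a_exc * np.exp(-delta ** 2 / (2.0 * s_exc ** 2))
    inh = a_inh * np.exp(-delta ** 2 / (2.0 * s_inh ** 2))
    return exc - inh

delta = np.linspace(-10, 10, 201)
profile = mexican_hat(delta)
```

The profile is positive at the center (neurons with adjacent preferred frequencies excite one another) and negative on the flanks (slightly more distant frequencies inhibit each other), as in Fig 3D.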

**Fig 3.** A–B: The feed-forward connectivity matrix and its average row profile. C–D: The recurrent connectivity matrix and its average row profile before sensory deprivation. E–F: The recurrent connectivity matrix and its average row profiles after sensory deprivation, averaged separately for neurons in the deprived zone and the non-deprived zone. Each row profile is obtained by aligning the presynaptic connections to every neuron according to its preferred frequency and then averaging. The x-axis in B, D and F describes the log-scaled difference in the preferred frequency between the presynaptic and postsynaptic neurons. The attenuation profile’s parameters were *k*_{0} = 20, *β* = 10 (see Fig 2B). The classification of output neurons into deprived and non-deprived zones in F is based on the level of attenuation at the preferred frequency of the neuron.

The network’s response to typical stimuli shows tonotopic responses, and the response in the absence of external stimuli is near spontaneous activity (Fig 4A and 4B). We note that the initial feed-forward connectivity was manually tuned to produce a tonotopic mapping (using weak Gaussian profiles with ordered centers). Although the feed-forward connections do change throughout the learning process, the tonotopic organization remains stable. The tonotopic mapping is a well-known property of all auditory processing stages between the cochlea and the auditory cortex in various species, including humans [64–68]. The preservation of the tonotopic organization throughout the learning process is in agreement with biological observations, suggesting that it is created in the embryonic stages of development and is preserved through plasticity processes [69].

**Fig 4.** A: Typical stimuli and a silent stimulus (zero input—right panel). B: The network’s response to the stimuli presented in A. C–G: The network’s response to the stimuli presented in A after training on stimuli with an attenuated frequency range. The attenuation profiles are depicted in gray. The spontaneous activity of the output neurons, defined here as the average activity in response to a silent stimulus before attenuation (as in the right panel of B), is indicated in B–G by a dashed line.

We noticed that the spatial connectivity profiles hardly change throughout learning, while their scale changes dramatically. In light of this observation, we quantified several global parameters of the network as a function of the scale of the recurrent connectivity matrix (Fig 5). We also used these measurements to gain insights into the effect of regularization on our results. First, note that the regularization caused the learning process to converge to down-scaled recurrent interactions compared to the optimal scale in terms of the non-regularized objective function (Fig 5A, dashed vertical lines). This specific scale seems to play a role in determining the proximity of the network dynamics to the critical point. Specifically, the convergence time rises dramatically at this point (Fig 5B), reflecting the well-known phenomenon of “critical slowing down” [70–73]. In addition, at this scale, the population vector’s magnitude rises, reflecting the emergence of non-uniform activity profiles in the absence of a structured input (see Methods and Fig 5C). Finally, the average pairwise correlations attain a minimum around this scale (Fig 5D).
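Two of these measures can be computed straightforwardly from network responses. The sketch below is ours: in particular, mapping preferred frequencies onto angles of a circle to define the population vector is an assumption about how such a measure is typically constructed, not the paper's definition.

```python
import numpy as np

def population_vector_magnitude(r):
    """Normalized magnitude of the activity-weighted population vector,
    mapping each output neuron's preferred frequency to an angle on the
    circle. Close to 0 for a uniform activity profile, close to 1 for a
    narrow localized bump. (The angular mapping is our assumption.)"""
    n = len(r)
    theta = 2.0 * np.pi * np.arange(n) / n
    return np.abs(np.sum(r * np.exp(1j * theta))) / max(np.sum(r), 1e-12)

def mean_squared_correlation(R):
    """Average squared Pearson correlation over all pairs of output
    neurons, for a (stimuli x neurons) response matrix R."""
    c = np.corrcoef(R.T)
    off_diagonal = c[~np.eye(c.shape[0], dtype=bool)]
    return np.mean(off_diagonal ** 2)

# A uniform activity profile vs. a localized bump.
pv_uniform = population_vector_magnitude(np.ones(400))
bump = np.exp(-0.5 * ((np.arange(400) - 100) / 5.0) ** 2)
pv_bump = population_vector_magnitude(bump)

# Independent noise vs. perfectly synchronized responses.
rng = np.random.default_rng(3)
msc_noise = mean_squared_correlation(rng.standard_normal((200, 10)))
msc_identical = mean_squared_correlation(
    np.outer(rng.standard_normal(200), np.ones(10)))
```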

**Fig 5.** A: The network’s objective function, without the regularization terms. Low values of the objective function correspond to high network susceptibility. B: The convergence time of the network dynamics using Euler’s method, i.e., the number of time steps until the simulation reaches a convergence criterion (see Methods). C: The population vector magnitude. D: The squared correlation coefficient between pairs of output neurons, averaged over all such pairs. All the above measures are displayed for different scaling factors of the recurrent connectivity matrix *K*^{tr}, as found by the training process; i.e., for each value of the scaling factor *σ*, the measures were evaluated after replacing the recurrent connectivity matrix with *K* = *σK*^{tr}. In the left panels, we used the recurrent connectivity matrix *K*^{tr} trained on typical stimuli, while in the right panels, we used the recurrent connectivity matrix obtained after sensory deprivation. The operating point is at a scaling factor of 1, namely, the recurrent connectivity to which the learning process converged. The marked critical point (≈3.14 in the left panels and ≈1.07 in the right panels) is the scaling factor for which the spectral radius *ρ*(*K*) of the recurrent connectivity matrix is 4, i.e., 4/*ρ*(*K*^{tr}). The derivatives of B–C are also displayed for better visualization of transitions in values. The exact values of the objective function and convergence time displayed in A–B are arbitrary; therefore, these panels should be interpreted only qualitatively. The attenuation profile’s parameters were *k*_{0} = 20, *β* = 10 (see Fig 2B). For visualization purposes, different panels are displayed on different vertical scales.

All these results point to the same conclusion: without the regularization, the recurrent connectivity would have been scaled by ≈3.14, such that the spectral radius of the recurrent connectivity matrix would be ≈4. We note that the maximal derivative of the chosen activation function 1/(1 + exp(−*x*)) is ¼. Thus, a spectral radius of the recurrent connectivity matrix near 4 indicates proximity to the critical point (see Methods). This means that the regularization keeps the recurrent connectivity below its optimal scale (in terms of the entropy term alone), and the network remains subcritical. We note that for different regularization coefficients, the scale of the interactions could take different values.

### Sensory deprivation

After learning stabilized on normal stimuli, we attenuated the inputs in a certain frequency range (Fig 2B–2F) and let the network’s recurrent connections adapt to the new input statistics. The resulting recurrent connectivity profile among the deprived neurons had a stronger central excitation and a wider inhibition (Fig 3E and 3F and S1 Fig). The stronger recurrent connectivity in the deprived region led to a phase transition, resulting in an inhomogeneous stationary activity pattern independent of the given input (Fig 4C–4G). We interpret these results as “hallucinations” elicited by the sensory deprivation. Interestingly, the “hallucinations” in our model develop only in the deprived region of the output layer, consistent with certain types of tinnitus [3, 7, 61, 74]. Furthermore, the corresponding activity profile has a single peak, in line with the most common forms of tinnitus [7, 8, 75]. The network’s sensitivity to external inputs in the deprived frequencies is lower, as reflected by the elevated hearing thresholds in the simulated audiograms (S11 Fig).

Following the induction of sensory deprivation, we evaluated the criticality measures once again (Fig 5 right panels, S2 and S3 Figs). The results for the objective function, convergence time and population vector remained qualitatively similar, but the optimal scale moved much closer to 1 (≈1.07). Thus, the network converged to a point much closer to its critical point, compared to its state before the induction of sensory deprivation. Interestingly, the average pairwise correlations now exhibit a maximum rather than a minimum. This finding is qualitatively consistent with the observed increase in synchrony following the induction of tinnitus [76]. We note that following sensory deprivation, the effect of learning on the recurrent connections is not limited to scaling. Hence, the different measures exhibit different patterns in the supercritical domain (above the scale of ≈1.07).

### Regularization effect

As discussed above, to keep the dynamics from crossing into the supercritical domain, we added regularization to the network’s weights. For each type of connectivity matrix (feed-forward and recurrent), we tested regularization both by *ℓ*_{1} and *ℓ*_{2} norms of the connections. Applying *ℓ*_{1} regularization is known to lead to sparse connectivity [77]; however, applying it to the recurrent connectivity matrix ended up nullifying all connections but a few, which were still strong enough to drive the dynamics into the supercritical domain (see S5 and S6 Figs). Because recurrent connectivity is present in most biological neural networks, we chose to focus only on simulations where the recurrent connections were regularized by their *ℓ*_{2} norm. Using either the *ℓ*_{1} or *ℓ*_{2} norm to regularize the feed-forward connectivity did not have a dramatic effect on the results. Since using the *ℓ*_{1} norm leads to a sparser, more biologically realistic feed-forward connectivity, as found experimentally in the DCN [28], we chose to focus on this option.

The stability of the network’s fixed point is determined by the sign of the eigenvalues of the matrix that controls the linearized dynamics. In this case, the corresponding matrix is (*I* − *GK*), where *K* is the recurrent connectivity matrix and *G* is a diagonal matrix containing the derivatives of the activation function for each output neuron (see Methods). Since the maximal derivative of the chosen activation function (1/(1 + exp(−*x*))) is ¼, the critical point is characterized by having the spectral radius of the recurrent connectivity matrix, *K*, near 4. We used this result as an efficient surrogate to the actual critical point.
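This stability criterion and the spectral-radius surrogate can be checked numerically. The random test matrix below is illustrative, not the learned connectivity; the criterion itself (eigenvalues of *I* − *GK*, maximal sigmoid derivative ¼) is the one stated above.

```python
import numpy as np

def sigmoid_deriv(u):
    s = 1.0 / (1.0 + np.exp(-u))
    return s * (1.0 - s)  # maximal value is 1/4, attained at u = 0

def is_stable(K, u):
    """Check stability of the linearized dynamics at a fixed point with
    net inputs `u`: stable when every eigenvalue of (I - G K) has a
    positive real part, where G = diag(sigmoid'(u))."""
    G = np.diag(sigmoid_deriv(u))
    eig = np.linalg.eigvals(np.eye(K.shape[0]) - G @ K)
    return bool(np.all(eig.real > 0))

def critical_scale(K):
    """Scaling factor sigma for which rho(sigma * K) = 4, the surrogate
    for the critical point (since the maximal sigmoid derivative is 1/4)."""
    rho = np.max(np.abs(np.linalg.eigvals(K)))
    return 4.0 / rho

# Illustrative random recurrent matrix (not the trained connectivity).
rng = np.random.default_rng(2)
N = 100
K = rng.standard_normal((N, N)) / np.sqrt(N)
sigma_c = critical_scale(K)
```

Scaling the matrix to half of `sigma_c` keeps every eigenvalue of *GK* inside the unit disk (a stable, subcritical fixed point), while scaling well beyond `sigma_c` pushes eigenvalues of *GK* past 1 and destabilizes the fixed point.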

In our simulations, the spectral radius of the recurrent connectivity matrix *K* decreased with the respective regularization coefficient λ_{K}, with a characteristic sharp drop (Fig 6). Generally, the value of λ_{K} where this drop occurs depends mainly on the number of output neurons; however, in our simulations, sensory deprivation caused this value to rise. This phenomenon created an interval of λ_{K} values, where sensory deprivation drives the dynamics much closer to the critical point, thus, eliciting the hallucination-like responses described before. Interestingly, we found that the results depicted in Fig 6 were robust to changes in the attenuation profile of the inputs (see S4 Fig), suggesting that they depend only on the network’s size and feed-forward connectivity. In all simulations above we used a regularization coefficient near the upper bound of this interval (λ_{K} = 0.226), as higher values within the interval tended to yield results more consistent with biological findings, such as the single-peaked “hallucination” profile [8, 75].

**Fig 6.** The spectral radius of the recurrent connectivity matrix *K* decreases with the regularization coefficient λ_{K}, before and after the induction of sensory deprivation. Due to the chosen sigmoidal activation function, the sharp drop in the spectral radius from ≈4 to ≈2 determines the border between near-critical and subcritical dynamics. After the induction of sensory deprivation, this border moves to higher values of the regularization coefficient, hence, creating an interval (from ≈0.183 to ≈0.228) of regularization coefficient values where sensory deprivation causes “hallucinations”. The attenuation profile’s parameters were *k*_{0} = 20, *β* = 10 (see Fig 2B).

## Discussion

In this work, we used an EM approach to train a recurrent neural network to represent simulated auditory stimuli, and examined the effect of input statistics on the evolved representation. For typical inputs, the network developed connectivity patterns and exhibited output responses similar to biological findings regarding the auditory system in general [78–81] and, more specifically, the DCN [27, 28]. Interestingly, sensory deprivation elicited tinnitus-like “hallucinations” in the network, resembling the characteristics of common types of tinnitus [3, 7, 8, 13, 61, 74]. Although we focused here on tinnitus, this qualitative phenomenon is independent of the input modality and can be used to explain how other kinds of “phantom” sensations arise from neural plasticity, and why they involve the specific region of the sensory input space that was deprived of input [82, 83].

The DCN is known to receive various non-auditory inputs [30–33]. In particular, somatosensory projections to the DCN are known to be upregulated in tinnitus [22, 34–38], and sensory stimulation modulates the perceived tinnitus in most cases [37, 39, 40]. Conceptually, these findings are in line with the EM approach—strengthening external inputs to a deprived output neuron will tend to increase its entropy. Such upregulation of connections from one sensory modality to another resembles acquired synaesthesia, namely the triggering of sensations in a sensory deprived modality by stimulation of another modality [84]. For example, following visual deafferentation, visual sensations can be elicited by auditory or somatosensory stimuli [85–87]. Indeed, the relationship between tinnitus and acquired somatosensory-auditory synaesthesia was proposed previously [84]. The emergence of such acquired synaesthesia following sensory deprivation has been demonstrated in a network model based on the same EM approach used here [60]. Thus, the proposed computational framework can naturally account for the effect of non-auditory projections.

Nevertheless, the strengthening of feed-forward connections, such as the somatosensory projections, cannot explain the emergence of tinnitus by itself. First, while the perception of tinnitus can be modulated by external feed-forward projections, such projections cannot maintain persistent activity by themselves in the absence of non-auditory stimulation. Second, the perceived tinnitus typically has a distinct spectral profile, whereas a simple enhancement of feed-forward somatosensory inputs would be expected to elicit a homogeneous profile within the deprived frequency range. Recurrent networks, on the other hand, can naturally give rise to and maintain inhomogeneous persistent activity in the absence of external stimulation [88, 89]. Thus, the emergence of tinnitus is likely to rely on changes in recurrent circuitry, although it may also involve additional changes in feed-forward interactions. This study focused on the role of recurrent interactions in the emergence of tinnitus. We note, however, that the corresponding recurrent network may go beyond the DCN and incorporate other brain areas, such as the ventral cochlear nucleus (VCN) and the inferior colliculus (IC), which are known to undergo plastic changes during tinnitus [90–93]. Future work can generalize the current model to also include different non-auditory inputs and model their effect on the perceived tinnitus.

Previous computational models rely on phenomenological homeostasis-driven plasticity to demonstrate tinnitus elicited by sensory deprivation [47–52]. Here, we use an objective-driven plasticity, namely, the main mechanism underlying the network’s plasticity is optimizing an explicit computational goal. Specifically, the network maximizes the entropy of its output, which corresponds to increasing input sensitivity [59]. The general resemblance of our model to biological findings supports the hypothesis that EM serves as a computational objective for primary sensory processing networks in the brain (e.g., [58, 59]). However, as described in the Methods section, the vanilla EM learning rules drive the network into a phase transition. This process may lead the network away from a stable fixed point and into dynamical states with poor information representation. Thus, some regularization should be used to keep the network subcritical. To this end, we used a penalty on the *ℓ*_{2} norm of the recurrent connections as a regularization method, which can be thought of as a kind of homeostatic mechanism [94–98]. Following sensory deprivation, the network increases the gain of its recurrent connectivity to compensate for the attenuated inputs and operates much closer to its critical point, giving rise to tinnitus-like “hallucinations”. In this model, the emergence of tinnitus depends on the interplay between the computational objective and the homeostatic regularization, in contrast to models driven by a single phenomenological homeostatic mechanism. Future studies might employ different types of regularization methods (e.g., firing-rate-based rather than weight-based) and examine their effect on the dynamics of the network.

While most of the hyper-parameters of the model can be chosen arbitrarily without having any qualitative effect on the results, the regularization coefficient for the recurrent connectivity, λ_{K}, is an exception; if it is too small, numerical instabilities might accidentally drive the network into a supercritical domain, but if it is too large, the network will always remain subcritical. In the first case, the output may no longer be dependent on the input, while in the second case, the input may have little effect on the output—in both cases, moving away from the critical point leads to poor sensitivity. In practice, there is a specific range of values which yields the qualitative results demonstrated in this paper (see Fig 6) and, according to our observations, it is independent of the chosen attenuation profile (see S4 Fig). Here, we used a grid search to find the corresponding range, and the results were obtained using a near-maximal value within it. This choice maximized the cost of regularization relative to the EM objective, while still allowing sensory deprivation to drive the dynamics away from the subcritical regime. This choice of λ_{K} drove the network towards single-peaked “hallucinations”, matching empirical findings [8, 75].

These results are interesting to discuss in light of a plethora of studies from recent years, suggesting near-critical dynamics in biological neural networks across various scales, from neuronal cultures to large-scale human brain activity [99–107]. In particular, it is proposed that healthy neural dynamics are poised near a critical point, yet within the subcritical domain [108]. Changes in the input statistics can drive the network to transition into supercritical dynamics, which may manifest as hallucinations. Our study portrays a concrete, albeit simplified, network model that experiences a transition from healthy to pathological neural dynamics as a consequence of inherent plasticity and sensory deprivation. We note that the network dynamics here are too simplified to enable a direct comparison with the rich dynamics observed in cortical networks and with common hallmarks of criticality (e.g., [99]).

An illuminating perspective on the emergence of hallucinations, such as tinnitus, as a consequence of sensory deprivation comes from the framework of Bayesian inference [109–111]. According to this framework, sensory systems generate perception by combining the incoming stimuli with prior expectations in a way that takes into account the relative uncertainty of each. Under sensory deprivation, the uncertainty about the input is very large; hence, the weight of the prior expectations becomes more dominant. This process may eventually lead to a state in which prior expectations dominate perception, which can be interpreted as a hallucination [112]. If this perception is maintained long enough, it will turn into a strong prior by itself, thus, giving rise to a chronic hallucination—namely, tinnitus [110]. Although our model does not use the Bayesian framework explicitly, it can be thought of in similar terms. Here, the prior expectations are effectively encoded in the evolved recurrent connectivity. Under sensory deprivation, these recurrent interactions dominate the network’s response and can be thought of as an enhanced prior. The advantage of the model described here lies in its mechanistic nature, namely, that it is cast in the language of neuronal networks with long-term plasticity of recurrent interactions. Thus, it can be more straightforward to interpret and compare to experimental data.

It is important to note that this model is relatively simplified in terms of the network architecture and dynamics. For example, the steady-state response used here reflects an assumption of slowly modulated inputs (compared to the network dynamics), which is usually reasonable in the case of the auditory system, but it does not hold for all cases. As a consequence, the model cannot fully capture some of the underlying details, such as the spectral response properties of DCN neurons and dynamical aspects like bursting and synchrony; however, such simplifications are currently necessary to allow the derivation of EM-based learning rules for the recurrent connections [57]. Developing suitable EM-based learning rules for non-stationary inputs and outputs is an interesting and challenging task by itself, and its application to scenarios of sensory deprivation may lead to further insights, but such derivation lies beyond the scope of the current work. We believe that the underlying principle of EM leading to hallucinations under sensory deprivation does not depend on such details. Future work can use the same computational principles with a more biologically-detailed network model to better account for other aspects as well.

To summarize, we have demonstrated how the EM approach can be used as a model of early auditory processing and the phenomenon of tinnitus. Previous works have demonstrated that EM-based neural networks can serve as models for early visual processing [58, 59] and the phenomenon of synaesthesia [60]. We believe that this framework can be used for modeling other modalities and phenomena as well. It is also important to extend this framework to more biologically plausible network models, which could account for more detailed aspects of the underlying neural dynamics.

## Methods

### The model

We modeled an early auditory processing neural network (e.g., the DCN) using the overcomplete recurrent EM neural network described in [57], with the addition of regularization on strong connectivity.

#### Network architecture and dynamics.

Our system is composed of *M* input neurons, **x**, and *N* output neurons, **s**. Each output neuron’s activity through time is given by the dynamic equation:

$$\tau \frac{ds_i}{dt} = -s_i + g\left(\sum_{j=1}^{M} W_{ij} x_j + \sum_{j=1}^{N} K_{ij} s_j - T_i\right), \tag{1}$$

where *τ* is the neuronal time constant, *W* is the feed-forward connectivity matrix, *K* is the recurrent connectivity matrix, *T* are the output neurons’ thresholds, and *g*(*x*) = 1/(1 + exp(−*x*)) is the activation function of the neurons. For overcomplete transformations, we assume *M* < *N* (Fig 1).

The fixed points of Eq 1 are given implicitly by:

$$s_i = g\left(\sum_{j=1}^{M} W_{ij} x_j + \sum_{j=1}^{N} K_{ij} s_j - T_i\right). \tag{2}$$

These fixed points are stable iff all of the eigenvalues of the linearized dynamics matrix (*I* − *GK*) have positive real parts [59] (*G* is a diagonal matrix defined by *G*_{ii} ≡ *g*′(*h*_{i}), where *h*_{i} is the total input to output neuron *i*, i.e., the argument of *g* in Eq 2). Since the values of *G* are upper-bounded by max_{x} *g*′(*x*) = ¼, for a matrix *K* with eigenvalues < 4, the fixed points are indeed stable. In practice, when fixed points exist at all, there is usually only one such stable fixed point.
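This stability criterion is straightforward to check numerically. The sketch below (illustrative NumPy code; the network size and random connectivity are assumptions, not values from the paper) verifies that a recurrent matrix scaled to a spectral radius below 4, combined with the worst-case gain matrix *G* = *I*/4, yields a stable fixed point:

```python
import numpy as np

def fixed_point_is_stable(K, G):
    """Stability criterion from the text: all eigenvalues of (I - G K)
    must have positive real parts."""
    N = K.shape[0]
    return bool(np.all(np.linalg.eigvals(np.eye(N) - G @ K).real > 0))

# Worst-case gain: g'(x) <= 1/4 for the logistic function, so take G = I/4.
# Scaling K to a spectral radius of 3.9 (< 4) then guarantees stability,
# since Re(1 - lambda/4) >= 1 - |lambda|/4 >= 0.025 for every eigenvalue.
N = 50
rng = np.random.default_rng(0)
K = rng.normal(scale=0.3, size=(N, N))
K *= 3.9 / np.abs(np.linalg.eigvals(K)).max()
print(fixed_point_is_stable(K, 0.25 * np.eye(N)))  # True
```

Checking against the worst-case gain suffices: any actual diagonal gain matrix along the network dynamics is bounded elementwise by *I*/4.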

Numerically, the steady state can be found by integrating Eq 1 with Euler’s method over a long time period until the activities stabilize; however, this method is highly inefficient. In this work, we found the steady state by solving Eq 2 directly using the Newton-Raphson method.
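A minimal Newton-Raphson sketch for solving Eq 2 follows (illustrative NumPy code; the dimensions and random weights are placeholders, not the paper’s parameters). Note that the Newton Jacobian of *F*(**s**) = **s** − *g*(**h**) is exactly (*I* − *GK*), the same matrix that governs stability:

```python
import numpy as np

def g(x):
    return 1.0 / (1.0 + np.exp(-x))

def steady_state(W, K, T, x, s0=None, tol=1e-10, max_iter=100):
    """Solve s = g(W x + K s - T) (Eq 2) with Newton-Raphson."""
    N = K.shape[0]
    s = np.full(N, 0.5) if s0 is None else s0.copy()
    for _ in range(max_iter):
        h = W @ x + K @ s - T
        gh = g(h)
        F = s - gh
        if np.max(np.abs(F)) < tol:
            break
        G = np.diag(gh * (1.0 - gh))          # g'(h) for the logistic function
        s = s - np.linalg.solve(np.eye(N) - G @ K, F)
    return s

# Example: a small overcomplete network (M < N) with weak recurrence.
rng = np.random.default_rng(0)
M, N = 8, 12
W = rng.normal(scale=0.5, size=(N, M))
K = rng.normal(scale=0.1, size=(N, N))
T = rng.normal(size=N)
x = rng.uniform(0.0, 0.5, size=M)
s = steady_state(W, K, T, x)
```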

When the eigenvalues of *K* are near 4, the eigenvalues of (*I* − *GK*) might get close to zero. Crossing this point will result in instability of the fixed point and a phase transition. Near this phase transition, the decrease in the eigenvalues of (*I* − *GK*) will cause the effective time constants to rise—a phenomenon termed “critical slowing down”. To gain some insight into the actual effective time constant, we evaluated the convergence time of Eq 1 by integrating it using Euler’s method, and counting the number of time-steps until a convergence criterion was met.
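The convergence-time measurement can be sketched as follows (illustrative NumPy code; the step size, tolerance, and convergence criterion are assumptions, as the paper’s exact criterion is not given here):

```python
import numpy as np

def g(x):
    return 1.0 / (1.0 + np.exp(-x))

def euler_convergence_steps(W, K, T, x, dt=0.1, tol=1e-8, max_steps=200000):
    """Integrate ds/dt = -s + g(W x + K s - T) with Euler steps and count
    the steps until the per-step change drops below `tol`.
    Near criticality the effective time constants grow, so this count rises."""
    s = np.full(K.shape[0], 0.5)
    for step in range(1, max_steps + 1):
        ds = dt * (-s + g(W @ x + K @ s - T))
        s = s + ds
        if np.max(np.abs(ds)) < tol:
            return step, s
    return max_steps, s
```

Comparing the step count for *K* rescaled towards spectral radius 4 against a weakly recurrent *K* exhibits the critical slowing down described above.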

Furthermore, such a phase transition is expected to be characterized by a spontaneous symmetry breaking [113], which can be measured by several metrics. Here, we used the population vector for that purpose, calculated as the magnitude of $\sum_{k=1}^{N} s_k e^{i\phi_k}$, where *ϕ*_{k} ≡ 2*πk*/*N* and *k* is the index of the output neuron. Although in our case the boundary conditions are not periodic, we assume their effect to be negligible since *N* ≫ 1 and treat the preferred frequencies of the neurons as preferred angles.
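As a sketch, the population vector magnitude can be computed by summing the activities as complex phasors (illustrative code; the exact normalization used in the paper is not reproduced here): a flat activity profile yields a magnitude near zero, while a single localized bump yields a large magnitude.

```python
import numpy as np

def population_vector_magnitude(s):
    """Assign each output neuron k the angle phi_k = 2*pi*k/N and sum the
    activities as complex phasors; return the magnitude of the sum."""
    N = len(s)
    phi = 2.0 * np.pi * np.arange(N) / N
    return np.abs(np.sum(s * np.exp(1j * phi)))
```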

#### Learning rules.

The goal of the network is to find the set {*W**, *K**, *T**} which maximizes the entropy *H*(**s**) of the steady state outputs. To do so, we used the objective function described in [57], with additional regularization terms on the *ℓ*_{1} and *ℓ*_{2} norms of *W* and *K*, respectively:

$$E = -\frac{1}{2} \ln \det\left(\chi^{T}\chi\right) + \lambda_{W} \sum_{ij} |W_{ij}| + \frac{\lambda_{K}}{2} \sum_{ij} K_{ij}^{2}, \tag{3}$$

where *χ*_{ij} ≡ ∂*s*_{i}/∂*x*_{j} is the Jacobian of the transformation, given by *χ* = *ϕW*, and *ϕ* ≡ (*I* − *GK*)^{−1} *G* [57].

This objective function, without the regularization terms, would lead to an increase in the singular values of *χ*. One way to achieve that goal is to decrease the eigenvalues of (*I* − *GK*) to zero, which may lead one of them to turn slightly negative due to numerical errors. This will result in instability of the fixed point and a phase transition, as discussed above. The goal of the regularization terms is to prevent this phenomenon, which is a general property of unregularized entropy maximization systems of continuous variables [114].

The learning rules were derived using the gradient descent method, as in [57]:
(4)
where *S*(*A*) is defined elementwise by (*S*(*A*))_{ij} ≡ sign(*A*_{ij}) and *χ*^{+} stands for the pseudo-inverse of *χ* (in the overcomplete case used here, *χ*^{+} = (*χ*^{T}*χ*)^{−1}*χ*^{T}).

### Auditory inputs

The input stimuli were chosen according to certain heuristics to emulate the system’s response to tones of varying frequencies and amplitudes. Each input sample embodies the reaction of the auditory hair cells to a combination of tones of certain frequencies. Since the cochlea maps frequencies on a logarithmic scale, we assumed that adjacent input neurons, representing inner hair cells, correspond to equally log-spaced frequencies. The input profile for a pure tone is centered on the neuron that best matches that frequency and drops off toward neighboring neurons to form a narrow Gaussian response curve. The frequency of each pure tone was chosen at random with a uniform distribution (in the log-spaced frequency domain) within the permitted range. The amplitude of each pure tone was randomly drawn from a uniform distribution, reflecting the unimodal distribution of the logarithms of amplitudes in natural sounds (e.g., [115]). Other unimodal distributions, e.g., the normal distribution, may also be used to model the logarithms of the amplitudes. To account for the logarithmic response of hair cells and the auditory nerve to different amplitudes [116, 117], we modeled the distribution of the logarithms of the amplitudes rather than that of the raw amplitudes. In addition to the input response, all neurons feature some spontaneous random activity, irrespective of the inputs, to model the neurons’ reaction to background noises and non-stimulated motion of the hair cells (Fig 2A).

The amplitudes of natural sounds are not uniformly distributed; loud sounds are exponentially less common. However, the response of the inner hair cells is determined not only by the absolute amplitude of the sound, but also by the reactivity of the basilar membrane, as controlled by the outer hair cells. This serves as an automatic gain-control mechanism, allowing the inner hair cells to use their full range of motion for normal inputs. We therefore consider the uniform distribution a good approximation to the output of the inner hair cells when presented with natural sounds [118, 119].

To model sensory deprivation, we attenuated a part of the frequency domain by applying a (monotonically decreasing) sigmoid envelope to all stimuli. The choice of attenuating the higher frequencies in most attenuation profiles was based on the most common type of hearing loss [120, 121], but attenuation was also applied to other frequency bands (Fig 2B–2F).

### Implementation details

#### Input generation.

Each input sample was composed of up to 5 different tones, uniformly distributed in the input domain. The response to each tone was a Gaussian, with a folded-normally distributed standard deviation (the standard deviations themselves have a standard deviation of half the input domain) and a uniformly distributed amplitude between 7 and 10 (arbitrary units). An additive uniformly distributed noise between 0 and 1 was added to each simulated input sample. Finally, all input samples were divided by twice the highest activation obtained over all samples and input neurons, such that the new activations were in the range [0, 0.5].
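The recipe above can be sketched as follows (illustrative NumPy code; the number of input neurons, the tone-count distribution, and the random seeds are assumptions, not the paper’s exact implementation):

```python
import numpy as np

def generate_input(M=40, max_tones=5, rng=None):
    """One input sample: up to `max_tones` Gaussian tone responses with
    folded-normal widths (std of the widths = half the input domain) and
    uniform amplitudes in [7, 10], plus additive uniform noise in [0, 1]."""
    rng = np.random.default_rng() if rng is None else rng
    k = np.arange(M)
    x = rng.uniform(0.0, 1.0, size=M)              # background noise term
    for _ in range(rng.integers(1, max_tones + 1)):
        center = rng.uniform(0, M)                 # position on the log-frequency axis
        sigma = np.abs(rng.normal(0.0, M / 2))     # folded-normal tuning width
        amp = rng.uniform(7.0, 10.0)
        x += amp * np.exp(-0.5 * ((k - center) / max(sigma, 1e-6)) ** 2)
    return x

# Rescale the whole data set by twice its global maximum into [0, 0.5].
samples = np.array([generate_input(rng=np.random.default_rng(i)) for i in range(100)])
samples /= 2.0 * samples.max()
```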

#### Attenuation profiles.

Input attenuation of high frequencies was simulated by multiplying each input neuron’s activity by a factor between 0 and 1. This factor was chosen according to a sigmoid function: *a*(*k*) = 1/(1 + exp(−*β*(*k*_{0} − *k*))), where *k* is the input neuron’s index, *k*_{0} represents the cutoff frequency in the input neurons domain (analogous to the log-scaled frequency domain) and *β* controls the attenuation profile’s steepness. Here we chose *k*_{0} to be at either ½ (Fig 2B, 2D and 2F) or ¾ (Fig 2C) of the number of input neurons, and *β* to be either 10 (Fig 2B and 2C), 1 (Fig 2D) or -10 (a non-inverted sigmoid; Fig 2F). To simulate a hearing loss at a certain frequency band, we combined two sigmoidal functions to get the attenuation profile: *a*(*k*) = 1 − (1 − 1/(1 + exp(−*β*(*k*_{1} − *k*)))) ⋅ (1 − 1/(1 + exp(−*β*(*k* − *k*_{2})))), where *k*_{1} and *k*_{2} are the edges of the frequency band, defined similarly to *k*_{0} in the previous cases. Here, we chose *k*_{1} and *k*_{2} to be at ¼ and ¾ of the number of input neurons, respectively, and *β* to be 1 (Fig 2E).
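Both attenuation profiles can be written directly from the formulas above (illustrative NumPy code; the number of input neurons *M* is a placeholder):

```python
import numpy as np

def high_frequency_attenuation(M, k0, beta):
    """Sigmoid profile a(k) = 1 / (1 + exp(-beta * (k0 - k)))."""
    k = np.arange(M)
    return 1.0 / (1.0 + np.exp(-beta * (k0 - k)))

def band_attenuation(M, k1, k2, beta):
    """Band profile combining two sigmoids, attenuating frequencies
    between k1 and k2 (as in Fig 2E)."""
    k = np.arange(M)
    left = 1.0 / (1.0 + np.exp(-beta * (k1 - k)))
    right = 1.0 / (1.0 + np.exp(-beta * (k - k2)))
    return 1.0 - (1.0 - left) * (1.0 - right)
```

Applying a profile amounts to elementwise multiplication of each input sample by `a`, e.g. `x_attenuated = a * x`.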

#### Training schedule and hyper-parameters.

The network was trained in an on-line manner using 1,000,000 samples randomly drawn as described in the Input generation subsection. The training process was divided into three phases:

1. **Feed-forward training**: Only the feed-forward connections (*W*) and the thresholds (*T*) were trained using unattenuated inputs for 50,000 iterations. The learning rate was *η* = 0.1 and the feed-forward regularization coefficient was set to λ_{W} = 0.001. During this phase the recurrent connections were set to zero.
2. **Recurrent training**: Only the recurrent connections (*K*) were trained using unattenuated inputs for 1,000,000 iterations. The learning rate was *η* = 0.001 and the regularization coefficient was λ_{K} = 0.226 (see Regularization effect). During training, auto-synapses (from an output neuron to itself) were manually truncated to zero.
3. **Attenuated inputs training**: The training continued exactly as in the previous recurrent training phase (phase 2) for another 1,000,000 iterations, but now the inputs were attenuated.

We note that the number of iterations in each phase was chosen to be large enough to ensure full convergence of the learning process. In practice, the learning usually converges after far fewer iterations.

While the second learning phase was meant to simulate a normal development of the recurrent connectivity prior to the sensory deprivation, similar results to those displayed throughout the paper are also obtained without it (see S7–S10 Figs).

## Supporting information

### S1 Fig. The network’s recurrent connectivity before and after sensory deprivation for different attenuation profiles.

Each row of panels depicts the recurrent connectivity matrix and its average row profile after sensory deprivation, averaged separately for neurons in the deprived zone and the non-deprived zone. The rows match the attenuation profiles from panels C–F in Fig 2, respectively. See Fig 3 for further details.

https://doi.org/10.1371/journal.pcbi.1008664.s001

(TIF)

### S2 Fig. Global measures for different scaling of the recurrent connections.

A: The network’s objective function, without the regularization terms. B: The convergence time of the network dynamics using Euler’s method. C: The population vector magnitude. D: The squared correlation coefficient between pairs of output neurons, averaged over all such pairs. All the above measures are displayed for different scaling factors of the recurrent connectivity matrix *K*^{tr}, as found by the training process; i.e., for each value of the scaling factor *σ*, the different measures were evaluated by replacing the recurrent connectivity matrix with *K* = *σK*^{tr}. The recurrent connectivity matrices used here were obtained after sensory deprivation. The left and right panels correspond to attenuation profiles with *k*_{0} = 30, *β* = 10 and *k*_{0} = 20, *β* = 1, respectively (Fig 2C and 2D). The operating point is at a scaling factor of 1, namely, the recurrent connectivity the learning process has converged to. The marked critical point is the scaling factor for which the spectral radius *ρ*(*K*) of the recurrent connectivity matrix is 4, i.e., 4/*ρ*(*K*^{tr}). See Fig 5 for further details.

https://doi.org/10.1371/journal.pcbi.1008664.s002

(TIF)

### S3 Fig. Global measures for different scaling of the recurrent connections.

A: The network’s objective function, without the regularization terms. B: The convergence time of the network dynamics using Euler’s method. C: The population vector magnitude. D: The squared correlation coefficient between pairs of output neurons, averaged over all such pairs. All the above measures are displayed for different scaling factors of the recurrent connectivity matrix *K*^{tr}, as found by the training process; i.e., for each value of the scaling factor *σ*, the different measures were evaluated by replacing the recurrent connectivity matrix with *K* = *σK*^{tr}. The recurrent connectivity matrices used here were obtained after sensory deprivation. The left and right panels correspond to the last two attenuation profiles from Fig 2 (panels E and F, respectively). The operating point is at a scaling factor of 1, namely, the recurrent connectivity the learning process has converged to. The marked critical point is the scaling factor for which the spectral radius *ρ*(*K*) of the recurrent connectivity matrix is 4, i.e., 4/*ρ*(*K*^{tr}). See Fig 5 for further details.

https://doi.org/10.1371/journal.pcbi.1008664.s003

(TIF)

### S4 Fig. Regularization effect on the spectral radius of the recurrent connectivity matrix.

The spectral radius, *ρ*(*K*), of the recurrent connectivity matrix *K* as a function of the regularization coefficient λ_{K}, before and after the induction of different sensory deprivation profiles. See Fig 6 for further details.

https://doi.org/10.1371/journal.pcbi.1008664.s004

(TIF)

### S5 Fig. The network’s recurrent connectivity before and after sensory deprivation using *ℓ*_{1} regularization.

A–C: The recurrent connectivity matrix and its average row profile and connectivity distribution, before sensory deprivation. D–F: Same as A–C, but after sensory deprivation. In E, the row profiles were averaged separately for neurons in the deprived zone and the non-deprived zone. The attenuation profile’s parameters were *k*_{0} = 20, *β* = 10 (see Fig 2B). See Fig 3 for further details.

https://doi.org/10.1371/journal.pcbi.1008664.s005

(TIF)

### S6 Fig. The network’s response to different stimuli before and after sensory deprivation using *ℓ*_{1} regularization.

A: Typical stimuli and a silent stimulus (zero input—right panel). B: The network’s response to the stimuli presented in A. C: The network’s response to the stimuli presented in A after training on stimuli with attenuated high frequencies. The attenuation profile is depicted in gray. The spontaneous activity of the output neurons, defined here as the average activity in response to a silent stimulus before attenuation (as in the right panel of B), is indicated in B–C by a dashed line. See Fig 4 for further details.

https://doi.org/10.1371/journal.pcbi.1008664.s006

(TIF)

### S7 Fig. The network’s recurrent connectivity after sensory deprivation without pretraining the recurrent connections on normal stimuli.

A: The recurrent connectivity matrix. B: The average row profile of the recurrent connectivity matrix, averaged separately for neurons in the deprived zone and the non-deprived zone. The attenuation profile’s parameters were *k*_{0} = 20, *β* = 10 (see Fig 2B). See Fig 3 for further details.

https://doi.org/10.1371/journal.pcbi.1008664.s007

(TIF)

### S8 Fig. The network’s response to different stimuli before and after sensory deprivation, without pretraining the recurrent connections on normal stimuli.

A: Typical stimuli and a silent stimulus (zero input—right panel). B: The network’s response to the stimuli presented in A after training only the feed-forward connections. C: The network’s response to the stimuli presented in A after training on stimuli with attenuated high frequencies. The attenuation profile is depicted in gray. The spontaneous activity of the output neurons, defined here as the average activity in response to a silent stimulus before attenuation (as in the right panel of B), is indicated in B–C by a dashed line. See Fig 4 for further details.

https://doi.org/10.1371/journal.pcbi.1008664.s008

(TIF)

### S9 Fig. Global measures for different scaling of the recurrent connections, without pretraining the recurrent connections on normal stimuli.

A: The network’s objective function, without the regularization terms. B: The convergence time of the network dynamics using Euler’s method. C: The population vector magnitude. D: The squared correlation coefficient between pairs of output neurons, averaged over all such pairs. All the above measures are displayed for different scaling factors of the recurrent connectivity matrix *K*^{tr}, as found by the training process; i.e., for each value of the scaling factor *σ*, the different measures were evaluated by replacing the recurrent connectivity matrix with *K* = *σK*^{tr}. The recurrent connectivity matrix used here was obtained after sensory deprivation. The attenuation profile used had the parameters *k*_{0} = 20, *β* = 10. The operating point is at a scaling factor of 1, namely, the recurrent connectivity the learning process has converged to. The marked critical point is the scaling factor for which the spectral radius *ρ*(*K*) of the recurrent connectivity matrix is 4, i.e., 4/*ρ*(*K*^{tr}). See Fig 5 for further details.

https://doi.org/10.1371/journal.pcbi.1008664.s009

(TIF)

### S10 Fig. Regularization effect on the spectral radius of the recurrent connectivity matrix, without pretraining the recurrent connections on normal stimuli.

The spectral radius, *ρ*(*K*), of the recurrent connectivity matrix *K* as a function of the regularization coefficient λ_{K}, after the induction of sensory deprivation. See Fig 6 for further details.

https://doi.org/10.1371/journal.pcbi.1008664.s010

(TIF)

### S11 Fig. Simulated audiograms for different attenuation profiles.

A: A simulated audiogram without sensory deprivation. B–F: Simulated audiograms for different attenuation profiles, matching the ones in Fig 2B–2F, respectively. To simulate subjective hearing thresholds, the threshold of each frequency represents the input activity required to produce a difference of 0.01 (measured by *ℓ*_{∞}-norm) between a silent input and an input where only the specific frequency is active. The thresholds were found using the bisection method in the interval [0, 100], with a tolerance of 10^{−6}.

https://doi.org/10.1371/journal.pcbi.1008664.s011

(TIF)

## Acknowledgments

The authors wish to thank Avishalom Shalit, Israel Nelken, Miriam Guendelman and Jennifer Resnik for helpful discussions and valuable comments on the manuscript. The authors also thank the Israel Arts and Science Academy for providing the right environment in early stages of this study.

## References

- 1. Nicolas-Puel C, Akbaraly T, Lloyd R, Berr C, Uziel A, Rebillard G, et al. Characteristics of tinnitus in a population of 555 patients: Specificities of tinnitus induced by noise trauma. The International Tinnitus Journal. 2006;12(1):64–70. pmid:17147043
- 2. Saunders JC. The role of central nervous system plasticity in tinnitus. Journal of Communication Disorders. 2007;40(4):313–334. pmid:17418230
- 3. Roberts LE, Eggermont JJ, Caspary DM, Shore SE, Melcher JR, Kaltenbach JA. Ringing ears: The neuroscience of tinnitus. Journal of Neuroscience. 2010;30(45):14972–14979. pmid:21068300
- 4. Shargorodsky J, Curhan GC, Farwell WR. Prevalence and characteristics of tinnitus among US adults. The American Journal of Medicine. 2010;123(8):711–718. pmid:20670725
- 5. Langguth B, Kreuzer PM, Kleinjung T, Ridder DD. Tinnitus: Causes and clinical management. The Lancet Neurology. 2013;12(9):920–930. pmid:23948178
- 6. McCormack A, Edmondson-Jones M, Somerset S, Hall D. A systematic review of the reporting of tinnitus prevalence and severity. Hearing Research. 2016;337:70–79. pmid:27246985
- 7. Noreña A, Micheyl C, Chéry-Croze S, Collet L. Psychoacoustic characterization of the tinnitus spectrum: Implications for the underlying mechanisms of tinnitus. Audiology and Neurotology. 2002;7(6):358–369. pmid:12401967
- 8. Roberts LE, Moffat G, Baumann M, Ward LM, Bosnyak DJ. Residual inhibition functions overlap tinnitus spectra and the region of auditory threshold shift. Journal of the Association for Research in Otolaryngology. 2008;9(4):417–435. pmid:18712566
- 9. Weisz N, Hartmann T, Dohrmann K, Schlee W, Noreña A. High-frequency tinnitus without hearing loss does not mean absence of deafferentation. Hearing Research. 2006;222(1):108–114. pmid:17079102
- 10. Noreña AJ, Farley BJ. Tinnitus-related neural activity: Theories of generation, propagation, and centralization. Hearing Research. 2013;295:161–171. pmid:23088832
- 11. Zuckerman M, Cohen N. Sources of reports of visual and auditory sensations in perceptual-isolation experiments. Psychological Bulletin. 1964;62(1):1–20. pmid:14176649
- 12. Merabet LB, Maguire D, Warde A, Alterescu K, Stickgold R, Pascual-Leone A. Visual hallucinations during prolonged blindfolding in sighted subjects. Journal of Neuro-Ophthalmology. 2004;24(2). pmid:15179062
- 13. Schaette R, Turtle C, Munro KJ. Reversible induction of phantom auditory sensations through simulated unilateral hearing loss. PLoS One. 2012;7(6):1–6. pmid:22675466
- 14. Blom JD, Sommer IEC. Auditory hallucinations: nomenclature and classification. Cognitive and Behavioral Neurology. 2010;23(1). pmid:20299866
- 15. Eggermont JJ, Roberts LE. The neuroscience of tinnitus. Trends in Neurosciences. 2004;27(11):676–682. pmid:15474168
- 16. Rauschecker JP, Leaver AM, Mühlau M. Tuning out the noise: Limbic-auditory interactions in tinnitus. Neuron. 2010;66(6):819–826. pmid:20620868
- 17. Wang H, Brozoski TJ, Caspary DM. Inhibitory neurotransmission in animal models of tinnitus: Maladaptive plasticity. Hearing Research. 2011;279(1):111–117. pmid:21527325
- 18. Henry JA, Roberts LE, Caspary DM, Theodoroff SM, Salvi RJ. Underlying mechanisms of tinnitus: Review and clinical implications. Journal of the American Academy of Audiology. 2014;25(01):5–22. pmid:24622858
- 19. Brozoski TJ, Bauer CA, Caspary DM. Elevated fusiform cell activity in the dorsal cochlear nucleus of chinchillas with psychophysical evidence of tinnitus. Journal of Neuroscience. 2002;22(6):2383–2390. pmid:11896177
- 20. Brozoski TJ, Bauer CA. The effect of dorsal cochlear nucleus ablation on tinnitus in rats. Hearing Research. 2005;206(1):227–236. pmid:16081010
- 21. Wang H, Brozoski TJ, Turner JG, Ling L, Parrish JL, Hughes LF, et al. Plasticity at glycinergic synapses in dorsal cochlear nucleus of rats with behavioral evidence of tinnitus. Neuroscience. 2009;164(2):747–759. pmid:19699270
- 22. Dehmel S, Pradhan S, Koehler S, Bledsoe S, Shore S. Noise overexposure alters long-term somatosensory-auditory processing in the dorsal cochlear nucleus—Possible basis for tinnitus-related hyperactivity? Journal of Neuroscience. 2012;32(5):1660–1671. pmid:22302808
- 23. Kaltenbach JA. The dorsal cochlear nucleus as a participant in the auditory, attentional and emotional components of tinnitus. Hearing Research. 2006;216–217:224–234. pmid:16469461
- 24. Kaltenbach JA, Godfrey DA. Dorsal cochlear nucleus hyperactivity and tinnitus: Are they related? American Journal of Audiology. 2008;17(2):S148–S161. pmid:18978198
- 25. Tzounopoulos T. Mechanisms of synaptic plasticity in the dorsal cochlear nucleus: Plasticity-induced changes that could underlie tinnitus. American Journal of Audiology. 2008;17(2):S170–S175. pmid:18978197
- 26. Baizer JS, Manohar S, Paolone NA, Weinstock N, Salvi RJ. Understanding tinnitus: The dorsal cochlear nucleus, organization and plasticity. Brain Research. 2012;1485:40–53. pmid:22513100
- 27. Spirou GA, Davis KA, Nelken I, Young ED. Spectral integration by Type II interneurons in dorsal cochlear nucleus. Journal of Neurophysiology. 1999;82(2):648–663. pmid:10444663
- 28. Oertel D, Young ED. What’s a cerebellar circuit doing in the auditory system? Trends in Neurosciences. 2004;27(2):104–110. pmid:15102490
- 29. Zhang JS, Kaltenbach JA, Godfrey DA, Wang J. Origin of hyperactivity in the hamster dorsal cochlear nucleus following intense sound exposure. Journal of Neuroscience Research. 2006;84(4):819–831. pmid:16862546
- 30. Shore SE, Zhou J. Somatosensory influence on the cochlear nucleus and beyond. Hearing Research. 2006;216-217:90–99. pmid:16513306
- 31. Koehler SD, Pradhan S, Manis PB, Shore SE. Somatosensory inputs modify auditory spike timing in dorsal cochlear nucleus principal cells. European Journal of Neuroscience. 2011;33(3):409–420. pmid:21198989
- 32. Smith PF. Interactions between the vestibular nucleus and the dorsal cochlear nucleus: Implications for tinnitus. Hearing Research. 2012;292(1):80–82. pmid:22960359
- 33. Wu C, Stefanescu RA, Martel DT, Shore SE. Listening to another sense: somatosensory integration in the auditory system. Cell and Tissue Research. 2015;361(1):233–250. pmid:25526698
- 34. Shore SE, Koehler S, Oldakowski M, Hughes LF, Syed S. Dorsal cochlear nucleus responses to somatosensory stimulation are enhanced after noise-induced hearing loss. European Journal of Neuroscience. 2008;27(1):155–168. pmid:18184319
- 35. Shore SE. Plasticity of somatosensory inputs to the cochlear nucleus—Implications for tinnitus. Hearing Research. 2011;281(1):38–46. pmid:21620940
- 36. Zeng C, Yang Z, Shreve L, Bledsoe S, Shore S. Somatosensory projections to cochlear nucleus are upregulated after unilateral deafness. Journal of Neuroscience. 2012;32(45):15791–15801. pmid:23136418
- 37. Shore SE, Roberts LE, Langguth B. Maladaptive plasticity in tinnitus—triggers, mechanisms and treatment. Nature Reviews Neurology. 2016;12(3):150–160. pmid:26868680
- 38. Wu C, Stefanescu RA, Martel DT, Shore SE. Tinnitus: Maladaptive auditory–somatosensory plasticity. Hearing Research. 2016;334:20–29. pmid:26074307
- 39. Levine RA, Abel M, Cheng H. CNS somatosensory-auditory interactions elicit or modulate tinnitus. Experimental Brain Research. 2003;153(4):643–648. pmid:14600798
- 40. Sanchez TG, Rocha CB. Diagnosis and management of somatosensory tinnitus. Clinics. 2011;66(6):1089–1094. pmid:21808880
- 41. Marks KL, Martel DT, Wu C, Basura GJ, Roberts LE, Schvartz-Leyzac KC, et al. Auditory-somatosensory bimodal stimulation desynchronizes brain circuitry to reduce tinnitus in guinea pigs and humans. Science Translational Medicine. 2018;10(422). pmid:29298868
- 42. Schaette R, Kempter R. Computational models of neurophysiological correlates of tinnitus. Frontiers in Systems Neuroscience. 2012;6:34. pmid:22586377
- 43. Gerken GM. Central tinnitus and lateral inhibition: An auditory brainstem model. Hearing Research. 1996;97(1):75–83. pmid:8844188
- 44. Kral A, Majernik V. On lateral inhibition in the auditory system. General Physiology and Biophysics. 1996;15(2):109–127. pmid:8899416
- 45. Bruce IC, Bajaj HS, Ko J. Lateral-inhibitory-network models of tinnitus. IFAC Proceedings Volumes. 2003;36(15):359–363.
- 46. Parra LC, Pearlmutter BA. Illusory percepts from auditory adaptation. The Journal of the Acoustical Society of America. 2007;121(3):1632–1641. pmid:17407900
- 47. Dominguez M, Becker S, Bruce I, Read H. A spiking neuron model of cortical correlates of sensorineural hearing loss: Spontaneous firing, synchrony, and tinnitus. Neural Computation. 2006;18(12):2942–2958. pmid:17052154
- 48. Schaette R, Kempter R. Development of tinnitus-related neuronal hyperactivity through homeostatic plasticity after hearing loss: a computational model. European Journal of Neuroscience. 2006;23(11):3124–3138. pmid:16820003
- 49. Schaette R, Kempter R. Predicting tinnitus pitch from patients’ audiograms with a computational model for the development of neuronal hyperactivity. Journal of Neurophysiology. 2009;101(6):3042–3052. pmid:19357344
- 50. Schaette R, McAlpine D. Tinnitus with a normal audiogram: Physiological evidence for hidden hearing loss and computational model. Journal of Neuroscience. 2011;31(38):13452–13457. pmid:21940438
- 51. Chrostowski M, Yang L, Wilson HR, Bruce IC, Becker S. Can homeostatic plasticity in deafferented primary auditory cortex lead to travelling waves of excitation? Journal of Computational Neuroscience. 2011;30(2):279–299. pmid:20623168
- 52. Gault R, McGinnity TM, Coleman S. A computational model of thalamocortical dysrhythmia in tinnitus sufferers. IEEE Transactions on Neural Systems and Rehabilitation Engineering. 2018.
- 53. Gault R, McGinnity TM, Coleman S. Perceptual modeling of tinnitus pitch and loudness. IEEE Transactions on Cognitive and Developmental Systems. 2020;12(2):332–343.
- 54. Zeng FG. An active loudness model suggesting tinnitus as increased central noise and hyperacusis as increased nonlinear gain. Hearing Research. 2013;295:172–179. pmid:22641191
- 55. Krauss P, Tziridis K, Metzner C, Schilling A, Hoppe U, Schulze H. Stochastic resonance controlled upregulation of internal noise after hearing loss as a putative cause of tinnitus-related neuronal hyperactivity. Frontiers in Neuroscience. 2016;10:597. pmid:28082861
- 56.
Schilling A, Tziridis K, Schulze H, Krauss P. Chapter 6—The stochastic resonance model of auditory perception: A unified explanation of tinnitus development, Zwicker tone illusion, and residual inhibition. In: Langguth B, Kleinjung T, De Ridder D, Schlee W, Vanneste S, editors. Tinnitus—An Interdisciplinary Approach Towards Individualized Treatment: Towards understanding the complexity of tinnitus. vol. 262 of Progress in Brain Research. Elsevier; 2021. p. 139–157.
- 57. Shriki O, Sompolinsky H, Lee DD. An information maximization approach to overcomplete and recurrent representations. In: Leen TK, Dietterich TG, Tresp V, editors. Advances in Neural Information Processing Systems 13. MIT Press; 2001. p. 612–618.
- 58. Bell AJ, Sejnowski TJ. The “independent components” of natural scenes are edge filters. Vision Research. 1997;37(23):3327–3338. pmid:9425547
- 59. Shriki O, Yellin D. Optimal information representation and criticality in an adaptive sensory recurrent neuronal network. PLoS Computational Biology. 2016;12(2):e1004698. pmid:26882372
- 60. Shriki O, Sadeh Y, Ward J. The emergence of synaesthesia in a neuronal network model via changes in perceptual sensitivity and plasticity. PLoS Computational Biology. 2016;12(7):e1004959. pmid:27392215
- 61. Noreña AJ. An integrative model of tinnitus based on a central gain controlling neural sensitivity. Neuroscience & Biobehavioral Reviews. 2011;35(5):1089–1109. pmid:21094182
- 62. Depireux DA, Simon JZ, Klein DJ, Shamma SA. Spectro-temporal response field characterization with dynamic ripples in ferret primary auditory cortex. Journal of Neurophysiology. 2001;85(3):1220–1234. pmid:11247991
- 63. Nagel KI, Doupe AJ. Organizing principles of spectro-temporal encoding in the avian primary auditory area field L. Neuron. 2008;58(6):938–955. pmid:18579083
- 64. Rubel E, Parks T. Organization and development of brain stem auditory nuclei of the chicken: Tonotopic organization of n. magnocellularis and n. laminaris. The Journal of Comparative Neurology. 1975;164(4):411–433. pmid:1206127
- 65. Yu X, Sanes DH, Aristizabal O, Wadghiri YZ, Turnbull DH. Large-scale reorganization of the tonotopic map in mouse auditory midbrain revealed by MRI. Proceedings of the National Academy of Sciences. 2007;104(29):12193–12198. pmid:17620614
- 66. Morel A, Garraghty PE, Kaas JH. Tonotopic organization, architectonic fields, and connections of auditory cortex in macaque monkeys. Journal of Comparative Neurology. 1993;335(3):437–459. pmid:7693772
- 67. Romani GL, Williamson SJ, Kaufman L. Tonotopic organization of the human auditory cortex. Science. 1982;216(4552):1339–1340. pmid:7079770
- 68. Formisano E, Kim DS, Di Salle F, van de Moortele PF, Ugurbil K, Goebel R. Mirror-symmetric tonotopic maps in human primary auditory cortex. Neuron. 2003;40(4):859–869. pmid:14622588
- 69. Kandler K, Clause A, Noh J. Tonotopic reorganization of developing auditory brainstem circuits. Nature Neuroscience. 2009;12(6):711–717. pmid:19471270
- 70. Hohenberg PC, Halperin BI. Theory of dynamic critical phenomena. Reviews of Modern Physics. 1977;49:435–479.
- 71. Strogatz SH. Nonlinear dynamics and chaos: with applications to physics, biology, chemistry, and engineering. CRC Press; 2018.
- 72. Djurberg C, Svedlindh P, Nordblad P, Hansen MF, Bødker F, Mørup S. Dynamics of an interacting particle system: Evidence of critical slowing down. Physical Review Letters. 1997;79:5154–5157.
- 73. Dai L, Vorselen D, Korolev KS, Gore J. Generic indicators for loss of resilience before a tipping point leading to population collapse. Science. 2012;336(6085):1175–1177. pmid:22654061
- 74. König O, Schaette R, Kempter R, Gross M. Course of hearing loss and occurrence of tinnitus. Hearing Research. 2006;221(1):59–64. pmid:16962270
- 75. Kaltenbach JA, Zacharek MA, Zhang J, Frederick S. Activity in the dorsal cochlear nucleus of hamsters previously tested for tinnitus following intense tone exposure. Neuroscience Letters. 2004;355(1):121–125. pmid:14729250
- 76. Wu C, Martel DT, Shore SE. Increased synchrony and bursting of dorsal cochlear nucleus fusiform cells correlate with tinnitus. Journal of Neuroscience. 2016;36(6):2068–2073. pmid:26865628
- 77. Hastie T, Tibshirani R, Friedman J. The elements of statistical learning: data mining, inference, and prediction. 2nd ed. Springer Series in Statistics. Springer, New York, NY; 2009.
- 78. Reale RA, Imig TJ. Tonotopic organization in auditory cortex of the cat. Journal of Comparative Neurology. 1980;192(2):265–291. pmid:7400399
- 79. Escabí MA, Schreiner CE. Nonlinear spectrotemporal sound analysis by neurons in the auditory midbrain. Journal of Neuroscience. 2002;22(10):4114–4131. pmid:12019330
- 80. Humphries C, Liebenthal E, Binder JR. Tonotopic organization of human auditory cortex. NeuroImage. 2010;50(3):1202–1211. pmid:20096790
- 81. Levy RB, Reyes AD. Spatial profile of excitatory and inhibitory synaptic connectivity in mouse primary auditory cortex. Journal of Neuroscience. 2012;32(16):5609–5619. pmid:22514322
- 82. Pons TP, Garraghty PE, Ommaya AK, Kaas JH, Taub E, Mishkin M. Massive cortical reorganization after sensory deafferentation in adult macaques. Science. 1991;252(5014):1857–1861. pmid:1843843
- 83. Grüsser SM, Mühlnickel W, Schaefer M, Villringer K, Christmann C, Koeppe C, et al. Remote activation of referred phantom sensation and cortical reorganization in human upper extremity amputees. Experimental Brain Research. 2004;154(1):97–102. pmid:14557916
- 84. Ward J. Synesthesia. Annual Review of Psychology. 2013;64(1):49–75. pmid:22747246
- 85. Jacobs L, Karpik A, Bozian D, Gøthgen S. Auditory-visual synesthesia sound-induced photisms. Archives of Neurology. 1981;38(4):211–216. pmid:7213144
- 86. Armel KC, Ramachandran VS. Acquired synesthesia in retinitis pigmentosa. Neurocase. 1999;5(4):293–296.
- 87. Afra P, Funke M, Matsuo F. Acquired auditory-visual synesthesia: A window to early cross-modal sensory interactions. Psychology Research and Behavior Management. 2009;2:31. pmid:22110319
- 88. Ben-Yishai R, Bar-Or RL, Sompolinsky H. Theory of orientation tuning in visual cortex. Proceedings of the National Academy of Sciences. 1995;92(9):3844–3848. pmid:7731993
- 89. Ben-Yishai R, Hansel D, Sompolinsky H. Traveling waves and the processing of weakly tuned inputs in a cortical network module. Journal of Computational Neuroscience. 1997;4(1):57–77. pmid:9046452
- 90. Ma WLD, Hidaka H, May BJ. Spontaneous activity in the inferior colliculus of CBA/J mice after manipulations that induce tinnitus. Hearing Research. 2006;212(1):9–21. pmid:16307852
- 91. Mulders WHAM, Robertson D. Hyperactivity in the auditory midbrain after acoustic trauma: dependence on cochlear activity. Neuroscience. 2009;164(2):733–746. pmid:19699277
- 92. Gu JW, Herrmann BS, Levine RA, Melcher JR. Brainstem auditory evoked potentials suggest a role for the ventral cochlear nucleus in tinnitus. Journal of the Association for Research in Otolaryngology. 2012;13(6):819–833. pmid:22869301
- 93. Martel DT, Shore SE. Ventral cochlear nucleus bushy cells encode hyperacusis in guinea pigs. Scientific Reports. 2020;10(1):20594. pmid:33244141
- 94. Turrigiano GG, Leslie KR, Desai NS, Rutherford LC, Nelson SB. Activity-dependent scaling of quantal amplitude in neocortical neurons. Nature. 1998;391(6670):892–896. pmid:9495341
- 95. Turrigiano GG, Nelson SB. Hebb and homeostasis in neuronal plasticity. Current Opinion in Neurobiology. 2000;10(3):358–364. pmid:10851171
- 96. Turrigiano GG, Nelson SB. Homeostatic plasticity in the developing nervous system. Nature Reviews Neuroscience. 2004;5(2):97–107. pmid:14735113
- 97. Turrigiano GG. The self-tuning neuron: Synaptic scaling of excitatory synapses. Cell. 2008;135(3):422–435. pmid:18984155
- 98. Styr B, Slutsky I. Imbalance between firing homeostasis and synaptic plasticity drives early-phase Alzheimer’s disease. Nature Neuroscience. 2018;21(4):463–473. pmid:29403035
- 99. Beggs JM, Plenz D. Neuronal avalanches in neocortical circuits. Journal of Neuroscience. 2003;23(35):11167–11177. pmid:14657176
- 100. Tagliazucchi E, Balenzuela P, Fraiman D, Chialvo D. Criticality in large-scale brain fMRI dynamics unveiled by a novel point process analysis. Frontiers in Physiology. 2012;3:15. pmid:22347863
- 101. Palva JM, Zhigalov A, Hirvonen J, Korhonen O, Linkenkaer-Hansen K, Palva S. Neuronal long-range temporal correlations and avalanche dynamics are correlated with behavioral scaling laws. Proceedings of the National Academy of Sciences. 2013;110(9):3585–3590. pmid:23401536
- 102. Shriki O, Alstott J, Carver F, Holroyd T, Henson RNA, Smith ML, et al. Neuronal avalanches in the resting MEG of the human brain. Journal of Neuroscience. 2013;33(16):7079–7090. pmid:23595765
- 103. Massobrio P, Pasquale V, Martinoia S. Self-organized criticality in cortical assemblies occurs in concurrent scale-free and small-world networks. Scientific Reports. 2015;5(1):10578. pmid:26030608
- 104. Arviv O, Goldstein A, Shriki O. Near-critical dynamics in stimulus-evoked activity of the human brain and its relation to spontaneous resting-state activity. Journal of Neuroscience. 2015;35(41):13927–13942. pmid:26468194
- 105. Arviv O, Medvedovsky M, Sheintuch L, Goldstein A, Shriki O. Deviations from critical dynamics in interictal epileptiform activity. Journal of Neuroscience. 2016;36(48):12276–12292. pmid:27903734
- 106. Fekete T, Omer DB, O’Hashi K, Grinvald A, van Leeuwen C, Shriki O. Critical dynamics, anesthesia and information integration: Lessons from multi-scale criticality analysis of voltage imaging data. NeuroImage. 2018;183:919–933. pmid:30120988
- 107. Arviv O, Goldstein A, Shriki O. Neuronal avalanches and time-frequency representations in stimulus-evoked activity. Scientific Reports. 2019;9(1):13319. pmid:31527749
- 108. Priesemann V, Wibral M, Valderrama M, Pröpper R, Le Van Quyen M, Geisel T, et al. Spike avalanches in vivo suggest a driven, slightly subcritical brain state. Frontiers in Systems Neuroscience. 2014;8:108. pmid:25009473
- 109. Doya K, Ishii S, Pouget A, Rao RPN. Bayesian brain: probabilistic approaches to neural coding. 1st ed. Computational Neuroscience. MIT Press; 2007.
- 110. Sedley W, Friston KJ, Gander PE, Kumar S, Griffiths TD. An integrative tinnitus model based on sensory precision. Trends in Neurosciences. 2016;39(12):799–812. pmid:27871729
- 111. Noda K, Kitahara T, Doi K. Sound change integration error: An explanatory model of tinnitus. Frontiers in Neuroscience. 2018;12:831. pmid:30538615
- 112. Corlett PR, Horga G, Fletcher PC, Alderson-Day B, Schmack K, Powers AR. Hallucinations and strong priors. Trends in Cognitive Sciences. 2019;23(2):114–127. pmid:30583945
- 113. Arodz H, Dziarmaga J, Zurek WH. Patterns of symmetry breaking. vol. 127. Springer Science & Business Media; 2003.
- 114. Dayan P, Abbott LF. Theoretical neuroscience: computational and mathematical modeling of neural systems. Computational Neuroscience. The MIT Press; 2001.
- 115. Mishra AP, Harper NS, Schnupp JWH. Exploring the distribution of statistical feature parameters for natural sound textures. PLOS ONE. 2021;16(6):1–21. pmid:34161323
- 116. Sachs MB, Abbas PJ. Rate versus level functions for auditory-nerve fibers in cats: tone-burst stimuli. The Journal of the Acoustical Society of America. 1974;56(6):1835–1847. pmid:4443483
- 117. Dallos P. Response characteristics of mammalian cochlear hair cells. Journal of Neuroscience. 1985;5(6):1591–1608. pmid:4009248
- 118. Hudspeth A, Corey D. Sensitivity, polarity, and conductance change in the response of vertebrate hair cells to controlled mechanical stimuli. Proceedings of the National Academy of Sciences. 1977;74(6):2407–2411. pmid:329282
- 119. Russell I, Sellick P. Intracellular studies of hair cells in the mammalian cochlea. The Journal of Physiology. 1978;284(1):261–290. pmid:731538
- 120. Cruickshanks KJ, Wiley TL, Tweed TS, Klein BEK, Klein R, Mares-Perlman JA, et al. Prevalence of hearing loss in older adults in Beaver Dam, Wisconsin: The epidemiology of hearing loss study. American Journal of Epidemiology. 1998;148(9):879–886. pmid:9801018
- 121. Agrawal Y, Platz EA, Niparko JK. Prevalence of hearing loss and differences by demographic characteristics among US adults: Data from the National Health and Nutrition Examination Survey, 1999–2004. Archives of Internal Medicine. 2008;168(14):1522–1530. pmid:18663164