A Calcium-Dependent Plasticity Rule for HCN Channels Maintains Activity Homeostasis and Stable Synaptic Learning

Theoretical and computational frameworks for synaptic plasticity and learning have a long and cherished history, with few parallels within the well-established literature for plasticity of voltage-gated ion channels. In this study, we derive rules for plasticity in the hyperpolarization-activated cyclic nucleotide-gated (HCN) channels, and assess the synergy between synaptic and HCN channel plasticity in establishing stability during synaptic learning. To do this, we employ a conductance-based model for the hippocampal pyramidal neuron, and incorporate synaptic plasticity through the well-established Bienenstock-Cooper-Munro (BCM)-like rule for synaptic plasticity, wherein the direction and strength of the plasticity are dependent on the level of calcium influx. Under this framework, we derive a rule for HCN channel plasticity to establish homeostasis in synaptically driven firing rate, and incorporate such plasticity into our model. In demonstrating that this rule for HCN channel plasticity helps maintain firing rate homeostasis after bidirectional synaptic plasticity, we observe a linear relationship between synaptic plasticity and HCN channel plasticity for maintaining firing rate homeostasis. Motivated by this linear relationship, we derive a calcium-dependent rule for HCN-channel plasticity, and demonstrate that firing rate homeostasis is maintained in the face of synaptic plasticity when moderate and high levels of cytosolic calcium influx induce depression and potentiation of the HCN-channel conductance, respectively. Additionally, we show that such synergy between synaptic and HCN-channel plasticity enhances the stability of synaptic learning through metaplasticity in the BCM-like synaptic plasticity profile. Finally, we demonstrate that the synergistic interaction between synaptic and HCN-channel plasticity preserves robustness of information transfer across the neuron under a rate-coding schema.
Our results establish specific physiological roles for experimentally observed plasticity in HCN channels accompanying synaptic plasticity in hippocampal neurons, and uncover potential links between HCN-channel plasticity and calcium influx, dynamic gain control and stable synaptic learning.


Introduction
Theoretical and computational frameworks for synaptic plasticity have a long and cherished history, with proven utilities ranging from understanding the underlying biophysical and biochemical mechanisms to solving complex engineering problems [1][2][3][4][5][6][7][8]. A central question in synaptic learning systems is on how they retain their ability to learn in the future and maintain stability, in the face of the adaptations that they undergo during the learning process. A prominent postulate, with a large body of experimental and theoretical evidence in support, is that neural systems accomplish such stability through concurrent regulatory mechanisms that recruit plasticity in synaptic and/or intrinsic neuronal properties [1,[9][10][11][12][13][14][15][16][17]. Whereas several computational approaches have been helpful in enhancing our understanding of synaptically mediated mechanisms for stable learning [1,7,14,[17][18][19][20][21], mechanisms for stable synaptic learning mediated by intrinsic plasticity have not been studied in quantitative detail. Furthermore, biophysically and biochemically rooted plasticity rules, which have been demonstrably elucidative in the synaptic plasticity literature [3,4,6,17,22,23], have no counterparts in the intrinsic plasticity literature, thus contributing to the lacuna in models for stability through intrinsic plasticity.
The hyperpolarization-activated, cyclic nucleotide-gated (HCN) channels, which mediate the hyperpolarization-activated h current, have been postulated as a prominent mechanism that can mediate activity homeostasis in hippocampal neurons [13,[24][25][26][27]. In this study, we quantitatively examine the validity of this postulate, employing conductance-based models and biophysically rooted plasticity rules for the h conductance. In doing this, we first posed the question of what changes in the h conductance would be required for it to maintain activity homeostasis when perturbed by a calcium-dependent bidirectional synaptic plasticity mechanism.
We employed the answer to this question to arrive at a calcium-dependent plasticity rule (CDPR) for the h conductance, such that firing rate homeostasis was maintained when h-channel plasticity accompanied synaptic plasticity. Finally, employing calcium-dependent plasticity rules for synaptic strength and h-conductance magnitude, we demonstrate that the synergy between these two plasticity rules accomplishes much more than firing rate homeostasis. Specifically, we show that the co-occurrence of the two forms of plasticity enabled retention of synaptic weights within a useful dynamic range, introduced metaplasticity in the synaptic learning rule so that the positive feedback introduced by repeated synaptic potentiation was nullified, and facilitated reliable rate-based information transfer across the neuron in the face of that positive feedback.

Plasticity in h Conductance Maintained Firing Rate Homeostasis after Bidirectional Synaptic Plasticity
What changes in the h conductance are required for it to maintain firing rate homeostasis when it is perturbed by bidirectional synaptic plasticity? What should be the relationship between HCN channel plasticity and synaptic plasticity for the former to counteract the perturbation in the input-output relationship imposed by the latter? To answer these questions, we employed a conductance-based model of a hippocampal pyramidal neuron with ion channel kinetics derived from experimental measurements, and inserted a synapse comprising colocalized AMPARs and NMDARs into the model [13]. The synaptic drive to the model neuron (Fig. 1A) was modeled as Poisson-distributed presynaptic action potentials arriving at various stimulus frequencies (SF), with the output defined by the neuron's firing frequencies (FF). We presented the model with 100 trials of inputs at each SF and measured the FF (Fig. 1B) to construct the input-output relationship of the baseline (target) model (Fig. 1C). Then, we induced synaptic plasticity in the model synapse through a biophysical plasticity rule driven by Ca²⁺ influx through NMDARs [6,13]. This yielded a BCM-like plasticity profile as a function of induction frequency, f_i, when plasticity was induced through 900 synaptic stimuli at the given f_i (Fig. 1D-F) [6,13].
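The stimulus-generation step above can be sketched in a few lines. This is an illustrative reconstruction rather than the study's simulation code: the stimulus frequency, trial duration and trial count are assumed example values, and the firing-frequency readout would, in the actual model, come from the conductance-based neuron rather than the input spike count used here as a stand-in.

```python
import numpy as np

def poisson_spike_train(rate_hz, duration_s, rng):
    """Draw Poisson-distributed presynaptic spike times by accumulating
    exponentially distributed inter-spike intervals."""
    times, t = [], 0.0
    while True:
        t += rng.exponential(1.0 / rate_hz)
        if t >= duration_s:
            break
        times.append(t)
    return np.array(times)

# One point of the FF-SF curve: average the response over 100 trials
# (here the input spike count is only a sanity check on the generator).
rng = np.random.default_rng(42)
sf = 10.0  # stimulus frequency in Hz; an assumed example value
trials = [poisson_spike_train(sf, 1.0, rng) for _ in range(100)]
mean_rate = np.mean([len(tr) for tr in trials])
```

Sweeping `sf` over the tested range and reading out the model neuron's FF at each value would then trace out the input-output curve of Fig. 1C.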
When long-term potentiation (LTP) was induced in the model synapse through 900 stimuli at 25 Hz (Fig. 1F), the FF-SF plot shifted towards the left (Fig. 2A) as a direct consequence of the increased AMPAR conductance. This constitutes a perturbation in the FF of the neuron for a given synaptic drive, and activity homeostasis requires that the FF return to its target levels for all SFs. Our goal was to quantitatively assess the validity of the postulate that changes in the h conductance were sufficient for such compensation. To do this, we employed an iterative plasticity rule (IPR) for h-conductance plasticity, based on a gradient descent algorithm that minimized the mean-squared error (MSE) between the target FF and the post-LTP FF across all considered SFs [28]. We derived the IPR by parameterizing the FF-SF curve as a sigmoid (with slope parameter a and shift parameter b) and empirically arriving at the dependence of the sigmoidal parameters on the h conductance, g_h (see MODELS AND METHODS for the derivation of the IPR). Consequently, after the induction of LTP, we implemented the IPR over several iterations (k), involving an update of g_h as in equation (24). The learning rate parameter g_IPR (Equation 24) was set below a threshold that ensured stable convergence of the IPR (see MODELS AND METHODS for the derivation based on the Lyapunov stability criterion; Equation 40).
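The IPR described above can be sketched as follows. This is a minimal illustration of the gradient-descent idea only: the sigmoidal FF-SF curve matches the parameterization in the text, but the dependence of (a, b) on g_h, the learning rate, and all numerical values are placeholder assumptions, not the empirical fits behind equation (24).

```python
import numpy as np

FF_MAX = 30.0  # assumed maximal firing frequency (Hz)

def ff_sigmoid(sf, a, b):
    # FF-SF curve parameterized as a sigmoid with slope a and shift b
    return FF_MAX / (1.0 + np.exp(-a * (sf - b)))

def params_from_gh(g_h):
    # hypothetical dependence of the sigmoidal parameters on g_h:
    # increasing g_h flattens and right-shifts the curve
    return 0.5 - 0.5 * g_h, 8.0 + 20.0 * g_h

def ipr_step(g_h, sf_vals, ff_target, eta=1e-5):
    """One IPR iteration: gradient descent on the MSE between the current
    and target FF-SF curves, with the gradient taken w.r.t. g_h
    (finite difference stands in for the analytical gradient)."""
    def mse(g):
        a, b = params_from_gh(g)
        return np.mean((ff_sigmoid(sf_vals, a, b) - ff_target) ** 2)
    eps = 1e-6
    grad = (mse(g_h + eps) - mse(g_h - eps)) / (2.0 * eps)
    return g_h - eta * grad, mse(g_h)

# After an LTP-like leftward shift, the IPR raises g_h towards the target curve
sf_vals = np.linspace(0.0, 20.0, 9)
ff_target = ff_sigmoid(sf_vals, *params_from_gh(0.10))  # target: baseline g_h
g_h = 0.05                                              # post-LTP starting point
errors = []
for _ in range(200):
    g_h, err = ipr_step(g_h, sf_vals, ff_target)
    errors.append(err)
```

With a sufficiently small `eta` (as the Lyapunov bound of Equation 40 requires in the actual model), the error decreases monotonically and g_h climbs back towards the value that restores the target curve.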
When we applied the IPR on the h conductance of the model after the synapse underwent LTP (Fig. 2A), we found that the minimum learning rate (Equation 40) evolved as a function of activity (Fig. 2B) and that the h conductance increased in the process of reducing the MSE (Fig. 2C). This reduction in MSE translated into maintenance of firing rate homeostasis, whereby the FF at all SFs converged towards their target rates as an effect of plasticity in the h conductance (Fig. 2D). Finally, whereas HCN channel plasticity through the IPR was geared towards maintaining the input-output relationship with the synapses forming the input end, experimental results show a post-LTP reduction in neuronal intrinsic excitability assessed by direct pulse-current injections into the neuron [25,26]. Conforming to these experimental findings, our results also showed reduced excitability with LTP (Fig. 2E-F), implying a reduction in overall neuronal excitability that spans all synapses in the neuron.
Our results above demonstrated that an increase in the h conductance was sufficient to compensate for perturbations to the input-output relationship caused by LTP induction. However, a homeostatic role for HCN channels can be assigned only if such compensation is bidirectional. To assess this, we induced LTD through 900 synaptic stimuli at 15 Hz, and asked if changes in the h conductance through the IPR could compensate for the rightward shift in the FF-SF plot caused by LTD induction (Fig. 3A). We found that a reduction in the h conductance was sufficient to compensate for this perturbation (Fig. 3B-C), thus demonstrating the bidirectionality of such compensation and validating the postulate on a homeostatic role for HCN channels (Fig. 3D). Further, when we assessed firing frequency as a function of pulse-current injections, we found an increase in neuronal excitability with LTD (Fig. 3E-F), consistent with experimental findings [24]. We also performed sensitivity analyses across various baseline values for synaptic and HCN channel conductances, and found that the IPR retained homeostasis across a range of these baseline parameters (Fig. S1).

HCN Channel Plasticity is Linearly Dependent on Synaptic Plasticity for Maintaining Firing Rate Homeostasis
With these results providing quantitative evidence in support of the postulate of a homeostatic role for HCN channels, we next investigated the relationship between the plasticity in the h conductance that maintained activity homeostasis and the underlying synaptic plasticity that caused the perturbation to activity. To do this, we induced synaptic plasticity using several values of f_i (Fig. 1E-F), and employed the IPR to assess the amount of HCN channel plasticity required for maintaining activity homeostasis at each value of f_i. We found that the amount of HCN channel plasticity required for maintaining activity homeostasis was linearly related to the amount of synaptic plasticity that induced the perturbation (Fig. 4A). Such a linear relationship was surprising, given the several nonlinearities that underlie the model under consideration, especially the FF-SF relationship (Fig. 1C) and the voltage dependence of the HCN channel.
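Quantifying such a linear relationship reduces to a straight-line fit between the two plasticity measures. The paired values below are purely illustrative, not the study's data; they show only how the slope relating the two forms of plasticity would be extracted.

```python
import numpy as np

# Hypothetical paired measurements at several induction frequencies f_i:
# change in synaptic weight (from the BCM-like rule) versus the change in
# h conductance the IPR required to restore the target FF-SF curve.
delta_w  = np.array([-0.10, 0.00, 0.20, 0.50, 0.90])
delta_gh = np.array([-0.05, 0.00, 0.10, 0.25, 0.45])

# Degree-1 least-squares fit; a positive slope means potentiation demands
# an increase in g_h and depression a decrease, matching the bidirectional
# compensation described in the text.
slope, intercept = np.polyfit(delta_w, delta_gh, 1)
```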

A Calcium-dependent, h-conductance Update Mechanism that Accompanied Bidirectional Synaptic Plasticity Maintained Firing Rate Homeostasis
The analysis employing the IPR was geared towards validating the postulate on a homeostatic role for the h conductance and towards understanding the relationship between synaptic and HCN channel plasticity. To do that, we had computed the change required in the h conductance to maintain firing rate homeostasis after perturbing it through synaptic plasticity. However, experiments have demonstrated that synaptic plasticity and plasticity in measurements dependent on HCN channels evolve together as a function of time, with both forms of plasticity dependent on postsynaptic Ca²⁺ influx [24,25,26]. Taking these experimental results together with the linear relationship between synaptic and h-conductance plasticity (Fig. 4A), we postulated that a calcium-dependent rule could drive h-conductance plasticity concurrently with synaptic plasticity. Our results with the IPR provided us a quantitative foundation for testing this postulate. We employed the linear relationship between h-channel and synaptic plasticity (Fig. 4A), in conjunction with the calcium-dependent synaptic plasticity rule that we have employed [6,13], to arrive at a calcium-dependent plasticity rule (CDPR) for h-conductance plasticity (see MODELS AND METHODS for the derivation of the CDPR). In deriving this rule, we noted that the slope of the linear relationship between h-conductance and synaptic plasticity was variable, and depended on the baseline values of the HCN channel conductance and of the AMPAR permeability. We accounted for this variability and derived a generalized rule that worked across various baseline values (see MODELS AND METHODS for more details). The CDPR that we used for the normalized h conductance w_h was as in equation (41). As a consequence of the differences in slope of the linear relationship across different baseline parameters (Fig. 4A), the V_h([Ca²⁺]_i) function displayed different levels of saturation for different baseline parameters (Fig. 4B).
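Equation (41) itself is not reproduced here, but the qualitative shape of the rule can be sketched as a calcium-dependent steady-state function. All thresholds, gains and the baseline value below are assumptions chosen only to reproduce the stated signs, moderate calcium depressing w_h and high calcium potentiating it; they are not the fitted V_h function.

```python
import numpy as np

def sig(x, k, theta):
    # logistic function with gain k and threshold theta
    return 1.0 / (1.0 + np.exp(-k * (x - theta)))

def v_h(ca):
    """Illustrative V_h([Ca]) for the CDPR: baseline 0.5, a depression
    trough at moderate calcium, potentiation at high calcium.
    Thresholds and gains are assumed, not the derived equation (41)."""
    return 0.5 - 0.3 * sig(ca, 40.0, 0.3) + 0.8 * sig(ca, 40.0, 0.6)

def cdpr_step(w_h, ca, dt=0.05, tau=1.0):
    # Euler step relaxing the normalized h conductance w_h towards v_h(ca)
    return w_h + dt * (v_h(ca) - w_h) / tau

# Moderate calcium drives w_h below baseline; high calcium drives it above.
wh_mod, wh_high = 0.5, 0.5
for _ in range(200):
    wh_mod = cdpr_step(wh_mod, 0.45)   # moderate [Ca] -> depression
    wh_high = cdpr_step(wh_high, 0.80)  # high [Ca] -> potentiation
```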
We then tested if the CDPR was effective in maintaining firing rate homeostasis when synaptic plasticity was induced through 900 pulses at various values of f_i. To do this, we updated both the synaptic weight and the h conductance simultaneously through their respective Ca²⁺-dependent mechanisms. Consistent with the linear relationship that formed the basis for the CDPR, we found that the h conductance evolved in a manner (Fig. 4C) similar to the synaptic weight (Fig. 4D) across different values of f_i. We noted that plasticity in the h conductance concurrent with plasticity in synapses through the induction protocol would mean a metaplastic shift in the synaptic plasticity profile, given the ability of HCN channels to modulate Ca²⁺ influx [13]. As expected, implementing CDPR in parallel with synaptic plasticity induced a shift in the synaptic plasticity profile (Fig. 4E-F), in opposite directions for LTP and LTD, owing to the direction of change in the h conductance. Further, given that CDPR was implemented in parallel with synaptic plasticity, we found that h-channel plasticity compensated for the perturbation induced by synaptic plasticity, across the range of tested values of f_i and over a range of values for baseline conductances (Fig. 4G-I; Fig. S2). Finally, similar to our results with the IPR (Figs. 2 and 3), we confirmed that CDPR induced an increase in intrinsic excitability with synaptic depression, and a reduction in intrinsic excitability with synaptic potentiation (Fig. S2). In summary, our results demonstrated that firing rate homeostasis was maintained by HCN channel plasticity when CDPR accompanied bidirectional synaptic plasticity.

Synaptic Weights were Retained in a Useful Range as a Consequence of CDPR Accompanying Synaptic Plasticity
The CDPR for HCN channel plasticity was derived with the goal of retaining firing rate homeostasis in the face of bidirectional synaptic plasticity. Are there any other consequences of implementing firing rate homeostasis through HCN channel plasticity? We already noted that neuronal intrinsic excitability changed in a direction opposite to synaptic plasticity (Fig. S2), and that metaplasticity was induced when CDPR was implemented simultaneously with synaptic plasticity (Fig. 4E-F). What are the consequences of this metaplasticity that accompanies synaptic plasticity? Could the shift in the synaptic plasticity profile introduce stable synaptic learning, by retaining weights in a useful dynamic range?
We addressed these questions through multiple approaches. First, we assessed the temporal evolution of synaptic plasticity when the SF was abruptly switched [17], with and without CDPR being implemented in parallel (Fig. 5A-B). Doing this, we found that synaptic weight values were closer to their saturating limits when CDPR did not accompany synaptic plasticity. However, when CDPR accompanied synaptic plasticity, the synaptic weight was spared from reaching its saturating limit (Fig. 5A), thus retaining its ability to change in response to further plasticity-inducing stimuli. It may be noted that the amount of change with LTD was minimal (Fig. 5B), owing to the design of the synaptic plasticity protocol (Fig. 1D-F). Second, we directly tested if the presence of CDPR and the consequent metaplasticity (Fig. 4E-F) could alleviate the instability caused by the positive feedback loop induced by repeated induction of LTP. Specifically, it is known that synaptic potentiation increases AMPA receptor density, which results in increased Ca²⁺ influx, causing greater potentiation for the same LTP-inducing stimulus arriving after the first potentiation [6,13,17], and eventually leading to saturation of the synaptic weight. When CDPR did not accompany the LTP-inducing stimuli, repeated presentation of the stimulus led to a runaway increase in synaptic weight. However, when CDPR accompanied the consecutive LTP-inducing stimuli, the synaptic weight did not undergo a runaway increase, owing to the metaplastic effects of CDPR discussed above (Fig. 5C).
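This positive feedback loop, and the way CDPR breaks it, can be caricatured with a toy update in which each LTP run's calcium influx scales with the synaptic weight and is attenuated by the h conductance. Every quantity and coefficient here is illustrative, standing outside the biophysical model; the sketch only reproduces the qualitative contrast of Fig. 5C.

```python
def repeated_ltp(n_runs, with_cdpr):
    """Toy model of repeated LTP induction: each run's calcium influx grows
    with the synaptic weight w (positive feedback) but is divided by the
    normalized h conductance gh, which the CDPR potentiates at high
    calcium (negative feedback). Illustrative dynamics only."""
    w, gh = 1.0, 1.0
    for _ in range(n_runs):
        ca = w / gh         # calcium influx: scales with w, attenuated by gh
        w += 0.2 * ca       # LTP: potentiation proportional to calcium
        if with_cdpr:
            gh += 0.2 * ca  # CDPR: high calcium also potentiates gh
    return w

w_runaway = repeated_ltp(20, with_cdpr=False)  # geometric growth of w
w_bounded = repeated_ltp(20, with_cdpr=True)   # calcium stays clamped
```

Without the g_h update, w multiplies by 1.2 on every run; with it, the concurrent rise in gh clamps the calcium influx, and w grows only linearly.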
How do the metaplastic effects manifest themselves when presented with repeated LTP-inducing stimuli? To answer this, we repeatedly induced LTP and generated the synaptic plasticity profile after each LTP-inducing stimulus. As an increase in AMPAR permeability induces a leftward shift in the plasticity profile [6,13], in the absence of synergistic HCN channel plasticity, repeated induction of LTP and the associated runaway increase in AMPAR permeability (Fig. 5C) shifted the plasticity profile to an extent where all values of f_i led only to LTP (Fig. 5D). This hampers the bidirectional nature of synaptic plasticity, thus rendering the plasticity profile incapable of stable learning. However, when CDPR accompanied LTP, the leftward shift introduced by increased AMPAR permeability (which was not as pronounced as in the case without CDPR; Fig. 5C) was largely compensated by the rightward shift introduced by the increased h conductance [13]. This led to a homeostasis in the plasticity profile, where the bidirectional nature of the BCM-like plasticity profile was conserved and the profile remained largely invariant through the successive LTP inductions (Fig. 5E-F). Thus, HCN channel plasticity, accompanying synaptic plasticity through CDPR, apart from fulfilling its designed purpose of maintaining firing rate homeostasis, also retained synaptic weights within a useful range and maintained plasticity homeostasis to conserve the stable bidirectional nature of the plasticity profile.

Figure 2 legend: Firing frequencies for various amplitudes of non-synaptic pulse-current injections (of 500 ms) shown for cases before (black) and after IPR (green). Note that the induction of LTP alone does not alter the neuronal response to non-synaptic pulse-current injection. P_AMPAR = 10 nm/s and baseline g_h = 0.05 mS/cm². doi:10.1371/journal.pone.0055590.g002

Information Transfer Across the Neuron was Robust When CDPR Accompanied Synaptic Plasticity
What are the consequences of HCN channel plasticity and the consequent stability for neural coding? To answer this, we considered a rate-coding scheme, where the model neuron encoded incoming information, arriving as a stimulus firing rate, by modulating its own output firing rate. Mutual information was then calculated as an estimate of the accuracy with which the stimulus could be encoded by the neuron's response. The input to such a system was the stimulation of the model synapse with Poisson-distributed spike trains at different SFs, and mutual information was computed by considering the neuronal output frequency as the response. Under such a rate-coding scheme, we presented the synapse with repeated LTP-inducing stimuli, and asked how mutual information evolved with every run of LTP. In the absence of HCN channel plasticity, given the runaway excitation that accompanied repeated LTP (Fig. 5C) and the consequent shift in the FF-SF plot, we found that the response frequency was rendered non-discriminatory (across SFs) with increasing numbers of LTP runs (Fig. 6A-C). This, accompanied by a reduction in the dynamic range of the response (Fig. 6E), meant that information transfer across the neuron deteriorated with increasing numbers of LTP runs (Fig. 6F). However, when CDPR accompanied LTP, the discriminatory capability (Fig. 6D) and the dynamic range of the response frequency (Fig. 6E) were conserved, thus enabling robust information transfer across the neuron in the face of repeated LTP. Finally, we performed sensitivity analyses to ascertain if our results on the ability of CDPR to avert runaway excitation (Fig. 5), and to retain robustness of information transfer (Fig. 6) in the face of repeated potentiation, were robust to baseline parametric variation. Our results (Fig. S3) suggest that these conclusions held for a range of baseline parameters, thus establishing the robustness of CDPR.
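Mutual information under such a rate code can be estimated from paired (SF, FF) samples with a plug-in histogram estimator. A minimal sketch follows, with made-up stimulus and response values; the two cases mirror the result above, a discriminable response carrying close to the full stimulus entropy and a saturated, non-discriminatory response carrying essentially none.

```python
import numpy as np

def mutual_information_bits(stim, resp, bins=5):
    """Plug-in estimate of I(S;R) in bits from paired samples, using a
    2-D histogram as the empirical joint distribution."""
    joint, _, _ = np.histogram2d(stim, resp, bins=bins)
    p = joint / joint.sum()
    ps = p.sum(axis=1, keepdims=True)   # marginal over stimuli
    pr = p.sum(axis=0, keepdims=True)   # marginal over responses
    nz = p > 0
    return float(np.sum(p[nz] * np.log2(p[nz] / (ps @ pr)[nz])))

# Five stimulus levels, 100 trials each (illustrative values only)
stim = np.repeat(np.arange(5.0), 100)

# Discriminable responses (a distinct FF per SF) carry log2(5) ~ 2.32 bits;
# a saturated, constant response carries ~0 bits, as under runaway LTP.
mi_discriminable = mutual_information_bits(stim, 2.0 * stim)
mi_saturated = mutual_information_bits(stim, np.full(500, 40.0))
```

In practice the responses would be the noisy trial-by-trial FFs from the neuron model, and the estimate would be computed before and after each LTP run, as in Fig. 6F.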

Discussion
In this study, we have quantitatively established a homeostatic and stability-promoting role for HCN channel plasticity, which has been experimentally demonstrated to accompany synaptic plasticity. In establishing this, we derived biophysically rooted plasticity rules for the HCN channel, through which we present our hypothesis that the direction and magnitude of HCN channel plasticity are dependent on the levels of postsynaptic Ca²⁺ influx. We also demonstrated that HCN channel plasticity accompanying synaptic plasticity could play multiple parallel roles: retaining firing rate homeostasis, altering intrinsic excitability, retaining synaptic weights in a useful dynamic range by inducing metaplasticity, and enabling robust information transfer across a neuron under a rate-coding model. These, accompanied by other well-known physiological roles for HCN channels [29], establish the HCN channel as a powerful regulator of neural coding, learning and homeostasis (Fig. 7).

Physiological Relevance of the Calcium-dependent Update Rule of HCN Channels
Our Ca²⁺-dependent plasticity rule for HCN channels states that, beyond a certain threshold, moderate levels of Ca²⁺ influx induce h-conductance depression, whereas higher levels of Ca²⁺ influx induce h-conductance potentiation. Although this rule was derived purely from the perspective of maintaining firing rate homeostasis, multiple lines of experimental evidence support it. Prominent among these is the existence of bidirectional plasticity in the h current, wherein a low-frequency pairing protocol that induces LTD in the hippocampus was accompanied by a reduction in the h current [24], whereas a high-frequency pairing protocol that induces LTP was accompanied by an increase in the h current [25,26]. In another line of evidence, when plasticity in the h current was assessed as a function of the magnitude of LTP induced, weaker and stronger LTP were accompanied by a reduction and an increase in the h current, respectively [30]. These, together with the large body of literature on Ca²⁺-dependent synaptic plasticity and on how different levels of Ca²⁺ modulate synaptic plasticity in the hippocampus [4,6], provide considerable evidence for our postulate on HCN channel plasticity as a function of Ca²⁺ levels. Further, based on the design of the CDPR, the temporal evolution of the homeostatic plasticity driven by HCN channels and that of the synaptic plasticity are concurrent. Whereas this is contrary to the dogma that homeostatic plasticity is slower than mnemonic plasticity, there is evidence in the literature that both such forms of plasticity can evolve concurrently [15]. More importantly, synaptic efficacy and measurements that are sensitive to HCN channels evolve concurrently in cases where both have been shown to accompany each other [24][25][26], thus providing experimental evidence for our conclusions on concurrent and dynamic gain control accompanying synaptic plasticity.

Implications for Calcium-dependent Plasticity in HCN Channels
The coexistence of synaptic and intrinsic plasticity mechanisms within a neuron is now well established, with several postulates on how these two mechanisms could synergistically interact under physiological and pathophysiological conditions. One viewpoint has been that intrinsic plasticity can be broadly classified into two categories: mnemonic, where plasticity in intrinsic properties participates in the encoding process; and homeostatic, where its role is in promoting stability during the learning process [10]. Based on experimental evidence and prior hypotheses, we postulated a homeostatic role for HCN channel plasticity, quantitatively validated that postulate, and assessed the other consequences of such plasticity. In doing so, our analysis of HCN plasticity was limited to changes in excitability and activity homeostasis, to stable synaptic learning, and to information transfer under a rate-coding schema (Fig. 7). Future studies could expand such analyses to the impact of HCN plasticity on other physiological aspects governed by these channels, including their regulation of resting membrane potential [31,32,33,34], of theta-frequency resonance and phase modulation [26,35,36], and of post-inhibitory rebound [37].
Although HCN channel plasticity is relatively well studied, the HCN channel is just one among the several voltage-gated ion channels (VGICs) that have been demonstrated to change during learning protocols or under pathological conditions. Future studies could focus on deriving plasticity rules for these other channels, and assess the synergy of plasticity across different channels, including synaptic receptors, exploring their homeostatic/mnemonic roles with implications for learning theory and neural encoding [10,16,28,38,39]. These studies could also account for possible local plasticity in various ion channels, which would require morphologically realistic models, as opposed to the global plasticity [24,26] in HCN channels studied here. As recent literature has made it abundantly clear that the same physiological properties can be obtained through several non-unique combinations of underlying VGICs [31,33,34,[40][41][42], it is also important to understand how different VGICs and receptors interact with each other, and how their plasticity rules are linked to each other towards achieving the twin goals of information encoding and homeostasis [43]. Finally, drawing parallels from the synaptic plasticity literature [3], such studies should also endeavor to go beyond a simple Ca²⁺-dependent model for plasticity, and account for Ca²⁺ kinetics, downstream signaling pathways, the microdomains associated with these signaling pathways, and protein synthesis and trafficking, to understand plasticity in various VGICs and its synergies with synaptic plasticity. This is critical because plasticity in different channels and receptors occurs in unison under the same plasticity-inducing protocols [26,35,44,45], and because these forms of plasticity are linked to each other by specific signaling molecules, their kinetics and their subcellular locations [3,46].

Emergent Features of the Calcium-dependent Plasticity Rule for HCN Channels
Although our calcium-dependent plasticity rule was designed for achieving firing rate homeostasis, features emergent from the interactions between HCN plasticity and synaptic plasticity revealed several additional functional implications of the synergy between the two forms of plasticity (Fig. 7). First, when HCN plasticity accompanied synaptic plasticity, the saturation of synapses that followed repeated synaptic plasticity was averted, ensuring that synaptic strengths remained in a useful dynamic range (Fig. 5). In synaptic learning systems, it is critical that neurons and their networks retain their ability to learn, and this in turn depends on the learning process retaining synaptic weights within a useful dynamic range. Whereas our results indicate that plasticity in VGICs could act as a mechanism that enforces this in single-neuron models through metaplasticity (Fig. 5), it would be important to understand the implications of HCN plasticity for learning networks and their homeostasis. Although network homeostasis has been analyzed through synaptically driven mechanisms [1,7,14,[17][18][19][20][21], roles for VGICs and their plasticity in network homeostasis need exploration through computational models. Thus, future studies could explore the role of HCN plasticity through CDPR in network homeostasis, employing biophysically constrained models for neurons and for synaptic plasticity. In conjunction with experimentally constrained activity patterns, this would provide insights into the roles of HCN channels and their plasticity in regulating synaptic homeostasis in networks under physiological conditions as well as pathophysiological conditions such as depression and epilepsy [47][48][49][50].
The second feature that emerges from the interactions between the two forms of plasticity is that HCN plasticity enables neurons to preserve the robustness of information transfer in the face of repeated synaptic plasticity (Fig. 6). The ability of neurons to encode information, and to maximize mutual information, has been an active area of investigation across various neuroscience disciplines [51][52][53][54][55][56][57][58][59]. Although the role of VGICs in information maximization has been suggested and analyzed through generic intrinsic plasticity mechanisms [53,60], experimentally driven rules for specific VGICs have not been employed to analyze information maximization in the underlying neuronal systems. Our results suggest that plasticity in HCN channels, in conjunction with synaptic plasticity, preserves the robustness of such information transfer in the face of repeated synaptic plasticity (Fig. 6). These results present several avenues for future explorations linking various forms of plasticity in specific VGICs to neural coding in general and information maximization in particular. Specifically, future studies could focus on assessing the relationship between learning rules that maximize mutual information and those that maintain firing rate homeostasis, with reference to the VGIC that is being studied. For instance, whereas our update rule for HCN channels was designed to maintain firing rate homeostasis, we also observed that mutual information was preserved in the face of repeated synaptic plasticity. Broadly, these results suggest that homeostasis in the input-output relationship is essential for maintaining robustness in the transfer of information across the neuron. It would be of interest to test whether the converse, that retention of robust information transfer across neurons would establish homeostasis in the input-output relationship, is also true, so as to establish a tight quantitative relationship between neural coding and homeostasis.

Figure 7. Summary diagram illustrating the physiological implications of the calcium-dependent plasticity rule governing changes in the h current, and its interactions with calcium-dependent synaptic plasticity. (A) In the absence of a homeostatic mechanism, the calcium-dependent synaptic plasticity rule acts as a positive feedback loop. An increase in synaptic weight through LTP leads to further enhancement of synaptic strength, given the direct dependence of calcium influx on synaptic strength and the dependence of synaptic plasticity on calcium influx. Thus, repeated potentiation leads to saturation of synaptic strength. Whereas the diagram depicts the case for LTP, a similar diagram would follow for LTD, and synapses would die in the case of repeated depression. This positive feedback loop would thus lead to a loss of firing rate homeostasis (Figs. 2 and 3), saturation/death of synapses (during repeated potentiation/depression, respectively) ensuring that synapses do not retain a useful dynamic range (Fig. 5), and a loss in the robustness of information transfer across the neuron under a rate-coding schema (Fig. 6). (B) Under the scenario where the HCN channel conductance (g_h) was updated using CDPR, in parallel to changes in synaptic strength, the change in h current (I_h) alters excitability and temporal summation, thereby modulating calcium influx into the neuron. This change in calcium influx introduces a metaplastic shift in the synaptic plasticity profile [13] and forms a negative feedback mechanism that counters the positive feedback associated with the calcium-dependent synaptic plasticity in (A). Thus, HCN channel plasticity through CDPR acts as a homeostatic feedback mechanism, ensuring retention of firing rate homeostasis (Figs. 2, 3, and 4), of synapses in a useful dynamic range (Fig. 5), and of the robustness of information transfer across the neuron (Fig. 6). doi:10.1371/journal.pone.0055590.g007
Specifically, future studies could focus on the question of whether an update rule for HCN channels derived specifically to maximize mutual information would also maintain firing rate homeostasis, and ask whether a generalized relationship exists between rules that maximize mutual information and those that maintain firing rate [53,60].
From the standpoint of neural coding, analyses of information maximization in neural systems have led to a class of algorithms that perform independent component analysis, and the independent components that emerge from such algorithms have been linked to efficient encoding in neural systems [52,54,57,61-64]. In this context, although algorithms have been proposed for independent component analysis through synaptic and intrinsic plasticity mechanisms [54,57,65], none of them considers experimentally constrained plasticity rules for specific VGICs and their interactions with synaptic plasticity. Therefore, future explorations relating information maximization and VGIC plasticity could address whether experimentally constrained plasticity rules for specific VGICs, in conjunction with synaptic plasticity, could serve as substrates for information maximization and efficient coding of information.
In summary, in this study we derived experimentally grounded plasticity rules for one of the most well-studied VGICs, and demonstrated that the synergy between synaptic and VGIC plasticity retains stability in the synaptic learning system and enhances the robustness of information transfer across the neuron. We also provided specific quantitative relationships between plasticity in this VGIC and the influx of calcium that are required for such stability to be maintained, and presented experimental evidence in support of the specific quantitative relationship that we had derived. Our study establishes a broad framework for the coexistence of synaptic and VGIC plasticity, and emphasizes the need for considering all forms of plasticity in a holistic manner in assessing neural coding, learning theory, homeostasis, and the pathophysiology of neurological disorders [10,43,50,66]. Such analyses should also dissect the contributions of different forms of plasticity to specific physiological roles, by deriving specific rules for plasticity in each of these components and understanding the interactions between these components and their update rules.

Models and Methods
A single-compartmental, conductance-based neuronal model of length 50 µm and diameter 50 µm was used for all simulations. The reasons behind the choice of a single-compartmental model for our simulations were twofold. First, given the computational complexity associated with the plasticity paradigms (detailed below), incorporating these into a morphologically realistic model would entail an enormous increase in computational cost, given the increase in the number of neuronal compartments. Second, bidirectional plasticity in the h current has been demonstrated to be spatially widespread, encompassing synapses that have not been subjected to plasticity [24,25,26]. This global nature of h-current plasticity has also been shown to change measurements sensitive to h current by an equal percentage across the stretch of the apical trunk [26]. Therefore, we reasoned that implementing a morphologically realistic neuron and introducing a uniform percentage change in h conductance distributed throughout the morphology was conceptually equivalent to employing a single-compartmental model with an h conductance that was updated according to the plasticity rule. Thus, it was the global nature of the experimentally observed h-current plasticity that allowed us to employ a single-compartmental model as an abstraction for arriving at the plasticity rules presented below. In employing this model for our simulations, we constrained the parameters and kinetics of its constitutive components with experimental measurements from CA1 pyramidal neurons, the details of which are provided below.
The passive parameters associated with the model were: R_m = 28 kΩ·cm² and C_m = 1 µF/cm². Fast Na⁺, delayed rectifier K⁺ (KDR), A-type K⁺ (KA) and HCN channels were introduced, with kinetics adopted from experimental measurements of these channels in hippocampal pyramidal neurons [13,67-71]. Default values of maximum conductance densities (in mS/cm²) were set to qualitatively match (see Fig. 2F) the firing frequency vs. current (f-I) curves of hippocampal pyramidal neurons [26]: $\bar{g}_{Na}$ = 42, $\bar{g}_{KDR}$ = 5, $\bar{g}_{KA}$ = 1, $\bar{g}_h$ = 0.35. Reversal potentials for the h, Na⁺ and K⁺ channels, respectively, were (in mV): E_h = −30, E_Na = 55, E_K = −90. All simulations were performed at a resting potential of −65 mV in the NEURON simulation environment [72], with an integration time step of 25 µs.

Synapse Model
A synapse was modeled as a co-localized combination of NMDA and AMPA receptor currents. The default value of the NMDAR:AMPAR ratio was set at 1.5, and synaptic plasticity was induced in the model synapse by presenting 900 pulses at various induction frequencies [6,13,73,74]. The current through the NMDA receptor, as a function of voltage and time, was dependent on three ions: sodium, potassium and calcium. Consequently, as per the Goldman-Hodgkin-Katz (GHK) convention [13,75-78]:

$$I_{NMDA}(v,t) = I^{NMDA}_{Na}(v,t) + I^{NMDA}_{K}(v,t) + I^{NMDA}_{Ca}(v,t) \quad (1)$$

where the current carried by each ion X (of valence $z_X$) was:

$$I^{NMDA}_{X}(v,t) = \bar{P}_{NMDA}\, P_X\, s(t)\, MgB(v)\, \frac{z_X^2 v F^2}{RT} \left( \frac{[X]_i - [X]_o \exp(-z_X v F / RT)}{1 - \exp(-z_X v F / RT)} \right) \quad (2\text{-}4)$$

for X = Na⁺, K⁺ and Ca²⁺ respectively, where $\bar{P}_{NMDA}$ is the maximum permeability of the NMDA receptor. The relative permeability ratios were P_Ca = 10.6 and P_Na = P_K = 1, owing to the permeability of the NMDA receptor to these ions being P_Ca:P_Na:P_K = 10.6:1:1. Default values of concentrations were (in mM): [Na]_i = 18, [Na]_o = 140, [K]_i = 140, [K]_o = 5, [Ca]_i = 100 × 10⁻⁶, [Ca]_o = 2. The sodium and potassium concentrations were set such that their equilibrium potentials were at +55 mV and −90 mV, respectively. Evolution of intracellular calcium, driven by the NMDA-dependent calcium current $I^{NMDA}_{Ca}$ and its buffering, was modeled as in [13,79]:

$$\frac{d[Ca]_i}{dt} = \frac{-I^{NMDA}_{Ca}}{2F \cdot dpt} + \frac{[Ca]_\infty - [Ca]_i}{\tau_{Ca}} \quad (5)$$

where F is Faraday's constant, τ_Ca = 30 ms is the calcium decay constant, dpt = 0.1 µm is the depth of the shell, and [Ca]_∞ = 100 nM is the steady-state value of [Ca]_i. MgB(v) governs the magnesium dependence of the NMDA current, given as [80]:

$$MgB(v) = \left( 1 + \frac{[Mg]_o}{3.57} \exp(-0.062\, v) \right)^{-1} \quad (6)$$

with the default value of [Mg]_o set at 2 mM. s(t) governs the kinetics of the NMDA current, and is given as:

$$s(t) = a \left( \exp\!\left(-\frac{t}{\tau_d}\right) - \exp\!\left(-\frac{t}{\tau_r}\right) \right) \quad (7)$$

where a is a normalization constant ensuring that 0 ≤ s(t) ≤ 1, τ_d is the decay time constant and τ_r the rise time constant, with τ_r = 5 ms and τ_d = 50 ms [13,81]. Current through the AMPA receptor was modeled as the sum of the currents carried by sodium and potassium ions:

$$I_{AMPA}(v,t) = I^{AMPA}_{Na}(v,t) + I^{AMPA}_{K}(v,t) \quad (8)$$

where each ionic component followed the GHK formulation:

$$I^{AMPA}_{X}(v,t) = \bar{P}_{AMPAR}\, P_X\, w\, s(t)\, \frac{v F^2}{RT} \left( \frac{[X]_i - [X]_o \exp(-v F / RT)}{1 - \exp(-v F / RT)} \right) \quad (9\text{-}10)$$

for X = Na⁺ and K⁺ respectively, where $\bar{P}_{AMPAR}$ is the maximum permeability of the AMPA receptor, whose default value was set at 10 nm/s. The relative permeability ratios P_Na and P_K were equal, and set to 1 [81].
w is the weight parameter that undergoes activity-dependent plasticity (see below). s(t) was the same as that for the NMDA receptor, but with τ_r = 2 ms and τ_d = 10 ms [13,81].
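As an illustration of the GHK formulation described above, the per-ion flux and the magnesium block can be sketched in Python (a minimal sketch: the function names and unit conventions are ours, not from the paper's code, and a temperature of roughly 34 °C is assumed):

```python
import math

F = 96485.0   # Faraday's constant (C/mol)
R = 8.314     # gas constant (J/(mol*K))
T = 307.15    # absolute temperature (K), assuming ~34 degrees C

def ghk_flux(v, z, perm, conc_in, conc_out):
    """GHK current density for one ion: v in volts, valence z,
    permeability perm (m/s), concentrations in mol/m^3 (= mM).
    Positive values denote outward current."""
    u = z * v * F / (R * T)
    if abs(u) < 1e-9:
        # limit of the GHK expression as v -> 0
        return perm * z * F * (conc_in - conc_out)
    return perm * z * F * u * (conc_in - conc_out * math.exp(-u)) / (1.0 - math.exp(-u))

def mg_block(v_mV, mg_out=2.0):
    """Jahr-Stevens magnesium dependence of the NMDAR (v in mV, [Mg]o in mM)."""
    return 1.0 / (1.0 + (mg_out / 3.57) * math.exp(-0.062 * v_mV))
```

For the NMDAR calcium component, for instance, the flux would further be scaled by the NMDAR permeability terms (P_Ca, s(t) and MgB(v)); note that the calcium flux is inward (negative) at typical potentials, since [Ca]_o ≫ [Ca]_i.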

Induction of Synaptic Plasticity
Synaptic weight parameter w (see Equations 9 and 10) was updated as a function of intracellular calcium, following the calcium control hypothesis [4,6,13,82]. Specifically,

$$\frac{dw}{dt} = \eta_w([Ca]_i)\, \big( \Omega_w([Ca]_i) - w \big) \quad (11)$$

where $\eta_w([Ca]_i)$ is the calcium-dependent learning rate, inversely related to the learning time constant $\tau_w([Ca]_i)$:

$$\eta_w([Ca]_i) = \frac{1}{\tau_w([Ca]_i)} \quad (12)$$

$$\tau_w([Ca]_i) = \frac{P_1}{P_2 + [Ca]_i^{P_4}} + P_3 \quad (13)$$

with P_1 = 1 s, P_2 = 0.1, P_3 = P_2 × 10⁻⁴, P_4 = 3. $\Omega_w([Ca]_i)$ had the following form:

$$\Omega_w([Ca]_i) = 0.25 + \frac{1}{1 + \exp\!\big(-\beta_2([Ca]_i - \alpha_2)\big)} - \frac{0.25}{1 + \exp\!\big(-\beta_1([Ca]_i - \alpha_1)\big)} \quad (14)$$

with α_1 = 0.35, α_2 = 0.55, β_1 = 80, β_2 = 80. In all of the above weight update equations, for compatibility, [Ca]_i is taken as [Ca]_i − 100 nM, the elevation above the resting calcium concentration. Unless otherwise stated, the default initial value of w, w_init, was set at 0.25. Using this framework, the direction and strength of plasticity were analyzed by presenting stimuli made up of 900 pulses at various induction frequencies (f_i spanning 0.5-25 Hz), an experimentally well-established plasticity protocol [73,74]. The evolution of the weight given by Equation (11) was monitored, and the final weight at the end of the induction protocol was recorded for each frequency. The percentage difference between this final weight and the initial weight (0.25) was plotted against the induction frequency of the stimulus pulses to obtain the synaptic plasticity profile as a function of induction frequency [6,13]. Figure 1F provides an example of such a plasticity profile generated with our model. For LTP-saturation experiments (Figs. 5 and 6), unless otherwise stated, we induced LTP in every trial through a 15 Hz/900 pulses protocol. We did not assess the implications of LTD saturation, given that the LTD-induced change in synaptic weight was minimal (e.g., Fig. 1F), owing to the definition of the $\Omega_w([Ca]_i)$ function (Equation 14).
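The weight dynamics of Equations (11)-(14) can be sketched as a simple Euler integration (a sketch under our reading of the rule; calcium is expressed in µM above rest, and the parameter values are those listed above):

```python
import math

def sig(x, beta):
    """Logistic sigmoid with slope beta."""
    return 1.0 / (1.0 + math.exp(-beta * x))

def omega_w(ca):
    """Direction/magnitude of synaptic plasticity vs. calcium (Equation 14).
    ca is [Ca]i above rest (uM); alpha1/alpha2 act as LTD/LTP thresholds."""
    alpha1, alpha2, beta1, beta2 = 0.35, 0.55, 80.0, 80.0
    return 0.25 + sig(ca - alpha2, beta2) - 0.25 * sig(ca - alpha1, beta1)

def tau_w(ca, P1=1.0, P2=0.1, P4=3.0):
    """Calcium-dependent learning time constant (Equation 13), in seconds."""
    P3 = P2 * 1e-4
    return P1 / (P2 + ca ** P4) + P3

def update_weight(w, ca, dt):
    """One Euler step of dw/dt = eta([Ca]) * (Omega([Ca]) - w) (Equation 11)."""
    eta = 1.0 / tau_w(ca)
    return w + dt * eta * (omega_w(ca) - w)
```

At resting calcium the weight stays near its fixed point 0.25; moderate elevations pull it toward 0 (depression) and large elevations pull it toward 1 (potentiation), reproducing the BCM-like profile of Figure 1F.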

Derivation of Iterative Plasticity Rule (IPR) for g h Update
In our model, we employed the neuronal firing frequency (FF) as the output variable y, with the stimulus frequency (SF) constituting the input variable x. When input spike trains of n different SFs, $x = (x_1\ x_2\ \cdots\ x_n)^T$, were fed to the synapse, our model postsynaptic neuron responded with corresponding FFs, $y = (y_1\ y_2\ \cdots\ y_n)^T$, which were employed in generating the FF-SF plot (Fig. 1B-C). In order to sustain firing rate homeostasis in the face of synaptic plasticity, this FF-SF plot ought to remain invariant across different values of synaptic weight. To implement this, we defined $y^{tar} = (y^{tar}_1\ y^{tar}_2\ \cdots\ y^{tar}_n)^T$ as the target firing frequencies for the corresponding stimulus frequencies, and defined firing rate homeostasis as the ability of the model neuron to maintain this target firing frequency across stimulus frequencies. Note that this target firing frequency is a function of the active and passive properties of the neuron and the properties of the synaptic receptors, and would vary if any of the baseline parameters (e.g., baseline $\bar{P}_{AMPAR}$ and $\bar{g}_h$) were altered. Now, if synaptic plasticity were induced and the FF-SF plot (y) changed as a consequence of this altered synaptic weight (e.g., Fig. 2A), then, to maintain homeostasis, we needed to minimize the mean square error (MSE), ξ, between y and $y^{tar}$:

$$\xi = \frac{1}{n} \sum_{i=1}^{n} \left( y^{tar}_i - y_i \right)^2 \quad (15)$$

To minimize this function using the standard gradient descent algorithm, we needed a differentiable, parameterized form for the FF-SF curve [28]. Based on the structure of the FF-SF curve (Fig. 1C), we employed a sigmoid as a functional form to represent this input-output relationship. Specifically, the output firing frequency y_i as a function of the input stimulus frequency x_i was written as:

$$y_i = \frac{y_{max}}{1 + \exp\!\big(-a\,(w x_i - b)\big)} \quad (16)$$

where y_max was the maximum firing frequency, w was the weight of the synapse under consideration, a quantified the slope of the linear part of this sigmoid, and b represented the offset in the curve.
Under such parameterization, we chose the slope a as the parameter over which ξ was minimized, because we empirically observed that the induction of synaptic plasticity significantly altered the slope in comparison to the offset of this function. In what follows, we derive the update equations for this parameter such that ξ is minimized. First, differentiating (15) with respect to a:

$$\frac{\partial \xi}{\partial a} = -\frac{2}{n} \sum_{i=1}^{n} \left( y^{tar}_i - y_i \right) \frac{\partial y_i}{\partial a} \quad (17)$$

Evaluating $\partial y_i / \partial a$ after plugging in the functional form for y_i from (16), we have:

$$\frac{\partial y_i}{\partial a} = \frac{y_{max}\, (w x_i - b)\, \exp\!\big(-a(w x_i - b)\big)}{\Big( 1 + \exp\!\big(-a(w x_i - b)\big) \Big)^2} \quad (18)$$

From (16), this reduces to:

$$\frac{\partial y_i}{\partial a} = (w x_i - b)\, y_i \left( 1 - \frac{y_i}{y_{max}} \right) \quad (19)$$

Rewriting (17),

$$\frac{\partial \xi}{\partial a} = -\frac{2}{n} \sum_{i=1}^{n} \left( y^{tar}_i - y_i \right) (w x_i - b)\, y_i \left( 1 - \frac{y_i}{y_{max}} \right) \quad (20)$$

To follow a gradient descent algorithm to minimize ξ with respect to a, at each iteration, a had to be updated such that we proceeded along the negative of this gradient direction. To be specific, at each iteration k, we implemented the following plasticity rule:

$$a(k+1) = a(k) + \eta_{IPR}\, \Delta a(k) \quad (21)$$

where $\eta_{IPR}$ was the learning rate parameter for updating a, and Δa was the negative of the gradient in (20):

$$\Delta a(k) = -\frac{\partial \xi(k)}{\partial a} \quad (22)$$

Whereas the derivation thus far has focused on minimizing ξ with reference to the slope parameter a, our goal was to minimize ξ, and thus achieve homeostasis, by changing the maximal h conductance g_h. As we already had the update equations for a, we needed the relationship between a and g_h to derive the update equation for g_h. We estimated this relationship empirically by obtaining FF-SF profiles for different values of g_h, and parameterized these profiles as in (16) to obtain the corresponding values of a. Upon doing this, we found that changes in a and b were linearly related to changes in g_h, with slopes m_a and m_b respectively:

$$\Delta a = m_a\, \Delta g_h; \qquad \Delta b = m_b\, \Delta g_h \quad (23)$$

We employed this linear relationship to update g_h at each iteration k as follows:

$$g_h(k+1) = g_h(k) + \Delta g_h(k) \quad (24)$$

where $\Delta g_h$ was given by:

$$\Delta g_h(k) = \frac{\eta_{IPR}\, \Delta a(k)}{m_a} \quad (25)$$

As changes in g_h altered the FF-SF curve, this gradient descent approach enabled the minimization of ξ, thus maintaining firing rate homeostasis after it was perturbed by the induction of synaptic plasticity.
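The gradient descent loop of Equations (16)-(22) can be sketched as follows (a sketch: the parameter values, the fixed offset b and the convergence settings are illustrative assumptions, not the paper's):

```python
import math

def ff(x, w, a, b, y_max=60.0):
    """Sigmoidal FF-SF curve (Equation 16)."""
    return y_max / (1.0 + math.exp(-a * (w * x - b)))

def ipr_slope_descent(a, b, w, sfs, y_tar, eta, n_iter, y_max=60.0):
    """Gradient descent on the slope parameter a (Equations 17-22).
    The corresponding conductance step would be eta * delta_a / m_a
    (Equation 25), with m_a estimated from FF-SF fits at different g_h."""
    n = len(sfs)
    for _ in range(n_iter):
        grad = 0.0
        for x, yt in zip(sfs, y_tar):
            y = ff(x, w, a, b, y_max)
            dyda = (w * x - b) * y * (1.0 - y / y_max)   # Equation 19
            grad += -(2.0 / n) * (yt - y) * dyda         # Equations 17, 20
        a += eta * (-grad)                               # Equations 21-22
    return a
```

Starting from a perturbed slope, the loop converges toward the slope that generated the target FF-SF curve, which is then mapped onto a conductance change through the empirically estimated m_a.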
Although we updated the HCN-channel conductance to alter the FF-SF curve, and derived our optimization procedure for g_h, it should be noted that plasticity could also be implemented through changes in the half-maximal activation voltage of the h conductance [13,83]. As the goal was to alter the h current, we used the conductance as the means to do so, although the procedure outlined above generalizes to the half-maximal activation voltage of the h conductance.

Derivation of Learning Rate Based on Lyapunov Stability Criterion
Under what constraints does the algorithm derived in the previous section converge? Is there a limit on the learning rate parameter η_IPR such that this algorithm converges stably? To answer these questions, we sought to derive constraints on the trajectory of our error function (Equation 15) that guarantee convergence of our dynamical update system. To do this, we considered the mean squared error function (Equation 15) as the Lyapunov function [84,85], because the trajectory of this error under the adaptation of g_h would determine the convergence of the iterative algorithm. Under such a framework, k, the iteration number during the gradient descent optimization procedure (above), would constitute the state of the system, and ξ(k), describing the dynamics of the system, would represent the energy function of the system, v(k):

$$v(k) = \xi(k) \quad (26)$$
where

$$\xi(k) = \frac{1}{n} \sum_{i=1}^{n} e_i^2(k) \quad (27)$$

and

$$e_i(k) = y^{tar}_i - y_i(k) \quad (28)$$

is the error function of the system. Lyapunov theory states that if there is a function v(·) such that v(·) is positive definite and the increment of the function, Δv(k), satisfies the criterion Δv(k) < 0 for all k ∈ Z, k ≠ 0, then the system is globally asymptotically stable over the entire state space [84,85]. Based on our choice of v(k) as the error function:

$$\Delta v(k) = v(k+1) - v(k) = \frac{1}{n} \sum_{i=1}^{n} \left( e_i^2(k+1) - e_i^2(k) \right) \quad (29)$$

Writing $\Delta e_i(k) = e_i(k+1) - e_i(k)$, we obtained:

$$\Delta v(k) = \frac{1}{n} \sum_{i=1}^{n} \left( \big( e_i(k) + \Delta e_i(k) \big)^2 - e_i^2(k) \right) \quad (30)$$

which was rewritten as:

$$\Delta v(k) = \frac{1}{n} \sum_{i=1}^{n} \Delta e_i(k) \left( 2 e_i(k) + \Delta e_i(k) \right) \quad (31)$$

As the gradient descent algorithm derived for g_h plasticity in (25) was based on minimizing the error between y and $y^{tar}$, the change in error Δe_i can be calculated from the change in g_h by writing:

$$\Delta e_i(k) = \frac{\partial e_i(k)}{\partial g_h}\, \Delta g_h \quad (32)$$

Δg_h was obtained from the gradient descent algorithm (above):

$$\Delta g_h = -\eta_{IPR}\, \frac{\partial \xi(k)}{\partial g_h} \quad (33)$$

For a given SF i, and given equation (27), we write:

$$\frac{\partial \xi(k)}{\partial g_h} = \frac{2}{n}\, e_i(k)\, \frac{\partial e_i(k)}{\partial g_h} \quad (34)$$
Substituting the value of Δg_h in (32),

$$\Delta e_i(k) = -\frac{2\,\eta_{IPR}}{n}\, e_i(k) \left( \frac{\partial e_i(k)}{\partial g_h} \right)^2 \quad (35)$$

Substituting Δe_i(k) in (31),

$$\Delta v(k) = \frac{1}{n}\, \Delta e_i(k) \left( 2 e_i(k) + \Delta e_i(k) \right) \quad (36)$$

Simplifying this, we got:

$$\Delta v(k) = -\frac{4\,\eta_{IPR}}{n^2}\, e_i^2(k) \left( \frac{\partial e_i(k)}{\partial g_h} \right)^2 \left( 1 - \frac{\eta_{IPR}}{n} \left( \frac{\partial e_i(k)}{\partial g_h} \right)^2 \right) \quad (37)$$

To satisfy the Lyapunov stability criterion, this increment, Δv(k), of the Lyapunov function has to be negative, which implies:

$$1 - \frac{\eta_{IPR}}{n} \left( \frac{\partial e_i(k)}{\partial g_h} \right)^2 > 0 \quad (38\text{-}39)$$

From this, we derived the constraint on the learning rate parameter η_IPR for the system to be globally asymptotically stable, thereby ensuring convergence:

$$0 < \eta_{IPR} < \frac{n}{\left( \partial e_i(k) / \partial g_h \right)^2} \quad (40)$$

$\partial e_i(k) / \partial g_h$ was computed using the definition of $e_i(k)$ in (28), in a manner similar to arriving at (19), noting that the parameters a and b in (16) were linearly related to g_h with slopes m_a and m_b, respectively.
Thus, the Lyapunov stability criterion would be satisfied, ensuring convergence of the IPR, if the learning rate parameter satisfied the inequality in (40). To ensure this, in implementing the IPR, we always set η_IPR to a value smaller than this bound.
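The bound in (40) can be illustrated with the scalar error dynamics implied by Equations (32)-(35), in which each iteration multiplies the error by $1 - 2\eta_{IPR}(\partial e_i/\partial g_h)^2/n$ (a sketch with an assumed, fixed error sensitivity):

```python
def eta_bound(de_dgh, n):
    """Learning-rate bound of Equation 40 for one stimulus frequency."""
    return n / (de_dgh * de_dgh)

def error_trajectory(e0, de_dgh, eta, n, steps):
    """Iterate Delta e = -(2*eta/n) * (de/dgh)^2 * e (Equation 35)."""
    e = e0
    traj = [e0]
    for _ in range(steps):
        e += -(2.0 * eta / n) * de_dgh * de_dgh * e
        traj.append(e)
    return traj
```

For example, with ∂e_i/∂g_h = 2 and n = 1, the bound is 0.25: learning rates below it shrink the error magnitude at every step, while rates above it make successive errors grow, violating the requirement Δv(k) < 0.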

Derivation of Calcium Dependent Plasticity Rule (CDPR)
The linear interaction between intrinsic and synaptic plasticity (Fig. 4A) forms the basis for the derivation of the calcium-dependent g_h plasticity rule. Motivated by this linear relationship, the state variable w_h that drives g_h plasticity in (41) follows a calcium-dependent update analogous to the synaptic weight update in (11):
$$\frac{dw_h}{dt} = \eta_{CDPR}([Ca]_i)\, \big( \Omega_h([Ca]_i) - w_h \big) \quad (41)$$

where $\eta_{CDPR}([Ca]_i)$ is the calcium-dependent learning rate, inversely related to the learning time constant $\tau_{CDPR}([Ca]_i)$:

$$\eta_{CDPR}([Ca]_i) = \frac{1}{\tau_{CDPR}([Ca]_i)} \quad (42)$$

$$\tau_{CDPR}([Ca]_i) = \frac{P_1}{P_2 + [Ca]_i^{P_4}} + P_3 \quad (43)$$

with P_1 = 1 s, P_2 = 0.1, P_3 = P_2 × 10⁻⁴, P_4 = 3. $\Omega_h([Ca]_i)$ had the same form as $\Omega_w([Ca]_i)$, with its offsets set by the parameter f defined below (Fig. 4B):

$$\Omega_h([Ca]_i) = f + \frac{1}{1 + \exp\!\big(-\beta_2([Ca]_i - \alpha_2)\big)} - \frac{f}{1 + \exp\!\big(-\beta_1([Ca]_i - \alpha_1)\big)} \quad (44)$$

with α_1 = 0.35, α_2 = 0.55, β_1 = 80, β_2 = 80; note that (44) reduces to the form of (14) when f = 0.25. In all of the above update equations, for compatibility, [Ca]_i is taken as [Ca]_i − 100 nM. Unless otherwise stated, the default initial value of w_h, w_hinit, was set at 0.25. f is the normalized h conductance, given as (Fig. S1E):

$$f = \frac{g_h^{base}}{g_h^{base} + \Delta g_h^{max}} \quad (45)$$

where $g_h^{base}$ was the baseline h conductance, and $\Delta g_h^{max}$ was the change in h conductance required for retaining homeostasis after the induction of maximal LTP (300%).
The rationale behind using f was as follows. If the relationship between the change in synaptic weight, Δw, and the change required in the h conductance, Δg_h, to maintain firing rate homeostasis after such synaptic plasticity were linear with a slope of 1, then Δw would be exactly equal to Δg_h, without any requirement for an offset parameter. However, from our analyses of the sensitivities of the IPR (Fig. 4A), it was evident that the slope was not always 1, and varied as a function of $\bar{P}_{AMPAR}$ and $g_h^{base}$, even though the relationship between Δw and Δg_h remained linear across a range of parameters (Fig. 4A). Therefore, if Δw were 300%, the Δg_h required for maintaining homeostasis would not necessarily be 300%. In order to account for such non-unity slopes of the linear relationship shown in Figure 4A, and thereby generalize our plasticity rule across a range of underlying parameters, we introduced the offset-correction parameter f. To arrive at f for any given parametric combination, we induced LTP (Δw = 300%) with a 25 Hz/900 pulses stimulus, asked what %Δg_h was required to retain firing rate homeostasis thereafter (using Fig. 4A), and assigned that as $\Delta g_h^{max}$. We then computed f as in (45) (Fig. S1E), and adjusted $\Omega_h$ accordingly. As Δg_h vs. Δw was a linear relationship, and we had two points on this straight line ((0, 0) and ($\Delta g_h^{max}$, 300%)), we adjusted $\Omega_h$ appropriately, employing this linear relationship (Equations (44)-(45)).
Furthermore, as we wanted to retain w_hinit, the initial value of w_h, at 0.25, given the steady-state conditions associated with this dynamical system, we employed the offset parameter in our final update equation as well:

$$g_h = g_h^{base} + \Delta g_h^{max}\, (w_h + f - 0.25) \quad (46)$$

If the linear relationship between Δw and Δg_h had unit slope, f would be 0.25, and (46) would reduce to $g_h = g_h^{base} + \Delta g_h^{max}\, w_h$, which is similar to the synaptic plasticity rule for updating the AMPAR conductance based on w and the baseline value $\bar{P}_{AMPAR}$ (Equations (9)-(11)). When the slope was not unity, and f deviated from 0.25, the offset in (46) ensured that the update was consistent with the change in slope observed in Figure 4A. Thus, incorporating the offset parameter f into Equations (44) and (46) ensured that our rule generalized across a range of baseline values of $\bar{P}_{AMPAR}$ and $\bar{g}_h$, rather than working only for the case where the linear relationship between Δw and Δg_h had unit slope.
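Under our reading of Equations (45)-(46), the offset correction and the resulting conductance update can be sketched as follows (the exact algebraic form of (46) is reconstructed here from the stated constraints, so treat it as an assumption rather than the paper's verbatim rule):

```python
def offset_f(gh_base, dgh_max):
    """Normalized h conductance (Equation 45). Equals 0.25 when the
    Delta_w vs Delta_gh relationship has unit slope, i.e. when
    dgh_max = 3 * gh_base for a maximal (300%) LTP."""
    return gh_base / (gh_base + dgh_max)

def gh_update(w_h, gh_base, dgh_max):
    """Conductance update driven by the CDPR state variable w_h
    (Equation 46, as reconstructed): reduces to
    gh_base + dgh_max * w_h when f = 0.25."""
    f = offset_f(gh_base, dgh_max)
    return gh_base + dgh_max * (w_h + f - 0.25)
```

With, say, gh_base = 0.35 mS/cm² and a unit-slope dgh_max of 3 × 0.35, f evaluates to exactly 0.25 and the update collapses to the simple scaled form; any other dgh_max shifts f, and with it the operating point of the rule.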
Finally, our computation of $\Delta g_h^{max}$ was from Figure 4A, which was obtained employing the IPR. We wanted to make this computation independent of the IPR. To do this, we searched for a function that reflected the sensitivities of h-channel plasticity to the underlying parameters (Fig. S1A). Such a function would enable us to arrive at the relationship between Δw and Δg_h (Fig. 4A) without employing the IPR. We empirically found that the function $y_{25} - \xi^2/100$ (Fig. S1F; $y_{25}$ is the FF for a 25 Hz SF, and ξ is as in (15)) reflected the sensitivities portrayed in Fig. S1A over a large range of underlying parameters. We employed this function to arrive at $\Delta g_h^{max}$ through an optimization procedure matched against the sensitivity analysis presented for the IPR (Fig. S1A), in which H(·), the Heaviside function, ensured that no plasticity occurred if the mean square error ξ (Equation 15) was less than a minimum threshold value, $\xi_{min}$. Once $\Delta g_h^{max}$ was obtained for a given set of parameters, the updated value of g_h was computed from Equations (41)-(46), as previously.

Computation of Mutual Information
Mutual information between the output spikes and the input stimulus was employed as a measure of the encoding capability of the neuron, and was computed as the difference between the total response entropy and the noise entropy [58,59]:

$$I_m = H - H_{noise} \quad (48)$$

where I_m is the mutual information, H is the total response entropy, a measure of response variability, and H_noise is the noise entropy. H was calculated as:

$$H = -\sum_r p[r] \log_2 p[r] \quad (49)$$

where p[r] is the response probability distribution of a firing rate represented by r, over the entire range of input stimulus frequencies. The input stimulus frequency, s, fed to the synapse was varied from 5 to 25 Hz in steps of 1 Hz. The input spike timings were Poisson distributed, thus providing variability in response frequency for the same stimulus frequency, and the distribution of response frequencies for a given stimulus frequency s was represented as p[r|s]. Each stimulus frequency was presented for 900 trials (each of 1 s duration) and the firing rate for each trial was calculated from the response of the neuron. For each stimulus frequency, p[r|s] was then constructed from the first- and second-order statistics of these responses across trials, with an implicit assumption of a normal distribution for p[r|s] (Fig. 6A). This procedure was repeated for each stimulus frequency (Fig. 6A). The response probability p[r], for each response frequency r, was then computed as (Fig. 6E):

$$p[r] = \sum_s p[r\,|\,s]\; p[s] \quad (50)$$

In computing this, we considered p[s] to be uniformly distributed, as the presentation of different stimulus frequencies was equally probable. Plugging p[r] into Equation (49) yielded the response entropy. To compute the noise entropy, we first computed the entropy of the responses for a given stimulus, s, as:

$$H[r\,|\,s] = -\sum_r p[r\,|\,s] \log_2 p[r\,|\,s] \quad (51)$$

Noise entropy was then computed as:

$$H_{noise} = \sum_s p[s]\; H[r\,|\,s] \quad (52)$$

We computed mutual information by inserting the computed values of the response and noise entropies into Equation (48).
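The entropy computations of Equations (48)-(52) can be sketched over discretized response bins (a sketch; the binning and the example distributions are illustrative, not the paper's):

```python
import math

def mutual_information(p_r_given_s, p_s):
    """I_m = H - H_noise (Equations 48-52).

    p_r_given_s: one response distribution p[r|s] per stimulus (each a
    list over response bins summing to 1); p_s: stimulus priors.
    Returns mutual information in bits."""
    n_bins = len(p_r_given_s[0])
    # Equation 50: marginal response distribution p[r]
    p_r = [sum(p_s[j] * p_r_given_s[j][k] for j in range(len(p_s)))
           for k in range(n_bins)]
    # Equation 49: total response entropy
    H = -sum(p * math.log2(p) for p in p_r if p > 0)
    # Equations 51-52: noise entropy, averaged over stimuli
    H_noise = sum(p_s[j] * -sum(p * math.log2(p)
                                for p in p_r_given_s[j] if p > 0)
                  for j in range(len(p_s)))
    return H - H_noise
```

Two fully separable stimuli yield 1 bit of mutual information, whereas identical response distributions across stimuli yield 0 bits, matching the intuition that I_m quantifies how much of the response variability is attributable to the stimulus.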

Other Measurements
The synaptic drive to the model neuron was modeled as Poisson-distributed presynaptic action potentials arriving at various stimulus frequencies (SF). We presented the model with 100 trials of inputs at each SF and measured the corresponding firing frequencies (FF; Fig. 1C) to construct the input-output relationship of the model, with the neuron resting at −65 mV. We fixed the membrane potential in order to measure the effect of synaptic and/or h-channel plasticity on the FF-SF curve without altering the driving forces and/or the voltage-dependent conductances of the channels present in the model [24,25,26,32]. Each trial consisted of synaptically driving the neuron at the specific SF for a 1 s period, and FF was computed by counting the number of action potentials in this 1 s period. FF for each SF was represented as mean ± SEM, computed over the 100 trials for that SF (Fig. 1C). It should be noted that the maximum SF used in arriving at this FF-SF curve was dependent on the specific choices of the underlying parameters, especially the baseline values of AMPAR permeability and h conductance, the two parameters that underwent plasticity. This was to ensure the sigmoidal parametric form of the FF-SF curve, which would be lost if firing entered a high-frequency regime where it would be limited by the refractory period or by depolarization-induced inactivation of the fast Na⁺ channels.
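The trial structure described above (Poisson-distributed inputs, 1 s trials, mean ± SEM across trials) can be sketched as follows (the spike generator stands in for the presynaptic drive only; the full conductance-based model is not reproduced here):

```python
import math
import random
import statistics

def poisson_spike_train(rate_hz, duration_s, rng):
    """Spike times of a homogeneous Poisson process (the presynaptic drive)."""
    t, spikes = 0.0, []
    while True:
        t += rng.expovariate(rate_hz)   # exponential inter-spike intervals
        if t > duration_s:
            return spikes
        spikes.append(t)

def ff_mean_sem(counts):
    """Mean and SEM of firing frequency across 1 s trials."""
    mean = statistics.mean(counts)
    sem = statistics.stdev(counts) / math.sqrt(len(counts))
    return mean, sem
```

Running 100 such 1 s trials at a given SF and counting spikes per trial yields the mean ± SEM point plotted for that SF in an FF-SF curve of the kind shown in Figure 1C.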
Firing frequencies in response to non-synaptic pulse-current injections (e.g., Fig. 2E-F) were determined by injecting pulse currents of various amplitudes into the neuronal model, and computing the number of action potentials fired per second.

Supporting Information

Figure S2 Sensitivity analysis for the calcium-dependent plasticity rule (CDPR), also demonstrating changes in excitability obtained after CDPR. (A) Plot depicting the effectiveness of CDPR in achieving firing rate homeostasis after synaptic plasticity induced with 900 pulses at different induction frequencies. As CDPR runs in parallel with synaptic plasticity, homeostatic gain control runs in parallel, reducing the root mean squared error (RMSE) between the achieved FF-SF curve and the target FF-SF curve to below 1 Hz across all induction frequencies. (B) 3-D plot showing the updated h conductance for various AMPAR permeability and baseline h-conductance values, obtained by implementing CDPR for h-channel plasticity in parallel with synaptic plasticity. Note that the sensitivity analysis presented here for CDPR is over a range smaller than the one presented for IPR. This was because the FF-SF curve lost its sigmoidal characteristic at larger AMPAR permeabilities and/or lower baseline $\bar{g}_h$. (C) Traces showing neuronal firing for a non-synaptic pulse-current injection, before (black) and after (green) LTP and CDPR. (D) Firing frequencies for various amplitudes of non-synaptic pulse-current injections (of 500 ms), shown before (black) and after (green) LTP and CDPR. (E-F) Same as (C)-(D), but for LTD implemented in parallel with CDPR. (TIF)

Figure S3 Sensitivity analysis for information transfer across the neuron. (A) Impact of repeated LTP (20.15 Hz/900 pulses, each induction) on AMPAR permeability, depicted as a function of the number of LTP inductions, plotted for cases where synaptic plasticity was accompanied (green) or not accompanied (black) by CDPR. $\bar{P}_{AMPAR}$ = 25 nm/s and baseline $\bar{g}_h$ = 0.25 mS/cm². (B) Mutual information plotted as a function of the number of successive LTP runs, for cases where CDPR accompanied (green) or did not accompany (black) synaptic plasticity, with parametric values as in (A). (C) Impact of repeated LTP (13.1 Hz/900 pulses, each induction) on AMPAR permeability, depicted as a function of the number of LTP inductions, plotted for cases where synaptic plasticity was accompanied (green) or not accompanied (black) by CDPR. $\bar{P}_{AMPAR}$ = 36 nm/s and baseline $\bar{g}_h$ = 0.35 mS/cm². (D) Mutual information plotted as a function of the number of successive LTP runs, for cases where CDPR accompanied (green) or did not accompany (black) synaptic plasticity, with parametric values as in (C).
(E) Impact of repeated LTP (10.55 Hz/900 pulses, each induction) on AMPAR permeability, depicted as a function of the number of LTP inductions, plotted for cases where synaptic plasticity was accompanied (green) or not accompanied (black) by CDPR. $\bar{P}_{AMPAR}$ = 45 nm/s and baseline $\bar{g}_h$ = 0.55 mS/cm². (F) Mutual information plotted as a function of the number of successive LTP runs, for cases where CDPR accompanied (green) or did not accompany (black) synaptic plasticity, with parametric values as in (E). The frequencies employed for induction are the sliding-threshold frequencies for the corresponding baseline parameters $\bar{P}_{AMPAR}$ and $\bar{g}_h$. (TIF)