
Nonequilibrium Enhances Adaptation Efficiency of Stochastic Biochemical Systems

  • Chen Jia ,

    jiac@utdallas.edu

    Affiliations Beijing Computational Science Research Center, Beijing, China, Department of Mathematical Sciences, The University of Texas at Dallas, Richardson, Texas, United States of America

  • Minping Qian

    Affiliation School of Mathematical Sciences, Peking University, Beijing, China


Abstract

Adaptation is a crucial biological function possessed by many sensory systems. Early work has shown that some influential equilibrium models can achieve accurate adaptation. However, recent studies indicate that there are close relationships between adaptation and nonequilibrium. In this paper, we provide an explanation of these two seemingly contradictory results based on Markov models with relatively simple networks. We show that as the nonequilibrium driving becomes stronger, the system under consideration will undergo a phase transition along a fixed direction: from non-adaptation to simple adaptation then to oscillatory adaptation, while the transition in the opposite direction is forbidden. This indicates that although adaptation may be observed in equilibrium systems, it tends to occur in systems far away from equilibrium. In addition, we find that nonequilibrium will improve the performance of adaptation by enhancing the adaptation efficiency. All these results provide a deeper insight into the connection between adaptation and nonequilibrium. Finally, we use a more complicated network model of bacterial chemotaxis to validate the main results of this paper.

Introduction

Adaptation is a crucial biological function by which a living system becomes better suited to its environment [1–5]. Specifically, adaptation refers to the ability of a biochemical system to recover to its initial state after a transient response to an abrupt environmental change. An intuitive description of adaptation is illustrated in Fig 1(a). In response to a step increase of the input stimulus, the output of the system undergoes a transient increase to a peak, followed by a decrease to a lower plateau [3, 4].

Fig 1. Adaptation and overshoot in biochemical systems.

(a) Adaptation. (b) Overshoot and non-overshoot.

https://doi.org/10.1371/journal.pone.0155838.g001

Another important concept associated with adaptation is overshoot. Overshoot refers to the dynamic phenomenon in which the output of a system varies non-monotonically with time and exceeds both its initial and steady-state values over a certain period (see Fig 1(b)). In general, an adaptation system will perform overshoot. The concepts of adaptation and overshoot are thus closely related.

Adaptation is observed in a wide variety of sensory systems and its examples include the chemotaxis of bacteria [6–8], the osmotic shock response in yeast [9], the odor response in mammalian olfactory receptors [10, 11], the light response in mammalian retinal rods [12], the mechanical stress response in mammalian mechanoreceptors [13, 14], the calcium response in mammalian inositol trisphosphate receptors [15, 16] and ryanodine receptors [17, 18], and the hormone response in mammalian hormone receptors [19, 20]. The benefits of adaptation are well documented. It allows a biochemical system to detect environmental fluctuations more rapidly and more sensitively, and thus protects the system from irreversible damage caused by unfavorable conditions.

To understand how adaptation is achieved, a large number of Markov models for various sensory systems have been proposed over the years [1–7, 21–27]. Among the early work, one of the most influential is the Markov model based on ligand-induced receptor modification proposed by Segel, Goldbeter, and coworkers, which has been successfully applied to describe adaptation in many chemoreceptors such as ryanodine receptors and hormone receptors [21–24]. Recently, significant progress has been made on the biochemical feedback networks of adaptation systems [1–3]. Among these studies, Ma et al. [3] searched all three-node feedback networks and found that adaptation is most likely to be achieved in two types of networks: the negative feedback loop and the incoherent feedforward loop.

In recent years, another active topic concerning adaptation has been the relationship between adaptation and nonequilibrium in stochastic biochemical systems [5, 28–31]. Recall that if the surrounding conditions remain constant, a biochemical system will reach a steady state, for which two classes must be distinguished. In an equilibrium state, the detailed balance condition is satisfied. In a nonequilibrium steady state (NESS), however, the breaking of detailed balance indicates that the system is externally driven with concomitant energy dissipation [32]. Inspired by [3], Lan et al. [5] studied sensory systems with a negative feedback network. They found that the negative feedback mechanism must break detailed balance, and they obtained a relation that bridges the energy dissipation rate, adaptation speed, and adaptation accuracy. In addition, Jia et al. [30] found that in some particular Markov models, nonequilibrium is even a necessary condition for adaptation.

However, we are now faced with two seemingly contradictory statements. In the early work, equilibrium Markov models were widely used to describe adaptation in various sensory systems [1–7, 21–27]. It has been shown that an equilibrium model based on ligand-induced receptor modification can achieve accurate adaptation [21–24]. These results appear to contradict recent studies that emphasize the necessity of nonequilibrium and energy dissipation for adaptation. This inconsistency motivates the present paper.

In the present paper, we provide an explanation of this inconsistency based on Markov models with relatively simple networks. Previous studies have shown that a biochemical system may perform three different types of dynamic behavior: non-adaptation, simple adaptation, and oscillatory adaptation [33]. In simple Markov models, we show that as the nonequilibrium driving becomes stronger, the system will undergo a phase transition along a fixed direction: from non-adaptation to simple adaptation then to oscillatory adaptation, while the transition in the opposite direction is forbidden. During the phase transition, the system may skip a particular type of dynamic behavior. If the non-adaptation phase is skipped, then the system will transition directly from simple adaptation to oscillatory adaptation. This suggests that although adaptation could be observed in equilibrium systems, it tends to occur in systems with a strong nonequilibrium driving.

In addition, we study the effects of nonequilibrium on the performance of adaptation, which is usually characterized by two quantities: the adaptation sensitivity and accuracy [3, 5]. In this paper, we define a third indicator, the adaptation efficiency. We find that as the nonequilibrium driving becomes stronger, all three indicators increase. This shows that nonequilibrium improves the performance of adaptation. Incidentally, we also find that in realistic biochemical systems it is difficult to distinguish oscillatory adaptation from simple adaptation based on the experimentally observed output curve. This explains why adaptation with oscillatory dynamics is rarely reported in the documented cases.

To validate the main results of this paper, we study the relationship between adaptation and nonequilibrium based on the sensory network model of the E. coli chemotactic sensory system [5, 31, 34] by both theoretical and numerical methods. The numerical results coincide perfectly with the main conclusions of this paper.

Model and Concepts

Markov model

Markov jump processes are widely used to model various stochastic biochemical systems such as chemical reaction networks [35], single-molecule enzyme kinetics [36], allostery of proteins or ion channels [33], and phenotypic switching of cell populations [37]. In this paper, we consider a biochemical system modeled by a Markov jump process with generator matrix Q = (qij), where qij with i ≠ j denotes the transition rate from state i to state j and qii = −∑j≠i qij. We assume that the system can exist in N different states, denoted by 1, ⋯, N. The meaning of these states varies from system to system. Each state can represent a binding state of an enzyme molecule [36], a conformational state of a receptor molecule [33], or a phenotypic state of a cell population [37].
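The generator-matrix description above can be simulated directly. The following is a minimal Gillespie-style sketch; the example matrix and rate values are purely illustrative and not taken from the paper:

```python
import numpy as np

def simulate_jump_process(Q, i0, t_max, rng):
    """Simulate one trajectory of a Markov jump process with generator Q.

    Q[i, j] (i != j) is the transition rate from state i to state j;
    the diagonal entries are the negative total exit rates.
    Returns the jump times and the sequence of visited states.
    """
    times, states = [0.0], [i0]
    t, i = 0.0, i0
    while True:
        exit_rate = -Q[i, i]
        if exit_rate <= 0:                       # absorbing state
            break
        t += rng.exponential(1.0 / exit_rate)    # exponential waiting time
        if t >= t_max:
            break
        probs = Q[i].copy()
        probs[i] = 0.0
        probs /= exit_rate                       # jump distribution
        i = int(rng.choice(len(probs), p=probs))
        times.append(t)
        states.append(i)
    return times, states

# An illustrative three-state generator: each row sums to zero.
Q = np.array([[-3.0, 2.0, 1.0],
              [1.0, -1.5, 0.5],
              [0.5, 0.5, -1.0]])
rng = np.random.default_rng(0)
times, states = simulate_jump_process(Q, i0=0, t_max=100.0, rng=rng)
```

Averaging many such trajectories recovers the probability distribution p(t) that the master equation below evolves deterministically.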

Let p(t) = (p1(t), ⋯, pN(t)) denote the probability distribution of the system, where pi(t) denotes the probability of the system being in state i at time t. Then the dynamics of the system is governed by the master equation

dp(t)/dt = p(t)Q. (1)

In this paper, we assume that the matrix Q has N different eigenvalues, since any matrix can be approximated by such a matrix with arbitrarily high accuracy.

In applications, the system is always associated with an observable g = (g1, ⋯, gN). For instance, if the system models allostery of an ion channel, then the observable g is often chosen as gi = 1 if i is an open state of the channel and gi = 0 if i is a closed state. With these notations, the output ϕ(t) of the system is the ensemble average of the observable g:

ϕ(t) = p(t)gT,

where gT denotes the transpose of g. By solving Eq (1), it is easy to see that the output ϕ(t) is nothing but a linear combination of exponential functions:

ϕ(t) = c1e^{λ1t} + ⋯ + cNe^{λNt},

where λ1, ⋯, λN are the eigenvalues of the matrix Q and c1, ⋯, cN are N constants (see S1 Appendix:1). The famous Perron-Frobenius theorem in matrix theory states that one eigenvalue of the generator matrix Q must be zero and that the real parts of the other eigenvalues are all negative [38]. In the sequel, we always fix λN = 0, and thus the output ϕ(t) can be rewritten as

ϕ(t) = ϕ̄ + c1e^{λ1t} + ⋯ + c_{N−1}e^{λ_{N−1}t}, (2)

where ϕ̄ = cN is the steady-state output of the system. It is easy to see that if λi is a real number, then ci is also a real number, whereas if λi and λj are conjugate complex numbers, then ci and cj are also conjugate complex numbers (see S1 Appendix:1).
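The spectral decomposition of the output can be checked numerically. A sketch with an illustrative three-state generator (the coefficients ci are obtained from the eigen-decomposition of QT, since the master equation acts on row vectors):

```python
import numpy as np
from scipy.linalg import expm

# An illustrative three-state generator with distinct eigenvalues
# (as assumed in the text).
Q = np.array([[-2.0, 1.5, 0.5],
              [0.2, -1.0, 0.8],
              [1.0, 0.3, -1.3]])
g = np.array([0.0, 2.0, 1.0])    # observable g
p0 = np.array([1.0, 0.0, 0.0])   # initial distribution

# Spectral form: phi(t) = sum_i c_i exp(lambda_i t).
lam, V = np.linalg.eig(Q.T)      # columns of V: eigenvectors of Q^T
w = np.linalg.solve(V, p0)       # coordinates of p0 in the eigenbasis
c = w * (V.T @ g)                # c_i = w_i * (v_i . g)

def phi_spectral(t):
    return float(np.real(np.sum(c * np.exp(lam * t))))

def phi_direct(t):
    # reference solution via the matrix exponential: p(t) = p0 exp(Qt)
    return float((p0 @ expm(Q * t)) @ g)
```

One eigenvalue is numerically zero and the others have negative real part, as the Perron-Frobenius argument predicts, and the two evaluations of ϕ(t) agree.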

Overshoot

Let π = (π1, ⋯, πN) and μ = (μ1, ⋯, μN) denote the initial and steady-state distributions of the system, respectively. Then the initial output of the system is ϕ0 = ∑i πi gi and the steady-state output of the system is ϕ̄ = ∑i μi gi. Following the standard definition, we say that the system performs overshoot if the output first rises to a peak and then declines to a lower plateau (see Fig 1(b)). Mathematically, this is equivalent to saying that there exists a time t such that ϕ(t) is larger than both ϕ0 and ϕ̄.

In realistic biochemical systems, the output usually increases at small times and the steady-state output is usually larger than the initial output (see Fig 1(b)). Mathematically, these two conditions mean that ϕ′(0) > 0 and ϕ̄ ≥ ϕ0, where ϕ′(t) denotes the derivative of ϕ(t). In the following discussions, we always assume that these two conditions are satisfied.

Adaptation

In practice, a sensory system usually has an input I, which represents a signal from the environment. In this case, both the generator matrix Q and the output ϕ(t) will depend on I. In response to a step increase of the input I, the output ϕ(t) of a sensory system usually undergoes a transient increase to a peak followed by a decrease to a lower plateau (see Fig 1(a)). The output recovery after a transient response to the input stimulus is called adaptation [3, 4, 31].

Let I0 and I1 denote the initial and final input levels, respectively, and assume that the input level is elevated from I0 to I1 at time t = 0 (see Fig 1(a)). In response to the step increase of the input stimulus, the system is driven from one steady state to another. The initial distribution π of the system is exactly the steady-state distribution for the input I0, and the final distribution μ is exactly the steady-state distribution for the input I1.

In the literature, the performance of adaptation is usually characterized by two quantities: the adaptation sensitivity and accuracy [3, 5]. Let ϕpeak denote the peak value of the output ϕ(t). Then the adaptation sensitivity γ and the adaptation error ϵ are defined respectively by

γ = |(ϕpeak − ϕ0)/ϕ0| / |(I1 − I0)/I0|,  ϵ = |(ϕ̄ − ϕ0)/ϕ0| / |(I1 − I0)/I0|.

The adaptation accuracy δ is then defined as the inverse of the adaptation error, namely δ = 1/ϵ. In realistic biochemical systems, we want the sensitivity γ and the accuracy δ to be as large as possible. In this way, the system can detect environmental fluctuations more sensitively and recover to its initial state more accurately.

In this paper, we define a further indicator, termed the adaptation efficiency, which reflects the information of both the sensitivity γ and the accuracy δ. The adaptation efficiency η of the system is defined by

η = (ϕpeak − ϕ̄)/(ϕpeak − ϕ0), (3)

which is a number between 0 and 1. Clearly, as γ and δ increase, the efficiency η becomes larger. Thus in realistic biochemical systems, we want the efficiency η to be as large as possible.

In general, an adaptation system will perform overshoot. Unlike the definitions of γ and ϵ, the definition of the efficiency η does not depend on the input I. This shows that the concept of efficiency can also be introduced for overshoot.
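These indicators can be computed from a sampled output curve. The sketch below assumes the relative definitions of sensitivity and error common in the literature and an efficiency that combines the peak and plateau outputs; the exact formulas in the paper's supplement may differ, and the synthetic curve is purely illustrative:

```python
import numpy as np

def adaptation_indicators(phi, phi0, phi_inf, I0, I1):
    """Sensitivity gamma, error eps, and efficiency eta of a response curve.

    phi: sampled output after the step input I0 -> I1. The efficiency
    eta = (phi_peak - phi_inf) / (phi_peak - phi0) is independent of the
    input levels, unlike gamma and eps.
    """
    phi_peak = float(np.max(phi))
    dI = abs((I1 - I0) / I0)
    gamma = abs((phi_peak - phi0) / phi0) / dI      # sensitivity
    eps = abs((phi_inf - phi0) / phi0) / dI         # adaptation error
    eta = (phi_peak - phi_inf) / (phi_peak - phi0)  # efficiency in (0, 1]
    return gamma, eps, eta

# A synthetic response: rises to a peak, then relaxes to a plateau.
t = np.linspace(0.0, 10.0, 2001)
phi0, phi_inf = 0.2, 0.25
phi = phi_inf + (phi0 - phi_inf) * np.exp(-2.0 * t) + 0.5 * t * np.exp(-1.5 * t)
gamma, eps, eta = adaptation_indicators(phi, phi0, phi_inf, I0=1.0, I1=2.0)
```

With these conventions the efficiency satisfies η = 1 − ϵ/γ, which makes explicit that it grows with both the sensitivity and the accuracy δ = 1/ϵ.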

Sensory network model of adaptation in E. coli

As a paradigmatic example of adaptation systems, the E. coli chemotactic sensory system has been studied extensively over the years [6–8, 31, 34, 39]. Here we briefly recall a popular model of adaptation in E. coli to help the readers understand the basic framework introduced above.

The core component of the E. coli chemotactic sensory system is the transmembrane methyl-accepting chemotaxis protein (MCP) receptor. MCP receptors bind selectively to ligands outside the plasma membrane and modulate the activity of their downstream signal transduction pathway. Specifically, the activity of MCP receptors regulates the clockwise-counterclockwise rotational switch of downstream flagellar motors, which drive the run-and-tumble motion of an E. coli cell.

A schematic diagram of an MCP receptor is illustrated in Fig 2(a). The activity of the receptor can be described by a binary variable a, where a = 1 represents the active state and a = 0 represents the inactive state. Moreover, each MCP receptor has four methylation sites and thus the methylation level of the receptor can be represented by a variable m, where m ranges from 0 to 4. The change of the methylation level m relies on methylation and demethylation of the receptor by the enzymes CheR and CheB, respectively.

Fig 2. The E. coli chemotactic sensory system.

(a) A schematic diagram of the MCP receptor. (b) The response of the MCP receptor to a step increase of the ligand concentration. (c) The sensory network model of the MCP receptor.

https://doi.org/10.1371/journal.pone.0155838.g002

Recently, an influential model of the MCP receptor, called the sensory network model, was proposed by Lan et al. [5]. In their model, each state is represented as (a, m), where a is the activity and m is the methylation level (see Fig 2(c)). The transitions between the active and inactive states take place on the time scale τa, with rates whose ratio is determined by E(m, I) = e0(m0 − m) + f(I), the free energy difference between states (0, m) and (1, m). Here kB is Boltzmann's constant, T is the temperature, e0 is the methylation energy, m0 is an offset methylation level, and f(I) = kB T log[(1 + I/Ki)/(1 + I/Ka)] gives the chemical potential difference between the active and inactive states due to the binding of ligands with respective equilibrium constants Ka and Ki [5]. The transitions between different methylation levels take place on the time scale τm in a way that depends on the activity of the receptor. When the receptor is active, the rate of methylation is αk while the rate of demethylation is k. When the receptor is inactive, the rate of demethylation is αl while the rate of methylation is l. The methylation-demethylation cycle of the MCP receptor is fueled by the hydrolysis of SAM, which donates a methyl group to an inactive receptor. The effect of the energy supply on the methylation and demethylation rates is represented by the parameter α, whose biophysical origin is discussed in [5, 29].

The output of the sensory network model is usually chosen as the sum of the probabilities of the five inactive states:

ϕ(t) = ∑_{m=0}^{4} p(0,m)(t). (4)

In fact, 1 − ϕ(t) exactly describes the mean activity of the system. The response of the MCP receptor to a step increase of the ligand concentration obtained from experimental observations is illustrated in Fig 2(b), from which we see that the system performs adaptation. The output ϕ(t) changes sharply on a very short time scale τa of less than a second and then recovers slowly on a much longer time scale τm on the order of a minute [5, 34]. Thus τa and τm characterize the time scales of the response and relaxation processes, respectively.
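The ten-state generator of this model can be assembled programmatically. In the sketch below, the parameter values are illustrative (not from the paper), kB T is set to 1, and the even split of the free energy E between the activation and deactivation rates is an assumed convention; only their ratio is fixed by the model:

```python
import numpy as np

# Illustrative parameters (not from the paper); kB*T = 1.
e0, m0 = 2.0, 2.0            # methylation energy and offset level
Ki, Ka = 0.1, 10.0           # ligand dissociation constants
tau_a, k, l, alpha = 0.01, 1.0, 1.0, 0.5

def f(I):
    return np.log((1 + I / Ki) / (1 + I / Ka))

def build_Q(I):
    """Generator over the 10 states (a, m), a in {0, 1}, m in 0..4."""
    idx = lambda a, m: a * 5 + m
    Q = np.zeros((10, 10))
    for m in range(5):
        E = e0 * (m0 - m) + f(I)   # free energy of (0,m) minus (1,m)
        # assumed convention: rates exp(+/- E/2)/tau_a, ratio exp(E)
        Q[idx(0, m), idx(1, m)] = np.exp(E / 2) / tau_a
        Q[idx(1, m), idx(0, m)] = np.exp(-E / 2) / tau_a
    for m in range(4):
        Q[idx(1, m), idx(1, m + 1)] = alpha * k   # active methylation
        Q[idx(1, m + 1), idx(1, m)] = k           # active demethylation
        Q[idx(0, m), idx(0, m + 1)] = l           # inactive methylation
        Q[idx(0, m + 1), idx(0, m)] = alpha * l   # inactive demethylation
    np.fill_diagonal(Q, -Q.sum(axis=1))
    return Q

def steady_state(Q):
    vals, vecs = np.linalg.eig(Q.T)
    mu = np.real(vecs[:, np.argmin(np.abs(vals))])
    return mu / mu.sum()

mu = steady_state(build_Q(I=1.0))
phi_ss = mu[:5].sum()   # Eq (4) at steady state: inactive states are 0..4
```

The steady-state output ϕ̄ is then the total inactive probability, matching Eq (4).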

Simple overshoot and oscillatory overshoot

Recently, it has been shown that overshoot can be classified into two different types: simple overshoot and oscillatory overshoot [30]. Here we briefly recall these two concepts. If the system has only two states, then the output has the form ϕ(t) = ϕ̄ + c1e^{λ1t}. In this case, the output ϕ(t) must be a monotonic function and thus the system can never perform overshoot.

The situation is totally different in systems with three or more states. For simplicity, we consider a three-state system. From Eq (2), the output ϕ(t) of a three-state system is given by

ϕ(t) = ϕ̄ + c1e^{λ1t} + c2e^{λ2t}, (5)

where λ1 and λ2 are the nonzero eigenvalues of the matrix Q. There are two different situations: either λ1 and λ2 are both negative real numbers, or λ1 and λ2 are conjugate complex numbers with a negative real part.

If λ1 and λ2 are negative real numbers and the signs of c1 and c2 are not the same, then ϕ(t) may not be a monotonic function and thus the system may perform overshoot. In this case, the output first rises to a peak and then declines directly toward a lower plateau. This type of overshoot is referred to as simple overshoot (see Fig 3(a)).

Fig 3. Simple overshoot and oscillatory overshoot.

(a) Simple overshoot. (b) Oscillatory overshoot. When the number of states is small, the oscillation of the output can hardly be observed. When the number of states is large, the oscillation of the output may become apparent.

https://doi.org/10.1371/journal.pone.0155838.g003

If λ1 and λ2 are conjugate complex numbers with a negative real part, we may denote them as λ1 = −λ + ωi and λ2 = −λ − ωi, where λ is a positive real number. With these notations, the output ϕ(t) can be rewritten as

ϕ(t) = ϕ̄ + 2|c1|e^{−λt} cos(ωt + φ),

where φ is the argument of c1. In this case, after the output rises to a peak and declines toward its initial value, it may rise and decline again to form a damped oscillation with decaying rate λ and angular frequency ω. This type of overshoot is called oscillatory overshoot.
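The damped-cosine form can be verified numerically against the exact solution. A sketch with an illustrative near-cyclic three-state generator whose nonzero eigenvalues form a conjugate pair:

```python
import numpy as np
from scipy.linalg import expm

# Near-cyclic generator: nonzero eigenvalues are -lam +/- omega*i.
Q = np.array([[-1.0, 1.0, 0.0],
              [0.1, -1.1, 1.0],
              [1.0, 0.1, -1.1]])
g = np.array([0.0, 2.0, 1.0])
p0 = np.array([1.0, 0.0, 0.0])

vals, V = np.linalg.eig(Q.T)
w = np.linalg.solve(V, p0)
c = w * (V.T @ g)                     # spectral coefficients c_i

iz = int(np.argmin(np.abs(vals)))     # index of the zero eigenvalue
ic = [i for i in range(3) if i != iz and vals[i].imag > 0][0]
phi_bar = float(np.real(c[iz]))       # steady-state output
lam, omega = -vals[ic].real, vals[ic].imag
amp, phase = 2.0 * np.abs(c[ic]), np.angle(c[ic])

def phi_cosine(t):
    # phi(t) = phi_bar + 2|c1| exp(-lam t) cos(omega t + arg c1)
    return phi_bar + amp * np.exp(-lam * t) * np.cos(omega * t + phase)

def phi_direct(t):
    return float((p0 @ expm(Q * t)) @ g)
```

Because the two complex exponentials are conjugate, their sum collapses to the real damped cosine above, matching the matrix-exponential solution at every time.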

Similarly, we can also classify adaptation into two different types: simple adaptation and oscillatory adaptation, whose essential difference is whether the system exhibits an oscillatory response.

Results

Reason why oscillatory adaptation is seldom reported

We have seen that adaptation can be classified into two different types, according to whether the system exhibits an oscillatory response. In biochemical systems modeled by continuous Markov models such as stochastic differential equations, the oscillation of the output is widely observed and this phenomenon is referred to as stochastic oscillation [40, 41]. In biochemical systems modeled by discrete Markov models, however, the vast majority of documented cases are adaptation without oscillatory dynamics, whereas adaptation with oscillatory dynamics is rarely reported.

To explain this phenomenon, we need a mathematical result about the eigenvalues of generator matrices established by Dmitriev and Dynkin [42] and Karpelevich [43]. The Dmitriev-Dynkin-Karpelevich theorem states that if a Markov system has N different states, then any pair of conjugate complex eigenvalues −λ ± ωi of the generator matrix Q must satisfy

λ/|ω| ≥ tan(π/N), (6)

where the equality holds if and only if the following three conditions are satisfied (see Fig 4):

  1. the system has a cyclic topological structure;
  2. the transition rates along the forward cycle are the same;
  3. the transition rates along the backward cycle are all zero.
Fig 4. The sufficient and necessary conditions such that the equality in Eq (6) holds.

The system is required to have a cyclic topological structure. Moreover, the transition rates along the forward cycle (blue arrows) need to be the same and the transition rates along the backward cycle need to be all zero.

https://doi.org/10.1371/journal.pone.0155838.g004

In particular, if a Markov system has only three states, the above inequality reduces to λ/|ω| ≥ tan(π/3) = √3. This suggests that in a three-state system, the decaying rate λ of the output must be very fast compared to the angular frequency ω. Before the output can enter the second period of the oscillation, its amplitude has decayed exponentially to a neighborhood of its steady-state value that is too small to be observed, as illustrated by the blue curve in Fig 3(b).

The situation is somewhat different when the system has a large number of states, in which case the term tan(π/N) becomes very small. If λ/|ω| ≈ tan(π/N), then the decaying rate λ will be very slow compared to the angular frequency ω. In this case, the output may oscillate many times before approaching its steady-state value, as illustrated by the red curve in Fig 3(b). However, if λ/|ω| deviates significantly from tan(π/N), then the decaying rate λ will be very fast compared to the angular frequency ω, and the oscillation of the output is hard to observe.

In realistic biochemical systems, the number of states is usually not very large. Even if a biochemical system has a large number of states, the three conditions below Eq (6) are difficult to satisfy, which implies that λ/|ω| cannot be very small. Therefore, in practice, it is difficult to distinguish oscillatory adaptation from simple adaptation by the experimentally observed output curve. Even if a biochemical system performs oscillatory adaptation, the oscillation of the output can hardly be observed. This explains why adaptation with oscillatory dynamics is rarely reported.
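The bound in Eq (6) is easy to probe numerically: random generators should satisfy it with room to spare, while the uniform one-way cycle should attain it with equality. A sketch (the random-matrix construction is illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

def complex_eig_ratios(Q):
    """Return lam/|omega| for every complex eigenvalue -lam + omega*i of Q."""
    return [-z.real / abs(z.imag) for z in np.linalg.eigvals(Q)
            if abs(z.imag) > 1e-9]

def random_generator(N):
    Q = rng.exponential(1.0, size=(N, N))
    np.fill_diagonal(Q, 0.0)
    np.fill_diagonal(Q, -Q.sum(axis=1))
    return Q

def uniform_cycle(N):
    # the equality case of Eq (6): one-way cycle with identical rates
    Q = np.zeros((N, N))
    for i in range(N):
        Q[i, (i + 1) % N] = 1.0
    np.fill_diagonal(Q, -1.0)
    return Q

N = 5
worst = min((r for _ in range(200) for r in complex_eig_ratios(random_generator(N))),
            default=np.inf)
bound = np.tan(np.pi / N)
cycle_worst = min(complex_eig_ratios(uniform_cycle(N)))
```

Here `worst` never falls below tan(π/N), and `cycle_worst` sits exactly on the bound, consistent with the three equality conditions listed above.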

Phase transition associated with overshoot

If the surrounding conditions remain constant, a biochemical system will reach a steady state for which two classes must be distinguished. In an equilibrium state, the detailed balance condition μi qij = μj qji holds for any pair of states i and j and there are no net fluxes between adjacent states. In an NESS, however, the detailed balance condition is broken. The nonzero net fluxes between adjacent states indicate that the system is externally driven with concomitant energy dissipation [32, 44]. Natural biochemical systems are highly dissipative, consuming energy to carry out various important biological functions. If there is a sustained energy supply from the environment, then the system will reach an NESS.

The central aim of this paper is to study the effects of nonequilibrium on adaptation. To this end, we only need to study the relationship between overshoot and the net flux. For simplicity, we consider a three-state system. The steady-state distribution μ = (μ1, μ2, μ3) of a three-state system must satisfy μQ = 0, which is equivalent to saying that

μ1q12 − μ2q21 = μ2q23 − μ3q32 = μ3q31 − μ1q13 = J, (7)

where J is referred to as the net flux of the system. It is easy to see that the system satisfies detailed balance if and only if J = 0. Thus the net flux J is an effective indicator of the strength of the nonequilibrium driving. In this paper, we only consider the case of J ≥ 0; the case of J ≤ 0 can be analyzed in a similar way.
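The net flux of a three-state system can be read off from its stationary distribution. A sketch with two illustrative generators, one detailed-balanced and one strongly driven:

```python
import numpy as np

def steady_state(Q):
    """Stationary distribution mu with mu @ Q = 0."""
    vals, vecs = np.linalg.eig(Q.T)
    mu = np.real(vecs[:, np.argmin(np.abs(vals))])
    return mu / mu.sum()

def net_flux(Q):
    """Net probability flux J = mu1*q12 - mu2*q21 of a three-state system."""
    mu = steady_state(Q)
    return float(mu[0] * Q[0, 1] - mu[1] * Q[1, 0])

# A symmetric generator satisfies detailed balance (uniform mu), so J = 0 ...
Q_eq = np.array([[-3.0, 2.0, 1.0],
                 [2.0, -3.0, 1.0],
                 [1.0, 1.0, -2.0]])
# ... while a one-way cycle is maximally driven.
Q_cyc = np.array([[-1.0, 1.0, 0.0],
                  [0.0, -1.0, 1.0],
                  [1.0, 0.0, -1.0]])
```

For the one-way cycle the stationary distribution is uniform and J = 1/3; stationarity also forces the same flux through every edge, as Eq (7) states.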

For convenience, we introduce the following three parameters:

a = μ2q21,  b = μ3q32,  c = μ1q13.

With these parameters, Eq (7) shows that the generator matrix Q can be represented in terms of the steady-state distribution μ and the net flux J as

q12 = (a + J)/μ1, q21 = a/μ2, q23 = (b + J)/μ2, q32 = b/μ3, q31 = (c + J)/μ3, q13 = c/μ1. (8)

Let λ1 and λ2 denote the two nonzero eigenvalues of Q. A direct computation shows that λ1 and λ2 are conjugate complex numbers if and only if J > β, where β is a nonnegative constant determined by the model parameters (see S1 Appendix:2). This shows that β is a critical value of the net flux. If J > β, then λ1 and λ2 are conjugate complex numbers and the system will perform oscillatory overshoot, whereas if J ≤ β, then λ1 and λ2 are real numbers and oscillatory overshoot will not occur.

In the case of J ≤ β, the system may perform non-overshoot or simple overshoot. Without loss of generality, we assume that λ1 ≥ λ2. For convenience, we introduce two auxiliary parameters E and F (see S1 Appendix:3). It can be proved that when J ≤ β, the system performs simple overshoot if and only if E + λ2 F > 0 (see S1 Appendix:3).

In order to study the effects of the net flux on simple overshoot, we only need to study the dependence of E and −λ2 F on J and compare which of the two is larger. A direct computation expresses E and −λ2 F in terms of J and four nonnegative constants a1, a2, a3, and a4 that are independent of J (see S1 Appendix:4), and shows that E is an increasing function of J while −λ2 F is a decreasing function of J. As J varies between 0 and β, the dependence of E and −λ2 F on J is illustrated by the red and blue lines in Fig 5(a)–5(c), respectively. The system performs simple overshoot if and only if the red line is above the blue line.

Fig 5. Three types of phase transitions as the net flux varies.

(a)-(c) The dependence of E (red line) and −λ2 F (blue line) on the net flux J. (a) The red and blue lines intersect at J = α. (b) The red line is below the blue line. (c) The red line is above the blue line. (d) The change of the output as the net flux increases. (e) The possible directions (blue arrows) and impossible directions (red arrows) of the phase transition as the nonequilibrium driving becomes stronger.

https://doi.org/10.1371/journal.pone.0155838.g005

We have seen that a biochemical system may perform three different types of dynamic behavior: non-overshoot (NON), simple overshoot (SIM), and oscillatory overshoot (OSC). With the increase of J, the system may undergo three different types of phase transitions which are listed below.

The first type of phase transition is illustrated in Fig 5(a), where the red and blue lines intersect at J = α. This shows that the net flux has two critical values α and β. At the critical value α, the system undergoes a phase transition between non-overshoot and simple overshoot. At the critical value β, the system undergoes a phase transition between simple overshoot and oscillatory overshoot. With the increase of J, all three types of dynamic behavior will appear (see Fig 5(d)).

The second type of phase transition is illustrated in Fig 5(b), where the red line is below the blue line. This type of phase transition can be viewed as the degenerate case of α = β. At the critical value β, the system undergoes a phase transition between non-overshoot and oscillatory overshoot. In this case, J cannot vary between α and β and thus simple overshoot will not appear.

The third type of phase transition is illustrated in Fig 5(c), where the red line is above the blue line. This type of phase transition can be viewed as the degenerate case of α = 0. At the critical value β, the system undergoes a phase transition between simple overshoot and oscillatory overshoot. In this case, J cannot vary between 0 and α and thus non-overshoot will not appear.

In summary, we have seen that the net flux has two critical values α and β. When 0 ≤ Jα, the system will not perform overshoot; when α < Jβ, the system will perform simple overshoot; when J > β, the system will perform oscillatory overshoot.
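This phase structure can be explored numerically by fixing the stationary distribution and sweeping the net flux. The sketch below uses one convenient parametrization whose stationary distribution stays equal to μ for every J (hypothetical with respect to the paper's Eq (8)); the numbers μ, a, b, c are borrowed from the Fig 6 caption for illustration:

```python
import numpy as np

def build_Q(mu, a, b, c, J):
    """Three-state generator with stationary distribution mu and net flux J.

    One-way fluxes: mu_i*q_ij = base + J along the forward cycle 1->2->3->1,
    mu_j*q_ji = base along the backward cycle (an assumed parametrization).
    """
    m1, m2, m3 = mu
    Q = np.array([[0.0, (a + J) / m1, c / m1],
                  [a / m2, 0.0, (b + J) / m2],
                  [(c + J) / m3, b / m3, 0.0]])
    np.fill_diagonal(Q, -Q.sum(axis=1))
    return Q

mu = (0.5, 0.3, 0.2)
a, b, c = 1.2, 0.6, 0.2          # illustrative values from the Fig 6 caption
J_grid = np.linspace(0.0, 50.0, 2001)
is_oscillatory = [
    np.max(np.abs(np.linalg.eigvals(build_Q(mu, a, b, c, J)).imag)) > 1e-9
    for J in J_grid]
# estimated critical flux beta: first J with a conjugate complex pair
beta_hat = J_grid[int(np.argmax(is_oscillatory))] if any(is_oscillatory) else None
```

At J = 0 the construction satisfies detailed balance, so the eigenvalues are real; for sufficiently large J the generator approaches a one-way cycle and the nonzero eigenvalues become a conjugate pair, reproducing the forbidden-direction picture of Fig 5(e) in this example.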

Overshoot and nonequilibrium

We shall next discuss the relationship between overshoot and nonequilibrium. If a system is in equilibrium, then the detailed balance condition μi qij = μj qji holds for any pair of states i and j. Let D denote the diagonal matrix whose diagonal entries are μ1, ⋯, μN, respectively. Then the detailed balance condition implies that the matrix S = D^{1/2}QD^{−1/2} is symmetric. It is an elementary result in linear algebra that the eigenvalues of a symmetric matrix are all real. Since Q and S differ only by a similarity transformation, they must have the same eigenvalues. This shows that if a system is in equilibrium, then the eigenvalues of the generator matrix Q must all be real. As a result, oscillatory overshoot can never occur in equilibrium systems.
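The symmetrization argument can be checked on any detailed-balanced chain. A sketch using an illustrative birth-death (nearest-neighbour) generator, which always satisfies detailed balance:

```python
import numpy as np

def db_chain(fwd, bwd):
    """Birth-death generator with forward rates fwd[i]: i -> i+1 and
    backward rates bwd[i]: i+1 -> i."""
    n = len(fwd) + 1
    Q = np.zeros((n, n))
    for i in range(n - 1):
        Q[i, i + 1] = fwd[i]
        Q[i + 1, i] = bwd[i]
    np.fill_diagonal(Q, -Q.sum(axis=1))
    return Q

fwd, bwd = np.array([2.0, 1.0, 3.0]), np.array([0.5, 2.0, 1.0])
Q = db_chain(fwd, bwd)

# Stationary distribution from the detailed-balance recursion
# mu_{i+1} = mu_i * fwd_i / bwd_i.
mu = np.cumprod(np.concatenate(([1.0], fwd / bwd)))
mu /= mu.sum()

# S = D^{1/2} Q D^{-1/2} with D = diag(mu): symmetric under detailed balance,
# hence Q has a purely real spectrum.
D_half = np.diag(np.sqrt(mu))
S = D_half @ Q @ np.diag(1.0 / np.sqrt(mu))
```

The same computation on a driven cycle would produce a non-symmetric S and, in general, complex eigenvalues, which is exactly what the oscillatory-overshoot phase requires.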

Unfortunately, a similar result does not hold for simple overshoot. For the third type of phase transition illustrated in Fig 5(c), the system will perform simple overshoot even when the net flux vanishes. This indicates that simple overshoot may occur in equilibrium systems, which is consistent with the early work that used equilibrium Markov models to describe adaptation in various sensory systems [1–7, 21–27].

Despite all this, there is still a close relationship between overshoot and nonequilibrium. We make a crucial observation that for all the three types of phase transitions illustrated in Fig 5(a)–5(c), with the increase of the net flux, the system will undergo a phase transition along a fixed direction: from non-overshoot to simple overshoot then to oscillatory overshoot, although a particular phase may be skipped during the phase transition. The possible directions of the phase transition are illustrated by the blue arrows in Fig 5(e). In addition, the phase transition in the opposite direction is forbidden. Specifically, as the nonequilibrium driving becomes stronger, the phase transition from oscillatory overshoot to simple overshoot, that from oscillatory overshoot to non-overshoot, and that from simple overshoot to non-overshoot are all impossible, as illustrated by the red arrows in Fig 5(e). All these results indicate that overshoot and adaptation tend to occur in systems far away from equilibrium. Therefore, in realistic biochemical systems, adaptation is likely a good indicator of nonequilibrium dynamics.

Nonequilibrium enhances adaptation efficiency

To gain a deeper insight into the relationship between adaptation and nonequilibrium, we shall study the effects of the net flux on the adaptation efficiency. For simplicity, we consider a three-state system and continue to use the parametrization Eq (8) of the generator matrix Q.

In order to obtain an explicit dependence of the adaptation efficiency η on the net flux J, we need to make an additional assumption. We assume that state 1 has relatively fast transitions compared to the other two states. Under this assumption, the state transitions of the system have two separate time scales. In realistic sensory systems, adaptation always possesses a multiscale feature: in response to an abrupt input change, the output usually changes sharply on a very short time scale and recovers slowly over a much longer time scale. In this way, the system can detect environmental fluctuations more rapidly. Thus the above assumption of time scale separation does not impose too much restriction on the system.

If state 1 has relatively fast transitions compared to the other two states, then the process on the fast time scale can be eliminated [45, 46]. Recall that the system will perform overshoot provided that J > α. In the case of J > α, it can be proved that the peak output ϕpeak is approximately a linear function of J, namely

ϕpeak ≈ b1 J + b2,

where b1 and b2 are two nonnegative constants independent of J (see S1 Appendix:5), as illustrated by the blue line in Fig 6(a). With the increase of J, the initial output ϕ0 and the steady-state output ϕ̄ remain invariant, whereas the peak output ϕpeak becomes larger (see Fig 5(d)). From Eq (3), the adaptation efficiency can then be rewritten as

η ≈ 1 − (ϕ̄ − ϕ0)/(b1 J + b2 − ϕ0),

which shows that the adaptation efficiency will also increase as the nonequilibrium driving becomes stronger (see Fig 6(b)).

Fig 6. Nonequilibrium enhances the adaptation efficiency.

The red lines illustrate the case where the state transitions of the system do not have a separate time scale. In this case, the model parameters are taken as π = (1, 0, 0), μ = (0.1, 0.5, 0.4), g = (0, 2, 1), a = 1.2, b = 0.6, and c = 0.2. The blue lines illustrate the case where state 1 has relatively fast transitions compared to the other two states. In this case, all the model parameters remain the same except that we take μ = (0.0001, 0.5, 0.4999). (a) The peak output versus the net flux. (b) The adaptation efficiency versus the net flux.

https://doi.org/10.1371/journal.pone.0155838.g006

A natural question is what happens if the state transitions of the system do not have two separate time scales. In this case, we cannot eliminate the fast process, and it is difficult to obtain an explicit dependence of the peak output on the net flux. However, our numerical computations show that similar results still hold: ϕpeak is still an increasing function of J, though no longer a linear one, as illustrated by the red line in Fig 6(a). From Eq (3), the adaptation efficiency η is then also an increasing function of J (see Fig 6(b)). This indicates that nonequilibrium enhances the adaptation efficiency whether or not the system has two separate time scales.

Phase transition in the sensory network model

In the above discussions, we mainly develop our theory in simple Markov models. A natural question is whether our main results also hold in realistic biochemical systems with relatively complex networks. Here we shall validate the main results of this paper by studying the relationship between adaptation and nonequilibrium based on the sensory network model of the E. coli chemotactic sensory system (see Fig 2(c)).

It is well known that whether a Markov system reaches an equilibrium state or an NESS is closely related to the cycles of the system. Recall that a Markov system satisfies the detailed balance condition if and only if the product of transition rates along any cycle equals the product along its reversed cycle [32]. The sensory network model has four basic cycles, as illustrated by the red arrows in Fig 2(c). Using this criterion, it is easy to check that the sensory network model satisfies the detailed balance condition if and only if α = e^(e0/2). If α < e^(e0/2), the system is driven out of equilibrium with energy dissipation. Thus α is an effective parameter that characterizes the strength of the nonequilibrium driving.
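Kolmogorov's cycle criterion can be checked mechanically. The sketch below tests detailed balance for a rate matrix by comparing cycle products over all 3-cycles, which suffices for a fully connected chain (a general network would require a basis of cycles instead). The matrices are toy examples, not the sensory network model itself.

```python
import itertools
import numpy as np

def detailed_balance(Q, tol=1e-12):
    """Kolmogorov criterion: detailed balance holds iff the product
    of rates along every cycle equals that along the reversed cycle.
    Enumerating 3-cycles is enough when all states are connected;
    a general network needs a cycle basis instead."""
    n = Q.shape[0]
    for i, j, k in itertools.combinations(range(n), 3):
        if abs(Q[i, j] * Q[j, k] * Q[k, i]
               - Q[i, k] * Q[k, j] * Q[j, i]) > tol:
            return False
    return True

mu = np.array([0.2, 0.3, 0.5])
# reversible toy generator: q_ij = mu_j, so mu_i q_ij = mu_i mu_j is symmetric
Q_eq = np.outer(np.ones(3), mu)
np.fill_diagonal(Q_eq, 0.0)
np.fill_diagonal(Q_eq, -Q_eq.sum(axis=1))

Q_ne = Q_eq.copy()
Q_ne[0, 1] *= 2.0          # bias one edge of the cycle: breaks detailed balance
Q_ne[0, 0] -= Q_eq[0, 1]   # keep the row sum at zero
```

Only off-diagonal entries enter the cycle products, so the diagonal correction does not affect the test.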

We next discuss the effects of nonequilibrium on adaptation in the sensory network model using both analytic and numerical methods. In the following numerical computations, we adopt the model parameters as suggested in [5]: e0 = 2, m0 = 1, Ki = 18.2 μM, Ka = 3000 μM, and kB T = 1. The ligand concentration is elevated from I0 = 10Ki to I1 = 15Ki. The time constants are taken as τa = 0.1 s and k = l = 0.01 s−1. With this set of model parameters, an estimate of the typical methylation-demethylation cycle time is given by τm ≈ 1/k + 1/l = 200 s.

The sensory network model has some nice mathematical properties. In fact, it can be proved that if the system satisfies the detailed balance condition, then adaptation will never occur, regardless of how the model parameters are chosen (see S1 Appendix:6). This indicates that nonequilibrium is a necessary condition for adaptation in the sensory network model. Moreover, with the model parameters taken above, it has been shown that the system performs adaptation if and only if α < 1 [31], so α = 1 is a critical value of adaptation. If 1 ≤ α ≤ e^(e0/2), the system does not perform adaptation, whereas if α < 1, it does (see Fig 7(a)). According to our numerical computation, the system undergoes a phase transition between non-adaptation and simple adaptation at the critical value α = 1. If we further decrease α, the system still performs adaptation and the eigenvalues of the generator matrix remain all real. This shows that, as α decreases, the system does not further transition from simple adaptation to oscillatory adaptation.

Fig 7. The relationship between adaptation and nonequilibrium in the sensory network model.

(a) The response of the MCP receptor to a step increase of the ligand concentration under different choices of α. (b) The dependence of the adaptation efficiency on α under τa = 0.1 s (blue line) and τa = 200 s (red line). (c) The dependence of the adaptation sensitivity on α under τa = 0.1 s (blue line) and τa = 200 s (red line). (d) The dependence of the adaptation error on α under τa = 0.1 s (blue line) and τa = 200 s (red line).

https://doi.org/10.1371/journal.pone.0155838.g007

The absence of oscillatory adaptation during the phase transition can also be explained by our theory. Conditions (b) and (c) below Eq (6) show that for a Markov system to exhibit an apparent oscillatory response, the transition rates along any cycle cannot differ too much. In the sensory network model, all the vertical transition rates have the time scale of τa and all the horizontal ones have the time scale of τm. Thus, for the sensory network model to perform oscillatory adaptation, τa and τm cannot differ too much. To validate this claim, we change the time constant τa from 0.1 s to 200 s so that τa ≈ τm. With this parameter value, α = 1 is still a critical value that distinguishes non-adaptation from simple adaptation, according to our numerical computation. If we further decrease α, the system undergoes a phase transition from simple adaptation to oscillatory adaptation at another critical value α ≈ 0.91. With the decrease of α, the number of complex eigenvalues of the generator matrix increases, as shown in Table 1.
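The distinction between simple and oscillatory adaptation can be read off the spectrum of the generator matrix, since complex eigenvalue pairs produce damped oscillations in the relaxation. A minimal sketch with toy generators (not the sensory network model itself):

```python
import numpy as np

def n_complex_eigenvalues(Q, tol=1e-9):
    """Count the non-real eigenvalues of a generator matrix;
    complex pairs correspond to damped oscillations in the
    relaxation, i.e. oscillatory rather than simple adaptation."""
    return int(np.sum(np.abs(np.linalg.eigvals(Q).imag) > tol))

# reversible (detailed-balanced) generator: its spectrum is always
# real, so the relaxation is a sum of pure exponentials
Q_rev = np.array([[-2.0,  1.0,  1.0],
                  [ 1.0, -2.0,  1.0],
                  [ 1.0,  1.0, -2.0]])

# unidirectional cycle: strongly driven, with eigenvalues
# 0 and -3/2 +/- i*sqrt(3)/2, hence a damped oscillatory response
Q_cyc = np.array([[-1.0,  1.0,  0.0],
                  [ 0.0, -1.0,  1.0],
                  [ 1.0,  0.0, -1.0]])
```

The same eigenvalue count underlies Table 1: as the driving strengthens, more eigenvalue pairs leave the real axis.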

Table 1. The number of complex eigenvalues of the generator matrix under different ranges of α.

https://doi.org/10.1371/journal.pone.0155838.t001

All these numerical results indicate that as the nonequilibrium driving becomes stronger, the sensory network model will undergo a phase transition along a fixed direction: from non-adaptation to simple adaptation then to oscillatory adaptation, which coincides perfectly with our previous conclusion.

Nonequilibrium enhances adaptation performance in the sensory network model

We have seen that in simple Markov models, nonequilibrium will enhance the adaptation efficiency. Here we shall study the effects of nonequilibrium on the performance of adaptation based on the sensory network model.

With the set of model parameters listed above, the dependence of the adaptation efficiency η, the adaptation sensitivity γ, and the adaptation error ϵ on the parameter α is illustrated by the blue lines in Fig 7(b)–7(d), respectively. With the decrease of α, the adaptation efficiency and sensitivity will increase and the adaptation error will decrease. This suggests that a strong nonequilibrium driving will make the system respond to the environmental fluctuations more sensitively and recover to the initial state more accurately. In other words, nonequilibrium will improve the performance of adaptation by enhancing the adaptation efficiency, sensitivity, and accuracy. This also coincides perfectly with our previous conclusion.
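The three performance measures can be extracted directly from a simulated response trace. The sketch below uses common illustrative definitions (sensitivity as the normalized peak response, error as the normalized residual offset of the plateau, and efficiency as the fraction of the peak response that is subsequently recovered); these are assumed forms for illustration and may differ from the exact normalizations in the text's equations.

```python
import numpy as np

def adaptation_measures(phi, delta_input=1.0):
    """Extract adaptation performance measures from an output trace
    phi sampled after a step input.  The formulas are assumed,
    illustrative definitions, not necessarily those of the text."""
    phi0 = phi[0]          # initial output
    phi_peak = phi.max()   # peak of the transient response
    phi_inf = phi[-1]      # steady-state output (long-time plateau)
    gamma = abs(phi_peak - phi0) / delta_input        # sensitivity
    eps = abs(phi_inf - phi0) / delta_input           # error
    eta = (phi_peak - phi_inf) / (phi_peak - phi0)    # efficiency (assumed form)
    return gamma, eps, eta

# synthetic adaptive response: fast rise, slow near-complete recovery
t = np.linspace(0.0, 50.0, 5001)
phi = 1.0 + 0.8 * (np.exp(-t / 10) - np.exp(-t)) + 0.05 * (1 - np.exp(-t / 10))

gamma, eps, eta = adaptation_measures(phi)
```

With this synthetic trace the plateau sits slightly above the initial output, so the error is small, the sensitivity large, and the efficiency close to (but below) 1, mirroring the strongly driven regime in Fig 7(b)–7(d).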

We next discuss another interesting topic. We have seen that the time constants τa and τm give the time scales of the response and relaxation processes, respectively (see Fig 2(b)). In realistic systems, τa is usually far less than τm. In this way, the system possesses a fast response speed and a much slower adaptation speed. Interestingly, we find that the small time constant τa not only guarantees a fast response speed, but also improves the performance of adaptation.

To see this point, we change the time constant τa from 0.1 s to 200 s so that τa ≈ τm. With this parameter value, the dependence of the efficiency η, the sensitivity γ, and the error ϵ on the parameter α is illustrated by the red lines in Fig 7(b)–7(d), respectively. By comparing the blue and red lines in Fig 7(b)–7(d), we can see that a small time constant τa leads to a larger efficiency, a larger sensitivity, and a smaller error. When τa ≪ τm, the efficiency approaches 1 as α tends to 0; however, when τa ≈ τm, the efficiency has a maximum value less than 0.8. All these numerical results indicate that a fast transition between the active and inactive states not only enhances the response speed, but also enhances the adaptation efficiency, sensitivity, and accuracy.

Conclusions and Discussions

The relationship between adaptation and nonequilibrium has gained increasing attention in recent years [5, 28–31]. In our previous work, we found that in some particular Markov models, nonequilibrium is a necessary condition for adaptation [30]. However, this result is not fully satisfactory. It has been shown that some influential equilibrium models, such as the four-state model based on ligand-induced receptor modification proposed by Segel, Goldbeter, and coworkers, can achieve accurate adaptation [21–24]. In addition, Ao et al [28] found that a three-state equilibrium model may perform overshoot. These results indicate that even in a simple Markov model, nonequilibrium is not always a necessary condition for adaptation, which appears to contradict recent studies emphasizing the nonequilibrium effects of adaptation. How to reconcile these seemingly conflicting statements is the starting point of this paper.

In the present paper, we provide an explanation of this inconsistency by studying the effects of nonequilibrium on adaptation in stochastic biochemical systems. In simple Markov models, we reveal that as the nonequilibrium driving becomes stronger, the system will undergo a phase transition along a fixed direction: from non-adaptation to simple adaptation then to oscillatory adaptation, although a particular type of dynamic behavior may be skipped during the phase transition. If the non-adaptation phase is skipped, then the system will transition directly from simple adaptation to oscillatory adaptation. This shows that although adaptation may occur in equilibrium systems, it tends to occur in systems far from equilibrium. All these results provide a relatively satisfactory answer to the underlying contradiction in the previous studies.

In simple Markov models, we impose very few constraints on the topological structure, transition rates, initial distribution, and observable of the system. In the recent work of Ao et al [28], the authors showed that the choice of the observable may affect the analysis of overshoot: some observables are not good enough to discuss overshoot, while others may be better. In this paper, we find that the choice of the observable can be rather arbitrary and hardly affects the theoretical analysis and main results. The only requirements on the observable are that ϕ′(0) > 0 and ϕ∞ ≥ ϕ0. This suggests that our conclusions should be rather general.

In the previous literature, the performance of adaptation is usually characterized by two quantities: the adaptation sensitivity and accuracy [3, 5]. In this paper, we define another indicator, the adaptation efficiency, which contains information about both the sensitivity and the accuracy. The most important advantage of the adaptation efficiency is that its definition does not depend on the input stimulus. In realistic biochemical systems, the transition rates between states may depend on the input stimulus in many different ways. Since this dependence does not have a general form, it is difficult to analyze the sensitivity and accuracy theoretically in general Markov models.

In recent years, the relationship between nonequilibrium and the adaptation accuracy has been discussed in some particular Markov models, such as the sensory network model and a coarse-grained model based on stochastic differential equations [5, 29, 31]. Among these studies, Lan et al [5] found a relation among the energy dissipation rate, adaptation speed, and adaptation accuracy, which they claimed to hold generally. However, we note that the definition of the accuracy depends only on the initial and steady-state outputs, so analyzing the accuracy requires only the steady-state properties of the system, which is a relatively easy task.

In this paper, we study the effects of nonequilibrium on the adaptation efficiency, whose definition depends on the peak value of the output. In order to perform an analysis on the efficiency, we need to study the dynamic properties of Markov models, which turns out to be a very difficult problem. The previous work focuses more on the relaxation process of adaptation, while the present paper focuses more on the response process of adaptation, namely, overshoot. In addition to the phase transition associated with overshoot, we also find that a strong nonequilibrium driving will help a biochemical system enhance its adaptation efficiency. The study on the connection between nonequilibrium and the dynamic properties of adaptation, such as the transient response of overshoot and the adaptation efficiency, is probably the main innovative point of this paper.

A limitation of the present paper is that we mainly perform our analysis in simple Markov models. This is because studying the dynamic behavior of adaptation requires an explicit expression of the output and an analysis of its properties, such as monotonicity, which in turn requires calculating all the eigenvalues of the generator matrix. These procedures become extremely difficult to implement when the system has a large number of states. Nevertheless, we believe that the main results of this paper might be general principles that also apply to Markov models with relatively complex networks.

To validate whether our main results apply to realistic biochemical systems, we discuss the relationship between adaptation and nonequilibrium based on the sensory network model of the E. coli chemotactic sensory system using both analytic and numerical methods. It turns out that the numerical computations based on the sensory network model coincide perfectly with the main conclusions of this paper. Moreover, we prove that nonequilibrium is a necessary condition for adaptation in the sensory network model. In addition, we also find an interesting fact that a fast transition between the active and inactive states not only enhances the response speed, but also improves the performance of adaptation.

In the past, people tended to focus on the steady-state phenomena in biochemical systems. Recently, increasing attention has been paid to the dynamic phenomena in biochemical systems such as adaptation and overshoot. So far, there have been extensive studies on the equilibrium state and the NESS and these studies have been developed into a beautiful mathematical theory [32]. However, the study on the dynamic behavior of stochastic systems turns out to be a very difficult problem due to the lack of effective mathematical tools.

In this paper, we use several mathematical tools to analyze the dynamic properties of adaptation, including matrix theory, the spectral properties of generator matrices, the mathematical theory of NESSs, and multiscale reduction methods. We hope that the techniques used in this paper can be applied to the study of other dynamic phenomena in deterministic and stochastic biochemical systems.

Supporting Information

S1 Appendix. Detailed calculations.

This file provides some detailed calculations that help the readers understand the content of the main text.

https://doi.org/10.1371/journal.pone.0155838.s001

(PDF)

Acknowledgments

The authors are grateful to S.W. Wang, D.Q. Jiang, J. Yu, and X.B. Jing for stimulating discussions and the anonymous reviewers for valuable suggestions.

Author Contributions

Conceived and designed the experiments: CJ MQ. Performed the experiments: CJ MQ. Analyzed the data: CJ. Contributed reagents/materials/analysis tools: CJ MQ. Wrote the paper: CJ.

References

1. Behar M, Hao N, Dohlman HG, Elston TC. Mathematical and computational analysis of adaptation via feedback inhibition in signal transduction pathways. Biophys J. 2007;93(3):806–821. pmid:17513354
2. François P, Siggia ED. A case study of evolutionary computation of biochemical adaptation. Phys Biol. 2008;5(2):026009. pmid:18577806
3. Ma W, Trusina A, El-Samad H, Lim WA, Tang C. Defining network topologies that can achieve biochemical adaptation. Cell. 2009;138(4):760–773. pmid:19703401
4. Keener J, Sneyd J. Mathematical Physiology I: Cellular Physiology. 2nd ed. New York: Springer; 2009.
5. Lan G, Sartori P, Neumann S, Sourjik V, Tu Y. The energy-speed-accuracy trade-off in sensory adaptation. Nat Phys. 2012;8(5):422–428. pmid:22737175
6. Berg HC, Brown DA, et al. Chemotaxis in Escherichia coli analysed by three-dimensional tracking. Nature. 1972;239(5374):500–504. pmid:4563019
7. Macnab RM, Koshland D. The gradient-sensing mechanism in bacterial chemotaxis. Proc Natl Acad Sci U S A. 1972;69(9):2509–2512. pmid:4560688
8. Hazelbauer GL, Falke JJ, Parkinson JS. Bacterial chemoreceptors: high-performance signaling in networked arrays. Trends Biochem Sci. 2008;33(1):9–19. pmid:18165013
9. Mettetal JT, Muzzey D, Gómez-Uribe C, van Oudenaarden A. The frequency dependence of osmo-adaptation in Saccharomyces cerevisiae. Science. 2008;319(5862):482–484. pmid:18218902
10. Duchamp-Viret P, Chaput M, Duchamp A. Odor response properties of rat olfactory receptor neurons. Science. 1999;284(5423):2171–2174. pmid:10381881
11. Reisert J, Matthews HR. Response properties of isolated mouse olfactory receptor cells. J Physiol (Lond). 2001;530(1):113–122.
12. Nakatani K, Tamura T, Yau K. Light adaptation in retinal rods of the rabbit and two other nonprimate mammals. J Gen Physiol. 1991;97(3):413–435. pmid:2037836
13. Howard J, Hudspeth A. Mechanical relaxation of the hair bundle mediates adaptation in mechanoelectrical transduction by the bullfrog’s saccular hair cell. Proc Natl Acad Sci U S A. 1987;84(9):3064–3068. pmid:3495007
14. Hamill OP, McBride DW. Rapid adaptation of single mechanosensitive channels in Xenopus oocytes. Proc Natl Acad Sci U S A. 1992;89(16):7462–7466. pmid:1380158
15. Marchant JS, Taylor CW. Rapid activation and partial inactivation of inositol trisphosphate receptors by inositol trisphosphate. Biochem J. 1998;37(33):11524–11533.
16. Adkins C, Taylor C. Lateral inhibition of inositol 1,4,5-trisphosphate receptors by cytosolic Ca2+. Curr Biol. 1999;9(19):1115–1118. pmid:10531009
17. Gyorke S, Fill M. Ryanodine receptor adaptation: control mechanism of Ca2+-induced Ca2+ release in heart. Science. 1993;260(5109):807–809. pmid:8387229
18. Valdivia HH, Kaplan JH, Ellis-Davies G, Lederer WJ. Rapid adaptation of cardiac ryanodine receptors: modulation by Mg2+ and phosphorylation. Science. 1995;267(5206):1997–2000. pmid:7701323
19. Su YF, Harden TK, Perkins JP. Catecholamine-specific desensitization of adenylate cyclase. Evidence for a multistep process. J Biol Chem. 1980;255(15):7410–7419. pmid:6248556
20. Stadel JM, Nambi P, Shorr R, Sawyer DF, Caron MG, Lefkowitz RJ. Catecholamine-induced desensitization of turkey erythrocyte adenylate cyclase is associated with phosphorylation of the beta-adrenergic receptor. Proc Natl Acad Sci U S A. 1983;80(11):3173–3177. pmid:6304694
21. Devreotes P, Sherring J. Kinetics and concentration dependence of reversible cAMP-induced modification of the surface cAMP receptor in Dictyostelium. J Biol Chem. 1985;260(10):6378–6384. pmid:2987221
22. Segel LA, Goldbeter A, Devreotes PN, Knox BE. A mechanism for exact sensory adaptation based on receptor modification. J Theor Biol. 1986;120(2):151–179. pmid:3784578
23. Knox BE, Devreotes PN, Goldbeter A, Segel LA. A molecular mechanism for sensory adaptation based on ligand-induced receptor modification. Proc Natl Acad Sci U S A. 1986;83(8):2345–2349. pmid:3010308
24. Cheng H, Fill M, Valdivia H, Lederer W. Models of Ca2+ release channel adaptation. Science. 1995;267(5206):2009–2010. pmid:7701326
25. Keizer J, Levine L. Ryanodine receptor adaptation and Ca2+-induced Ca2+ release-dependent Ca2+ oscillations. Biophys J. 1996;71(6):3477–3487. pmid:8968617
26. Jafri MS, Rice JJ, Winslow RL. Cardiac Ca2+ dynamics: the roles of ryanodine receptor adaptation and sarcoplasmic reticulum load. Biophys J. 1998;74(3):1149–1168. pmid:9512016
27. Fill M, Zahradníková A, Villalba-Galea CA, Zahradník I, Escobar AL, Györke S. Ryanodine receptor adaptation. J Gen Physiol. 2000;116(6):873–882. pmid:11099353
28. Ao P, Chen TQ, Shi JH. Dynamical decomposition of Markov processes without detailed balance. Chinese Phys Lett. 2013;30(7):070201.
29. Barato AC, Hartich D, Seifert U. Efficiency of cellular information processing. New J Phys. 2014;16(10):103024.
30. Jia C, Qian M, Jiang D. Overshoot in biological systems modelled by Markov chains: a non-equilibrium dynamic phenomenon. IET Syst Biol. 2014;8(4):138–145. pmid:25075526
31. Wang SW, Lan Y, Tang LH. Energy dissipation in an adaptive molecular circuit. J Stat Mech: Theory E. 2015;P07025.
32. Zhang XJ, Qian H, Qian M. Stochastic theory of nonequilibrium steady states and its applications. Part I. Phys Rep. 2012;510(1):1–86.
33. Jia C, Jiang DQ, Qian MP. An allosteric model of the inositol trisphosphate receptor with nonequilibrium binding. Phys Biol. 2014;11:056001. pmid:25118617
34. Tu Y. Quantitative modeling of bacterial chemotaxis: Signal amplification and accurate adaptation. Annu Rev Biophys. 2013;42:337–359. pmid:23451887
35. Kurtz TG. The relationship between stochastic and deterministic models for chemical reactions. J Chem Phys. 1972;57(7):2976–2978.
36. Qian H, Elson LE. Single-molecule enzymology: Stochastic Michaelis–Menten kinetics. Biophys Chem. 2002;101:565–576. pmid:12488027
37. Gupta PB, Fillmore CM, Jiang G, Shapira SD, Tao K, Kuperwasser C, et al. Stochastic state transitions give rise to phenotypic equilibrium in populations of cancer cells. Cell. 2011;146(4):633–644. pmid:21854987
38. Berman A, Plemmons RJ. Nonnegative Matrices in the Mathematical Sciences. New York: Academic Press; 1979.
39. Tu Y, Shimizu T, Berg H. Modeling the chemotactic response of Escherichia coli to time-varying stimuli. Proc Natl Acad Sci U S A. 2008;105(39):14855–14860. pmid:18812513
40. Wang J, Xu L, Wang E. Potential landscape and flux framework of nonequilibrium networks: robustness, dissipation, and coherence of biochemical oscillations. Proc Natl Acad Sci U S A. 2008;105(34):12271–12276. pmid:18719111
41. Li C, Wang E, Wang J. Landscape, flux, correlation, resonance, coherence, stability, and key network wirings of stochastic circadian oscillation. Biophys J. 2011;101(6):1335–1344. pmid:21943414
42. Dmitriev NA, Dynkin E. On characteristic roots of stochastic matrices. Izvestiya Rossiiskoi Akademii Nauk Seriya Matematicheskaya. 1946;10(2):167–184.
43. Karpelevich FI. On the characteristic roots of matrices with nonnegative elements. Izvestiya Rossiiskoi Akademii Nauk Seriya Matematicheskaya. 1951;15(4):361–383.
44. Xu L, Shi H, Feng H, Wang J. The energy pump and the origin of the non-equilibrium flux of the dynamical systems and the networks. J Chem Phys. 2012;136(16):165102. pmid:22559506
45. Jia C. Simplification of irreversible Markov chains by removal of states with fast leaving rates. J Theor Biol. 2016;400:129–137. pmid:27067245
46. Jia C. Reduction of Markov chains with two-time-scale state transitions. Stochastics. 2016;88(1):73–105.