Modeling the Seasonal Adaptation of Circadian Clocks by Changes in the Network Structure of the Suprachiasmatic Nucleus

The dynamics of circadian rhythms need to be adapted to day length changes between summer and winter. It has been observed experimentally, however, that the dynamics of individual neurons of the suprachiasmatic nucleus (SCN) do not change as the seasons change. Rather, the seasonal adaptation of the circadian clock is hypothesized to be a consequence of changes in the intercellular dynamics, which lead to a phase distribution of electrical activity of SCN neurons that is narrower in winter and broader during summer. Yet, to understand these complex intercellular dynamics, a more thorough understanding of the impact of the network structure formed by the SCN neurons is needed. To that end, we propose a mathematical model for the dynamics of the SCN neuronal architecture in which the structure of the network plays a pivotal role. Using our model we show that the fraction of long-range cell-to-cell connections and the seasonal changes in the daily rhythms may be tightly related. In particular, simulations of the proposed mathematical model indicate that the fraction of long-range connections between the cells adjusts the phase distribution and consequently the length of the behavioral activity as follows: dense long-range connections during winter lead to a narrow activity phase, while sparse long-range connections during summer lead to a broad activity phase. Our model is also able to account for the experimental observations indicating a larger light-induced phase shift of the circadian clock during winter, which we show to be a consequence of higher synchronization between the neurons. Our model thus provides evidence that the variations in the seasonal dynamics of circadian clocks can in part also be understood and regulated in terms of the plasticity of the SCN network structure.


Instantaneous phase response curves (iPRCs)
The network of connected oscillator cells in our model can be described by the generalized ordinary differential equation:

$$\dot{\mathbf{X}} = \mathbf{F}(\mathbf{X}).$$  (1)

If the coupling between the cells is strong enough, this equation possesses a limit-cycle solution $\mathbf{X}_0(t)$ with period $T$. A light pulse applied to the oscillator network along a specific dimension $d$ can be described as a perturbation of the above equation:

$$\dot{\mathbf{X}} = \mathbf{F}(\mathbf{X}) + \mathbf{B}\, p(t),$$  (2)

where $\mathbf{B}$ is a diagonal matrix that determines which oscillators (and which oscillator dimension) are subjected to the external light input with amplitude $E$. If the external perturbation amplitude is small compared to the amplitude of the limit cycle, it is valid to reduce the dynamics to the perturbation of the phase of the limit-cycle solution [1]. The equation describing the phase dynamics is given by:

$$\dot{\phi} = 1 + \sum_{j} Z_j^{d}(\phi)\, B_j\, p(t),$$  (3)

where $B_j$ is the diagonal entry of $\mathbf{B}$ corresponding to dimension $d$ of oscillator $j$. The function $Z_j^{d}(\phi)$ describes the instantaneous phase response along the perturbed dimension $d$ of the $j$-th oscillator [2,3]. It can be calculated by integrating the following adjoint equation backwards in time from a given reference point on the limit cycle [3,4]:

$$\dot{\mathbf{Z}} = -\mathbf{J}^{\mathsf T}(\mathbf{X}_0(t))\, \mathbf{Z}.$$  (4)

The final condition is $\mathbf{Z}(T) = \big(0, \ldots, 1/F_l(\mathbf{X}_0(0)), \ldots, 0\big)^{\mathsf T}$, where the entry $1/F_l(\mathbf{X}_0(0))$ is at the $l$-th position and ensures a normalization such that $\mathbf{Z}(t)^{\mathsf T}\mathbf{F}(\mathbf{X}_0(t)) = 1$.
The matrix $\mathbf{J}$ is the Jacobian matrix of $\mathbf{F}$ evaluated along the limit cycle. As an example we computed the iPRCs for the oscillators in our SCN network with the parameter determining the number of long-range connections set to 0.0035 (Section 7, Figure S5). For a square pulse of short duration between the phases $\phi_1$ and $\phi_2$ and small amplitude $E$, the iPRCs can be used to approximate the PRC [2]:

$$\mathrm{PRC}(\phi_1) \approx E \int_{\phi_1}^{\phi_2} \sum_j Z_j^{d}(s)\, \mathrm{d}s.$$  (5)

This shows that the magnitude and shape of the overall PRC are determined by the sum of the iPRCs of the individual oscillators.
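The backward-integration recipe above can be sketched numerically. The following minimal example (all parameter values are illustrative assumptions; a single Poincaré-type cell is used instead of the full SCN network) integrates the adjoint equation backwards in time and checks that the normalization of the iPRC holds along the limit cycle:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Single Poincaré-type cell (hypothetical parameters): radial relaxation
# rate gam, limit-cycle amplitude A, angular frequency om (24 h period).
gam, A = 1.0, 1.0
om = 2 * np.pi / 24.0
T = 2 * np.pi / om  # period of the limit cycle

def F(v):
    """Right-hand side of the unperturbed oscillator."""
    x, y = v
    r = np.hypot(x, y)
    return np.array([gam * x * (A - r) - om * y,
                     gam * y * (A - r) + om * x])

def jac(v):
    """Jacobian of F."""
    x, y = v
    r = np.hypot(x, y)
    return np.array([
        [gam * (A - r) - gam * x * x / r, -gam * x * y / r - om],
        [-gam * x * y / r + om,           gam * (A - r) - gam * y * y / r],
    ])

def x0(t):
    """Limit-cycle solution: uniform rotation on the circle r = A."""
    return np.array([A * np.cos(om * t), A * np.sin(om * t)])

def adjoint_rhs(t, Z):
    """Adjoint equation dZ/dt = -J^T(x0(t)) Z, integrated backwards."""
    return -jac(x0(t)).T @ Z

# Final condition: entry 1/F_l(x0(0)) at the l-th position (here l = y),
# enforcing the normalization Z(t) . F(x0(t)) = 1.
Zf = np.array([0.0, 1.0 / F(x0(0.0))[1]])
sol = solve_ivp(adjoint_rhs, [5 * T, 0.0], Zf, dense_output=True,
                rtol=1e-10, atol=1e-12)

# The normalization is conserved along the whole cycle.
ts = np.linspace(0.0, T, 50)
norms = [sol.sol(t) @ F(x0(t)) for t in ts]
print(np.allclose(norms, 1.0, atol=1e-6))
```

For this particular toy oscillator the isochrons are radial, so the computed iPRC can also be compared against the analytic gradient of the phase, Z(t) = (−sin ωt, cos ωt)/(Aω).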

Phase synchronization of weakly coupled oscillator networks
It is known that $N$ uncoupled identical oscillators possess $N$ zero Floquet exponents that correspond to perturbations in the direction of the phase. Weak coupling of the oscillators (small coupling strength $\varepsilon$) leads to $N-1$ of these Floquet exponents becoming $\mathcal{O}(\varepsilon)$-small. These exponents determine the rate of relaxation to synchronization in the subspace of phases of the oscillator network. The Floquet exponents can be approximated by the eigenvalues of the matrix $\varepsilon\mathbf{M}$ [5], where:

$$M_{ij} = \frac{1}{T}\int_0^T \mathbf{Z}(t)^{\mathsf T}\, \mathbf{J}^{(ij)}(t)\, \dot{\mathbf{x}}_0(t)\, \mathrm{d}t \quad (i \neq j), \qquad M_{ii} = -\sum_{j \neq i} M_{ij}.$$  (6)

Here $\mathbf{Z}(t)$ is the vector of iPRCs (see eq. 4) along each dimension of the uncoupled oscillator and $\dot{\mathbf{x}}_0(t)$ is the right-hand side of the uncoupled oscillator evaluated along the limit cycle. The Jacobian matrix $\mathbf{J}^{(ij)}(t)$ describes how the dimensions of the oscillators $i$ and $j$ are coupled to each other along the limit cycle.
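This claim can be checked numerically in the smallest nontrivial case. The sketch below (all values are illustrative assumptions) considers two identical Poincaré oscillators with weak one-to-one diffusive coupling; for this coupling the normalization of the iPRC gives M = [[−1, 1], [1, −1]], so the averaged theory predicts one trivial zero Floquet exponent and one nontrivial exponent −2ε, i.e. a Floquet multiplier exp(−2ετ). The code computes the monodromy matrix over one period and compares:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Two identical Poincaré oscillators with weak one-to-one diffusive
# coupling (hypothetical parameter values).
gam, A, tau, eps = 0.2, 1.0, 24.0, 0.01
om = 2 * np.pi / tau

def f_single(v):
    x, y = v
    r = np.hypot(x, y)
    return np.array([gam * x * (A - r) - om * y,
                     gam * y * (A - r) + om * x])

def J_single(v):
    x, y = v
    r = np.hypot(x, y)
    return np.array([[gam * (A - r) - gam * x * x / r, -gam * x * y / r - om],
                     [-gam * x * y / r + om, gam * (A - r) - gam * y * y / r]])

def rhs(t, z):
    """Coupled system together with its variational equation."""
    v1, v2 = z[:2], z[2:4]
    Phi = z[4:].reshape(4, 4)
    dv1 = f_single(v1) + eps * (v2 - v1)
    dv2 = f_single(v2) + eps * (v1 - v2)
    Jfull = np.zeros((4, 4))
    Jfull[:2, :2] = J_single(v1) - eps * np.eye(2)
    Jfull[2:, 2:] = J_single(v2) - eps * np.eye(2)
    Jfull[:2, 2:] = eps * np.eye(2)
    Jfull[2:, :2] = eps * np.eye(2)
    return np.concatenate([dv1, dv2, (Jfull @ Phi).ravel()])

# Start on the synchronized orbit (both cells in phase on the circle r = A).
z0 = np.concatenate([[A, 0.0, A, 0.0], np.eye(4).ravel()])
sol = solve_ivp(rhs, [0.0, tau], z0, rtol=1e-10, atol=1e-12)
Phi_T = sol.y[4:, -1].reshape(4, 4)
mults = np.sort(np.abs(np.linalg.eigvals(Phi_T)))[::-1]

# Largest multiplier: 1 (free phase). Second: the weakly damped
# phase-difference mode, predicted as exp(-2*eps*tau).
print(mults[:2], np.exp(-2 * eps * tau))
```

The two remaining multipliers correspond to the strongly damped amplitude modes and are much smaller.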
In fact this result can be proven for a more general setup by using the phase reduction method [1]. Consider the network of coupled heterogeneous $n$-dimensional oscillators:

$$\dot{\mathbf{x}}_i = \mathbf{F}_i(\mathbf{x}_i) + \varepsilon \sum_j \mathbf{G}_{ij}(\mathbf{x}_i, \mathbf{x}_j),$$  (7)

where we can expand the right-hand side for small heterogeneity as $\mathbf{F}_i = \langle\mathbf{F}\rangle + \varepsilon\, \mathbf{f}_i$. Thus, we can write the coupled system as:

$$\dot{\mathbf{x}}_i = \langle\mathbf{F}\rangle(\mathbf{x}_i) + \varepsilon \Big[\mathbf{f}_i(\mathbf{x}_i) + \sum_j \mathbf{G}_{ij}(\mathbf{x}_i, \mathbf{x}_j)\Big].$$  (8)

Further we assume that the differences between the oscillators are small and, for simplicity, of the same order of magnitude as the coupling strength $\varepsilon$. This implies that the deviations in the oscillator frequencies are also $\mathcal{O}(\varepsilon)$-small. The imaginary average oscillator, obtained by averaging over the uncoupled oscillators, $\langle\mathbf{F}\rangle = \frac{1}{N}\sum_i \mathbf{F}_i$, has a limit-cycle solution $\langle\mathbf{x}_0\rangle(t)$ with average period $\langle T\rangle$ and frequency $\langle\omega\rangle = 2\pi/\langle T\rangle$. $\mathbf{Z}(t)$ is the vector of iPRCs of this uncoupled average oscillator and is $\langle T\rangle$-periodic. To avoid complicated expressions it is convenient to redefine $\mathbf{Z}$ and $\langle\mathbf{x}_0\rangle$ as $2\pi$-periodic functions of the phase:

$$\mathbf{Z}(\phi) := \mathbf{Z}\big(\phi/\langle\omega\rangle\big), \qquad \langle\mathbf{x}_0\rangle(\phi) := \langle\mathbf{x}_0\rangle\big(\phi/\langle\omega\rangle\big).$$  (9)

The reduction to the individual oscillator phases yields [1]:

$$\dot{\phi}_i = \omega_i + \varepsilon\, \mathbf{Z}(\phi_i)^{\mathsf T} \Big[\mathbf{f}_i\big(\langle\mathbf{x}_0\rangle(\phi_i)\big) + \sum_j \mathbf{G}_{ij}\big(\langle\mathbf{x}_0\rangle(\phi_i), \langle\mathbf{x}_0\rangle(\phi_j)\big)\Big].$$  (10)

We now define the phase differences to the synchronized state, $\psi_i = \phi_i - \Omega t$, with $\Omega$ the frequency of the phase-locked state. $\Omega$ is determined by a solvability condition for the stationary phase differences and will be given in the next section. The evolution of the phase differences is then governed by:

$$\dot{\psi}_i = \omega_i - \Omega + \varepsilon\, \mathbf{Z}(\Omega t + \psi_i)^{\mathsf T}\Big[\mathbf{f}_i\big(\langle\mathbf{x}_0\rangle(\Omega t + \psi_i)\big) + \sum_j \mathbf{G}_{ij}\big(\langle\mathbf{x}_0\rangle(\Omega t + \psi_i), \langle\mathbf{x}_0\rangle(\Omega t + \psi_j)\big)\Big].$$  (11)

Here we assume that the differences of the individual frequencies to the synchronized frequency are $\mathcal{O}(\varepsilon)$-small, $\omega_i - \Omega = \varepsilon\, \Delta\omega_i$. We will see that this is justified if the differences to the average oscillator frequency are $\mathcal{O}(\varepsilon)$-small, which we assumed above. Then we can write:

$$\dot{\psi}_i = \varepsilon\Big(\Delta\omega_i + \mathbf{Z}(\Omega t + \psi_i)^{\mathsf T}\Big[\mathbf{f}_i + \sum_j \mathbf{G}_{ij}\Big]\Big).$$  (12)

Since $\varepsilon$ is small, the changes in the $\psi_i$ are slow compared to the movement of the reference oscillator. Therefore, we are permitted to average over one cycle $\langle T\rangle = 2\pi/\langle\omega\rangle$ and consider the $\psi_i$ on the slow time scale $\tau = \varepsilon t$:

$$\frac{\mathrm{d}\psi_i}{\mathrm{d}\tau} = \Delta\omega_i + \frac{1}{2\pi}\int_0^{2\pi} \mathbf{Z}(\theta + \psi_i)^{\mathsf T}\Big[\mathbf{f}_i\big(\langle\mathbf{x}_0\rangle(\theta+\psi_i)\big) + \sum_j \mathbf{G}_{ij}\big(\langle\mathbf{x}_0\rangle(\theta+\psi_i), \langle\mathbf{x}_0\rangle(\theta+\psi_j)\big)\Big]\,\mathrm{d}\theta.$$  (13)

The integral term is only a function of the phase differences $\psi_j - \psi_i$.
This can be seen by rewriting the coupling term with the substitution $\theta \to \theta - \psi_i$:

$$\Gamma_{ij}(\psi_j - \psi_i) = \frac{1}{2\pi}\int_0^{2\pi} \mathbf{Z}(\theta)^{\mathsf T}\, \mathbf{G}_{ij}\big(\langle\mathbf{x}_0\rangle(\theta), \langle\mathbf{x}_0\rangle(\theta + \psi_j - \psi_i)\big)\, \mathrm{d}\theta.$$  (14)

The heterogeneity term involving $\mathbf{f}_i$ averages to a constant frequency shift and can be absorbed into $\Delta\omega_i$. Due to the in-phase locking of the oscillators we can linearize the coupling term around small phase differences and finally obtain:

$$\frac{\mathrm{d}\psi_i}{\mathrm{d}\tau} = \Delta\omega_i + c_i + \sum_j \Gamma'_{ij}(0)\,(\psi_j - \psi_i),$$  (15)

where $c_i = \sum_j \Gamma_{ij}(0) = \sum_j \frac{1}{2\pi}\int_0^{2\pi} \mathbf{Z}(\theta)^{\mathsf T} \mathbf{G}_{ij}\big(\langle\mathbf{x}_0\rangle(\theta), \langle\mathbf{x}_0\rangle(\theta)\big)\,\mathrm{d}\theta$ is the contribution to the $i$-th oscillator's frequency arising due to the coupling to the other oscillators. We can write the system in matrix form using the matrix $\mathbf{M}$ from eq. 6:

$$\frac{\mathrm{d}\boldsymbol{\psi}}{\mathrm{d}\tau} = \tilde{\boldsymbol{\omega}} + \mathbf{M}\, \boldsymbol{\psi}, \qquad \tilde{\omega}_i = \Delta\omega_i + c_i.$$  (16)

A phase distribution around the synchronized state is given by solving the linear equation system $\mathbf{M}\boldsymbol{\psi} = -\tilde{\boldsymbol{\omega}}$, and we will consider it in more detail in the next section. The stability and time scale of approaching this phase distribution are given by the eigenvalues of the matrix $\mathbf{M}$. Transforming the system into normal coordinates $\xi_k$, given by the eigenvectors of the matrix $\mathbf{M}$, makes this more obvious [6]:

$$\frac{\mathrm{d}\xi_k}{\mathrm{d}\tau} = \tilde{\omega}'_k + \lambda_k\, \xi_k.$$  (17)

Note that some of the eigenvalues might also have algebraic and/or geometric multiplicity greater than one, which we did not take into account here. If all non-trivial eigenvalues of $\mathbf{M}$ have a negative real part, the synchronized phase distribution is stable and all oscillators are locked in phase with frequency $\Omega$.
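A minimal numerical sketch of this linearized phase-difference dynamics (the random network and frequency deviations below are illustrative assumptions, not taken from the SCN model): it verifies that the matrix M has a single zero eigenvalue with all other eigenvalues negative, and that the phase differences relax to the stationary distribution:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 20

# Hypothetical symmetric coupling weights a_ij on a random graph.
adj = np.triu((rng.random((N, N)) < 0.5).astype(float), 1)
adj = adj + adj.T
weights = rng.uniform(0.5, 1.5, size=(N, N))
Aw = adj * (weights + weights.T) / 2.0

# Matrix M: off-diagonal entries a_ij, diagonal entries -sum_j a_ij.
M = Aw - np.diag(Aw.sum(axis=1))

ev = np.linalg.eigvalsh(M)  # ascending order; one zero, the rest negative
print(np.isclose(ev[-1], 0.0), np.all(ev[:-1] < 0))

# Linearized phase-difference dynamics on the slow time scale:
# dpsi/dtau = omega_tilde + M psi.
omega_tilde = rng.normal(0.0, 0.1, N)
omega_tilde -= omega_tilde.mean()  # solvability condition (symmetric M)
psi = rng.normal(0.0, 1.0, N)
dt = 0.02
for _ in range(50000):  # explicit Euler integration
    psi = psi + dt * (omega_tilde + M @ psi)

# Stationary phase distribution: M psi = -omega_tilde (up to a uniform shift).
print(np.allclose(M @ psi + omega_tilde, 0.0, atol=1e-8))
```

Since the dynamics is linear, the Euler iteration converges to the exact stationary solution; the component along the uniform-shift direction stays untouched, reflecting the zero eigenvalue.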

Synchronized frequency and its phase distribution
It was shown that the magnitude of the Floquet exponents determines how fast the oscillators, or groups of oscillators, synchronize their phases to each other [7,8]. In particular, near-zero eigenvalues indicate a clustering in the network, with groups of oscillators that do not, or only weakly, synchronize to each other. It was shown numerically that the phase-of-peak distribution becomes broader the more near-zero eigenvalues the Laplacian of the network possesses, and consequently the more clustered the network is. Here, we present an analytical derivation of this relationship for the general case considered above.
The synchronized phase distribution is given by solving the linear equation system:

$$\mathbf{M}\, \boldsymbol{\psi} = -\tilde{\boldsymbol{\omega}}.$$  (18)

A simple inverse of the matrix $\mathbf{M}$ does not exist, because one of its eigenvalues is zero, $\mathbf{M}\mathbf{e} = 0$, with corresponding eigenvector $\mathbf{e} = (1, \ldots, 1)^{\mathsf T}/\sqrt{N}$. This means that the system is neutrally stable to a simultaneous shift in all phases. This is to be expected, since we can view the overall network as one oscillator, which again has a zero eigenvalue in the direction of its phase.
A solution to eq. 18 can be given in terms of the pseudoinverse $\mathbf{M}^{+}$ of $\mathbf{M}$:

$$\boldsymbol{\psi} = -\mathbf{M}^{+}\, \tilde{\boldsymbol{\omega}}.$$  (19)

The linear equation system in eq. 18 is only consistent, and thus has at least one exact solution, when the following condition holds:

$$\mathbf{u}^{\mathsf T}\tilde{\boldsymbol{\omega}} = 0, \qquad \mathbf{M}^{\mathsf T}\mathbf{u} = 0,$$  (20)

where $\mathbf{u}$ is the kernel vector of $\mathbf{M}^{\mathsf T}$. If this equation does not hold, the solution in eq. 19 will only be a least-squares fit to eq. 18 and thus not a stable solution, because then the right-hand side in eq. 16 is not exactly zero. Using this condition we can determine $\Omega$:

$$\sum_i u_i \Big(\frac{\omega_i - \Omega}{\varepsilon} + c_i\Big) = 0,$$  (21)

and finally:

$$\Omega = \frac{\sum_i u_i\, (\omega_i + \varepsilon c_i)}{\sum_i u_i}.$$  (22)

Thus, besides the variations in the oscillators, the kernel vector of $\mathbf{M}^{\mathsf T}$ determines the locked frequency. For the derivation to be closed, we now need to show that the deviations between the synchronized frequency and the individual frequencies are $\mathcal{O}(\varepsilon)$-small when the deviations from the average oscillator frequency $\langle\omega\rangle = \frac{1}{N}\sum_i \omega_i$ are of order $\varepsilon$, as assumed in the previous derivations:

$$|\omega_i - \Omega| \le |\omega_i - \langle\omega\rangle| + \frac{\big|\sum_j u_j\, (\langle\omega\rangle - \omega_j - \varepsilon c_j)\big|}{\big|\sum_j u_j\big|} = \mathcal{O}(\varepsilon).$$  (23)

This shows that the averaging in eq. 13 is justified. In a similar way we obtain:

$$\tilde{\omega}_i = \frac{\omega_i - \langle\omega\rangle}{\varepsilon} + (c_i - \langle c\rangle) + \frac{\langle\omega\rangle - \Omega}{\varepsilon} + \langle c\rangle,$$  (24)

where $\langle c\rangle = \frac{1}{N}\sum_i c_i$ and the last two terms are identical for all oscillators. Thus the vector $\tilde{\boldsymbol{\omega}}$ is mainly determined by the deviations from the average values of the oscillator frequency and the coupling contribution. Eq. 19 can be simplified by noting that:

$$\mathbf{M}(\boldsymbol{\psi} + \alpha\, \mathbf{e}) = \mathbf{M}\boldsymbol{\psi} \quad \text{for any } \alpha.$$  (25)

Therefore, as expected, the solution is not unique with respect to a simultaneous shift of all oscillators. For convenience we consider the solution with $\sum_i \psi_i = 0$.
To characterize the distribution further, we now assume that $\tilde{\boldsymbol{\omega}}$ is Gaussian distributed with variance $\sigma_{\tilde\omega}^2$. Since $\boldsymbol{\psi}$ is a linear transformation of $\tilde{\boldsymbol{\omega}}$, its distribution is also Gaussian, with variance:

$$\sigma_{\psi}^2 = \sigma_{\tilde\omega}^2 \sum_{k:\, s_k > 0} \frac{1}{s_k^2},$$  (26)

where the $s_k$ are the singular values of $\mathbf{M}$. This shows that the variance is mostly determined by the squared singular values that are near zero. Moreover, the singular vectors corresponding to these near-zero singular values are close to the kernel vector $\mathbf{e} = (1, \ldots, 1)^{\mathsf T}/\sqrt{N}$. This can be seen from the definition of the singular vectors:

$$\mathbf{M}^{\mathsf T}\mathbf{M}\, \mathbf{v}_k = s_k^2\, \mathbf{v}_k,$$  (27)

so that for $s_k \to 0$ the vector $\mathbf{v}_k$ approaches the kernel of $\mathbf{M}$.
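The role of near-zero singular values can be illustrated numerically. In the sketch below (hypothetical toy networks, not the SCN topology), the stationary phase spread is sampled for Gaussian frequency deviations on a homogeneous all-to-all network and on a two-community network with a single bottleneck link; the latter has a near-zero singular value and a correspondingly broader phase distribution:

```python
import numpy as np

rng = np.random.default_rng(1)

def M_of(Aw):
    """Laplacian-type matrix: off-diagonal a_ij, diagonal -sum_j a_ij."""
    return Aw - np.diag(Aw.sum(axis=1))

def phase_spread(M, n_samples=2000, sigma=0.05):
    """Sample psi = M^+ omega_tilde for Gaussian omega_tilde."""
    Mpinv = np.linalg.pinv(M)
    w = rng.normal(0.0, sigma, size=(n_samples, M.shape[0]))
    w -= w.mean(axis=1, keepdims=True)  # enforce the solvability condition
    psi = w @ Mpinv.T
    return psi.var()

N = 30
# (a) homogeneous all-to-all network
A_full = np.ones((N, N)) - np.eye(N)
# (b) two communities connected by a single bottleneck link
A_comm = np.zeros((N, N))
A_comm[:N // 2, :N // 2] = 1.0
A_comm[N // 2:, N // 2:] = 1.0
np.fill_diagonal(A_comm, 0.0)
A_comm[0, N // 2] = A_comm[N // 2, 0] = 1.0

v_full = phase_spread(M_of(A_full))
v_comm = phase_spread(M_of(A_comm))
s_comm = np.sort(np.linalg.svd(M_of(A_comm), compute_uv=False))

# Besides the trivial zero singular value, the community network has a
# near-zero singular value and a much broader phase distribution.
print(s_comm[:2], v_comm > v_full)
```

The inverse-square dependence on the singular values makes the bottleneck dominate the variance even though only a single link differs from a perfectly clustered network.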

Special cases of coupling
Next, we consider some special but common cases of coupling. If all oscillators are coupled symmetrically, additively and in the same way, $\mathbf{G}_{ij}(\mathbf{x}_i, \mathbf{x}_j) = a_{ij}\, \mathbf{g}(\mathbf{x}_i, \mathbf{x}_j)$ with $a_{ij} = a_{ji}$, the matrix $\mathbf{M}$ is symmetric and its kernel vector coincides with $\mathbf{e}$, so the synchronized frequency in eq. 22 is:

$$\Omega = \frac{1}{N}\sum_i (\omega_i + \varepsilon c_i) = \langle\omega\rangle + \varepsilon\langle c\rangle.$$  (28)

The system in eq. 16 simplifies to:

$$\frac{\mathrm{d}\psi_i}{\mathrm{d}\tau} = \tilde{\omega}_i + \Gamma'(0) \sum_j a_{ij}\, (\psi_j - \psi_i),$$  (29)

where $\Gamma$ is the coupling function common to all pairs. In our model the bidirectional couplings between the oscillators are weighted by the number of connections $n_i$ the $i$-th oscillator has and are therefore not symmetric (see manuscript, Section: Oscillator Network, eq. 3). However, since we can write the matrix $\mathbf{M}$ as follows:

$$\mathbf{M} = \mathbf{D}^{-1}\mathbf{M}_s, \qquad \mathbf{D} = \mathrm{diag}(n_1, \ldots, n_N), \quad \mathbf{M}_s = \mathbf{M}_s^{\mathsf T},$$  (30)

the kernel vector of $\mathbf{M}^{\mathsf T}$ is simply given by $\mathbf{u} \propto (n_1, \ldots, n_N)^{\mathsf T}$. Thus the synchronized frequency from eq. 22 is given by:

$$\Omega = \frac{\sum_i n_i\, (\omega_i + \varepsilon c_i)}{\sum_i n_i}.$$  (31)

The weighting leads to the following linear system:

$$\frac{\mathrm{d}\psi_i}{\mathrm{d}\tau} = \tilde{\omega}_i + \frac{\Gamma'(0)}{n_i} \sum_j a_{ij}\, (\psi_j - \psi_i).$$  (32)

For our model the system simplifies further due to the linear coupling between the oscillators and the simple one-to-one coupling of the oscillator dimensions:

$$\Gamma'(0) = \frac{1}{2\pi}\int_0^{2\pi} \mathbf{Z}(\theta)^{\mathsf T}\, \frac{\mathrm{d}\langle\mathbf{x}_0\rangle(\theta)}{\mathrm{d}\theta}\, \mathrm{d}\theta = 1.$$  (33)

Here the last identity follows from the normalization condition of the iPRCs.
Therefore, in this case the matrix M is identical to L.

Spectral Graph Analysis and Synchronization
The community structure of undirected networks can be characterized by the eigenvalue spectrum of the network's symmetric Laplacian matrix L (see Eq. 5 of the main text). These eigenvalues can be ordered by increasing magnitude, and due to the special construction of the Laplacian it always has a trivial zero eigenvalue. Any further zero eigenvalue indicates that the graph is partitioned into disconnected components. However, eigenvalues that are near zero also indicate components that are only loosely connected. These eigenvalues can be related to the so-called algebraic connectivity of the network [9]. In particular, the second-smallest eigenvalue bounds the isoperimetric number of the graph via the Cheeger inequality [10]. In general, a small isoperimetric number indicates 'bottlenecks' in the graph, and the network can be subdivided into communities via these bottlenecks. This is done by considering the eigenvectors associated with the near-zero eigenvalues, also known as Fiedler vectors. Nodes can be assigned to specific communities according to the entries of these eigenvectors, and in fact there are algorithms that partition graphs based on the eigenvectors and eigenvalues [11,12].
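A small sketch of this spectral partitioning (a toy two-block random graph with assumed connection probabilities, not the SCN network): the second-smallest Laplacian eigenvalue is small and well separated from the third, and the sign pattern of the associated Fiedler vector recovers the two communities:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy two-community graph (assumed probabilities): dense within each
# block, sparse across blocks.
N = 40
block = np.repeat([0, 1], N // 2)
p_conn = np.where(block[:, None] == block[None, :], 0.6, 0.02)
A = np.triu((rng.random((N, N)) < p_conn).astype(float), 1)
A = A + A.T

L = np.diag(A.sum(axis=1)) - A   # symmetric graph Laplacian
ev, V = np.linalg.eigh(L)        # eigenvalues in ascending order
fiedler = V[:, 1]                # eigenvector of the second-smallest eigenvalue

# The sign pattern of the Fiedler vector recovers the two communities.
labels = (fiedler > 0).astype(int)
accuracy = max(np.mean(labels == block), np.mean(labels != block))
print(ev[:3], accuracy)
```

The trivial eigenvalue is numerically zero, while the Fiedler value reflects the sparse cut between the blocks and is much smaller than the third eigenvalue, which is set by the dense within-block connectivity.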
For non-symmetric networks, as in our study due to the local mean-field coupling, the singular value spectrum becomes important. We have calculated the singular value spectra of both the summer and the winter topology for 25 replicates of our network structure (Section 7, Figure S6). The large fraction of singular values near 1 corresponds to groups of oscillators that are well connected to each other. In our case the local mean-field coupling leads to a normalization of the singular values around 1, as was discussed in a previous study [7]. It should be noted that even a fully synchronized network still possesses one trivial zero eigenvalue/singular value, which describes a simultaneous phase advance of all oscillators.
The winter topology shows only the trivial zero singular values (25, one for each replicate) at zero (Section 7, Figure S6B). For the summer topology, on the other hand, one observes additional near-zero singular values, leading to a larger bin of the histogram around zero (>25, Section 7, Figure S6A). With the theoretical results from the previous Sections 2-4 it is now clear that the phase distribution is broader for the summer topology, since small singular values lead to a large variance of the phase distribution (see Eq. 26).
It was shown previously that the spectral properties of the Laplacian also affect the synchronization dynamics [6,7]. In particular, the individual eigenvalues can be related to separate timescales on which the oscillator communities associated with the Fiedler vectors synchronize. In these numerical studies, graphs with community structure, indicated by small non-zero eigenvalues as in the summer topology (Section 7, Figure S6A), showed worse synchronization behavior, in the sense that stronger coupling is needed to achieve a similar degree of phase synchronization as in a network without community structure. This is to be expected intuitively, since the bottlenecks limit the coupling from one community to the other and, in general, the communities are connected only via a few nodes. These bottlenecks also intuitively explain why the phase distribution is broader in the summer topology.
Moreover, for the winter topology a gap between the zero and non-zero singular values can be observed (Section 7, Figure S6B), indicating fast synchronization of the SCN cells in winter, since slow timescales are missing.

Entrainment of a single amplitude-phase oscillator
In this section, we analyze the entrainment behavior of a single oscillator of the type considered in our study (see Eqs. (1) and (2) in the main text). Ultimately, we aim to derive analytical expressions for the borders of the entrainment region (commonly referred to as the Arnold tongue). In line with our study, we consider entrainment by square-shaped signals. The amplitude-phase oscillators we consider are generally given by:

$$\dot{r} = \gamma\, r\, (A - r^{\,n}),$$  (36)

$$\dot{\phi} = \omega = \frac{2\pi}{T}.$$  (37)

Here T determines the free-running period of the oscillator and ω the corresponding frequency.
Circadian rhythms always have T ≈ 24 h. Integrating the radial equation by separation of variables shows that trajectories relax towards the limit-cycle amplitude r* = A^{1/n}. For the oscillator used in our study n = 1, which is commonly referred to as the Poincaré oscillator.
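A quick numerical sanity check of the Poincaré oscillator (n = 1; the parameter values below are illustrative only): trajectories relax onto the circle r = A and rotate with the free-running period τ:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Poincaré oscillator (illustrative parameters): relaxation rate gam,
# amplitude A, free-running period tau = 24 h.
gam, A, tau = 0.1, 1.0, 24.0
om = 2 * np.pi / tau

def rhs(t, v):
    x, y = v
    r = np.hypot(x, y)
    return [gam * x * (A - r) - om * y,
            gam * y * (A - r) + om * x]

sol = solve_ivp(rhs, [0.0, 20 * tau], [0.1, 0.0], dense_output=True,
                rtol=1e-9, atol=1e-12)

# After the transient the trajectory sits on the circle r = A ...
r_end = np.hypot(*sol.sol(20 * tau))
# ... and the state recurs after exactly one free-running period tau.
recurs = np.allclose(sol.sol(19 * tau), sol.sol(20 * tau), atol=1e-3)
print(r_end, recurs)
```

Because the angular velocity is independent of the radius, the period is exactly τ even during the amplitude transient; only the amplitude needs time to relax.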
Applying a forcing F p(t) to the x-coordinate (rectangular coordinates), where F is the entrainment amplitude and p(t) is a signal with frequency ω_e = 2π/T_e, the system in polar coordinates is given by:

$$\dot{r} = \gamma\, r\, (A - r^{\,n}) + F\, p(t)\, \cos\phi,$$  (38)

$$\dot{\phi} = \omega - \frac{F}{r}\, p(t)\, \sin\phi.$$  (39)

The key to analyzing this system is to separate the slow time scales of amplitude and/or phase adaptation from the fast time scale of the external forcing. The method of choice is the averaging technique introduced by Krylov and Bogolyubov [13]. However, a straightforward application of this method is hindered by the fast terms involving φ. A solution to this problem is to project the phase of the forced system onto a suitably chosen phase of the unperturbed limit cycle. A suitably chosen phase is here defined as a variable that increases uniformly with time [1].
Such a projection can be found by considering the unperturbed phase in Eq. (37). Performing the first steps of a separation of variables shows that a suitable phase is given by φ̃ = ωt, together with the corresponding projection of the perturbed phase onto it. After some tedious algebra we derive a simple expression for the phase response. For completeness, the phase response with respect to a forcing in the y-coordinate can be obtained in a similar manner. It shows that, in general, a forcing in the y-coordinate is not preferable, since it shifts the entrainment region away from the intrinsic period towards lower periods.
After defining the new phase φ̃, the system in Eqs. (38) and (39) can be rewritten in terms of it. It now makes sense to introduce the phase difference ψ = φ̃ − ω_e t to the external forcing and to consider several limit cases in order to obtain analytical expressions for the entrainment range.
First of all, let us consider rigid oscillators (large γ). In this case the amplitude of the oscillator does not change under forcing and remains at the limit-cycle value (r = 0 is an unstable solution).
Thus, the above system reduces to a single equation for the phase difference ψ. If the forcing amplitude F and the detuning ω − ω_e are small, then ψ is a slowly moving variable compared to the forcing, and we can average over one external period T_e; here ψ has to be interpreted as the variable on the slow time scale. The integral is easily performed for a sinusoidal or square-shaped forcing signal. To be specific, we consider here a square-shaped function with period T_e that starts with 1 at t = 0 and is set to 0 for t ≥ s, where s is the width of the entraining signal (see also main text). Evaluating the integral for this signal yields the entrainment region for rigid oscillators under rectangular forcing.

Let us now consider weak oscillators (small γ). In this case, the amplitude dynamics in Eq. (45) cannot be neglected. However, if F and the detuning are small, the amplitude dynamics can be treated by the same averaging method already used for the phase-difference dynamics; again, r and ψ should be interpreted as variables on the slow time scale. Even for a rectangular forcing the resulting integral is rather complicated, although it can be solved exactly.
Therefore, we consider two additional limit cases: sinusoidal oscillators and spiking oscillators. In the sinusoidal case the system reduces to the original system in Eqs. (38) and (39), and the integral under the previously defined rectangular forcing can be evaluated in closed form, which yields the equations for the steady-state solution. For the Poincaré oscillator in our study the resulting amplitude function is always greater than or equal to 0, goes through the origin (0,0), and has a maximum and a minimum in between; these extrema exist only if the detuning is small enough relative to the forcing amplitude. Thus, Eq. (57), which determines the entrained amplitude, has three solutions below this critical detuning and one solution above it. Therefore, the entrainment region is determined by the linear stability of these solutions, which follows from the coefficients of the quadratic characteristic polynomial of the system. According to the Hurwitz criterion, solutions for which the first coefficient is negative are unstable; the second coefficient is negative between the maximum and the minimum of the function in Eq. (57), so that these solutions are saddle points and likewise unstable. From this the entrainment border is obtained. We can see that the entrainment region of weak sinusoidal Poincaré oscillators is always larger than that of the corresponding rigid oscillators. The reason for this is the decrease of the entrained amplitude near the entrainment border.
Let us now consider weak spiking oscillators. We then have to deal with the full integral in Eq. (51), which can be calculated exactly. The terms involving the cotangents can be bounded, so that the equations decouple and the steady-state solutions are easily found. For the Poincaré oscillator amplitude we obtain a stable (+) and an unstable (−) solution. Here an interesting phenomenon appears for a weak spiking oscillator forced by square-shaped signals: the system possesses a low-amplitude attractor, whose phase follows from setting Eqs. (68) and (69) to zero; only the positive phase is a stable solution. Having now characterized the behavior of this low-amplitude attractor, we next derive the entrainment bounds approximately. As we have already seen above, in case a) the phase will always be reset towards the steady state, so that the oscillator phase cannot follow the external phase; in a strict sense this means that no entrainment is possible. On the other hand, in case b) the phase is shifted at the k-th resetting to a point above the steady state, which implies entrainment of the oscillator. Therefore, the upper condition for entrainment (in a strict sense) is approximately given by Eq. (72). On the other hand, since we only consider 1:1 entrainment, a corresponding lower bound can be given as well. These bounds become more exact the deeper one is in the weak, spiking limit. However, as mentioned above, the bound in Eq. (72) concerns entrainment in a strict sense, meaning that the phase φ̃ of the oscillator, measured as the phase of the unperturbed oscillator, follows the external phase. To overcome this limitation we may view each resetting event as a resetting of φ̃ during which it crosses the external phase. Then the bound in Eq. (72) is meaningless, which means that entrainment to periods smaller than the internal period is always possible for weak, spiking oscillators, due to the domination of the dynamics by the rectangular forcing.
All entrainment borders of the individual oscillator in the specific cases considered are shown in Figure S7 (Section 7).
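The rigid-oscillator case can also be cross-checked by direct simulation. The sketch below (illustrative values for the forcing amplitude F and signal width s; the iPRC Z_x(φ) = −sin(ωφ)/(Aω) of the Poincaré oscillator is assumed) integrates the phase equation under rectangular forcing and tests for a bounded phase difference, which indicates 1:1 entrainment close to the intrinsic 24 h period but not at a large detuning:

```python
import numpy as np

# Rigid Poincaré oscillator reduced to its phase (in time units), driven
# by a rectangular light signal of period T_ext and width s acting on x
# with amplitude F. F and s are illustrative assumptions.
A, tau = 1.0, 24.0
om = 2 * np.pi / tau
F, s = 0.05, 12.0

def entrains(T_ext, n_cycles=300, dt=0.01):
    """1:1 entrainment test: bounded phase difference to the forcing."""
    om_ext = 2 * np.pi / T_ext
    phi, t = 0.0, 0.0
    diffs = []
    for _ in range(int(n_cycles * T_ext / dt)):
        p = 1.0 if (t % T_ext) < s else 0.0
        # phase equation: dphi/dt = 1 + F * p(t) * Z_x(phi)
        phi += dt * (1.0 + F * p * (-np.sin(om * phi) / (A * om)))
        t += dt
        diffs.append(om * phi - om_ext * t)
    tail = np.array(diffs[-int(50 * T_ext / dt):])
    return bool(np.ptp(tail) < 1.5)  # bounded over the last 50 cycles?

locked_near = entrains(24.2)  # small detuning: inside the Arnold tongue
locked_far = entrains(27.0)   # large detuning: outside for this F
print(locked_near, locked_far)
```

With the averaged theory, the half-width of the 1:1 tongue for this forcing is approximately (F/A)·sin(πs/T_ext)/π in the detuning ω − ω_ext, which is consistent with the two test periods falling inside and outside the region, respectively.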

Supplementary
Figures S1-S11

Figure S1. Transient time evolution of the mean field for summer (δ = 0.005, t_p = 16; A) and winter (δ = 0.01, t_p = 8; B) conditions. All simulations were started with initial conditions randomly distributed around x = 1 and y = 0, according to a normal distribution with a standard deviation of 0.2. Only a few periods are necessary for the system to reach its final dynamical state.

Evidently, the network structure is the crucial factor defining the shape of the global activity, whereas the duration of the light input has an almost negligible effect.