Dynamical Models Explaining Social Balance and Evolution of Cooperation

Social networks with positive and negative links often split into two antagonistic factions. Examples of such a split abound: revolutionaries versus an old regime, Republicans versus Democrats, Axis versus Allies during the Second World War, or the Western versus the Eastern bloc during the Cold War. Although this structure, known as social balance, is well understood, it is not clear how such factions emerge. An earlier model could explain the formation of such factions if reputations were assumed to be symmetric. We show this is no longer the case for non-symmetric reputations, and propose an alternative model which (almost) always leads to social balance, thereby explaining the tendency of social networks to split into two factions. In addition, the alternative model may lead to cooperation when faced with defectors, contrary to the earlier model. The difference between the two models may be understood in terms of the underlying gossiping mechanism: whereas the earlier model assumed that an individual adjusts his opinion about somebody by gossiping about that person with everybody in the network, we assume instead that the individual gossips with that person about everybody.

The norm induced by this inner product (Eq. S1) is the Frobenius norm |X|_F = (tr(XX^T))^{1/2}. Recall that the Frobenius norm is unitarily invariant, i.e. if U is orthogonal (i.e. UU^T = I_n), then

|UX|_F = |XU|_F = |X|_F. (S2)

We denote by I_n the n x n identity matrix, and by J_n a specific skew-symmetric matrix:

J_n = ( 0, I_{n/2} ; -I_{n/2}, 0 ),  n even. (S3)
For all other linear algebra related terminology and properties we refer to [1]. We briefly review two key ingredients of Heider's (static) theory on social balance, namely those of a balanced triangle and a balanced network:

Definition 1. A triangle of (not necessarily distinct) agents i, j and k is called balanced if

X_ij X_jk X_ki > 0. (S4)

A network is said to be balanced if all triangles of agents in the network are balanced.
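Definition 1 translates directly into a finite check over all triples of agents. The following minimal numerical sketch (the helper name `is_balanced` and the example matrix are ours, not from the text) verifies balance for a two-faction opinion matrix:

```python
import numpy as np

def is_balanced(X):
    """Heider balance: every triangle (i, j, k) of (not necessarily
    distinct) agents must satisfy X[i, j] * X[j, k] * X[k, i] > 0."""
    n = X.shape[0]
    return all(X[i, j] * X[j, k] * X[k, i] > 0
               for i in range(n) for j in range(n) for k in range(n))

# Two factions {0, 1} and {2, 3}: positive ties within, negative across.
X = np.array([[ 1.,  1., -1., -1.],
              [ 1.,  1., -1., -1.],
              [-1., -1.,  1.,  1.],
              [-1., -1.,  1.,  1.]])
print(is_balanced(X))   # True
X[0, 2] = 1.0           # a single positive cross-faction tie breaks balance
print(is_balanced(X))   # False
```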
It turns out that a balanced network takes on a specific structure, in that at most 2 factions emerge, where members within each faction have positive opinions about each other, but members in different factions have negative opinions about each other. This result is known as the Structure Theorem [2,3]:

Theorem 1 (Structure Theorem in [2,3]). Let X represent a balanced network. Then up to a permutation of agents, the matrix X has the following sign structure: all entries within each of (at most) two diagonal blocks are positive, and all entries outside these blocks are negative.

Consider the model studied numerically in [4] and analysed for symmetric initial conditions in [5]:

Ẋ = X^2, X(0) = X_0, (S5)

where each X_ij is real-valued and denotes the opinion agent i has about agent j. Positive values mean that agent i thinks favourably about j, whereas negative values mean that i thinks unfavourably about j. More explicitly, model S5 can also be written entrywise:

Ẋ_ij = Σ_k X_ik X_kj. (S6)

The basic question in this context is whether or not the solutions of S5 evolve towards a state which corresponds to a balanced network.
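The matrix form S5 and the entrywise form S6 agree, which is easy to confirm numerically; a small sketch (the random matrix and variable names are ours):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((5, 5))

matrix_rhs = X @ X   # matrix form of the right-hand side of the model

# Entrywise form: agent i adjusts his opinion about j by gossiping
# about j with every intermediary k.
entry_rhs = np.array([[sum(X[i, k] * X[k, j] for k in range(5))
                       for j in range(5)] for i in range(5)])

print(np.allclose(matrix_rhs, entry_rhs))  # True
```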

Normal initial condition
We start by defining N = {X ∈ R^{n x n} | XX^T = X^T X}, the set of real, normal matrices. Notice that if X belongs to N then so does X^2, hence the set N is invariant for Ẋ = X^2.
Recall that normal matrices are (block-)diagonalisable with blocks of size at most 2 by an orthogonal transformation: if X_0 ∈ N, then

X_0 = U Λ_0 U^T, (S7)

where Λ_0 = diag(A_1, ..., A_k, B_1, ..., B_l) consists of real 1 x 1 scalar blocks A_i = a_i and real 2 x 2 blocks B_j = α_j I_2 + β_j J_2 with β_j ≠ 0. Note that if Λ(t) is the solution to the initial value problem Λ̇ = Λ^2, Λ(0) = Λ_0, then X(t) := U Λ(t) U^T is the solution to Eq. S5. This shows it is sufficient to solve system S5 in case of scalar X or in case of a specific, 2 x 2, normal matrix X. The scalar case is easy to solve: the solution of ẋ = x^2, x(0) = x_0, is

x(t) = x_0 / (1 - x_0 t), (S8)

which is easily verified, so we turn to the 2 x 2 case by considering:

Lemma 1. Consider the initial value problem

Ẋ = X^2, X(0) = α I_2 + β J_2, where β ≠ 0. (S9)

Then the forward solution X(t) of S9 is defined for all t in [0, +∞), and converges to 0 as t → +∞.

Proof. Let X_0 = S_0 + A_0, S_0 = α I_2 and A_0 = β J_2, where J_2 is as defined in Eq. S3. Then the solution X(t) of S9 can be decomposed as S(t) + A(t), where

Ṡ = S^2 + A^2, S(0) = S_0, (S10)
Ȧ = SA + AS, A(0) = A_0. (S11)

Figure S1. Phase portrait of system S12-S13. Circular orbits in the upper half plane (a > 0) are traversed counter clockwise, whereas circular orbits in the lower half plane (a < 0) are traversed clockwise.
Note that S10 is a matrix Riccati differential equation with the property that the set L := {s I_2 + a J_2 | s, a ∈ R} is an invariant set under the flow. Therefore it suffices to solve the scalar Riccati differential equations corresponding to the dynamics of the scalar coefficients s and a:

ṡ = s^2 - a^2, s(0) = α, (S12)
ȧ = 2as, a(0) = β, (S13)

whose solution is given implicitly by:

c (s^2 + a^2) = a,

where c is an integration constant. So, the orbits form circles which are centred at (0, 1/(2c)) and pass through (0, 0) if c ≠ 0, and the orbit is the line a = 0 if c = 0. The phase portrait of system S12-S13 is illustrated in Fig. S1. All solutions (s(t), a(t)) of system S12-S13 not starting on the s-axis converge to zero as t → +∞, and approach the origin in the second quadrant for solutions in the upper half plane, and in the third quadrant for solutions in the lower half plane. Moreover, since the s-axis is the tangent line to every circular orbit at the origin, the slopes a(t)/s(t) converge to 0 along every solution: lim_{t→+∞} a(t)/s(t) = 0. Consequently, the forward solution X(t) of S9 satisfies:

lim_{t→+∞} X(t) = 0.

Combining the solution for the scalar and 2 x 2 case yields our main result in the normal case:

Theorem 2. Let X_0 ∈ N, and let (U, Λ_0) be as in Eq. S7. Define

t̄_i = 1/a_i if a_i > 0, and t̄_i = +∞ otherwise, i = 1, ..., k,

and let t̄ = min_i t̄_i. Then the forward solution X(t) of S5 is defined for [0, t̄).
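The circular orbits of S12-S13 can also be confirmed numerically: along any solution the quantity a/(s^2 + a^2) should stay constant while the state spirals into the origin. A sketch using a standard RK4 integrator (the step size and initial point are made up by us):

```python
import numpy as np

def rhs(p):
    s, a = p
    return np.array([s**2 - a**2, 2.0 * a * s])   # system S12-S13

p0 = np.array([1.0, 0.5])              # (alpha, beta), off the s-axis
c0 = p0[1] / (p0[0]**2 + p0[1]**2)     # a/(s^2 + a^2): constant on each orbit

p, h = p0.copy(), 1e-3
for _ in range(20000):                 # classical RK4 steps up to t = 20
    k1 = rhs(p); k2 = rhs(p + h/2*k1)
    k3 = rhs(p + h/2*k2); k4 = rhs(p + h*k3)
    p += h/6 * (k1 + 2*k2 + 2*k3 + k4)

print(abs(p[1] / (p[0]**2 + p[1]**2) - c0) < 1e-3)  # still on the same circle
print(np.linalg.norm(p) < 0.1)                      # converging to the origin
```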
If there is a unique i* ∈ {1, ..., k} such that t̄ = t̄_{i*} is finite, then

lim_{t→t̄} X(t)/|X(t)|_F = U_{i*} U_{i*}^T,

where U_{i*} is the i*-th column of U, an eigenvector corresponding to eigenvalue a_{i*} of X_0.
Proof. Consider the initial value problem Λ̇ = Λ^2, Λ(0) = Λ_0. Its solution is given by

Λ(t) = diag( a_1/(1 - a_1 t), ..., a_k/(1 - a_k t), X_1(t), ..., X_l(t) ),

where for all j = 1, ..., l, X_j(t) is the forward solution of S9, which is defined for all t in [0, +∞), and converges to 0 as t → +∞ by Lemma 1. This clearly shows that Λ(t) is defined in forward time for t in [0, t̄). Since the solution of S5 is given by X(t) = U Λ(t) U^T, X(t) is also defined in forward time for t in [0, t̄). It follows from S2 that

|X(t)|_F = |Λ(t)|_F.

If i* ∈ {1, ..., k} is the unique value such that t̄ = t̄_{i*}, then using S2:

lim_{t→t̄} X(t)/|X(t)|_F = U ( lim_{t→t̄} Λ(t)/|Λ(t)|_F ) U^T = U e_{i*} e_{i*}^T U^T = U_{i*} U_{i*}^T,

where e_{i*} denotes the i*-th standard unit basis vector of R^n.
Theorem 2 provides a sufficient condition guaranteeing that social balance in the sense of Definition 1 is achieved. If X_0 has a simple, positive, real eigenvalue a_{i*} with t̄ = t̄_{i*} = 1/a_{i*}, and if no entry of the eigenvector U_{i*} is zero, then the network becomes balanced. Indeed, there holds that, up to a permutation of its entries, the sign pattern of the eigenvector U_{i*} is either

(+ ... +)^T or (+ ... + - ... -)^T.

In either case, Theorem 1 implies that the normalized state of the system becomes balanced in finite time.
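The reason balance is automatic here is that every triangle product of the limit U_{i*} U_{i*}^T equals (u_i u_j u_k)^2 > 0 whenever no entry of the eigenvector is zero. A quick numerical sanity check (the random symmetric matrix is our own choice):

```python
import numpy as np

rng = np.random.default_rng(1)
B = rng.standard_normal((6, 6))
X0 = B + B.T                        # symmetric, hence normal

w, V = np.linalg.eigh(X0)
u = V[:, np.argmax(w)]              # eigenvector of the largest eigenvalue

L = np.outer(u, u)                  # limiting normalized state u u^T
n = len(u)
balanced = all(L[i, j] * L[j, k] * L[k, i] > 0
               for i in range(n) for j in range(n) for k in range(n))
print(balanced)  # True: each triangle product equals (u_i u_j u_k)^2 > 0
```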

Generic initial condition
Although Theorem 2 provides a sufficient condition for the emergence of social balance, it requires that the initial condition X_0 is normal. But the set N of normal matrices has measure zero in the set of all real n x n matrices, and thus the question arises if social balance will arise for non-normal initial conditions as well. We investigate this issue here, and will see that generically, social balance is not achieved. If X_0 is a general real n x n matrix, we can put it in real Jordan canonical form by means of a similarity transformation:

X_0 = T Λ_0 T^{-1}, (S14)

with

Λ_0 = diag(A_1, ..., A_k, B_1, ..., B_l), (S15)

where the A_i are real Jordan blocks corresponding to real eigenvalues a_i, and the B_j are real Jordan blocks corresponding to complex conjugate eigenvalue pairs α_j ± iβ_j with β_j ≠ 0. We again observe that if Λ(t) is the solution to the initial value problem Λ̇ = Λ^2, Λ(0) = Λ_0, then X(t) := T Λ(t) T^{-1} is the solution to Eq. S5. Again, it is sufficient to solve system S5 in case of specific block-triangular X of the form A_i or B_j as in S15. To deal with the first form A_i, we first consider more general, triangular Toeplitz initial conditions: upper triangular Toeplitz matrices X whose (i, j) entry equals x_{j-i+1} when j ≥ i, and 0 otherwise, (S16) with x_i(0) reals, and denote T T = {X | X is of the form S16}. It turns out that this is an invariant set for the system, which can be easily verified by noting that if X belongs to T T, then so does X^2.
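The invariance of T T is easy to verify numerically: squaring an upper triangular Toeplitz matrix yields another upper triangular Toeplitz matrix. A minimal sketch (the helper and test values are ours):

```python
import numpy as np

def upper_toeplitz(x):
    """Upper triangular Toeplitz matrix with first row x (form S16)."""
    n = len(x)
    X = np.zeros((n, n))
    for i in range(n):
        X[i, i:] = x[:n - i]
    return X

X = upper_toeplitz(np.array([2.0, -1.0, 0.5, 3.0]))
X2 = X @ X
# X^2 is again upper triangular Toeplitz: rebuilding it from its own
# first row reproduces it exactly, so the set T T is invariant.
print(np.allclose(X2, upper_toeplitz(X2[0, :])))  # True
```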
Proposition 2. Let X(0) ∈ T T with x_1(0) = a. Then the forward solution X(t) of S5 is defined on [0, t*), where t* = 1/a if a > 0 and t* = +∞ if a ≤ 0, belongs to T T, and satisfies

x_i(t) = p_i( 1/(1 - at) ),

where each p_i(z) is a polynomial of degree i:

p_i(z) = c_i z^i + (terms of degree 2, ..., i-1), (S17)

where c_i is some real constant, so that p_i(z) has no constant or first order terms when i > 1.
Proof. First note that system S5 can be solved recursively, starting with x_1(t), followed by x_2(t), x_3(t), and so on. Only the first equation, for x_1, is nonlinear, whereas the equations for x_2, x_3, ... are linear. The forward solution for x_1 is given by Eq. S8; we obtain the proof by induction on n. Assume the result holds for i = 1, ..., n, for some n ≥ 2, and consider the equation for x_{n+1}. Using that x_n(0) = 0 for n ≥ 2, the solution is given by the variation of constants formula. Since e^{∫_0^t 2x_1(s) ds} = x_2(t), and thus e^{-∫_0^s 2x_1(τ) dτ} = 1/x_2(s), the polynomials appearing in the integral take the form of Eq. S17; they are all missing first order and constant terms, and thus the integration yields that x_{n+1}(t) = p_{n+1}(1/(1 - at)), where K_{n+1} and c_{n+1} are certain constants appearing in p_{n+1} (which are related in some way which is irrelevant for what follows). This shows that x_{n+1}(t) is indeed of the form p_{n+1}(1/(1 - at)) with p_{n+1}(z) as in S17.
Next we consider equation S5 in case X(0) is a block triangular Toeplitz initial condition: an upper triangular Toeplitz matrix as in S16, but with the scalars x_i replaced by 2 x 2 blocks X_i = p_i I_2 + q_i J_2. We denote the set of such matrices by BT T. Again the set BT T is invariant for system S5. We use this to solve equation S5 in case X(0) is a real Jordan block corresponding to a pair of eigenvalues α ± iβ.
Lemma 3. Let X(0) ∈ BT T with X_1(0) = α I_2 + β J_2, where β ≠ 0. Then the forward solution X(t) of S5 is defined on [0, +∞), and it belongs to BT T.
Proof. Just like in the proof of Proposition 2, we note that system S5 can be solved recursively, starting with X_1(t), followed by X_2(t), X_3(t), and so on. Only the first equation, for X_1, is nonlinear, whereas the equations for X_2, X_3, ... are linear. To see this, we write these equations:

Ẋ_1 = X_1^2,
Ẋ_i = 2 X_1 X_i + (terms depending only on X_2, ..., X_{i-1}), i ≥ 2.

Here we have used the fact that X_1 X_i + X_i X_1 = 2 X_1 X_i, since any two matrices of the form p I_2 + q J_2 commute and the matrices X_i(t) are of this form.
By Lemma 1, the forward solution for X_1(t) is defined for all t in [0, +∞) (and, in fact, converges to zero as t → +∞).
Since the matrices X_1(t) commute for every pair of t's, the forward solution for X_2(t) is given by [6]

X_2(t) = e^{∫_0^t 2 X_1(s) ds} X_2(0), for t ∈ [0, +∞),

where this solution exists for all forward times t because X_1(t) is bounded and continuous. Similarly, the forward solution for X_i(t) when i > 2 is given by the variation of constants formula, for t ∈ [0, +∞), where these solutions are recursively defined for all forward times because the formula only involves integrals of continuous functions.
Combining both results puts us in a position to state and prove our main result.
Theorem 3. Let X(0) ∈ R^{n x n} and (T, Λ_0) as in S14 with S15. Let a_1 > a_2 ≥ ... ≥ a_k, with a_1 > 0 a simple eigenvalue with corresponding right and left eigenvectors U_1 and V_1^T respectively:

X_0 U_1 = a_1 U_1, V_1^T X_0 = a_1 V_1^T.

Then the forward solution X(t) of S5 is defined for [0, 1/a_1), and

lim_{t→1/a_1} X(t)/|X(t)|_F = U_1 V_1^T / |U_1 V_1^T|_F.

Proof. Consider the initial value problem Λ̇ = Λ^2, Λ(0) = Λ_0, whose solution is given by

Λ(t) = diag( A_1(t), ..., A_k(t), B_1(t), ..., B_l(t) ).

The matrices B_j(t), j = 1, ..., l, are the forward solutions of S5 with B_j(0) of the form B_j in S15, and by Lemma 3, they are defined for all t in [0, +∞). This clearly shows that Λ(t) is defined in forward time for t in [0, 1/a_1). Since the solution of S5 is given by X(t) = T Λ(t) T^{-1}, X(t) is also defined in forward time for t in [0, 1/a_1), and it follows that

lim_{t→1/a_1} X(t)/|X(t)|_F = T e_1 e_1^T T^{-1} / |T e_1 e_1^T T^{-1}|_F = U_1 V_1^T / |U_1 V_1^T|_F,

where e_1 denotes the first standard unit basis vector of R^n.
Theorem 3 implies that social balance is usually not achieved when X(0) is an arbitrary real initial condition. Indeed, if X_0 has a simple, positive, real eigenvalue a_1, and if we assume that no entry of the right and left eigenvectors U_1 and V_1^T is zero (an assumption which is generically satisfied), then up to a permutation of its entries, each of the sign patterns of U_1 and V_1^T is either (+ ... +) or (+ ... + - ... -), but the permutations achieving this for U_1 and for V_1^T are in general different. Consequently, the sign pattern of U_1 V_1^T is in general not symmetric, and Theorem 1 implies that the normalized state of the system does not become balanced in finite time. This shows that in general, unless X_0 is normal (so that Theorem 2 is applicable), we cannot expect that social balance will emerge for system S5.
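To see the failure of balance concretely, one can build a small non-normal X_0 with a prescribed simple dominant eigenvalue and inspect the sign pattern of U_1 V_1^T. The transformation matrix T below is a hypothetical example of ours, not taken from the text:

```python
import numpy as np

# Non-normal X0 with simple dominant eigenvalue a1 = 2, constructed so
# that its right/left eigenvectors u, v can be read off directly from T.
T = np.array([[1., 2., 0.],
              [1., 1., 3.],
              [2., 0., 1.]])
X0 = T @ np.diag([2., -1., -0.5]) @ np.linalg.inv(T)

u = T[:, 0]                  # right eigenvector of a1 = 2
v = np.linalg.inv(T)[0, :]   # left eigenvector of a1 = 2
L = np.outer(u, v)           # limiting direction U1 V1^T of X(t)/|X(t)|_F

# sign(L) is not symmetric, so the limit cannot have the two-faction
# structure of the Structure Theorem:
print(np.array_equal(np.sign(L), np.sign(L).T))  # False
```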
We now turn to the alternative model:

Ẋ = X X^T, X(0) = X_0, (S19)

where again, each X_ij denotes the real-valued opinion agent i has about agent j. As before, for i = j, the value of X_ii is interpreted as a measure of self-esteem of agent i. We can also write the equations entrywise:

Ẋ_ij = Σ_k X_ik X_jk. (S20)

As in the case of model Ẋ = X^2, we split up the analysis in two parts. First we consider system S19 with normal initial condition X_0, and we shall see that not all initial conditions lead to the emergence of a balanced network in this case, in contrast to the behaviour of S5. Secondly, we will see that for non-normal, generic initial conditions X_0, we typically do get the emergence of social balance, also contrasting the behaviour of S5.

Normal initial condition
As for the model Ẋ = X^2, the set N is invariant for system S19. By using the same diagonalisation as in Eq. S7, if Λ(t) is the solution to the initial value problem Λ̇ = Λ Λ^T, Λ(0) = Λ_0, then X(t) := U Λ(t) U^T is the solution to Eq. S19. This shows it is sufficient to solve system S19 in case of scalar X or in case of a specific 2 x 2 normal matrix X. The scalar case is easy to solve and follows Eq. S8, so we turn to the 2 x 2 case by considering

Ẋ = X X^T, X(0) = α I_2 + β J_2, where β ≠ 0. (S21)

We define the angle φ as

φ = arctan(α/β) ∈ (-π/2, π/2). (S22)

Lemma 4. Let

t̄ = (π/2 - φ)/β. (S23)

Then the forward solution X(t) of S21 is:

X(t) = β tan(βt + φ) I_2 + β J_2, for t ∈ [0, t̄). (S24)

Proof. Let X_0 = S_0 + A_0, S_0 = α I_2, and A_0 = β J_2. Then the solution X(t) of S21 can be decomposed as S(t) + A(t), where

Ṡ = S^2 - A^2 + AS - SA, S(0) = S_0,
Ȧ = 0, A(0) = A_0,

so A(t) = A_0, and the equation for S reduces to

Ṡ = S^2 + β^2 I_2 + A_0 S - S A_0, S(0) = α I_2. (S27)

Note that S27 is a matrix Riccati differential equation with the property that the line L = {α I_2 | α ∈ R} is an invariant set under the flow (on L the commutator A_0 S - S A_0 vanishes). Therefore it suffices to solve the scalar Riccati differential equation corresponding to the dynamics of the diagonal entries of S: ṡ = s^2 + β^2, s(0) = α, whose forward solution is: s(t) = β tan(βt + φ), for t ∈ [0, t̄), where t̄ is given by S23. Consequently, the forward solution X(t) of S21 is given by: X(t) = S(t) + A_0 = β tan(βt + φ) I_2 + β J_2, for t ∈ [0, t̄), and thus

lim_{t→t̄-} |X(t)|_F = +∞ and lim_{t→t̄-} X(t)/|X(t)|_F = I_2/√2.

Combining the solution for the 1 x 1 scalar case in Eq. S8 and Lemma 4 yields our main result:

Theorem 4. Let X_0 ∈ N, and let (U, Λ_0) be as in Eq. S7. Define

t̄_i = 1/a_i if a_i > 0, and t̄_i = +∞ otherwise, i = 1, ..., k,
t̄_j = (π/2 - φ_j)/β_j, where φ_j = arctan(α_j/β_j), j = 1, ..., l,

and let t̄ = min_{i,j} {t̄_i, t̄_j}. Then the forward solution X(t) of S19 is defined for [0, t̄). If there is a unique i* ∈ {1, ..., k} such that t̄ = t̄_{i*} is finite, then

lim_{t→t̄} X(t)/|X(t)|_F = U_{i*} U_{i*}^T,

where U_{i*} is the i*-th column of U, an eigenvector corresponding to eigenvalue a_{i*} of X_0.
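The scalar Riccati equation ṡ = s^2 + β^2 underlying Lemma 4 can be checked against its closed-form solution β tan(βt + φ); a sketch with made-up values of α and β (forward Euler integration, ours):

```python
import numpy as np

alpha, beta = -0.3, 0.7
phi = np.arctan(alpha / beta)       # so that beta * tan(phi) = alpha
t_bar = (np.pi / 2 - phi) / beta    # predicted finite blow-up time

# Integrate s' = s^2 + beta^2 with small Euler steps and compare with
# s(t) = beta * tan(beta * t + phi) well before the blow-up.
h, s, t = 1e-5, alpha, 0.0
while t < 0.5 * t_bar:
    s += h * (s**2 + beta**2)
    t += h

print(abs(s - beta * np.tan(beta * t + phi)) < 1e-3)  # True
```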
If there is a unique j* ∈ {1, ..., l} such that t̄ = t̄_{j*}, then

lim_{t→t̄} X(t)/|X(t)|_F = (1/√2) U_{j*} U_{j*}^T,

where U_{j*} is an n x 2 matrix consisting of the two consecutive columns of U which correspond to the columns of the 2 x 2 block B_{j*} in Λ_0.
Proof. Consider the initial value problem Λ̇ = Λ Λ^T, Λ(0) = Λ_0. By Lemma 4 its solution is given by

Λ(t) = diag( a_1/(1 - a_1 t), ..., a_k/(1 - a_k t), X_1(t), ..., X_l(t) ),

where for all j = 1, ..., l, X_j(t) is given by the 2 x 2 matrix in S24 with β, φ and t̄ replaced by β_j, φ_j and t̄_j respectively. This clearly shows that Λ(t) is defined in forward time for t in [0, t̄). Since the solution of S19 is given by X(t) = U Λ(t) U^T, so is X(t), and |X(t)|_F = |Λ(t)|_F by S2. If i* ∈ {1, ..., k} is the unique value such that t̄ = t̄_{i*}, then

lim_{t→t̄} X(t)/|X(t)|_F = U e_{i*} e_{i*}^T U^T = U_{i*} U_{i*}^T,

where e_{i*} denotes the i*-th standard unit basis vector of R^n. If j* ∈ {1, ..., l} is the unique value such that t̄ = t̄_{j*}, then by Lemma 4:

lim_{t→t̄} X(t)/|X(t)|_F = (1/√2) U E_{j*} U^T = (1/√2) U_{j*} U_{j*}^T,

where E_{j*} has exactly two non-zero entries equal to 1 on the diagonal positions corresponding to the block B_{j*} in Λ_0.
A particular consequence of Theorem 4 is that if X_0 has a complex pair of eigenvalues, the solution of S19 always blows up in finite time, even if all real eigenvalues of X_0 are non-positive. Recall that the solution of S5 blows up in finite time if and only if X_0 has a positive, real eigenvalue. Another implication of Theorem 4 is that if blow-up occurs, it may be due to a real eigenvalue of X_0, or to a complex eigenvalue. In contrast, if the solution of S5 blows up in finite time, it is necessarily due to a positive, real eigenvalue, and never to a complex eigenvalue. When the solution of S19 blows up because of a positive, real eigenvalue of X_0, the system will achieve balance, just as in the case of system S5. If, on the other hand, finite time blow-up of S19 is caused by a complex eigenvalue of X_0, we show that in general one cannot expect to achieve a balanced network. Assume there is a unique j* such that t̄ = t̄_{j*} is finite, and that no entry of U_{j*} is zero. Writing the sign patterns of the two columns of U_{j*}, up to a suitable permutation, in terms of entrywise positive vectors p_i and q_i, i = 1, ..., 4, the sign pattern of U_{j*} U_{j*}^T contains entries of indeterminate sign, marked ?, and the p_i and q_i satisfy an orthogonality relation because U is an orthogonal matrix. The ? entries are not entirely arbitrary, because U_{j*} U_{j*}^T is a symmetric matrix, but besides that their signs can be arbitrary.

Generic initial condition
Consider now system S19 with an arbitrary initial condition:

Ẋ = X X^T, X(0) = X_0, (S28)

where X is a real n x n matrix, which is not necessarily normal. We first decompose the flow S28 into flows for the symmetric and skew-symmetric parts of X. Let X = S + A, X_0 = S_0 + A_0, where S, S_0 ∈ S and A, A_0 ∈ A are the unique symmetric and skew-symmetric parts of X and X_0 respectively. If X(t) satisfies S28, then it can be verified that S(t) and A(t) satisfy the system:

Ṡ = S^2 - A^2 + AS - SA, S(0) = S_0, (S29)
Ȧ = 0, A(0) = A_0. (S30)

Consequently, A(t) = A_0 for all t, and thus the skew-symmetric part of the solution X(t) of S28 remains constant and equal to A_0. Throughout this subsection we assume that A_0 ≠ 0, for otherwise X(0) is symmetric, hence normal, and the results from the previous subsection apply. It follows that we only need to understand the dynamics of the symmetric part. Then the solution X(t) to S28 is given by X(t) = S(t) + A_0, where S(t) solves S29, and in view of S1, there follows by Pythagoras' Theorem that:

|X(t)|_F^2 = |S(t)|_F^2 + |A_0|_F^2. (S31)

Next we shall derive an explicit expression for the solution S(t) of S29. We start by performing a change of variables:

S̃(t) = e^{-t A_0} S(t) e^{t A_0}. (S32)

This yields the equation

S̃' = S̃^2 - A_0^2. (S33)

We perform a further transformation which diagonalizes -A_0^2: Let V be an orthogonal matrix such that

V^T (-A_0^2) V = D^2, (S34)

where D^2 is diagonal with non-negative entries, and multiplying equation S33 by V^T on the left, and by V on the right, we find that S̄(t) := V^T S̃(t) V satisfies:

S̄' = S̄^2 + D^2. (S35)

Notice that this is a matrix Riccati differential equation, a class of equations with specific properties which are briefly reviewed next. Consider a general matrix Riccati differential equation:

Ṡ = S M S + S L + L^T S + N, (S36)

where M = M^T, N = N^T and L arbitrary, defined on S. Associated to this equation is a linear system

d/dt ( Q ; P ) = H ( Q ; P ), with H = ( L^T, N ; -M, -L ), (S37)

where H is a Hamiltonian matrix, i.e. J_2n H = (J_2n H)^T holds, where J_2n is as defined in Eq. S3. The following fact is well-known.

Lemma 5. Let (Q(t), P(t)) be a solution of S37. Then, provided that P(t) is non-singular,

S(t) = Q(t) P(t)^{-1} (S38)

is a solution of S36. Conversely, if S(t) is a solution of S36, then there exists a solution (Q(t), P(t)) of S37 such that S38 holds, provided that P(t) is non-singular.
Proof. Taking derivatives in S(t)P(t) = Q(t) yields that Ṡ = (Q̇ - S Ṗ) P^{-1}, and using S37,

Ṡ = ( L^T Q + N P - S(-M Q - L P) ) P^{-1} = S M S + S L + L^T S + N,

showing that S(t) solves S36. For the converse, let S(t) be a solution of S36. Let (Q(t), P(t)) be the solution of S37 with Q(0) = S(0) and P(0) = I_n. Then the computation above shows that Q P^{-1} is a solution to S36. Since S(0) = Q(0) P^{-1}(0), it follows from uniqueness of solutions that S(t) = Q(t) P^{-1}(t).
In other words, in principle we can solve the nonlinear equation S36 by first solving the linear system S37, and then using formula S38 to determine the solution of S36.
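For the special case treated next (M = I_n, L = 0, N = D^2), the recipe of Lemma 5 amounts to solving the linear pair Q̇ = D^2 P, Ṗ = -Q with Q(0) = S̄_0, P(0) = I, and forming Q P^{-1}. The following sketch (the matrix D and the initial condition are made up by us) compares this with direct numerical integration of the Riccati equation:

```python
import numpy as np

# Riccati S' = S^2 + D^2 solved via the linear system Q' = D^2 P, P' = -Q.
d = np.array([1.0, 0.5])            # diagonal of D (assumed non-singular)
D2 = np.diag(d**2)
S0 = np.array([[0.2, 0.1],
               [0.1, -0.3]])        # symmetric initial condition (made up)

def riccati_via_linear(t):
    c = np.diag(np.cos(d * t))      # c(t) = cos(Dt)
    s = np.diag(np.sin(d * t) / d)  # s(t) = sin(Dt) D^{-1}
    Q = c @ S0 + D2 @ s             # top block of e^{tH} (S0; I)
    P = -s @ S0 + c                 # bottom block of e^{tH} (S0; I)
    return Q @ np.linalg.inv(P)     # Lemma 5: S(t) = Q(t) P(t)^{-1}

# Cross-check against direct RK4 integration of S' = S^2 + D^2.
f = lambda S: S @ S + D2
S, n, T = S0.copy(), 100, 0.1
h = T / n
for _ in range(n):
    k1 = f(S); k2 = f(S + h/2*k1); k3 = f(S + h/2*k2); k4 = f(S + h*k3)
    S = S + h/6 * (k1 + 2*k2 + 2*k3 + k4)

print(np.allclose(S, riccati_via_linear(T), atol=1e-8))  # True
```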
We carry this out for our particular Riccati equation S35, which is of the form S36 if M = I_n, L = 0, N = D^2. The corresponding Hamiltonian is

H = ( 0, D^2 ; -I_n, 0 ). (S39)

We partition D in singular and non-singular parts:

D = ( D̄, 0 ; 0, 0 ),

where D̄ is positive definite since all ω_j > 0. Partitioning H correspondingly, this matrix is then exponentiated to solve system S37:

e^{tH} = ( c(t), D^2 s(t) ; -s(t), c(t) ),

where we have introduced the following notation:

s(t) := ( sin(D̄t) D̄^{-1}, 0 ; 0, t I ),

and similarly c(t) = cos(Dt). By setting P(0) = I_n and Q(0) = S̄_0, and using Lemma 5, it follows that the solution of the initial value problem S35 is given by S̄(t) = Q(t) P(t)^{-1}, with

Q(t) = c(t) S̄_0 + D^2 s(t), P(t) = c(t) - s(t) S̄_0, (S40)

for all t for which P(t) is non-singular. We now make the following assumption:

Assumption A. The matrix P(t) is non-singular for all t in [0, t̄), where t̄ is finite and such that s(t) is non-singular for all t in (0, t̄). Moreover, P(t̄) has rank n - 1, or equivalently, has a simple eigenvalue at zero.
Later we will show that this assumption is generically satisfied, and also that

t̄ = t*, (S41)

where [0, t*) is the maximal forward interval of existence of the solution S̄(t) of the initial value problem S35. Consequently, the theory of ODEs implies that lim_{t→t̄} |S̄(t)|_F = +∞, i.e. that t̄ is the blow-up time for the solution S̄(t).
Assuming for the moment that assumption A is satisfied, back-transformation using S32 and S34 yields that the solution S(t) of S29 is

S(t) = e^{t A_0} V S̄(t) V^T e^{-t A_0},

which is defined for all t in [0, t̄), because e^{t A_0} V is bounded for all t (as it is an orthogonal matrix). It follows from S2 that

lim_{t→t̄} S(t)/|S(t)|_F = e^{t̄ A_0} V ( lim_{t→t̄} S̄(t)/|S̄(t)|_F ) V^T e^{-t̄ A_0},

provided that at least one of the two limits exists. Partitioning S̄_0 in S40 conformably with the partition of D, we can rewrite P(t) and Q(t) on the time interval (0, t̄) as

P(t) = Δ(t) M(t), with Δ(t) := s(t) and M(t) := s(t)^{-1} c(t) - S̄_0,

so that

S̄(t) = Q(t) P^{-1}(t) = Q(t) M^{-1}(t) Δ^{-1}(t). (S42)

Note that the factorisation of P(t) is well-defined on (0, t̄), because by assumption A the matrix s(t) is non-singular in the interval (0, t̄). Moreover, assumption A also implies there exists a nonzero vector u corresponding to the zero eigenvalue of M(t̄), i.e. M(t̄) u = 0, and that u is uniquely defined up to scalar multiplication, because the zero eigenvalue is simple. More explicitly, u can be partitioned as (u_1 ; u_2) conformably with the partition of D. Notice that M(t) is at least real-analytic on the interval (0, t̄). Hence, it follows from [7] (see also [8,9]) that there are an orthogonal matrix U(t) and a diagonal matrix Λ(t), both real-analytic on (0, t̄), such that M(t) = U(t) Λ(t) U^T(t) for t ∈ (0, t̄), and thus M^{-1}(t) = U(t) Λ^{-1}(t) U^T(t) for t ∈ (0, t̄). Returning to S42, we obtain that the limiting behaviour of S̄(t)/|S̄(t)|_F as t → t̄ is determined by the eigenvector of M^{-1}(t) corresponding to its largest eigenvalue. Here, we have used the fact that M^{-1}(t) is positive definite on the interval (0, t̄), so that its largest eigenvalue (which is simple for all t < t̄ sufficiently close to t̄, because of assumption A) approaches +∞ (and not -∞) as t → t̄. To see this, note that from its definition it follows that M(t) is positive definite for all sufficiently small t > 0, because D̄ is positive definite. Moreover, M(t) is non-singular on (0, t̄), since by assumption A, P(t) is non-singular on (0, t̄), and because M(t) = Δ^{-1}(t) P(t) (it is clear from its definition and assumption A that Δ(t) is non-singular on (0, t̄) as well). Consequently, the smallest eigenvalue of M(t) remains positive in (0, t̄), and approaches zero as t → t̄.
This implies that the largest eigenvalue of M^{-1}(t) is positive on (0, t̄), and approaches +∞ as t → t̄, as claimed. Note that, in the second equality, we used the second row of S43, multiplied by c(t). Taking limits for t → t̄ in S31, and using the above equality, we finally arrive at the following result, which implies that system S28 evolves to a socially balanced state (in normalized sense) when t → t̄:

Proposition 1. Suppose that assumption A holds and A_0 ≠ 0. Then the solution X(t) of S28 satisfies:

lim_{t→t̄} X(t)/|X(t)|_F = lim_{t→t̄} S(t)/|S(t)|_F.

Genericity
Generically, assumption A holds, and S41 holds as well. There are two aspects to assumption A:
1. The matrix P(t) is non-singular in the interval [0, t̄), but singular at some finite t̄ such that:

t̄ < min_j π/ω_j. (S44)

2. P(t̄) has a simple zero eigenvalue.
To deal with the first item, suppose that the solution S̄(t) of S35 is defined for all t ∈ [0, t*) for some finite positive t*. By Lemma 5, there exist P(t) and Q(t) such that S̄(t) = Q(t) P^{-1}(t), where P(t) and Q(t) are components of the solution of system S37 with H defined in S39. Then necessarily t̄ ≤ t*. Thus, if we can show that t* < min_j π/ω_j, then S44 holds. To show that t* < min_j π/ω_j, we rely on a particular property of matrix Riccati differential equations S36: their solutions preserve the order generated by the cone of non-negative symmetric matrices, see [10]. More precisely, if S_1(t) and S_2(t) are solutions of S36, and if S_1(0) ⪯ S_2(0), then S_1(t) ⪯ S_2(t), for all t ≥ 0 for which both solutions are defined. The partial order notation S_1(t) ⪯ S_2(t) means that the difference S_2(t) - S_1(t) is a positive semi-definite matrix. We apply this to equation S35 with S̄_1(0) = α_min I_n and S̄_2(0) = S̄(0), where we choose α_min as the smallest eigenvalue of S̄(0) (or equivalently, of S(0) = S_0, since S̄(0) = V^T S_0 V), so that clearly S̄_1(0) ⪯ S̄_2(0). Consequently, by the monotonicity property of system S35, it follows that S̄_1(t) ⪯ S̄(t), as long as both solutions are defined. We can calculate the blow-up time t*_1 of S̄_1(t) explicitly, and then it follows that t* ≤ t*_1, where t* is the blow-up time of S̄(t). Indeed, the equations of system S35 decouple for an initial condition of the form α_min I_n, and the resulting scalar equations are scalar Riccati equations we have solved before. The blow-up time for S̄_1(t) is given by:

t*_1 = min_j ( π/2 - φ_j ) / ω_j,

with φ_j := arctan(α_min/ω_j) ∈ (-π/2, π/2). Notice that for all j = 1, ..., k, there holds that

π/(2ω_j) - φ_j/ω_j < π/ω_j,

because by definition, φ_j/ω_j ∈ (-π/(2ω_j), π/(2ω_j)). Consequently,

t* ≤ t*_1 < min_j π/ω_j,

which establishes S44. In other words, we have shown that the first item in assumption A is always satisfied. The second item in assumption A may fail, but holds for generic initial conditions, as we show next.
For this we first point out that each eigenvalue of M(t) is a strictly decreasing function in the interval (0, t̄), independently of the value of the matrix S̄_0. Indeed, the derivative of eigenvalue λ_j(t) of M(t) equals (see [7]):

λ̇_j(t) = u_j^T(t) Ṁ(t) u_j(t),

where u_j(t) is the normalized eigenvector of M(t) corresponding to λ_j(t), and which is analytic in the considered interval. Since Ṁ(t) is negative definite in that interval, λ̇_j(t) is also negative, and hence all eigenvalues of M(t) are strictly decreasing functions of t in that interval. Suppose now that M(t) has a multiple eigenvalue 0 at t = t̄; then M(t̄) is positive semi-definite, since t̄ is the first singular point of M(t) and the eigenvalues are decreasing functions of t. If we now choose a positive semi-definite ΔS̄_0 of nullity 1, such that M(t̄) + ΔS̄_0 also has nullity 1, then the perturbed initial condition (S̄_0)_p = S̄_0 - ΔS̄_0 yields the perturbed solution S̄_p(t), which can be factored as Q_p(t) P_p^{-1}(t), where P_p(t) = Δ(t) M_p(t) (note that Δ(t) remains the same as before the perturbation) for M_p(t) = M(t) + ΔS̄_0, which now has a simple zero eigenvalue at the same minimal value t̄. To construct such a matrix ΔS̄_0 is simple, since the only condition it needs to satisfy is that M(t̄) and ΔS̄_0 have a common null vector. Those degrees of freedom show that the second item in assumption A is indeed generic. Now that we have established that A generically holds, we show that S41 is satisfied as well. The proof is by contradiction. Earlier, we have shown that t̄ ≤ t*. Thus, if we suppose that S41 fails, then necessarily t̄ < t*. This implies that although P(t̄) is singular, the solution S̄(t) exists at t = t̄. Our goal is to show that lim_{t→t̄} |S̄(t)|_F = +∞, which yields the desired contradiction (by the theory of ODEs).
We first claim the following:

If u ≠ 0 and P(t̄) u = 0, then Q(t̄) u ≠ 0. (S45)
Indeed, if this were not the case, then there would exist some vector ū ≠ 0 such that P(t̄) ū = 0 and Q(t̄) ū = 0. On the other hand, P(t) and Q(t) are components of the matrix product

( Q(t) ; P(t) ) = e^{tH} ( Q(0) ; P(0) ),

where H is defined in S39. Multiplying the latter at t = t̄ by ū, and using the previous expression, it follows from the invertibility of e^{t̄H} that ū = 0, a contradiction. This establishes S45.
In the previous section, we factored P(t) as P(t) = Δ(t) M(t). Since P(t) is non-singular on [0, t̄) and singular at t̄, it follows from S44 and the definition of Δ(t) that M(t) is non-singular (and, in fact, positive definite, as shown in the previous section) on (0, t̄), and singular at t̄ as well. Therefore, since M(t) is symmetric and real-analytic, it follows from [7] that we can find a positive and real-analytic scalar function ε(t), and a real-analytic unit vector u(t), such that:

M(t) u(t) = ε(t) u(t), ε(t) > 0 on (0, t̄), ε(t̄) = 0, |u(t)|_2 = 1,

where |.|_2 denotes the Euclidean norm. In particular, M(t̄) u(t̄) = 0, and since Δ(t̄) is non-singular, it follows that P(t̄) u(t̄) = 0. Then S45 implies that Q(t̄) u(t̄) ≠ 0. Define the real-analytic unit vector

v(t) = Δ(t) u(t) / |Δ(t) u(t)|_2.

Since for any real n x n matrix A and for any unit vector x (i.e. |x|_2 = 1) there holds that |Ax|_2 ≤ |A|_F, it follows that

|S̄(t)|_F ≥ |S̄(t) v(t)|_2 = |Q(t) u(t)|_2 / ( ε(t) |Δ(t) u(t)|_2 ) → +∞ as t → t̄,

since the numerator converges to |Q(t̄) u(t̄)|_2 ≠ 0 while ε(t) → 0, so that lim_{t→t̄} |S̄(t)|_F = +∞. This yields the sought-after contradiction. By combining Proposition 1 and the results in this subsection, we have proved the main result concerning the generic emergence of balance for solutions of system S28.
Theorem 5. There exists a dense set of initial conditions X_0 in R^{n x n} such that the corresponding solution X(t) of S28 satisfies the conclusion of Proposition 1.

Proof. The set of initial conditions X_0 for which A_0 ≠ 0 and assumption A holds is dense in R^{n x n}.