Measuring Symmetry, Asymmetry and Randomness in Neural Network Connectivity

Cognitive functions are stored in the connectome, the wiring diagram of the brain, which exhibits non-random features, so-called motifs. In this work, we focus on bidirectional, symmetric motifs, i.e. two neurons that project to each other via connections of equal strength, and unidirectional, non-symmetric motifs, i.e. within a pair of neurons only one neuron projects to the other. We hypothesise that such motifs have been shaped via activity dependent synaptic plasticity processes. As a consequence, learning moves the distribution of the synaptic connections away from randomness. Our aim is to provide a global, macroscopic, single parameter characterisation of the statistical occurrence of bidirectional and unidirectional motifs. To this end we define a symmetry measure that does not require any a priori thresholding of the weights or knowledge of their maximal value. We calculate its mean and variance for random uniform or Gaussian distributions, which allows us to introduce a confidence measure of how significantly symmetric or asymmetric a specific configuration is, i.e. how likely it is that the configuration is the result of chance. We demonstrate the discriminatory power of our symmetry measure by inspecting the eigenvalues of different types of connectivity matrices. We show that a Gaussian weight distribution biases the connectivity motifs to more symmetric configurations than a uniform distribution and that introducing a random synaptic pruning, mimicking developmental regulation in synaptogenesis, biases the connectivity motifs to more asymmetric configurations, regardless of the distribution. We expect that our work will benefit the computational modelling community, by providing a systematic way to characterise symmetry and asymmetry in network structures. Further, our symmetry measure will be of use to electrophysiologists that investigate symmetry of network connectivity.

Studies of the brain wiring diagram have shown that connectivity is non-random, highlighting the existence of specific connectivity motifs at the microcircuit level, see for instance [14][15][16][17]. Of particular interest are the motifs that exhibit bidirectional (reciprocal) and unidirectional (non-reciprocal) connections between pairs of neurons. More specifically, theoretical work [18] studied the development of unidirectional connectivity due to long-term plasticity in an artificial network of spiking neurons under a temporal coding scheme, where it is assumed that the time at which neurons fire carries important information. This finding is correlated to the unidirectional connectivity observed in somatosensory cortex, see [19]. In [18] the development of bidirectional connectivity in the same network under a frequency coding scheme, where information is transmitted in the firing rate of the neurons, was also studied and correlated to the bidirectional connectivity found in the visual cortex [14]. Complementary to this work, in [20,21] the authors explored the experimentally identified correlation of bidirectional and unidirectional connectivity to short-term synaptic dynamics, see [22], by studying the development of connectivity in networks with facilitating and depressing synapses due to the interaction of short-term and long-term plasticities. The role of synaptic long-term plasticity in structure formation within networks has also been investigated in [23][24][25].
Similar to [18] and [20,21], we hypothesise that the above-mentioned motifs have been shaped via activity-dependent synaptic plasticity processes, and that learning moves the distribution of the synaptic connections away from randomness. Our aim is to provide a global, macroscopic, single-parameter characterisation of the statistical occurrence of bidirectional and unidirectional motifs. To this end: We define a symmetry measure that does not require any a priori thresholding of the weights or knowledge of their maximal value, and hence is applicable to both simulations and experimental data. We calculate the mean and variance of this symmetry measure for random uniform or Gaussian distributions, which allows us to introduce a confidence measure of how significantly symmetric or asymmetric a specific configuration is, i.e. how likely it is that the configuration is the result of chance. We demonstrate the discriminatory power of our symmetry measure by inspecting the eigenvalues of different types of connectivity matrices, given that symmetric matrices are known to have real eigenvalues. We show that a Gaussian distribution biases the connectivity motifs to more symmetric configurations than a uniform distribution and that introducing a random synaptic pruning, mimicking developmental regulation in synaptogenesis, biases the connectivity motifs to more asymmetric configurations, regardless of the distribution. Our statistics of the symmetry measure allow us to correctly evaluate the significance of a symmetric or asymmetric network configuration in both these cases. Our symmetry measure also allows us to observe the evolution of a specific network configuration, as we exemplify in our results.
We expect that our work will benefit the computational modelling community, by providing a systematic way to characterise symmetry and asymmetry in network structures. Further, our symmetry measure will be of use to electrophysiologists who investigate symmetric or asymmetric network connectivity.

Methods
In what follows, we first define a novel measure that quantifies the degree of symmetry in a neuronal network with excitatory synaptic connections. More specifically, we describe the strength of the synaptic efficacies between the neurons by the elements of a square matrix, i.e. the connectivity matrix, to which we associate a number that quantifies the similarity of the elements above the matrix diagonal to those below the diagonal. We further study this measure from a statistical point of view, by means of both analytical tools and numerical simulations. Aiming to associate a significance value to the measure, i.e. the probability that a certain symmetric or non-symmetric configuration is the result of chance, we consider random synaptic efficacies drawn from uniform and Gaussian distributions. We also study how our symmetry measure is affected by the anatomical disconnection of neurons in a random manner, i.e. zeroing some entries in the connectivity matrix. Finally, we anticipate that connectivity distributions are modified by activity-dependent processes and we describe the structure of the network we use as a demonstrative example in the Results section.

Definitions
Let us consider the adjacency (or connectivity) matrix W of a weighted directed network [26], composed of N vertices and without self-edges. The N vertices represent the neurons, with N(N−1) possible synaptic connections among them. The synaptic efficacy between two neurons is expressed as a positive element w_ji in the adjacency matrix. W is thus composed of positive off-diagonal elements, taking values in the bounded range [0, w_max], and of zero diagonal entries. We define s as a measure of the symmetry of W:

s = 1 − (1/q) Σ_{i<j} |w_ij − w_ji| / (w_ij + w_ji),   q = (N(N−1) − 2M)/2   (1)

where the sum runs over all connection pairs with at least one non-zero connection and M is the number of instances where both w_ij and w_ji are zero, i.e. there is no connection between two neurons. The term q is a normalisation factor that represents the total number of synaptic connection pairs that have at least one non-zero connection. A value of s near 0 indicates that there are virtually no reciprocal connections in the network, while a value of s near 1 indicates that virtually all connections are reciprocal. We exclude (0, 0) pairs from our definition of the symmetry measure. Mathematically, such pairs would introduce undefined terms to Eq. (1). In addition, conceptually, we expect that small weights will not be experimentally measurable. It is then reasonable to exclude them, expecting to effectively increase the signal-to-noise ratio.

Pruning and Plasticity. We assume that a connection w_ij is permanently disconnected and set to 0 with pruning probability a ∈ [0, 1). Consequently, the probability that two neurons i and j are mutually disconnected, i.e. w_ij = w_ji = 0, is a². When a connection is permanently pruned in such a way, its efficacy remains 0 at all times, whereas the off-diagonal non-pruned values of the adjacency matrix W change slowly in time, as a result of activity-dependent synaptic plasticity. We consider that this procedure correlates with developmental mechanisms associated with or following synaptogenesis.
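As an illustration, the measure can be implemented in a few lines. The sketch below is in Python (the authors' published code is MATLAB, see Methods); `symmetry_measure` is a hypothetical helper name, not from the paper.

```python
import numpy as np

def symmetry_measure(W):
    """Symmetry measure s of a weight matrix W (zero diagonal), Eq. (1).
    (0,0) pairs are excluded; each remaining pair contributes
    Z_k = |w_ij - w_ji| / (w_ij + w_ji)."""
    N = W.shape[0]
    z_sum, q = 0.0, 0
    for i in range(N):
        for j in range(i + 1, N):
            tot = W[i, j] + W[j, i]
            if tot == 0:          # (0,0) pair: skip (these are counted in M)
                continue
            z_sum += abs(W[i, j] - W[j, i]) / tot
            q += 1
    return 1.0 - z_sum / q if q > 0 else np.nan
```

A fully reciprocal matrix (w_ij = w_ji) yields s = 1, a strictly unidirectional one yields s = 0, with no thresholding of the weights needed.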
Unidirectional and Bidirectional connection pairs. We

Statistics of s
Let us consider a large number n of instances of a network whose connection weights are randomly distributed. Each adjacency matrix can be evaluated via our symmetry measure. We rewrite Eq. (1) as:

s = 1 − (1/q) Σ_{k=1}^{q} Z_k,   Z_k = |w_ij − w_ji| / (w_ij + w_ji)   (2)

where k is a linear index running over all the q non-zero ''connection pairs'' within the network. We can then estimate the mean μ_s and variance σ_s² of s over all n networks as:

μ_s = E_n[s] = 1 − E_n[Z_k]   (3)

σ_s² = Var_n(s) = Var_n(Z_k)/q   (4)

where the notation E_n[·] and Var_n(·) implies that the expected value and variance are computed along the n different representations of the network.
Eqs (3), (4) allow us to transfer the statistical analysis from s to Z. To derive theoretical formulas for the mean value and variance of Z we use the fact that its probability density function (PDF), f(Z), can be written in terms of a joint distribution f(Z_1, Z_2), where we have introduced the notation Z_1 = |w_ij − w_ji| and Z_2 = w_ij + w_ji, within the range D defined by 0 ≤ Z_1 ≤ 1 and Z_1 ≤ Z_2 ≤ 2 − Z_1. The mean value of Z = Z_1/Z_2 is then

E[Z] = ∫∫_D (Z_1/Z_2) f(Z_1, Z_2) dZ_1 dZ_2   (5)

Similarly, we can calculate the variance as follows:

Var(Z) = ∫∫_D (Z_1/Z_2)² f(Z_1, Z_2) dZ_1 dZ_2 − E[Z]²   (6)

We note that the mean value and variance of Z can be numerically estimated either by using a large set of small networks or on a single very large network: what matters is that the total number of connection pairs, given by the product n·q, is sufficiently large to guarantee good statistics and that connection pairs are independent of each other. In the calculations below, we assume a very large adjacency matrix.
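The transfer from s to Z can be checked numerically. The sketch below (Python; illustrative, not the authors' code) samples Z_k for i.i.d. uniform weight pairs and applies Eqs (3)-(4); the sample mean of Z should approach the value μ_Z ≈ 0.386 reported in the Results.

```python
import numpy as np

rng = np.random.default_rng(1)

# Sample Z_k = |w_ij - w_ji| / (w_ij + w_ji) for i.i.d. uniform weight pairs
n_pairs = 10**6
w1, w2 = rng.random(n_pairs), rng.random(n_pairs)
Z = np.abs(w1 - w2) / (w1 + w2)

mu_Z, var_Z = Z.mean(), Z.var()

# Eqs (3)-(4): transfer the statistics from Z to s for a network with q pairs
q = 45                       # e.g. N = 10 neurons: q = N(N-1)/2, no pruning
mu_s = 1.0 - mu_Z
sigma_s = np.sqrt(var_Z / q)
print(mu_s, sigma_s)
```

Because the q connection pairs are independent, the variance of s shrinks as 1/q, which is why a single very large network gives the same estimate as many small ones.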

Adjacency matrix with uniform random values
We first consider a network with randomly distributed connections without pruning, followed by the more general case where pruning is taken into account.
Fully connected network. For the uniform distribution, f^u(w) = 1 for w ∈ [0, 1], see Fig. 1A. The probability of having w_ij = w_ji = 0 for at least one pair (i, j) is negligible, hence M = 0. It is straightforward to derive the distributions f_2^u(Z_2) and f_1^u(Z_1), depicted in Fig. 1B,C correspondingly:

f_2^u(Z_2) = Z_2 for 0 ≤ Z_2 ≤ 1,  2 − Z_2 for 1 < Z_2 ≤ 2   (7)

f_1^u(Z_1) = 2(1 − Z_1) for 0 ≤ Z_1 ≤ 1   (8)

We can therefore obtain the joint PDF (Fig. 1D):

f_12^u(Z_1, Z_2) = 1 on D   (9)

Pruning. Introducing pruning to the elements of the adjacency matrix, with probability a, corresponds to a discontinuous probability distribution function of w, which can be written as the sum of a continuous function and of a Dirac delta centred at w = 0 (see also Fig. 1E):

f_a^u(w) = (1 − a) f^u(w) + a δ(w)   (10)

Now the (0, 0) pairs have to be explicitly excluded from the distributions of Z_1 and Z_2. Also, the number of pairs of the type (w, 0) increases, resulting in the appearance of a uniform contribution in the region [0, 1] in both the PDF of Z_1 and that of Z_2. Their final exact profile can be obtained by considering the possible combinations of drawing w_ij and w_ji from the above pruned distribution and their corresponding probability of occurrence. There are four contributions: f^u(w)×f^u(w), f^u(w)×δ(0), δ(0)×f^u(w), δ(0)×δ(0). The last term, which describes the (0, 0) pairs, has to be subtracted and the remaining expression has to be renormalised. The results are graphically shown in Fig. 1F, 1G and are mathematically described by the following expressions:

f_{1,a}^u(Z_1) = c_2 · 2(1 − Z_1) + c_1 for Z_1 ∈ [0, 1]   (11)

f_{2,a}^u(Z_2) = c_2 f_2^u(Z_2) + c_1 · 1_{[0,1]}(Z_2)   (12)

The joint PDF is a mixture of two unitary distributions: the unpruned distribution f_12^u(Z_1, Z_2) and the contribution from the pruning, f_peak^u(Z_1, Z_2), which is a delta peak along the line Z_1 = Z_2, see Fig. 1H. To obtain f_{12,a}^u, the two unitary distributions are mixed with coefficients c_2 and c_1, satisfying the normalisation condition c_1 + c_2 = 1. With the same arguments used for f_{1,a}^u(Z_1) and f_{2,a}^u(Z_2), we can derive the relation between c_1, c_2 and a, so that we can finally write:

f_{12,a}^u(Z_1, Z_2) = c_2 f_12^u(Z_1, Z_2) + c_1 δ(Z_2 − Z_1),   c_2 = (1 − a)/(1 + a),   c_1 = 2a/(1 + a)   (13)
Expected value and variance of Z. We can calculate the mean value and variance of Z by plugging Eq. (13) into Eqs (5) and (6):

E[Z] = c_2 (2 ln 2 − 1) + c_1   (14)

Var(Z) = c_2 (3 − 4 ln 2) + c_1 − E[Z]²   (15)

Expected value and variance of s. By combining the above results with Eqs (3) and (4), we can derive the final formulas for the expected value and variance of s:

μ_s = 1 − E[Z] = c_2 (2 − 2 ln 2) = [(1 − a)/(1 + a)] (2 − 2 ln 2)   (16)

σ_s² = Var(Z)/q = [c_2 (3 − 4 ln 2) + c_1 − (1 − μ_s)²]/q   (17)

Adjacency matrix with Gaussian-distributed random values

The procedure described above to derive the joint PDF of Z_1 and Z_2 is applicable to any distribution. In what follows, we consider a network with initial connections drawn from a truncated Gaussian distribution.
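A quick Monte Carlo check of Eq. (16) can be sketched as follows (Python; the `sym` helper re-implements Eq. (1) and is our own name, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(2)

def sym(W):
    # symmetry measure of Eq. (1), with (0,0) pairs excluded
    iu = np.triu_indices(W.shape[0], 1)
    a, b = W[iu], W.T[iu]
    tot = a + b
    keep = tot > 0
    return 1.0 - np.mean(np.abs(a - b)[keep] / tot[keep])

def mu_s_uniform(a):
    # Eq. (16): mean of s for uniform weights with pruning probability a
    c2 = (1 - a) / (1 + a)
    return c2 * (2 - 2 * np.log(2))

a = 0.4
vals = []
for _ in range(2000):
    W = rng.random((10, 10))
    W[rng.random((10, 10)) < a] = 0.0   # prune each connection independently
    np.fill_diagonal(W, 0)
    vals.append(sym(W))
print(np.mean(vals), mu_s_uniform(a))
```

The sample mean over the pruned networks should match Eq. (16), and at a = 0 the formula reduces to 2 − 2 ln 2 ≈ 0.614.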
Distribution of connections. Whereas the uniform distribution is well defined in any finite interval, the Gaussian distribution requires some considerations. Strictly speaking, any Gaussian distribution is defined over the entire real axis. For practical reasons, however, for any finite network N < ∞, the maximum and the minimum values of the weights, w_max and w_min, are always well defined, and therefore the actual distribution is a truncated Gaussian. To be able to consider the truncated Gaussian distribution as Gaussian with satisfactory accuracy, we require that the portion of the Gaussian enclosed in the region [w_min, w_max] is as close as possible to 1. This means that the distribution has to be narrow enough with respect to the interval of definition [w_min, w_max]. Also, by definition, the distribution has to be symmetric in [w_min, w_max]. Because we are considering only excitatory connections, w_min = 0, so the mean value has to be μ_w = w_max/2. On the other hand, the narrowness imposes a condition on the standard deviation of the distribution: σ_w ≪ Δw = w_max. Since we can set w_max = 1 without loss of generality, the entire study of all possible Gaussian distributions can be limited to a special class: μ = 1/2, σ ≪ 1.

The choice of σ_w. To guarantee a good approximation of a Gaussian distribution, we define the truncated Gaussian distribution such that points within 5σ fall in [0, 1], leading to σ_w = 1/10 and a negligible truncation error.

Fully connected network. For the truncated Gaussian distribution defined above, the distribution of connections without pruning is (see also Fig. 2A):

f^g(w) = N(w; μ_w, σ_w)   (18)

where N denotes the normal distribution. Since combinations of Gaussian distributions are also Gaussian distributions, we can immediately derive the PDFs of Z_2 and of Z*_1 = w_ij − w_ji. Then, f_1^g(Z_1) is simply the positive half of f*_1^g(Z*_1), scaled by a factor of two because of the normalisation. We obtain (Fig. 2B,C):

f_2^g(Z_2) = N(Z_2; 2μ_w, √2 σ_w)   (19)

f_1^g(Z_1) = N_{1/2}(Z_1; 0, √2 σ_w)   (20)

where N_{1/2} identifies the normalised (positive) half of a normal distribution. Similarly, the joint distribution f_12^g(Z_1, Z_2) can be easily derived from the bivariate Gaussian of Z*_1 and Z_2 (Fig. 2D):

f_12^g(Z_1, Z_2) = N^B_{1/2}(Z_1, Z_2)   (21)

with N^B_{1/2} being the normalised half (where Z_1 > 0) of a bivariate normal distribution.
Pruning. When taking pruning into account, each PDF can be considered as a mixture of the unpruned distribution and the contribution coming from the pruning. We can therefore write:

f_{1,a}^g(Z_1) = c_2 N_{1/2}(Z_1; 0, √2 σ_w) + c_1 N(Z_1; μ_w, σ_w)   (22)

f_{2,a}^g(Z_2) = c_2 N(Z_2; 2μ_w, √2 σ_w) + c_1 N(Z_2; μ_w, σ_w)   (23)

The above distributions are plotted in Fig. 2E, 2F, 2G. Finally, the joint PDF is again a mixture model, with a univariate Gaussian peak profile on the line Z_1 = Z_2 (Fig. 2H). Note that this peak can be described by the intersection of the plane Z_1 = Z_2 with the full unpruned bivariate normal distribution N^B transformed to have its mean in (μ_w, μ_w). This operation implies a re-normalisation by Z̃ = (4π σ_2²)^{−1/2} of the resulting univariate Gaussian. Then, we can write:

f_peak^g(Z_1, Z_2) = δ(Z_2 − Z_1) N(Z_1; μ_w, σ_w)   (24)

f_{12,a}^g(Z_1, Z_2) = c_2 N^B_{1/2}(Z_1, Z_2) + c_1 f_peak^g(Z_1, Z_2)   (25)

Correlation in the bivariate Gaussian. The correlation ρ between Z_1 and Z_2, appearing in the off-diagonal terms of N^B_{1/2}, can be computed by running a numerical simulation. We estimated ρ as a mean value over 10⁵ representations of a 10-neuron network with random connections distributed according to N(w; μ_w, σ_w) and with no pruning, i.e. a = 0. The result is ρ ≈ 7×10⁻⁴, which allows us to treat Z_1 and Z_2 as independent variables and hence to factorise the bivariate normal distribution of Eq. (21) into the product of the two single distributions. Indeed, by introducing the Heaviside step function H(x) and the renormalisation parameter R = 2, we can write:

N^B_{1/2}(Z_1, Z_2) ≈ R H(Z_1) N(Z_1; 0, √2 σ_w) N(Z_2; 2μ_w, √2 σ_w)   (26)

We note that the pruning case does not require a different calculation and can be treated as the a = 0 case. This is because we are describing the effect of the pruning with a separate (univariate) function, i.e. the halved bivariate normal distribution describes only the unpruned part of the network, see Eq. (25).
The suitability of this approximation is also confirmed by Fig. 2D,H, where the agreement between the simulation results and the theoretical fit with Eq. (26) is excellent.
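The near-zero correlation can be reproduced directly (Python sketch, not the authors' MATLAB code). For i.i.d. Gaussian weights, w_ij − w_ji and w_ij + w_ji are exactly independent, so |w_ij − w_ji| and w_ij + w_ji should be uncorrelated up to truncation effects:

```python
import numpy as np

rng = np.random.default_rng(3)

# Weights drawn from the truncated Gaussian N(1/2, 1/10), clipped to [0, 1]
n = 10**6
w1 = np.clip(rng.normal(0.5, 0.1, n), 0, 1)
w2 = np.clip(rng.normal(0.5, 0.1, n), 0, 1)

Z1 = np.abs(w1 - w2)   # Z_1 = |w_ij - w_ji|
Z2 = w1 + w2           # Z_2 = w_ij + w_ji

r = np.corrcoef(Z1, Z2)[0, 1]
print(r)   # close to zero: Z_1 and Z_2 are effectively independent
```

With 5σ of the distribution inside [0, 1], the clipping is essentially inactive, which is why the estimated correlation stays at the noise floor of the sample size.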
Expected value and variance of Z. Now we can insert the expression for the joint distribution, Eq. (25), into Eq. (5) and Eq. (6). To calculate the resulting expressions we use symbolic integration. Expected value and variance of s. By plugging the above results into Eqs (3), (4), we obtain the corresponding expressions for μ_s and σ_s², Eqs (29) and (30). The four formulas Eq. (16), (17), (29), (30) are the final result of the statistical analysis and they will be discussed in the Results section.

Model network with plastic weights
Below we describe the model neural network on which we will apply our symmetry measure.
Single-neuron dynamics. We simulated N = 30 leaky integrate-and-fire neurons [27] with a firing threshold of V_thr = −50 mV. The sub-threshold dynamics of the electrical potential V_i is given by:

τ_m dV_i/dt = −(V_i − V_rest) + R I_i(t)   (31)

where τ_m is the membrane time constant, V_rest is the resting potential, R is the membrane resistance and I_i(t) is the input signal. We chose τ_m = 10 ms, V_rest = −70 mV, R = 1 kΩ. To introduce noise in the firing process of the neurons, we implemented the escape noise model [28]. At each time-step Δt the probability that neuron i fires is given by:

P_i = 1 − exp[−Δt ρ_0 exp((V_i − V_thr)/ΔV)]   (32)

where ρ_0 = 0.1 ms⁻¹ and ΔV = 5 mV. Once a neuron fires, its membrane potential is reset to the resting value.
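The single-neuron model can be sketched as follows (Python; parameter names follow the text, but the constant-current drive is an illustrative protocol of ours, not one of the paper's input protocols):

```python
import numpy as np

rng = np.random.default_rng(4)

# Parameters from the text
tau_m, V_rest, V_thr = 10.0, -70.0, -50.0   # ms, mV, mV
R, rho0, dV = 1.0, 0.1, 5.0                 # kOhm, 1/ms, mV

def simulate_lif(I_ext, T=1000.0, dt=0.1):
    """Leaky integrate-and-fire with exponential escape noise.
    I_ext: constant input current (illustrative), times in ms."""
    V, spikes = V_rest, []
    for step in range(int(T / dt)):
        # Euler step of the sub-threshold dynamics
        V += dt / tau_m * (-(V - V_rest) + R * I_ext)
        # escape-noise firing probability for this time step
        rho = rho0 * np.exp((V - V_thr) / dV)
        if rng.random() < 1.0 - np.exp(-rho * dt):
            spikes.append(step * dt)
            V = V_rest               # reset after a spike
    return spikes
```

With a supra-threshold drive (R·I_ext well above 20 mV) the neuron fires regularly, while with no drive it fires only rarely through the escape noise.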
Synaptic and External Inputs. The input I_i(t) to each neuron has two components: a synaptic part, coming from the action potentials of the other neurons, and an external part, which is defined by the applied protocol:

I_i(t) = a_syn Σ_j w_ij Σ_f δ(t − t_j^f − ε) + a_ext δ(t − t_i^ext)   (33)

In the synaptic term, w_ij are the synaptic weights, t_j^f is the firing time of the presynaptic neuron j and ε is a small positive number accounting for the delivery time of the electrical signal from the presynaptic to the postsynaptic neuron. The times t_i^ext define the time course of the injected input, which differs from neuron to neuron and depends on the protocol we use (see Results section). Finally, the amplitudes a_syn and a_ext are fixed to the same value for all neurons. We chose a_syn = 1 mA·s and a_ext = 30 mA·s, so that each external input forces the neurons to fire.
Plasticity. The efficacy of the synaptic connections is activity-dependent. Therefore, the unpruned elements of the adjacency matrix w_ij in Eq. (33) change in time by Spike-Timing Dependent Plasticity (STDP) mechanisms, i.e. passively driven by the input protocol and the emerging internal dynamics, without the presence of a supervisory or reinforcement learning signal [29][30][31]. More specifically, we implemented the triplet STDP rule [18,32,33] with parameters from [32] (visual cortex, nearest-neighbour dataset), see Table 1, and we constrain the connections to [0, 1]. In this model, each neuron has two presynaptic variables r_1, r_2 and two postsynaptic variables o_1, o_2. In the absence of any activity, these variables exponentially decay towards zero with different time constants:

dr_1/dt = −r_1/τ_+,   dr_2/dt = −r_2/τ_x,   do_1/dt = −o_1/τ_−,   do_2/dt = −o_2/τ_y   (34)

whereas when the neuron elicits a spike they increase by 1. Then, assuming that neuron i fires a spike, the STDP implementation of the triplet rule can be written as follows:

w_ij → w_ij + γ r_1^j(t) [A_2^+ + A_3^+ o_2^i(t − ε)]   (35)

w_ji → w_ji − γ o_1^j(t) [A_2^− + A_3^− r_2^i(t − ε)]   (36)

where γ is the learning rate and ε is an infinitesimal time constant ensuring that the values of r_2^i and o_2^i used are the ones right before the update due to the spike of neuron i. The learning rate used is 1 for the frequency protocol and 7 for the sequential protocol (see Results).
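A compact sketch of the update loop (Python; the time constants and amplitudes below are illustrative stand-ins of the order of published triplet-rule values, not the exact entries of Table 1):

```python
import numpy as np

# Illustrative triplet-rule parameters (placeholders, not Table 1)
tau_plus, tau_minus, tau_x, tau_y = 16.8, 33.7, 101.0, 125.0   # ms
A2p, A3p, A2m, A3m = 5e-10, 6.2e-3, 7e-3, 2.3e-4
gamma = 1.0

def triplet_step(W, r1, r2, o1, o2, spiking, dt):
    """One Euler step of a triplet STDP rule.
    spiking: boolean array, True where a neuron fired this step."""
    # exponential decay of the pre/post trace variables
    r1 -= dt * r1 / tau_plus
    r2 -= dt * r2 / tau_x
    o1 -= dt * o1 / tau_minus
    o2 -= dt * o2 / tau_y
    for i in np.flatnonzero(spiking):
        # neuron i fired: potentiate its incoming synapses w_ij (LTP) ...
        W[i, :] += gamma * r1 * (A2p + A3p * o2[i])
        # ... and depress its outgoing synapses w_ji (LTD)
        W[:, i] -= gamma * o1 * (A2m + A3m * r2[i])
    # traces are incremented after the weight update (the epsilon convention)
    r1[spiking] += 1.0
    r2[spiking] += 1.0
    o1[spiking] += 1.0
    o2[spiking] += 1.0
    np.fill_diagonal(W, 0)
    np.clip(W, 0, 1, out=W)   # hard bounds on the connections
    return W, r1, r2, o1, o2
```

Driving two neurons with a repeated pre-before-post pairing strengthens the pre-to-post weight and weakens the reverse one, the elementary step by which the protocols of the Results shape unidirectional or bidirectional motifs.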
Reproducibility of results. All simulations were performed in MATLAB (The Mathworks, Natick, USA). Code is available from ModelDB [34], accession number: 151692.

Results
We recall the definition of the symmetry measure s (Eq. 1):

s = 1 − (1/q) Σ_{i<j} |w_ij − w_ji| / (w_ij + w_ji)   (37)

where w_ij is the positive synaptic connection from neuron j to neuron i, N is the total number of neurons and M is the number of instances where both w_ij and w_ji are zero, i.e. there is no connection between two neurons. The term q = (N(N−1) − 2M)/2 is a normalisation factor that represents the total number of synaptic connection pairs that have at least one non-zero connection. Using this definition, we were able to estimate the expected value and the variance of s on random matrices (uniform and truncated Gaussian), see Eqs (16)-(17) and (29)-(30) correspondingly. This provides us with a tool to estimate the significance of the ''symmetry'' or ''asymmetry'' of the adjacency matrix of a given network, shaped by learning, given the initial distribution of the synaptic connections prior to the learning process. The statistical analysis is particularly useful not in cases where the developed configuration is ''clear-cut'', i.e. all connections have been turned to either bidirectional or unidirectional, resulting in a symmetry measure of almost 1 or 0, which is probably an artificial scenario, but rather in the intermediate cases, where we need a measure of how far the value of the symmetry measure of a specific configuration is from that of a random configuration. Though here we focus on two specific random distributions, our methodology is applicable to other distribution choices.

Hypothesis test
Having calculated the mean and variance of the symmetry measure s over random networks of a specific connectivity distribution, we are now able to directly evaluate the symmetry measure s_s of a specific connectivity structure and conclude whether the symmetric or asymmetric structure observed is due to chance or is indeed significant. A simple test is, for instance, to calculate how many standard deviations s_s is away from μ_s. Equivalently, we can form the hypothesis that the configuration s_s is non-random and calculate the p-value by:

p = 1 − erf( |s_s − μ_s| / (√2 σ_s) )   (38)

where we implicitly assume that the distribution of the symmetry measure s over random networks is Gaussian. We can compare this result with the significance level we fixed, typically p_s = 0.05, and we can then conclude on the nature of the symmetry of the network at significance level p_s or reject the hypothesis.

Table 2. Mean value and standard deviation of the symmetry measure as obtained from the theoretical analysis.
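In code, the test reduces to a two-sided Gaussian tail probability (Python sketch; `symmetry_p_value` is a hypothetical helper name, and the numbers in the example are illustrative rather than Table 2 entries):

```python
import math

def symmetry_p_value(s_obs, mu_s, sigma_s):
    """Two-sided p-value under the Gaussian null hypothesis:
    probability that a random network deviates from mu_s at least as much."""
    z = abs(s_obs - mu_s) / sigma_s
    return math.erfc(z / math.sqrt(2))

# Illustrative values: observed s = 0.90 against a null with mu_s = 0.614
p = symmetry_p_value(0.90, 0.614, 0.042)
print(p < 0.05)   # significant: the configuration is unlikely to be random
```

`erfc(z/√2)` equals the two-sided tail mass beyond z standard deviations, which is exactly the right-hand side of Eq. (38).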

Pruning biases the network towards asymmetry
To demonstrate the validity of our analytical results, we compare them to simulation results. We generated a sample of n = 10⁵ networks of N = 10 neurons with random connections whose synaptic efficacies vary from 0 to 1. We evaluated the symmetry measure of each network by applying directly the definition of Eq. (37), and then we computed the mean value and variance of that sample. This process was repeated ten times, each time for a different value of the pruning parameter, a ∈ {0, 0.1, 0.2, …, 0.9}. The final results are shown in Fig. 3A,B, together with the analytical results, see Eq. (16) and Eq. (29). Since the numerical and analytical results overlap, we used a thicker (black) line for the latter. The agreement between the theoretical findings, listed in Table 2, and the numerical evaluations is excellent.
We also considered two extreme cases, symmetric and asymmetric random networks, which respectively represent the upper and lower bounds for the symmetry measure defined in Eq. (37). Symmetric random networks were generated as follows: we filled the upper triangular part of the N×N weight matrix with random values from the uniform/Gaussian distribution. We then mirrored the elements around the diagonal so as to have w_ij = w_ji. In the asymmetric case, instead, we generated a random adjacency matrix with values in [0.1, 1] for the upper triangular part and in 10⁻³ × [0.1, 1] for the lower triangular part, so as to have w_ij ≪ w_ji. Then, we shuffled the adjacency matrix.
In Fig. 3A,B we contrast our results on random networks with numerical simulations of symmetric and asymmetric random networks: the dashed, light grey line (top line) shows the upper extreme case of a symmetric random network, w_ij = w_ji for all i, j = 1…N, whereas the dash-dotted, light grey line (bottom line) shows the lower extreme case of an asymmetric random network, w_ij ≪ w_ji for i > j. When we introduce pruning, the lower bound of s remains unchanged, whereas the more we prune the more a symmetric network appears asymmetric.
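The two extreme cases can be generated as follows (Python sketch; `sym` re-implements Eq. (37), and all helper names are ours):

```python
import numpy as np

rng = np.random.default_rng(7)

def sym(W):
    # symmetry measure of Eq. (37), (0,0) pairs excluded
    iu = np.triu_indices(W.shape[0], 1)
    a, b = W[iu], W.T[iu]
    tot = a + b
    keep = tot > 0
    return 1.0 - np.mean(np.abs(a - b)[keep] / tot[keep])

def symmetric_net(N):
    """Upper triangle uniform in [0, 1], mirrored so that w_ij = w_ji."""
    U = np.triu(rng.random((N, N)), 1)
    return U + U.T

def asymmetric_net(N):
    """Pairs differ by three orders of magnitude, then jointly shuffled."""
    hi = np.triu(rng.uniform(0.1, 1.0, (N, N)), 1)
    lo = np.tril(rng.uniform(0.1, 1.0, (N, N)), -1) * 1e-3
    W = hi + lo
    perm = rng.permutation(N)
    return W[np.ix_(perm, perm)]   # same row/column permutation keeps pairs

print(sym(symmetric_net(10)), sym(asymmetric_net(10)))
```

Applying the same row and column permutation preserves the pairing (w_ij, w_ji) while randomising which member of each pair is the strong one.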

Gaussian-distributed synaptic efficacies bias the network towards symmetry
In Fig. 3C,D, we show the adjacency matrix W of a random pruned network with pruning parameter a = 0.4. A network with uniformly distributed initial connectivity is shown in Fig. 3C and a network with Gaussian-distributed initial connectivity is shown in Fig. 3D. Black areas represent zero connections, w_ij = 0. The ''Gaussian'' network has most of its connections close to the mean value μ_w, resulting in higher values of the symmetry measure than in the case of a uniform distribution, compare Fig. 3B with Fig. 3A.
This difference in the mean values of s depending on the shape of the distribution implies, for example, that a weight configuration that would be classified as non-random under the hypothesis that the initial connectivity, before learning, is uniform, is classified as random under the hypothesis that the initial distribution of the connections is Gaussian. To emphasise this point further, we show in Fig. 4 the adjacency matrices of two different networks of 30 neurons. The first network, Fig. 4A, is a non-pruned network with s ≈ 0.900. According to the values obtained from the statistical analysis (Table 2), if we assume that the connections of this network are randomly drawn from a uniform distribution, the p-value test (Eq. (38)) gives p ≈ 6.50×10⁻¹². With the usual significance level of p_s = 0.05, this is a significant result, implying that the network configuration is unlikely to be random. On the other hand, if we assume that the initial connectivity is drawn from a Gaussian distribution, we obtain p ≈ 0.25, meaning that the network configuration should be considered random.
In Fig. 4B, we show a pruned network with a = 0.2 and s ≈ 0.334. In this case the opposite is true: under the hypothesis of uniform random initial connectivity, the network should be considered random, as p ≈ 0.18. Under the hypothesis of Gaussian-distributed random initial connectivity, the network should be considered asymmetric, as p ≈ 7.20×10⁻⁵.

Relation between symmetry measure and motifs
In what follows, we demonstrate the relation between our symmetry measure and unidirectional and bidirectional motifs. From the definition of s, Eq. (37), we can deduce that in the extreme case of a network with only unidirectional motifs, i.e. pairs of the form (0, x), x > 0, the symmetry measure will be s = 0, while in the case of only bidirectional motifs, i.e. pairs of the form (x, x), the symmetry measure will be s = 1. By inverting Eq. (3), we can derive the mean value for connection pairs, μ_Z := E_n[Z_k] = 1 − μ_s. We can now use this value to classify connection pairs in a network as unidirectional or bidirectional: if Z_k ≥ μ_Z then pair k is a unidirectional motif, otherwise it is a bidirectional motif. In this way we relate unidirectional and bidirectional motifs to what are traditionally called single-edge motifs and second-order reciprocal motifs, respectively.

Figure 4. Symmetry and asymmetry depend on the distribution of the initial connectivity. A Example of an adjacency matrix in a random network with pruning parameter a = 0 and symmetry measure s ≈ 0.900. According to the p-value test with the null hypothesis of random connectivity and a significance level of 0.05, the symmetry of this network is significant if the distribution of the initial connections is uniform but non-significant if the initial distribution of the connections is Gaussian. Therefore, in the first case it should be regarded as a non-random network whereas in the second case as a random network. B The same as A but with pruning parameter a = 0.2 and symmetry measure s ≈ 0.334. In this case, with the same hypothesis test, the situation is reversed: the network should be considered random for an initial uniform distribution of connections, but non-random for initial Gaussian-distributed connections (see the discussion in the text). doi:10.1371/journal.pone.0100805.g004
It is then expected that as s increases, the fraction of bidirectional motifs increases towards 1, whereas the fraction of unidirectional motifs decreases towards 0. We show this relation in simulations by generating 10³ networks of 15 neurons each, with uniformly distributed random connections in [0, 1] and no pruning. In this case the mean value of the symmetry measure is μ_s^u ≈ 0.614. Using Eq. (3), we have μ_Z^u ≈ 0.386, which is the value used to decide whether a connection pair is unidirectional or bidirectional. For each of these networks, we calculated the value of the symmetry measure and the fractions of unidirectional and bidirectional motifs, and we plotted the results in Fig. 5A as a scatter plot (black circles - bidirectional motifs, grey circles - unidirectional motifs). In Fig. 5B we show the analogous results obtained when we prune the connections with a = 0.4. In both cases, a linear relation between s and the motif fractions is evident.
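The classification rule can be written directly from the definition (Python sketch; `motif_fractions` is our name for the procedure):

```python
import numpy as np

rng = np.random.default_rng(8)

def motif_fractions(W, mu_Z):
    """Classify each non-(0,0) pair as unidirectional (Z_k >= mu_Z)
    or bidirectional (Z_k < mu_Z); return the two fractions."""
    iu = np.triu_indices(W.shape[0], 1)
    a, b = W[iu], W.T[iu]
    tot = a + b
    Z = np.abs(a - b)[tot > 0] / tot[tot > 0]
    uni = float(np.mean(Z >= mu_Z))
    return uni, 1.0 - uni

# Uniform random 15-neuron network; threshold mu_Z = 1 - mu_s ~= 0.386
W = rng.random((15, 15))
np.fill_diagonal(W, 0)
uni, bi = motif_fractions(W, 1 - 0.614)
print(uni, bi)
```

A fully mirrored matrix yields a bidirectional fraction of 1, a strictly triangular one a unidirectional fraction of 1, matching the two extreme cases discussed above.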
Note in both figures the restricted domain on the s-axis: this is determined by the range of s values that correspond to random networks. If we want to extend this range, we need to consider networks that are no longer random. We achieve this by fixing a distribution for the connection pairs Z. Once we decide on the desired value of s, in our case the whole zero-to-one spectrum, we can use a distribution (e.g. Gaussian) with mean μ_Z = 1 − μ_s and a chosen variance to draw the values of all the connection pairs in the network. Following this procedure, we fill the upper triangular part of the 15×15 weight matrix with random values from the uniform/Gaussian distribution, and derive the other half of the weights by inverting the definition of Z. As the PDF of Z we chose a Gaussian distribution around μ_Z with σ = 0.1, except for the extreme cases (near s = 0, s = 1), where σ = 0. With this technique of creating networks, we sampled the entire domain of s in steps of 0.01. For each value, we again generated 10³ networks of 15 neurons with (half of the) weights uniformly distributed, and then we computed the mean value and standard deviation. Results are shown in Fig. 5C,D respectively for unpruned and pruned (with a = 0.4) networks (black line - bidirectional motifs, grey line - unidirectional motifs). We can see that Fig. 5C,D correctly reproduce the linear regime observed in Fig. 5A,B for values of s close enough to μ_s^u. Due to the method by which we generated the networks, the shape of the distribution of half of the weights does not affect the shape of the dependence in Fig. 5C,D. Indeed, if we choose half of the connections to be Gaussian-distributed, we observe only a shift in both curves, as they have to cross at μ_s^g ≈ 0.885 (results not shown).
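The construction of networks with a prescribed s can be sketched as follows (Python; our inversion draws the larger weight uniformly and assigns the orientation at random, an illustrative variant of the procedure described above):

```python
import numpy as np

rng = np.random.default_rng(10)

def network_with_target_s(N, s_target, sd=0.05):
    """Draw each pair's Z from N(1 - s_target, sd) clipped to [0, 1],
    then invert Z = (w_big - w_small) / (w_big + w_small) for w_small."""
    W = np.zeros((N, N))
    for i in range(N):
        for j in range(i + 1, N):
            Z = float(np.clip(rng.normal(1.0 - s_target, sd), 0.0, 1.0))
            w_big = rng.uniform(0.1, 1.0)       # keep weights well above 0
            w_small = w_big * (1.0 - Z) / (1.0 + Z)
            if rng.random() < 0.5:              # random orientation
                W[i, j], W[j, i] = w_big, w_small
            else:
                W[i, j], W[j, i] = w_small, w_big
    return W

def sym(W):
    # symmetry measure of Eq. (37), (0,0) pairs excluded
    iu = np.triu_indices(W.shape[0], 1)
    a, b = W[iu], W.T[iu]
    keep = (a + b) > 0
    return 1.0 - np.mean(np.abs(a - b)[keep] / (a + b)[keep])
```

Because each pair realises exactly the Z value drawn for it, the measured s of a generated network concentrates tightly around the target.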

Symmetry measure and eigenvalues
In the definition of our symmetry measure we have deliberately excluded (0, 0) connection pairs. This was a conscious decision for mathematical and practical reasons, see Methods. As a consequence, pairs of the form (0, 0) do not contribute to the evaluation of the symmetry of the network. Instead, pairs of the form (0, ε), with ε very small, contribute to the asymmetry of the network according to our specific choice of symmetry measure (leading to Z = 1, see Methods). Here we further motivate this choice via a comparison of our measure to the evaluation of symmetry via the matrix eigenvalues, for three types of networks: (i) symmetric, where each connection pair consists of synapses of the same value, (ii) asymmetric, where every connection pair has one connection set to a small value ε, and (iii) random, where connections are uniformly distributed. We demonstrate that our measure has a clear advantage over the eigenvalue method, in particular when pruning is introduced. This difference in performance lies in the different ways that (0, 0) and (0, ε) pairs are treated by our measure.
A crucial property of real symmetric matrices is that all their eigenvalues are real. Fig. 6A depicts the fraction of complex eigenvalues vs the pruning parameter a for a symmetric (dash-dotted, light grey line), an asymmetric (dotted, dark grey line) and a random (dashed, black line) matrix with uniformly distributed values, similar to Fig. 3A and with the same statistics (10^5 networks of 10 neurons). As expected, if no pruning takes place (a = 0), symmetric matrices have no complex eigenvalues and are clearly distinguishable from random and asymmetric matrices. On the contrary, both random and asymmetric matrices have a non-zero number of complex eigenvalues, which increases with a higher degree of asymmetry, leading to a considerable overlap between these two cases, unlike what happens with our measure in Fig. 3A.
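The eigenvalue-based comparison can be reproduced with a short script. The three matrix constructions below follow the network types (i)-(iii) described above; the size, seed and the small value eps (standing for ε) are arbitrary choices of ours.

```python
import numpy as np

def complex_eig_fraction(W):
    """Fraction of eigenvalues of W with a non-zero imaginary part."""
    eig = np.linalg.eigvals(W)
    return float(np.mean(np.abs(eig.imag) > 1e-12))

rng = np.random.default_rng(0)
n, eps = 10, 1e-3

# (i) symmetric: both weights of each connection pair are equal
U = np.triu(rng.uniform(0, 1, (n, n)), 1)
S = U + U.T

# (ii) asymmetric: one weight of each pair is set to the small value eps
A = np.triu(rng.uniform(0, 1, (n, n)), 1) + eps * np.tril(np.ones((n, n)), -1)

# (iii) random: all off-diagonal weights independent and uniform
R = rng.uniform(0, 1, (n, n))
np.fill_diagonal(R, 0)

print(complex_eig_fraction(S), complex_eig_fraction(A), complex_eig_fraction(R))
```

The symmetric matrix always yields a fraction of exactly zero, while the asymmetric and random matrices generally yield overlapping non-zero fractions, which is the discrimination problem discussed in the text.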
As we introduce pruning, the mean fraction of complex eigenvalues of the three distinct network types moves towards a common value: it increases for the symmetric network and decreases for the random and non-symmetric networks. This is expected, as pruning specific elements makes the symmetric network more asymmetric, while it increases the symmetry of the asymmetric network by introducing pairs of the form (0,ε) or (0,0). The (0,ε) pairs are due to the construction of the asymmetric network, where half of the connections are stochastically set to very low values.
This continues until a = 0.5, after which further pruning reduces the number of complex eigenvalues of all networks: a high level of pruning implies the formation of more (0,ε) or (0,0) pairs for the asymmetric network and more (0,0) pairs for the symmetric network. In Fig. 6B we show how the fraction of complex eigenvalues of uniform random matrices depends on their size.
Comparing Fig. 6A to Fig. 3A, we observe that our symmetry measure offers excellent discrimination between the symmetric, asymmetric and random matrices for, e.g., a = 0.4. This is despite the fact that the structure of the asymmetric matrix per se has become less asymmetric and the structure of the symmetric matrix has become more asymmetric due to the pruning, as confirmed by the overlapping fractions of complex eigenvalues for asymmetric and random matrices (Fig. 6A). In our measure, (0,ε) pairs are treated as asymmetric, (0,0) pairs are ignored, and the bias that pruning introduces is taken into account, allowing for good discrimination between all types of matrices even beyond a = 0.4.

Case study: Monitoring the connectivity evolution in neural networks
We demonstrate the application of the symmetry measure to a network of neurons evolving in time according to a Spike-Timing Dependent Plasticity (STDP) ''triplet rule'' [32], adopting the protocols of [18]. These protocols are designed to drive a network whose connections are modified according to the ''triplet rule'' towards either a unidirectional or a bidirectional configuration, with the weights being stable in the presence of hard bounds. We have deliberately chosen a small network as a ''toy model'' that allows for visual inspection and characterisation at the mesoscopic scale.
We simulated N = 30 integrate-and-fire neurons (see Methods for simulation details), initially connected with random weights w_ij ∈ [0,1] drawn from either a uniform (Fig. 7) or a Gaussian (Fig. 8) distribution (see Table 1 for parameters). Where a pruning parameter is mentioned, the pruning took place prior to the learning procedure: with a fixed probability, some connections were set to zero and were not allowed to grow during the simulation.
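A minimal sketch of this pruning step, assuming independent removal of each off-diagonal connection with probability a (the helper name prune_mask is ours; the simulation itself is described in Methods):

```python
import numpy as np

def prune_mask(n, a, rng=None):
    """Boolean mask of surviving connections: each off-diagonal entry is
    pruned (set to zero and frozen at zero during learning) with probability a."""
    rng = np.random.default_rng(rng)
    mask = rng.random((n, n)) >= a
    np.fill_diagonal(mask, False)   # no self-connections
    return mask

rng = np.random.default_rng(1)
n, a = 30, 0.4
mask = prune_mask(n, a, rng)
W = rng.uniform(0.0, 1.0, (n, n)) * mask   # initial weights, pruned entries at zero
# During the simulation, plasticity updates dW act only on surviving connections,
# with hard bounds keeping the weights in [0, 1]:
# W = np.clip(W + dW, 0.0, 1.0) * mask
```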
Our choice allows us to produce an asymmetric or a symmetric network depending on the external stimulation protocol applied to the network. Since the amplitude of the external stimulation we chose (a_ext = 30 mV) is large enough to make a neuron fire every time it is presented with an input, the firing pattern of the neurons reflects the input pattern and we can refer to either interchangeably. The asymmetric network has been obtained by using a ''sequential protocol'', in which neurons fire with the same frequency in a precise order, one after the other with a 5 ms delay, see also [18]. The symmetric network is produced by applying a ''frequency protocol'', in which each neuron fires at a different frequency from the values {15, 16, 17, ..., 44} Hz. In both cases, the input signals were randomly jittered in time with zero mean and a standard deviation equal to 2% of the period of the input itself for the frequency protocol, and to 25% of the delay for the sequential protocol. Depending on the protocol, we expect the neurons to form mostly unidirectional or bidirectional connections during the evolution.
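The two stimulation protocols can be sketched as spike-time generators. This is an illustrative reconstruction from the parameters quoted above (5 ms delay, 15-44 Hz, jitter of 25% of the delay or 2% of the period), not the authors' simulation code; the function names and the simulation duration are ours.

```python
import numpy as np

def sequential_protocol(n, delay=5.0, t_end=1000.0, jitter_frac=0.25, rng=None):
    """Input spike times (ms): neuron i fires delay*i after neuron 0, the pattern
    repeating with period n*delay; each spike is jittered by a zero-mean Gaussian
    with sd = jitter_frac * delay."""
    rng = np.random.default_rng(rng)
    starts = np.arange(0.0, t_end, n * delay)
    return [np.sort(starts + i * delay
                    + rng.normal(0.0, jitter_frac * delay, starts.size))
            for i in range(n)]

def frequency_protocol(n, f_min=15.0, t_end=1000.0, jitter_frac=0.02, rng=None):
    """Neuron i fires regularly at (f_min + i) Hz, each spike jittered by a
    zero-mean Gaussian with sd = jitter_frac times the neuron's own period."""
    rng = np.random.default_rng(rng)
    trains = []
    for i in range(n):
        period = 1000.0 / (f_min + i)          # ms
        t = np.arange(0.0, t_end, period)
        trains.append(np.sort(t + rng.normal(0.0, jitter_frac * period, t.size)))
    return trains
```

With n = 30 and f_min = 15 Hz this reproduces the frequency range {15, ..., 44} Hz used in the text.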
The time evolution for both protocols and initial distributions is shown in Figs. 7A,D (uniform) and 8A,D (Gaussian). Each panel represents the evolution of the symmetry measure averaged over 50 different representations, for both fully connected networks (a = 0, solid black line) and pruned networks (e.g. a = 0.4, dashed grey line). The shaded area represents the standard deviation. The time course of the symmetry measure can be better understood with the help of Fig. 3. At the beginning, the values of s reflect what we expect from a random network. Afterwards, as time passes, the learning process drives the evolution of the connectivity. As expected, the frequency protocol induces the formation of mostly bidirectional connections, leading to the saturation of s towards its maximum value, which depends on the degree of pruning. When we apply the sequential protocol, on the other hand, connection pairs develop a high degree of asymmetry (see the values of s in Table 3). For a = 0, a careful inspection of Fig. 7B, 8B indicates that connectivity is bidirectional: all-to-all strong connections have been formed. In Fig. 7E, 8E, by contrast, determining whether a particular connectivity is emerging in the network becomes considerably harder. However, using our symmetry measure (see values in Table 3), we can infer that the connectivity is unidirectional. In the pruned networks (Fig. 7C, 8C and Fig. 7F, 8F), the formation of bidirectional and unidirectional connection pairs is not as obvious as for a = 0. We therefore refer again to Table 3 and compare the values of s with μ_s^rand and with μ_s^asym or μ_s^sym, depending on the case. We can then verify that the learning process has significantly changed the network and its inner connections from the initial random state.
We can rigorously verify the above conclusions via a statistical hypothesis test such as the p-value test, which in essence quantifies how far the value of our symmetry measure s for the final configuration is from the initial, random configuration (see also Methods). In Table 3 we show the p-values corresponding to the null hypothesis of random connectivity for the examples in Figs. 7, 8. Setting the significance level at p_s = 0.05, we can verify that, except for the case of the pruned network with initially Gaussian-distributed connections to which a frequency protocol has been applied (i.e. the GF case with a = 0.4), the p-values are significant, implying the rejection of the null hypothesis. This is also justified by Fig. 3A,B: when we increase the pruning, the mean value of the symmetry measure of the fully symmetric network approaches that of the pruned random network, in particular when the weights are Gaussian-distributed.

Figure 8. Evolution of networks with STDP and initially Gaussian-distributed weights. A Time evolution of the symmetry measure when a frequency protocol is applied to a network, shown as an average over 50 representations. The shaded light grey areas represent the standard deviation (the total height of each band is twice the standard deviation). Solid black line: no pruning; dashed grey line: pruning with a = 0.4. B Example of an adjacency matrix at the end of the evolution for a network with the frequency protocol and no pruning. For this example s ≈ 0.963. C The same as B but with pruning a = 0.4. For this example s ≈ 0.456. D, E, F The same as A, B and C but with the sequential protocol applied. The connectivity matrix in panel E has s ≈ 0.426; the connectivity matrix in panel F has s ≈ 0.153. doi:10.1371/journal.pone.0100805.g008
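A hedged sketch of such a test: as a stand-in for the measure defined in Methods, we assume s is the average over non-(0,0) pairs of 1 - |w_ij - w_ji|/(w_ij + w_ji); we then estimate the null distribution of s by sampling random (optionally pruned) uniform networks and approximate the two-sided p-value with a Gaussian tail. All function names and the Gaussian approximation are our assumptions.

```python
import math
import numpy as np

def symmetry_measure(W):
    """Assumed symmetry measure: mean over connection pairs of
    1 - |w_ij - w_ji| / (w_ij + w_ji), ignoring (0, 0) pairs."""
    n = W.shape[0]
    vals = []
    for i in range(n):
        for j in range(i + 1, n):
            tot = W[i, j] + W[j, i]
            if tot > 0:                         # skip (0, 0) pairs
                vals.append(1.0 - abs(W[i, j] - W[j, i]) / tot)
    return float(np.mean(vals))

def p_value(s_obs, n, a=0.0, n_null=1000, rng=None):
    """Two-sided p-value of s_obs under the null hypothesis of random
    (optionally pruned) uniform connectivity, via a Gaussian approximation
    of the sampled null distribution of s."""
    rng = np.random.default_rng(rng)
    null = []
    for _ in range(n_null):
        W = rng.uniform(0.0, 1.0, (n, n))
        W *= rng.random((n, n)) >= a            # random pruning with probability a
        np.fill_diagonal(W, 0.0)
        null.append(symmetry_measure(W))
    z = (s_obs - np.mean(null)) / np.std(null)
    return math.erfc(abs(z) / math.sqrt(2))     # two-sided Gaussian tail
```

A fully symmetric configuration (s = 1) then yields a vanishingly small p-value against the random null, as in the significant entries of Table 3.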

Summary
The study of the human brain reveals that neurons sharing the same cognitive functions or coding tend to form clusters, which appear to be characterised by the formation of specific connectivity patterns, called motifs. We therefore introduced a mathematical tool, a symmetry measure s, which computes a mean value over the connection pairs in a network and allows us to monitor the evolution of the network structure under the synaptic dynamics. In this context, we applied it to a number of evolving networks with plastic connections modified according to a learning rule. After the network connectivity reaches a steady state as a consequence of the learning process, connectivity patterns develop. The use of the symmetry measure together with the statistical analysis and the p-value test allows us both to quantify the connectivity structure of the network, which has changed due to the learning process, and to observe its development. It also allows for some interesting observations. (i) Introducing a fixed amount of pruning in the network prior to the learning process biases the adjacency matrix towards an asymmetric configuration. (ii) A network configuration that appears to be symmetric under the assumption of a uniform initial distribution is random under the assumption of a Gaussian initial distribution.
Statements on non-random connectivity of motifs in experimental work, e.g. [14,20], are supported by calculating the probability of connectivity in a random network and then distributing it uniformly: this becomes the null hypothesis. This was a most suitable approach given the paucity of data. If, however, the null hypothesis consisted of Gaussian-distributed connectivity, then a higher number of bidirectional connections would be expected, as suggested by our analysis.
It is also possible that in a large network the learning processes modify only a subset of the connections, forming motifs that might go unobserved if the symmetry measure is applied to the whole adjacency matrix. In such cases, algorithms for detecting potential symmetric or asymmetric clusters would identify the area of interest, and the symmetry measure presented here would reveal the evolution of the structure and its significance.