A permutation method for network assembly

We present a method for assembling directed networks with a prescribed bi-degree (in- and out-degree) sequence. The method permutes initial adjacency matrix assemblies that conform to the prescribed in-degree sequence but violate the given out-degree sequence, combining directed edge-swapping and constrained Monte-Carlo edge-mixing to improve the approximation to the given out-degree sequence until it is matched exactly. Our method permits inclusion or exclusion of ‘multi-edges’, allowing assembly of weighted or binary networks. It further allows the overall percentage of such multiple connections to be prescribed, permitting exploration of a weighted synthetic network space; no other currently available method allows comparison of real-world networks against null spaces with controlled multi-edge proportions. The method samples graph space non-uniformly, but the algorithm provides weightings for the sample space across all possible realisations, so that statistical averages of network metrics can be computed as if the space were sampled uniformly. Given in- and out-degree sequences satisfying the conditions of graphicality, the method can also produce simple graphs. Our method successfully builds networks with of order 10^7 edges within minutes on a laptop running Matlab. We provide our implementation of the method in a GitHub repository for immediate use by the research community, and demonstrate its application to three real-world networks for null-space comparisons, as well as to the study of the dynamics of neuronal networks.


Introduction
The effect of the structure of a network of neurons on its dynamics is a topic of considerable current interest [35,17,21,29,30,31,19]. Here we consider the effects of degree assortativity in a large network of theta neurons. Assortativity in this context refers to the probability that a neuron with a given in- and out-degree connects to another neuron with a given in- and out-degree. If this probability is what one would expect by chance, given the neurons' degrees, the network is referred to as neutral. If the probability is higher (lower) than one would expect by chance, the network is assortative (disassortative).
Assortativity has often been studied in undirected networks, where a node simply has a degree, rather than in- and out-degrees (the number of connections to and from a node, respectively) [28,25,24]. Since neurons form directed connections, there are four types of assortativity to consider [10]: between either the in- or out-degree of a presynaptic neuron and either the in- or out-degree of a postsynaptic neuron (Figure 1). We are aware of only a small number of previous studies in this area [30,13]. Kähne et al. [13] considered networks with equal in- and out-degrees and investigated degree assortativity, effectively correlating both in- and out-degrees of pre- and post-synaptic neurons. They mostly considered networks with discrete time and a Heaviside firing rate, i.e. a McCulloch-Pitts model [22], and found that positive assortativity created new fixed points of the model dynamics. Schmeltzer et al. [30] also considered networks with equal in- and out-degrees and investigated degree assortativity. These authors considered leaky integrate-and-fire neurons and derived approximate self-consistency equations governing the steady-state neuron firing rates. They found, among other things, that positive assortativity increased the firing rates of high-degree neurons and decreased those of low-degree ones. Positive assortativity also seemed to make the network more capable of sustained activity when the external input to the network was low.
Figure 1. Assortativity in undirected and directed networks. An undirected network (left column) is assortative if high-degree nodes are more likely to be connected to high-degree nodes, and low to low, than by chance (top left). Such a network is disassortative if the opposite occurs (bottom left). In directed networks (right column) there are four possible kinds of assortativity: the probability of a connection (red) is influenced by the number of red-shaded links of the sending (left) and receiving (right) node.
Experimental evidence for positive degree assortativity includes the study [6], which examined a neuronal culture but did not determine the directionality of connections. Eguíluz et al. [7] also found evidence of positive assortativity in the brain, again without determining directionality. They additionally found evidence for a power-law degree distribution, as we consider here.
The outline of the paper is as follows. In Sec. 2 we present the model and then derive several approximate descriptions of its dynamics. In Sec. 3 we describe the method for creating networks with prescribed types of degree assortativity, and in Sec. 4 we discuss aspects of the numerical implementation of the reduced model. Results are given in Sec. 5 and we conclude with a discussion in Sec. 6. Appendix A contains the algorithms we use to generate networks with prescribed assortativity.

Model description and simplifications
We consider a network of N theta neurons:

dθ_j/dt = 1 − cos θ_j + (1 + cos θ_j)(η_j + I_j),  (1)

where the input current to neuron j is

I_j = (K/⟨k⟩) Σ_{n=1}^{N} A_{jn} P_q(θ_n)  (2)

and the pulse function is

P_q(θ) = a_q (1 − cos θ)^q.  (3)

Here η_j is a constant current entering the jth neuron, randomly chosen from a distribution g(η), K is the strength of coupling, ⟨k⟩ is the mean degree of the network, and the connectivity of the network is given by the adjacency matrix A, where A_{jn} = 1 if neuron n connects to neuron j, and zero otherwise. The connections within the network are either all excitatory (if K > 0) or all inhibitory (if K < 0). Thus we do not consider the more realistic and general case of a connected population of both excitatory and inhibitory neurons, although this would be possible using the framework below. The theta neuron is the normal form of a Type I neuron, which undergoes a SNIC bifurcation as the input current is increased through zero [8,9]. A neuron is said to fire when θ increases through π, and the pulse function (3) is meant to mimic the current injected from neuron n into any postsynaptic neurons when neuron n fires; a_q is a normalisation constant such that ∫_0^{2π} P_q(θ) dθ = 2π independent of q.
The in-degree of neuron j is defined as the number of neurons which connect to it, i.e. k_j^in = Σ_{n=1}^{N} A_{jn}, while the out-degree of neuron n is the number of neurons it connects to, i.e. k_n^out = Σ_{j=1}^{N} A_{jn}. Since each edge connects two neurons, we can define the mean degree ⟨k⟩ = (1/N) Σ_j k_j^in = (1/N) Σ_n k_n^out. Networks such as (1) have been studied by others [20,2,14]; we note that under the transformation V = tan(θ/2) the theta neuron becomes the quadratic integrate-and-fire neuron with infinite thresholds [23,18].
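In matrix terms these quantities are just row and column sums of A; a minimal pure-Python sketch (the example matrix is ours, chosen for illustration):

```python
# Adjacency convention from the text: A[j][n] = 1 if neuron n connects to neuron j.
A = [[0, 1, 1],
     [1, 0, 0],
     [1, 1, 0]]

N = len(A)
k_in = [sum(A[j]) for j in range(N)]                          # row sums
k_out = [sum(A[j][n] for j in range(N)) for n in range(N)]    # column sums

# Each edge contributes one in-degree and one out-degree, so the totals agree.
mean_k = sum(k_in) / N
assert sum(k_in) == sum(k_out)
```

The equality of the two totals is what makes the single mean degree ⟨k⟩ well defined.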

2.1. An infinite ensemble. As a first step we consider an infinite ensemble of networks with the same connectivity, i.e. the same A_{jn}, but in each member of the ensemble the value of η_j associated with the jth neuron is randomly chosen from the distribution g(η) [1]. Thus we expect a randomly chosen member of the ensemble to have values of η_j in given ranges with probability g(η_1)g(η_2)···g(η_N) dη_1 dη_2···dη_N. The state of this member of the ensemble is described by a probability density which satisfies a continuity equation. We define the marginal distribution for the jth neuron, evaluating I_j as an average over the ensemble rather than from a single realisation (as in (2)); this is reasonable in the limit of large networks [1]. Multiplying the continuity equation by Π_{k≠j} dθ_k dη_k and integrating, we obtain an evolution equation for this marginal distribution. A network of theta neurons is known to be amenable to the Ott/Antonsen ansatz [27,20,14], so we write the dependence on θ_j as a Fourier series in which the kth coefficient is the kth power of a function α_j. Substituting this ansatz into the marginal evolution equation we find that α_j satisfies an ODE, where an overbar indicates complex conjugate. Assuming that g is a Lorentzian,

g(η) = (Δ/π) / ((η − η_0)^2 + Δ^2),

we can use contour integration to evaluate the relevant integral; evaluating at the appropriate pole of g we obtain equation (19), a set of evolution equations for z_j = ⟨e^{iθ_j}⟩, where the expected value is taken over the ensemble and the coupling enters through the term Σ_n A_{jn} H(z_n; q), with the function H defined in (20). Now (19) is a set of N coupled complex ODEs, so we have not simplified the original network (1) in the sense of decreasing the number of equations to solve. However, the states of interest are often fixed points of (19) (but not of (1)), and can thus be found and followed as parameters are varied. At this point the network we consider, with connectivity given by A, is arbitrary.
If A were a circulant matrix, for example, this would represent a network of neurons on a circle, where the strength of coupling between neurons depends only on the distance between them [14].

2.2. Lumping by degree. The next step is to assume that for a large enough network, neurons with the same degrees will behave similarly [2]. Such an assumption has been made a number of times in the past [13,12,28]. We thus associate with each neuron the degree vector k = (k_in, k_out) and assume that the values of z for all neurons with a given k are similar. There are N_k = N_{k_in} N_{k_out} distinct degree vectors, where N_{k_in} and N_{k_out} are the numbers of distinct in- and out-degrees, respectively. We define b_s to be the order parameter for neurons with degree k_s, where s ∈ [1, N_k], and now derive equations for the evolution of the b_s.
Let z be the vector of ensemble states z_j, where j ∈ [1, N], and let the degree index of neuron j be d(j), such that k_{d(j)} is its degree. We assume that in sufficiently large networks the ensemble state z_j is similar for all neurons with the same degree k_{d(j)} = k_s, and thus we only track the mean value ⟨z_j⟩_{d(j)=s} = b_s with s ∈ [1, N_k]. We say that degree k_s occurs h_s times and thus write

b = Cz,  (21)

where the N_k × N matrix C has h_s entries in row s, each of value 1/h_s, at positions j where d(j) = s, and zeros elsewhere, i.e. C_{sj} = δ_{s,d(j)}/h_s with δ the Kronecker delta.
To find the time derivative of b we need to express z in terms of b, which we do with an N × N_k matrix B which assigns to z_j the corresponding b_s value, such that z = Bb (22), with components B_{js} = δ_{d(j),s}. Note that CB = I_{N_k}, the N_k × N_k identity matrix. Differentiating (21) with respect to time, inserting (19) and writing z in terms of b using (22), we obtain (23). Since each row of B has exactly one non-zero entry B_{jt}, equal to 1, the identity (Bb)^n = B(b^n) holds for any power n (24); thus the local term in (23) can be expressed in terms of b alone. For the non-local term we write J̃_s for the synaptic current of the ensemble equations averaged over nodes sharing the same degree k_s. The identity (24) also applies to (20), so the current can be written in terms of the matrix E = CAB, which expresses the effective connectivity between neurons with different degrees. We thus end up with equations (30)-(31) governing the b_s. These are of the same form as (19)-(20) except that A has been replaced by E. Note that the connectivity matrix A is completely general; we have only assumed that neurons with the same degrees behave in the same way. We are not aware of a derivation of this form having been previously presented.
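The lumping operators are simple to construct in code. The pure-Python sketch below (toy network and class labels are ours) builds C, which averages node states within a degree class, and B, which copies each class value back to its nodes, and verifies CB = I before forming E = CAB:

```python
def lumping_matrices(d, n_classes):
    """Build C (n_classes x N, averaging) and B (N x n_classes, copying)
    from degree-class labels d[j] in {0, ..., n_classes - 1}."""
    N = len(d)
    h = [d.count(s) for s in range(n_classes)]            # class occupancies h_s
    C = [[(1.0 / h[s]) if d[j] == s else 0.0 for j in range(N)]
         for s in range(n_classes)]
    B = [[1.0 if d[j] == s else 0.0 for s in range(n_classes)]
         for j in range(N)]
    return C, B

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

# Two degree classes over four nodes; E = C A B is then 2 x 2.
d = [0, 0, 1, 1]
A = [[0, 1, 1, 0],
     [1, 0, 0, 1],
     [1, 1, 0, 0],
     [0, 0, 1, 0]]
C, B = lumping_matrices(d, 2)
CB = matmul(C, B)               # identity on the class space
E = matmul(matmul(C, A), B)     # effective inter-class connectivity
```

Entry E[s][t] is the mean number of connections a class-s node receives from class-t nodes, matching the interpretation of E in the text.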

Network assembly
We are interested in the effects of degree assortativity on the dynamics of the network of neurons. We will choose a default network with no assortativity, then introduce one of the four types of assortativity and investigate the changes in the network's dynamics. Our default network has N = 5000 neurons, where the in- and out-degrees k of each neuron are independently drawn from the interval [750, 2000] with probability P(k) ∝ k^{−3}. We create networks using the configuration model [25], then modify them using algorithms which introduce assortativity and then remove multiple connections between nodes (or multi-edges), as described in Appendix A. Further, to observe any influence of multi-edges on the dynamics investigated here, we also developed a novel network assembly technique permitting the introduction of very high densities of multi-edges, also described in the Appendix; we refer to this as the "permutation" method. We choose as our default parameters η_0 = −2, Δ = 0.1, K = 3, for which a default network approaches a stable fixed point. The sharpness of the synaptic pulse function is set to q = 2 for all simulations.
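The degree-sampling step can be sketched as follows (our own minimal implementation by inverse-transform sampling on the discrete distribution; the authors' code may differ in detail):

```python
import bisect
import random

def sample_degrees(n, k_min=750, k_max=2000, gamma=3.0, seed=0):
    """Draw n degrees from P(k) proportional to k^(-gamma) on [k_min, k_max]."""
    rng = random.Random(seed)
    ks = list(range(k_min, k_max + 1))
    weights = [k ** (-gamma) for k in ks]
    total = sum(weights)
    # cumulative distribution for inverse-transform sampling
    cdf, acc = [], 0.0
    for w in weights:
        acc += w / total
        cdf.append(acc)
    out = []
    for _ in range(n):
        idx = bisect.bisect_left(cdf, rng.random())
        out.append(ks[min(idx, len(ks) - 1)])
    return out

k_in = sample_degrees(5000)
k_out = sample_degrees(5000, seed=1)   # drawn independently of k_in
```

Note that to feed these sequences to the configuration model the two sums must be equal, which in practice requires redrawing or adjusting degrees until Σk_in = Σk_out.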
We first check the validity of averaging over an infinite ensemble. We assemble 20 different default networks and for each, run (19)-(20) to a steady state and calculate the order parameter z, the mean of Bb. The real part of z is plotted in orange in Fig. 2. For each of these networks we then generated 50 realisations of the η_j and ran (1)-(2) for long enough that transients had decayed, then measured the corresponding order parameter for the network of individual neurons and plotted its real part in blue in Fig. 2. Note that the orange circles always lie well within the range of values shown in blue. The fact that deviations within the 50 realisations are small relative to the value obtained by averaging over an infinite ensemble provides evidence for the validity of this approach, at least for these parameter values.
We also investigate the influence of multi-connections (i.e. more than one connection between a given pair of neurons) on the network dynamics. The configuration model creates a network in which the neuron degrees are exactly those specified by the choice from the appropriate distribution, but it typically produces both self-connections and multiple connections between neurons. We have an algorithm for systematically removing such connections while preserving the degrees, and found that removing these edges has no significant effect (results not shown). We also have an algorithm (see "Permutation Method" in Appendix A) for increasing the number of multi-edges beyond the number found using the default configuration model. This novel network assembly method meets specified neuron degrees while producing specified densities of multi-connections ranging from none to 97%; see Fig. 3 for the results of such calculations. Only when the fraction of multi-edges approaches 90% do we see a significant effect. In our simulations, however, we use simple graphs without multi-edges.
3.1. Assortativity. For a given matrix A we can measure its assortativity by calculating the four Pearson correlation coefficients r(α, β) with α, β ∈ {in, out}:

r(α, β) = (1/N_e) Σ_{e=1}^{N_e} (^s k_e^α − ⟨^s k^α⟩)(^r k_e^β − ⟨^r k^β⟩) / (σ(^s k^α) σ(^r k^β)),  (33)

where N_e is the number of connections, ⟨·⟩ and σ(·) denote the mean and standard deviation over edges, and the leading superscript s or r refers to the sending or receiving neuron of the respective edge. For example, the sending node's in-degree for the second edge would be ^s k_2^in. Note that there are four mean values to compute: ⟨^s k^in⟩, ⟨^s k^out⟩, ⟨^r k^in⟩ and ⟨^r k^out⟩.
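Equation (33) translates directly into code. A pure-Python sketch (the toy edge list is ours; edges are stored as (i, j) pairs directed from neuron j to neuron i, the convention used below):

```python
def assortativity(edges, k_in, k_out):
    """Four Pearson coefficients r(alpha, beta) over the edge list (eq. (33)).
    Each edge (i, j) is directed from sending neuron j to receiving neuron i."""
    deg = {"in": k_in, "out": k_out}
    n_e = len(edges)
    r = {}
    for a in ("in", "out"):          # degree type of the sending neuron
        for b in ("in", "out"):      # degree type of the receiving neuron
            xs = [deg[a][j] for (i, j) in edges]   # sender degrees, one per edge
            ys = [deg[b][i] for (i, j) in edges]   # receiver degrees
            mx, my = sum(xs) / n_e, sum(ys) / n_e
            cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            sx = sum((x - mx) ** 2 for x in xs) ** 0.5
            sy = sum((y - my) ** 2 for y in ys) ** 0.5
            r[(a, b)] = cov / (sx * sy) if sx and sy else 0.0
    return r

# toy network: 4 neurons, 5 edges
edges = [(0, 1), (0, 2), (1, 2), (2, 3), (3, 0)]
k_in = [sum(1 for (i, j) in edges if i == m) for m in range(4)]
k_out = [sum(1 for (i, j) in edges if j == m) for m in range(4)]
r = assortativity(edges, k_in, k_out)
```

By Cauchy-Schwarz each coefficient lies in [−1, 1], with 0 corresponding to a neutral network.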
We introduce assortativity by randomly choosing two edges and swapping their postsynaptic neurons when doing so would increase the target assortativity coefficient [30]. An edge (i, j) is directed from neuron j to neuron i. To decide whether the pair (i, j) and (h, l) should be rewired or left untouched, we compare their contributions to the covariance in the numerator of (33) under the current pairing, c_= = ^s k_j^α ^r k_i^β + ^s k_l^α ^r k_h^β, and under the swapped pairing, c_× = ^s k_l^α ^r k_i^β + ^s k_j^α ^r k_h^β (the mean values are unaffected, since the swap changes neither the set of sending nor of receiving neurons). If c_× > c_= we replace the edges (i, j) and (h, l) by (i, l) and (h, j), respectively; otherwise we do not, and continue by randomly choosing another pair of edges. Algorithm 1 (see Appendix A) demonstrates a scheme for reaching a given target assortativity coefficient.
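The acceptance test for one proposed swap can be sketched as follows (function name and toy degrees are ours; s_deg and r_deg select which of the four assortativity types is targeted):

```python
def swap_increases(edge1, edge2, s_deg, r_deg):
    """Acceptance test for one proposed swap. An edge (i, j) is directed from
    sending neuron j to receiving neuron i; s_deg / r_deg are the degree
    sequences (in- or out-) used on the sending / receiving side, selecting
    one of the four assortativity types. Proposed swap:
    (i, j), (h, l) -> (i, l), (h, j). The swap changes neither the senders
    {j, l} nor the receivers {i, h}, so the means and variances in (33) are
    unchanged and only the cross products need comparing."""
    (i, j), (h, l) = edge1, edge2
    c_current = s_deg[j] * r_deg[i] + s_deg[l] * r_deg[h]
    c_swapped = s_deg[l] * r_deg[i] + s_deg[j] * r_deg[h]
    return c_swapped > c_current

# toy degrees: a swap that pairs high-with-high is accepted
s_deg = [0, 5, 0, 1]
r_deg = [5, 0, 1, 0]
accept = swap_increases((0, 3), (2, 1), s_deg, r_deg)
```

Because both in- and out-degrees of every neuron are preserved by the swap, the degree sequence is untouched while the target correlation increases monotonically.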
We investigate the effects of the different types of assortativity (see Fig. 1) in isolation, and thus need a family of networks parametrised by the relevant assortativity coefficient. Algorithm 1 is used to create a network with a specific value of one of the assortativity coefficients, but, especially for high values of assortativity, a small amount of assortativity of a type other than the intended one may be introduced in the process. Accordingly, it may be necessary to examine all types of assortativity, apply the mixing scheme to reduce the other types back to zero, and then (if necessary) push the relevant value of assortativity back to its target. We perform multiple iterations of these mixing rounds until all assortativities are at their target values (which may be 0) to within ±0.005. We use Algorithm 1 with a range of target assortativities r and, for each value, store the connectivity matrix A, thus forming the parametrised family E(r). We do this for each of the four types of assortativity.
We have chosen to use the configuration model to create networks with given degree sequences, and then to introduce assortativity by swapping edges. Although we developed our novel "permutation" method as well (see Appendix A), that method was designed for assembling adjacency matrices with desired multi-edge densities and was applied only to that end. By contrast, another common network assembly technique, that of Chung and Lu [4], together with an analytical expression for assortativity (as in [2]), proved inadequate: we found that this approach significantly alters the degree distribution for large assortativity, whereas the configuration model combined with our mixing algorithm does not change degrees at all. For our default network this approach allows us to introduce assortativity of any one kind up to r = ±0.5.

Implementation
For networks of the size we investigate it is impractical to consider each distinct in- and out-degree (because E would be very large and sparse). Due to the smoothness of the degree dependence of b(k) we coarse-grain in degree space by introducing "degree clusters", lumping all nodes within a range of degrees into a group whose dynamics are described by a single variable. Let there be N_c^in clusters in in-degree and N_c^out clusters in out-degree, with a total of N_c = N_c^in · N_c^out degree clusters. The matrix C is then an N_c × N matrix, constructed as previously except that d(j) is now the cluster index of neuron j rather than its degree index, and s is the cluster index running from 1 to N_c; similarly for the matrix B. There are multiple options for how to combine degrees into a cluster. The cluster index of a neuron can be computed linearly, corresponding to clusters of equal size in degree space. However, with this approach, depending on the degree distribution, some of the clusters may be empty or barely filled, resulting in poor statistics. To overcome this issue, the cumulative sums of the in- and out-degree distributions can be used to map degrees to cluster indices. Clusters are then more evenly filled, and at the same time regions of degree space with high degree probability are more finely sampled. The dynamical equations (30)-(31) are equally valid for describing degree cluster dynamics with s, t ∈ [1, N_c] and E = CAB, where C and B are the cluster versions of their previous definitions.
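The cumulative-sum mapping can be sketched as follows (a minimal version of ours, using the empirical distribution of the sampled degrees):

```python
import bisect

def cdf_clusters(degrees, n_clusters):
    """Map each degree to a cluster index in {0, ..., n_clusters - 1} via the
    empirical CDF, so clusters are roughly equally populated and densely
    populated regions of degree space are sampled more finely."""
    N = len(degrees)
    sorted_d = sorted(degrees)
    labels = []
    for k in degrees:
        # fraction of nodes with degree strictly less than k
        frac = bisect.bisect_left(sorted_d, k) / N
        labels.append(min(int(frac * n_clusters), n_clusters - 1))
    return labels

# uniform degrees 0..99 split into 10 equally filled clusters
labels = cdf_clusters(list(range(100)), 10)
```

Nodes with equal degrees always receive the same cluster index, so the lumping assumption of Sec. 2.2 is respected.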
To check the effect of varying the number of clusters we generate 20 default matrices, generate the corresponding matrices E with varying numbers of clusters (N_c^in and N_c^out being equal), then run (30)-(31) to a steady state and plot the real part of z in Fig. 4. We see that the order parameter is well approximated using as few as N_c^in = N_c^out = 10 degree clusters. Beyond that, fluctuations between different network realisations exceed the error introduced by clustering. In our simulations we use 10 degree clusters in each of in- and out-degree space.
Having performed this clustering, we find that it is possible to represent E using a low-rank approximation, calculated using the singular value decomposition. Thus for a fixed r we have E = U S V^T, where S is a diagonal matrix with decreasing entries, called singular values, and U and V are unitary matrices. In Fig. 5 we plot the largest 6 singular values of E as a function of the assortativity coefficient, for the 4 types of assortativity. Even for large |r| the singular values decay very quickly, so a low-rank approximation is possible. We choose a rank-3 approximation, i.e. we approximate E by Σ_{i=1}^{3} s_i u_i v_i^T, where u_i is the ith column of U, similarly for v_i and V, and s_i is the ith singular value. We have such a decomposition at discrete values of r and use cubic interpolation to evaluate E(r) for any r. This decomposition means that the multiplication in (31) can be evaluated quickly using 3 columns each of U and V rather than the full N_c × N_c matrix E. We note that the components of the approximation of E(r) are calculated once and then stored, making it very easy to systematically investigate the effects of varying any of the parameters η_0, Δ, K and q (the latter governing the sharpness of the pulse function (3)).
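The payoff of the rank-3 truncation is in evaluating the coupling sum: Eb can be applied via the stored columns at O(3 N_c) cost rather than O(N_c^2). A pure-Python sketch (the toy vectors are ours; in practice U, S and V would come from a routine such as numpy.linalg.svd):

```python
def lowrank_apply(U_cols, S_vals, V_cols, b):
    """Evaluate E @ b for E approximated by sum_i s_i u_i v_i^T, using only
    the stored leading columns: cost O(rank * N_c) instead of O(N_c^2)."""
    out = [0.0] * len(b)
    for u, s, v in zip(U_cols, S_vals, V_cols):
        coeff = s * sum(vk * bk for vk, bk in zip(v, b))   # s_i * (v_i . b)
        for j, uj in enumerate(u):
            out[j] += coeff * uj
    return out

# toy rank-2 example: E = 2 u1 v1^T + 1 u2 v2^T, i.e. [[2,2,0],[0,0,1],[2,2,0]]
u1, v1 = [1.0, 0.0, 1.0], [1.0, 1.0, 0.0]
u2, v2 = [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]
b = [1.0, 2.0, 3.0]
Eb = lowrank_apply([u1, u2], [2.0, 1.0], [v1, v2], b)
```

For an exactly low-rank E, as in this toy example, the result agrees with the full matrix-vector product.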

Excitatory coupling.
We take K = 3 to model a network with only excitatory connections. To study the dynamical effect of assortativity we generate positively and negatively (r = ±0.2) assortative networks of the four possible kinds and follow fixed points of (30)-(31) as a function of η_0, comparing results with a neutral (r = 0) network. We use pseudo-arc-length continuation [15,11].
To calculate the mean frequency over the network we evaluate z = Bb and then use the result that if the order parameter at a node is z, then the frequency of neurons at that node is [16,23]

f = (1/π) Re[(1 − z̄)/(1 + z̄)].  (39)

Averaging these gives the mean frequency.
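Equation (39) can be evaluated directly; a small sketch (variable names ours):

```python
from math import pi

def node_frequency(z):
    """Frequency of neurons at a node with complex order parameter z (eq. (39))."""
    zc = z.conjugate()
    return ((1 - zc) / (1 + zc)).real / pi

def mean_frequency(z_list):
    """Average of the per-node frequencies over the network."""
    return sum(node_frequency(z) for z in z_list) / len(z_list)
```

For example, z = 0 (uniformly spread phases) gives f = 1/π, while z = 1 (all neurons locked at θ = 0) gives f = 0.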
Results are shown in Figure 6, where we see quite similar behaviour in each case: apart from a bistable region containing two stable and one unstable fixed point, there is only a single stable fixed point present. Further, the two assortativity types (out,in) and (out,out) apparently do not affect the dynamics, whereas the saddle-node bifurcations marking the edges of the bistable region move slightly for (in,out) and significantly for (in,in) assortativity. Following the saddle-node bifurcations for the latter two cases we find the results shown in Figure 7. We have performed similar calculations for different networks with the same values of assortativity and found similar results.

Inhibitory coupling.
We choose K = −3 to model a network with only inhibitory coupling. Again, we numerically continue fixed points for zero, positive and negative assortativity (r = 0, ±0.2) as η_0 is varied, obtaining the curves shown in Figure 8. Consider the lower left plot. For large η_0 the system has a single stable fixed point, which undergoes a supercritical Hopf bifurcation as η_0 is decreased, creating a stable periodic orbit. This periodic orbit is destroyed in a saddle-node on an invariant circle (SNIC) bifurcation at lower η_0, and the oscillations stop. Decreasing η_0 further, two unstable fixed points are destroyed in a saddle-node bifurcation. In contrast with the case of excitatory coupling, oscillations in the average firing rate are seen. These can be thought of as partial synchrony, since some fraction of neurons in the network have the same period and fire at similar times, causing this behaviour. The period of this macroscopic oscillation tends to infinity as the SNIC bifurcation is approached, as shown in the inset of the lower left panel of Fig. 8.
As in the excitatory case, we see that assortativities of type (out,in) and (out,out) have no influence on the dynamics in this scenario. However, type (in,out) does have a small effect, slightly moving bifurcation points (top right panel in Fig. 8). Type (in,in) has the strongest effect, resulting in a qualitative change in the bifurcation scenario for large enough assortativity: there is a region of bistability between either two fixed points or a fixed point and a periodic orbit. This is best understood by following the bifurcations in the top panels of Fig. 8 as r is varied, as shown in Figure 9. There is one fixed point in regions A, B and D, and three in region C. For (in,out) assortativity there is a stable periodic orbit in region B and never any bistability.
We now describe the case of (in,in) assortativity. For negative and zero r the scenario is the same as for the other three types, but as r is increased there is a Takens-Bogdanov bifurcation where regions C, D, E and F meet, leading to the creation of a curve of homoclinic bifurcations, which is destroyed at another codimension-two point where there is a homoclinic connection to a non-hyperbolic fixed point [3]. There are stable oscillations in region E, created or destroyed in supercritical Hopf or homoclinic bifurcations. In region F there is bistability between two fixed points.

Discussion
We investigated the effects of degree assortativity on the dynamics of a network of theta neurons. We used the Ott/Antonsen ansatz to derive evolution equations for an order parameter associated with each neuron, and then coarse-grained first by degree and then by degree cluster, obtaining a relatively small number of coupled ODEs whose dynamics could be investigated using numerical continuation as parameters were varied. We found that degree assortativity involving the out-degree of the sending neuron, i.e. (out,in) and (out,out), has no effect on the networks' dynamics. Further, (in,out) assortativity moves bifurcations slightly, but does not lead to substantial differences in dynamical behaviour. The most significant effects were caused by creating correlation between the in-degrees of the sending and receiving neurons. For our excitatorily coupled example, positive (in,in) assortativity narrows the bistable region, whereas negative assortativity widens it (see Fig. 7). In the inhibitory case, introducing negative assortativity increased the amplitude of network oscillations and extended their range to slightly larger η_0, while positive (in,in) assortativity has the opposite effect and eventually stops the oscillations (see Fig. 9).
The most similar work to ours is that of [2]. These authors also considered a network of the form (1)-(2); by assuming that the dynamics depend only on a neuron's degree and that the η_j are chosen from a Lorentzian, and using the Ott/Antonsen ansatz, they derived equations similar to (30)-(31). The difference in formulations is that rather than a sum over entries of E (as in (31)), [2] wrote the sum as one over degrees, weighted by the degree distribution P(k′) and an assortativity function a(k′ → k), which specifies the probability of a link from a node with degree k′ to one with degree k (given that such neurons exist). They then chose a particular functional form for a and briefly presented the results of varying one type of assortativity (between k_in and k_out). In contrast, our approach is far more general, since any connectivity matrix A can be reduced to the corresponding E, the only assumption being that the dynamics are determined by a neuron's degree. We also show the results of a wider investigation into the effects of assortativity.
This alternative presentation also explains why E can be well approximated by a low-rank matrix. If the in- and out-degrees of a single neuron are independent, the coupling term contributes an input current to a neuron with degree k = (k_in, k_out) which is independent of k_out. The state of a neuron then depends only on its in-degree, and comparing with (31) we see that E is a rank-one matrix. Varying assortativity within the network is then a perturbation away from this, with the effects appearing in the second (and third) singular values of the singular value decomposition of E.
A limitation of our study is that we considered only networks of fixed size with the same distributions of in- and out-degrees, and a specific distribution of these degrees. However, our approach does not rely on this and could easily be adapted to other types of networks, although we expect it to become less valid as both the average degree and the number of neurons in the network decrease. We have also only considered theta neurons, but since the theta neuron is the normal form of a Type I neuron, we expect similar networks of other Type I neurons to behave similarly to those considered here. The approach presented here could also be used to efficiently investigate the effects of correlated heterogeneity, where either the mean or the width of the distribution of the η_j is correlated with a neuron's in- or out-degree [33,34,5]. We could also consider assortativity based on a neuron's intrinsic drive η_j [32] rather than its degrees, or correlations between an individual neuron's in- and out-degree [35,17,21,36,26]. We are currently investigating such ideas.
Acknowledgements: This work is partially supported by the Marsden Fund Council from Government funding, managed by Royal Society Te Apārangi.

Appendix A. Algorithms
We present here the algorithms developed and utilised for adjacency matrix assembly and modification. These include modifications to adjacency matrices produced by the familiar configuration method, and our novel matrix assembly technique, which we christen the "permutation" method.
A.1. Assortativity. Algorithm 1 is used to create a network with a specified degree distribution and values of the four different assortativity coefficients.
A.2. Permutation Method. The well-known configuration model for generating adjacency matrices [25] typically includes auto-connections and multiple edge connections, with no control over their appearance or proportion, often forcing their removal in post-processing if none are desired. We developed a novel adjacency matrix assembly technique that, given predefined sequences of in- and out-degrees, permits designating not only whether multiple edges appear but also their proportion, with no post-processing required. Additionally, auto-connections can be included or omitted without manipulating the resulting A. These matrices exhibit generally neutral assortativities of all types, with exceptions emerging at the highest multi-edge densities we assembled: at 98-99% multi-edges the assortativities exceed our target neutral range of ±0.005, so for the purposes of this study these were discarded.
This permutation method is a two-phase approach requiring only two sequences of in- and out-degrees, k_in and k_out respectively, and a target multiple edge connection density, ρ_m. We describe the technique briefly here, and in more detail in a subsequent companion publication. The phases are as follows: (1) Generate an initial matrix, designated A^(0), with each node's inbound edge count (row sum) satisfying k_in, while ignoring k_out. Each row of A^(0) is filled with nonzero entries comprising solo- and multiple-edge connections whose sum is k_in for that node; the remaining entries along each row are filled with zeros out to the Nth column. The resulting A^(0) thus adheres to the designated k_in and ρ_m, but violates k_out: all the column sums are incorrect (see Fig. 10).
(2) Randomly permute each row of A^(0), distributing the non-zero entries of solo- and multiple-edge connections to give a first permuted matrix, A^(1). Calculate an error distance of A^(1) from the designated k_out, denoted e_out^(1). Each entry in e_out^(1) is used to classify nodes: those with too many outbound edges ("donor" nodes), those with too few ("recipient" nodes) and those at their target out-degree ("inert" nodes). We then loop over all the donor nodes, randomly pick a recipient node and exchange edges from the donor to the recipient, if suitable. After each exchange we update the current permuted matrix, A^(i), and its corresponding error distance, e_out^(i), and repeat the process until this error is zero. The final matrix, A^(n), after n updates, then satisfies all the designated characteristics, provided suitable edge exchanges were performed along the way.
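The two phases can be sketched as follows. This is our own simplified pure-Python version for the binary (no multi-edge) case, which also does not exclude self-connections; the authors' implementation differs in detail:

```python
import random

def permutation_method(k_in, k_out, seed=0, max_steps=100_000):
    """Sketch of the two phases for a binary matrix: A[j][n] = 1 if neuron n
    connects to neuron j, so row sums must equal k_in and column sums k_out."""
    assert sum(k_in) == sum(k_out)
    N = len(k_in)
    rng = random.Random(seed)
    # Phase 1: left-aligned rows satisfying k_in, then a random row permutation.
    A = []
    for j in range(N):
        row = [1] * k_in[j] + [0] * (N - k_in[j])
        rng.shuffle(row)
        A.append(row)
    # Phase 2: move edges from donor columns (excess) to recipient columns.
    col = [sum(A[j][n] for j in range(N)) for n in range(N)]
    err = [c - k for c, k in zip(col, k_out)]        # error distance e_out
    for _ in range(max_steps):                       # safety cap for this sketch
        if not any(err):
            return A
        d = rng.choice([n for n in range(N) if err[n] > 0])   # donor
        r = rng.choice([n for n in range(N) if err[n] < 0])   # recipient
        rows = [j for j in range(N) if A[j][d] == 1 and A[j][r] == 0]
        if not rows:
            continue                                 # exchange not suitable
        j = rng.choice(rows)
        A[j][d], A[j][r] = 0, 1                      # preserves row sum k_in[j]
        err[d] -= 1
        err[r] += 1
    raise RuntimeError("no suitable exchanges found; retry with another seed")

A = permutation_method([2, 1, 1, 2], [2, 2, 1, 1])
```

Each accepted exchange moves one edge within a row, so k_in is preserved exactly while the out-degree error decreases monotonically toward zero.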
Algorithm 1: Assortative mixing. Randomly pair up all N_e edges of the network with adjacency matrix A and reconnect, at once, those pairs for which doing so is preferable with respect to the target assortativity r_target. Repeat the process until the assortativity coefficient lies within the tolerance; upon overshooting the target coefficient, interpolate the length of a shortened list of edge pairs and reconnect those.
Figure 10. Top: initial matrix A^(0), showing the arrangement of edge entries (all solo connections: ρ_m = 0) in each row, aligned left, with row sums adding up to k_in. If multi-edges are desired, we simply distribute them in the rows of A^(0), satisfying the proportion ρ_m and the row sum. Bottom: permutation of the rows of A^(0) into an example A^(1). Note that the row sums still add up to k_in, while the column sums give a current approximation to k_out.
Algorithm 3: Permutation Method, Phase (2). Pseudo-code for permuting the initial matrix A^(0) into the final matrix A^(n) satisfying all constraints: k_in, ρ_m and k_out.