Functional and spatial rewiring principles jointly regulate context-sensitive computation

Adaptive rewiring provides a basic principle of self-organizing connectivity in evolving neural network topology. By selectively adding connections to regions with intense signal flow and deleting underutilized connections, adaptive rewiring generates optimized, brain-like connectivity structures, i.e., modular, small-world, and rich-club topologies. Besides topology, neural self-organization also follows spatial optimization principles, such as minimizing the neural wiring distance and topographic alignment of neural pathways. We simulated the interplay of these spatial principles and adaptive rewiring in evolving neural networks with weighted and directed connections. The neural traffic flow within the network is represented by the equivalent of diffusion dynamics for directed edges: consensus and advection. We observe a constructive synergy between adaptive and spatial rewiring, which contributes to network connectedness. In particular, wiring distance minimization facilitates adaptive rewiring in creating convergent-divergent units. These units support the flow of neural information and enable context-sensitive information processing in the sensory cortex and elsewhere. Convergent-divergent units consist of convergent hub nodes, which collect inputs from pools of nodes and project these signals via a densely interconnected set of intermediate nodes onto divergent hub nodes, which broadcast their output back to the network. Convergent-divergent units vary in the degree to which their intermediate nodes are isolated from the rest of the network. This degree, and hence the context-sensitivity of the network's processing style, is parametrically determined in the evolving network model by the relative prominence of spatial versus adaptive rewiring.

Adaptive rewiring facilitates signal processing by attaching shortcut connections to regions where neural signal traffic is intense while pruning underused connections.
Adaptive rewiring establishes these brain-like structures in models of oscillatory neural mass activity (Gong & van Leeuwen, 2003, 2004) as well as in neuronal-level spiking networks (Kwok, Jurica, Raffone, et al., 2007). Spike propagation in neural networks can be represented as random walks on a graph (Gong & van Leeuwen, 2009), and these, in turn, can be stochastically described as graph diffusion (Abdelnour, Voss, & Raj, 2014). Graph diffusion offers a particularly parsimonious account of neural activity, suitable for implementing adaptive rewiring in neural network models (Jarman, Steur, Trengove, et al., 2017).
Current adaptive rewiring models are still too simplified to represent the anatomical networks of the brain, because they are exclusively concerned with optimizing topological features while ignoring the spatial economy of the brain. For instance, a prominent spatial feature of the brain is that adjacent regions tend to have similar functions, and neural connections are aligned in fiber bundles and layers. This indicates that, besides being adaptive, rewiring also follows certain spatial optimization principles.
One spatial principle involves minimization of wiring distance (Cherniak, 1994) by topologically connecting spatially adjacent nodes (Fig 1.B). Naturally, this principle pushes for the elimination of long-range connections, but, due to adaptive rewiring, a stable, albeit sparse, proportion of them remains (Jarman, Trengove, Steur, et al., 2014). These long-range connections preferentially attach to hub nodes, while short-distance connections are assigned to nodes within the same topological modules. The differentiation into long- and short-range connections evolves gradually, similarly to what happens in the developing brain (Oldham & Fornito, 2019). This results in a rewired network akin to a topographical map, a widespread functional architecture in the brain, found from the visual to the somatosensory cortex.
Alignment is another spatial principle (Fig 1.C), whereby connections tend to extend either in the same direction, like the axons of pyramidal cells in the cortex, or spread in a concentric fashion, like the dendrites of ganglia. A possible mechanism for alignment is that neuronal extensions develop along a vector field, either a chemical gradient or a traveling electrical wave field (Alexander, Jurica, Trengove, et al., 2013; Muller, Chavane, Reynolds, & Sejnowski, 2018).
Propagating wave fields have been proposed to play an active role in shaping cortical maps (Alexander, Trengove, Sheridan, et al., 2011). To the extent that wave fields are homogeneous or vary smoothly across spatial regions, rewired connections that align with the direction of the wave field tend to become spatially aligned with each other. Depending on the organization of the wave field, regular topography may arise, e.g., layers as a result of a homogeneously lateral wave or ganglia as a result of a radially expanding wave (Calvo Tapia et al., 2019). In a recent study on undirected binary networks that incorporated both spatial principles, the emerging networks revealed morphologies that are stalwarts of the nervous system's functional anatomy, such as parallelism, super-rings, and super-chains, while they maintained the complex network properties generated by adaptive rewiring (Calvo Tapia et al., 2019).
Complementary to Calvo Tapia et al. (2019), who were concerned with network morphology, we focus on the evolution in these networks of a particular kind of structure that facilitates context-sensitive computation. In biological networks, context-sensitive computation is efficiently achieved through pooling, i.e., certain hub units collect converging inputs and pass this information to divergent output hubs via subnetworks of intermediate nodes (Fig 2). Such structures are known as convergent-divergent units (Kumar, Rotter, & Aertsen, 2010; Shaw, Harth, & Scheibel, 1982). Prominent examples of convergent-divergent units are the circuits in V1 underlying contextual modulation: pools of orientation-selective neurons in layers 2/3 send their input to somatostatin (SOM) cells, which then broadcast their response back to the pool of neurons (Adesnik et al., 2012; Niell & Scanziani, 2021). The SOM hub cells form, with the vasoactive intestinal peptide (VIP) neurons, an intermediate unit between convergence and divergence that adjusts the contextual modulation response of the pool of neurons as the relationship between surround and stimulus changes (Keller et al., 2020). At a different scale and for a different purpose, the cortico-basal ganglia circuitry can be seen as a convergent-divergent unit that regulates voluntary movement: the striatum receives multimodal contextual information from the cortex, processes it, and sends it to other subcortical structures such as the pallidum and the substantia nigra; the thalamus then acts as a divergent hub and broadcasts the processed output back to the cortex (Redgrave et al., 2010).
Convergent-divergent units thus constitute the connective core of sensory, motor, and cognitive brain regions (see Krause & Pack, 2014 for a review), and of global networks (Shanahan, 2012). They allow the receptive fields of sensory neurons to still be driven by local features but modulated by global contextual features (Keller, Dipoppa, Roth, et al., 2020). This enables, among other things, surround suppression via connections within area V1 (Das & Gilbert, 1999; Hupé, James, Payne, et al., 1998) and sensorimotor predictive coding via long-range connections onto the visual system (Jordan & Keller, 2020; Leinweber, Ward, Sobczak, et al., 2017; Keller, Bonhoeffer, & Hübener, 2012).
By applying adaptive rewiring to (binary) directed graphs, Rentzeperis et al. (2022) observed the emergence of convergent-divergent units in their model. Here we use directed weighted graphs to study the effect of the spatial optimization principles, i.e., distance minimization and alignment, on the development of convergent-divergent units. We find that the distance minimization principle enables nodes to be encapsulated within convergent-divergent units. The alignment principle interferes with the formation of convergent-divergent units, to an extent that depends on the type of field layout.
Context-sensitivity differs across brain regions: it is more local for early and mid-level visual areas (Field, Hayes, & Hess, 1993) and more global for higher-order ones (Quian Quiroga, Reddy, Kreiman, et al., 2005). It may also vary between individuals (Yamashita, Fujimura, Katahira, et al., 2016), the sexes (Phillips, Chapman, & Berry, 2004), and cultural groups (Doherty, Tsuji, & Phillips, 2008). These style variations might be associated with variations in the convergent-divergent units. In our models, we find that the degree to which nodes within convergent-divergent units are encapsulated, or isolated from the rest of the network, depends parametrically on the relative prominence of distance minimization and adaptive rewiring.

Notation and definitions
A directed graph (digraph) is described by the set G = (V, E, W), where V = {1, 2, …, n} is the set of nodes, E ⊂ V × V the set of ordered node pairs, with (u, v) ∈ E representing a directed edge from u to v, denoted u → v, and W = {w_uv : u, v ∈ V} the set of edge weights, where w_uv > 0 if (u, v) ∈ E and w_uv = 0 if (u, v) ∉ E. The cardinalities |V| = n and |E| = m denote the numbers of nodes and directed edges, respectively. For a weighted edge u → v, its length equals its inverse weight, 1/w_uv. Nodes are called adjacent if there is an edge (in either direction) between them. The n × n adjacency matrix carries the edge weights of a network as a_uv = w_vu (Fig 3). We refer to the edges directed at node u ∈ V as the in-links of u and the edges starting from node u as the out-links of u. The tails of the in-links of u constitute the in-degree neighborhood of u, N_in(u).
The remaining set of nodes, V − u − N_in(u), is denoted N_in^c(u). The in-degree of node u is the number of its in-links. Analogously, the heads of the out-links of u constitute the out-degree neighborhood of u, N_out(u), and the remaining set is denoted N_out^c(u). The out-degree of node u is the number of its out-links. For an ordered node pair (u, v), a directed walk from u to v is an ordered list of edges {(x_0, x_1), (x_1, x_2), …, (x_{k−1}, x_k) : x_0 = u, x_k = v, (x_{i−1}, x_i) ∈ E} (Bender & Williamson, 2010). A directed walk is a directed path if the vertices on it are distinct.
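The notation above can be made concrete with a small Python sketch; the node labels and weights are arbitrary illustrations, not values from the paper:

```python
def make_digraph(weights):
    """weights: dict mapping ordered pairs (u, v) to w_uv > 0 for edge u -> v."""
    nodes = sorted({n for edge in weights for n in edge})
    return {"V": nodes, "W": dict(weights)}

def in_neighborhood(g, u):
    # Tails of the in-links of u: all v with an edge v -> u.
    return {v for (v, head) in g["W"] if head == u}

def out_neighborhood(g, u):
    # Heads of the out-links of u: all v with an edge u -> v.
    return {v for (tail, v) in g["W"] if tail == u}

g = make_digraph({(1, 2): 0.5, (3, 2): 1.0, (2, 4): 2.0})
assert in_neighborhood(g, 2) == {1, 3}   # in-degree of node 2 is 2
assert out_neighborhood(g, 2) == {4}     # out-degree of node 2 is 1
```

The in- and out-degrees are simply the sizes of these neighborhoods, and the complements N_in^c(u), N_out^c(u) follow by set difference from V.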

Consensus and advection dynamics
Rentzeperis et al. (2022) generalized the diffusion dynamics used for undirected graphs in Jarman et al. (2017) to consensus and advection dynamics in digraphs. Both consensus (Ren et al., 2007) and advection dynamics (Chapman, 2015) drive nodes' values to converge to a global state based on each node's local state. The local state of each unit is described by a value, called its concentration. Communication between nodes is represented as in- and out-flow between adjacent nodes. Consensus and advection algorithms assume different dynamics driving the communication between nodes.
In consensus dynamics there is communication between adjacent nodes if they have different concentrations. The rate of change of the concentration x_u of node u under the consensus dynamics is

dx_u/dt = Σ_{v ∈ N_in(u)} w_vu (x_v − x_u).    (1)

The in-degree Laplacian matrix L_in = {l_uv} of the graph G is defined as

l_uu = Σ_{k ∈ N_in(u)} w_ku,  l_uv = −w_vu for v ≠ u.    (2)

Substituting (2) into (1), the consensus dynamics in matrix form becomes

dx/dt = −L_in x,    (3)

and the nodes' concentrations at time t are

x(t) = e^{−L_in t} x(0).    (4)

In the case of advection dynamics, a node receives an inflow from its in-degree neighbors and sends an outflow to its out-degree neighbors. The inflow and outflow are proportional to the weights of the links and the concentrations of the nodes. Mathematically, the advection dynamics for node u is

dx_u/dt = Σ_{v ∈ N_in(u)} w_vu x_v − Σ_{v ∈ N_out(u)} w_uv x_u.    (5)

The out-degree Laplacian matrix L_out = {l′_uv} of the graph G is defined as

l′_uu = Σ_{k ∈ N_out(u)} w_uk,  l′_uv = −w_vu for v ≠ u.    (6)

The advection dynamics in matrix form is

dx/dt = −L_out x,    (7)

and the solution is

x(t) = e^{−L_out t} x(0).    (8)

For each node in turn, we set its initial concentration to one and the rest to zero; collectively, these initial conditions form an identity matrix. Applying them to eq (4) and eq (8), we get

C(t) = e^{−L_in t}    (9)

and

A(t) = e^{−L_out t}.    (10)

We refer to C(t) and A(t) as the consensus and advection kernels, respectively. Both reflect the intensity of communication between nodes. For the purpose of our study, the time variable t of the two kernels can also be thought of as a rewiring interval, indicating the time elapsed between two successive rewiring iterations. In all our experiments it was set to t = 1.
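Under these definitions, the two kernels follow directly from the weight matrix. A minimal NumPy sketch: the truncated-series matrix exponential is our own stand-in (scipy.linalg.expm would be the robust choice), and the example weight matrix is arbitrary.

```python
import numpy as np

def expm_taylor(M, terms=40):
    """Matrix exponential by truncated Taylor series; adequate for the
    small, well-scaled Laplacians used here."""
    out = np.eye(M.shape[0])
    term = np.eye(M.shape[0])
    for k in range(1, terms):
        term = term @ M / k
        out = out + term
    return out

def kernels(W, t=1.0):
    """Consensus and advection kernels for weight matrix W, W[u, v] = w_uv."""
    A = W.T                               # adjacency convention a_uv = w_vu
    L_in = np.diag(A.sum(axis=1)) - A     # in-degree Laplacian, eq (2)
    L_out = np.diag(W.sum(axis=1)) - A    # out-degree Laplacian, eq (6)
    return expm_taylor(-L_in * t), expm_taylor(-L_out * t)

W = np.zeros((3, 3))
W[0, 1], W[1, 2], W[2, 0] = 1.0, 0.5, 2.0
C, Adv = kernels(W)
assert np.allclose(C.sum(axis=1), 1.0)    # consensus kernel rows sum to 1
assert np.allclose(Adv.sum(axis=0), 1.0)  # advection conserves total concentration
```

The two assertions check the defining properties of the dynamics: consensus preserves a common consensus value (zero row sums of L_in), while advection conserves total concentration (zero column sums of L_out).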

Rewiring Principles
We probe how the structure of the network changes when we iteratively rewire its edges. In general, at each iteration a node u ∈ V is randomly selected and, depending on the rewiring condition, either one of its in-links is cut and another added, or one of its out-links is cut and another added.
Suppose that we are going to rewire an in-link of u in an iteration step. An edge (v, u) ∈ E will be dropped, and a new edge (w, u) will be added from a node w that was not previously connected to u, i.e., (w, u) ∉ E. When an out-link of u is rewired, an edge (u, v) ∈ E is substituted by a new edge (u, w) ∉ E.
To decide the choice of v and w at each rewiring step, one of the following three principles, explained below, is selected with a fixed probability: either the functional principle of adaptive rewiring, or one of the two spatial principles, the distance principle or the wave principle.

Functional principle: Adaptive rewiring
The adaptive rewiring principle is called a functional principle, as it depends on the activation flow between nodes. It states that an underused connection is removed, and a new connection is established between two previously unconnected nodes with the most intense traffic between them (via all indirect paths). Distinct topological patterns develop when rewiring the in-degree neighborhood with the consensus algorithm and when rewiring the out-degree neighborhood with the advection algorithm (Rentzeperis et al., 2022). Therefore, we model the intensity of communication with the consensus kernel C(t) when rewiring the in-links and with the advection kernel A(t) when rewiring the out-links of candidate nodes: the existing link with the lowest kernel value is cut, and a new link is added from the unconnected node with the highest kernel value.
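A minimal sketch of one adaptive in-link rewiring step, using the consensus kernel from the previous section. Transferring the cut edge's weight to the new edge is our own convention here; the text leaves weight handling open.

```python
import numpy as np

def consensus_kernel(W, t=1.0, terms=40):
    """C(t) = exp(-L_in t) via a truncated Taylor series (adequate for
    small weight matrices; scipy.linalg.expm is the robust alternative)."""
    A = W.T                                  # a_uv = w_vu
    L_in = np.diag(A.sum(axis=1)) - A        # in-degree Laplacian
    K = np.eye(len(W))
    term = np.eye(len(W))
    for k in range(1, terms):
        term = term @ (-L_in * t) / k
        K = K + term
    return K

def rewire_in_link(W, u, t=1.0):
    """One adaptive in-link rewiring step for node u: cut the in-link with
    the lowest consensus-kernel value, add one from the unconnected node
    with the highest value."""
    K = consensus_kernel(W, t)
    n = len(W)
    in_nb = [v for v in range(n) if W[v, u] > 0]               # N_in(u)
    non_nb = [w for w in range(n) if w != u and W[w, u] == 0]  # N_in^c(u)
    v = min(in_nb, key=lambda x: K[u, x])    # least intense traffic: cut
    w = max(non_nb, key=lambda x: K[u, x])   # most intense indirect traffic: add
    W = W.copy()
    W[w, u] = W[v, u]                        # weight transfer (our assumption)
    W[v, u] = 0.0
    return W
```

Out-link rewiring follows the same pattern with the advection kernel and the out-degree neighborhoods.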

Rewiring algorithm
Throughout the rewiring process, the numbers of nodes and edges of the network are kept constant for the sake of simplicity (but see Gong & van Leeuwen, 2003 for growing and van den Berg, Gong, Breakspear, et al., 2012 for pruning in undirected networks). The rewiring process starts from a random directed network G = (V, E, W) with predetermined node number n and edge number m. Edges are assigned to m node pairs randomly selected from all n(n − 1) ordered node pairs without replacement. Then positive weights, sampled from a predetermined probability distribution, are randomly assigned to these edges.
The iterative rewiring process proceeds as follows.
Step 1: Select a random node u ∈ V such that its in- and out-degrees are neither zero nor n − 1.
Step 2: With probability p_in, rewire an in-link of u using consensus; otherwise, rewire an out-link of the same node using advection. When p_in is set to 1, only in-links are rewired; in this case, we refer to the iterative process as in-link rewiring. Analogously, when p_in is set to 0, we refer to the iterative process as out-link rewiring.
Step 3: Depending on the result of Step 2, select a random in-link or out-link of u and rewire it according to one of the three rewiring principles. The probabilities of choosing the distance principle, wave principle, and functional principle are p_d, p_w, and p_f (p_f = 1 − p_d − p_w), respectively.
Step 4: Return to Step 1 until T edges have been rewired.
We refer to this algorithm as the 'functional + spatial' algorithm. To give the reader a feel for the effects of the functional and spatial rewiring principles, we provide illustrative examples in Supplementary Materials S1.
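The probabilistic dispatch of Steps 1-4 can be sketched as follows; the bodies of the three principles are stubbed out, and only the branching probabilities named in the text are modeled:

```python
import random

def rewiring_run(n_iter, p_in, p_d, p_w, rng):
    """Control-flow sketch of one 'functional + spatial' run (Steps 1-4).
    Returns tallies of which branch fired at each iteration."""
    p_f = 1.0 - p_d - p_w                # functional principle probability
    counts = {"consensus": 0, "advection": 0,
              "distance": 0, "wave": 0, "functional": 0}
    for _ in range(n_iter):
        # Step 2: in-link (consensus) with probability p_in, else out-link (advection).
        counts["consensus" if rng.random() < p_in else "advection"] += 1
        # Step 3: pick the principle with probabilities p_d, p_w, p_f.
        r = rng.random()
        key = "distance" if r < p_d else ("wave" if r < p_d + p_w else "functional")
        counts[key] += 1
    return counts

counts = rewiring_run(4000, p_in=1.0, p_d=0.3, p_w=0.0, rng=random.Random(0))
assert counts["advection"] == 0 and counts["wave"] == 0   # p_in = 1, p_w = 0
```

Setting p_in = 1 reproduces pure in-link rewiring, and p_d = p_w = 0 reduces the loop to the purely functional algorithm.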

Baseline algorithm
We take the algorithm from Rentzeperis et al. (2022) as the baseline algorithm, in which the functional principle is combined with random rewiring. Random rewiring drops and adds links randomly: suppose an in-link of a node u ∈ V is rewired; two nodes v ∈ N_in(u) and w ∈ N_in^c(u) are selected at random, the in-link (v, u) is cut, and (w, u) is added. Random rewiring of out-links is analogous. The probabilities of choosing the functional principle and random rewiring at each iteration are p_f and p_rand (p_rand = 1 − p_f). This algorithm was originally applied to directed binary networks. We run it on directed weighted networks to test whether similar results are obtained, and to compare the effects of random rewiring with those of the spatial principles. We refer to it as the 'functional + random' algorithm.

Network measures
To study the impact of the rewiring principles on the structure of weighted digraphs, we calculate the following measures for each rewired network. High scores on each of these measures reflect better information processing and communication within the network.
Number of connected node pairs. An ordered pair (u, v) is connected if there is a directed path from u to v. The number of connected node pairs is a measure of the extent of information exchange in a digraph. Its upper bound is n², which is achieved when every node can send information to every node, including itself. We use this measure to quantify the connectedness of a digraph.
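This measure reduces to a reachability count, e.g., via breadth-first search from every source. In the sketch below, counting each node as trivially connected to itself is our convention, chosen to match the n² upper bound stated above.

```python
from collections import deque

def connected_pairs(n, edges):
    """Count ordered pairs (u, v) with a directed path from u to v."""
    adj = {u: [] for u in range(n)}
    for u, v in edges:
        adj[u].append(v)
    total = 0
    for s in range(n):            # BFS from every source node
        seen = {s}                # a node counts as reaching itself
        queue = deque([s])
        while queue:
            x = queue.popleft()
            for y in adj[x]:
                if y not in seen:
                    seen.add(y)
                    queue.append(y)
        total += len(seen)
    return total

# A 3-cycle connects all 3^2 = 9 ordered pairs.
assert connected_pairs(3, [(0, 1), (1, 2), (2, 0)]) == 9
```

Note that only the existence of a path matters here, so edge weights are irrelevant for this measure.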
Average efficiency. The average efficiency quantifies how efficiently information is sent over a network. It is defined as the mean of the inverse shortest directed path lengths over all node pairs (Latora & Marchiori, 2001):

E = (1 / (n(n − 1))) Σ_{u ≠ v} 1/d_uv,

where d_uv is the length of the shortest directed path from node u to node v. If there is no directed path from u to v, d_uv = ∞, and the pair contributes zero.
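Since the length of a weighted edge is its inverse weight (see the notation section), the measure can be computed with Dijkstra's algorithm on 1/w edge lengths; unreached pairs contribute zero. A minimal sketch:

```python
import heapq

def average_efficiency(n, wedges):
    """Mean of 1/d_uv over ordered pairs u != v, where d_uv is the shortest
    directed path length and each edge's length is its inverse weight."""
    adj = {u: [] for u in range(n)}
    for (u, v), w in wedges.items():
        adj[u].append((v, 1.0 / w))       # edge length = inverse weight
    total = 0.0
    for s in range(n):                    # Dijkstra from every source
        dist = {s: 0.0}
        pq = [(0.0, s)]
        while pq:
            d, x = heapq.heappop(pq)
            if d > dist.get(x, float("inf")):
                continue
            for y, length in adj[x]:
                nd = d + length
                if nd < dist.get(y, float("inf")):
                    dist[y] = nd
                    heapq.heappush(pq, (nd, y))
        # unreached nodes are absent from dist and so contribute zero
        total += sum(1.0 / d for v, d in dist.items() if v != s)
    return total / (n * (n - 1))

# Two nodes joined both ways by weight-1 edges: d = 1 each way, efficiency 1.
assert average_efficiency(2, {(0, 1): 1.0, (1, 0): 1.0}) == 1.0
```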

Number of hubs.
We define convergent hubs as nodes with at least one out-link and a number of in-links above a predefined threshold. These hubs are suitable as a substrate for collecting distributed information. Conversely, divergent hubs are nodes with at least one in-link and a number of out-links above a predefined threshold. These hubs are suitable as a substrate for broadcasting information. The threshold was set to 15 for both convergent and divergent hubs in the following analyses.
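These definitions amount to simple degree counting. In the sketch below, reading "above a threshold" as strictly greater is our interpretation; the toy example uses an arbitrarily low threshold for illustration.

```python
def count_hubs(n, edges, threshold=15):
    """Convergent hubs: in-degree above threshold and at least one out-link.
    Divergent hubs: out-degree above threshold and at least one in-link."""
    indeg = [0] * n
    outdeg = [0] * n
    for u, v in edges:
        outdeg[u] += 1
        indeg[v] += 1
    convergent = sum(1 for u in range(n) if indeg[u] > threshold and outdeg[u] >= 1)
    divergent = sum(1 for u in range(n) if outdeg[u] > threshold and indeg[u] >= 1)
    return convergent, divergent

# Toy check: node 9 collects 5 in-links and has one out-link.
edges = [(i, 9) for i in range(5)] + [(9, 0)]
assert count_hubs(10, edges, threshold=2) == (1, 0)
```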

Simulation parameter settings
In our simulations, the number of nodes was n = 100. We set the number of edges to m = [2 ln(n) (n − 1)] = 912, where [·] denotes rounding, a number sufficiently low for a network to be considered sparse (cf. van den Berg et al., 2012 for undirected networks). The unnormalized weights were sampled from either a normal distribution N(1, 0.25²) or a lognormal distribution LN(0, 1). Negative weights sampled from the normal distribution (an almost impossible occurrence, with probability 3.17 × 10⁻⁵) were set to 0.05. Normalized weights were obtained by dividing the sampled weights by their average, so that the sum of the normalized weights equals the number of edges.
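The weight initialization can be sketched as follows; rescaling by the mean is our reading of the normalization (it makes the m weights sum to m), and the seed is an arbitrary choice:

```python
import numpy as np

def init_weights(m, dist="normal", rng=None):
    """Sample m edge weights: N(1, 0.25^2) with negative draws set to 0.05,
    or LogNormal(0, 1); then rescale so the weights sum to m."""
    rng = rng or np.random.default_rng()
    if dist == "normal":
        w = rng.normal(1.0, 0.25, size=m)
        w[w < 0] = 0.05          # clip the rare negative draws
    else:
        w = rng.lognormal(0.0, 1.0, size=m)
    return w / w.mean()          # mean 1, so the m weights sum to m

w = init_weights(912, rng=np.random.default_rng(0))
assert w.shape == (912,) and np.all(w > 0)
assert abs(w.sum() - 912) < 1e-6
```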
For the purposes of spatial embedding, nodes were placed randomly with a uniform distribution on a unit disk; the external field was set either to F(r) = (1, 0) to induce parallel connections, or to the radial field F(r) = r/‖r‖ to induce concentric connections.
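The embedding and the alignment cosine used by the wave principle can be sketched as follows; evaluating the field at the edge's tail is our assumption, and rejection sampling is one simple way to draw uniform points on the disk:

```python
import math
import random

def sample_unit_disk(n, rng):
    """Uniform node positions on the unit disk via rejection sampling."""
    pts = []
    while len(pts) < n:
        x, y = rng.uniform(-1, 1), rng.uniform(-1, 1)
        if x * x + y * y <= 1:
            pts.append((x, y))
    return pts

def alignment_cos(r_u, r_v, field):
    """cos(theta) between the edge u -> v and the field at the tail r_u."""
    ex, ey = r_v[0] - r_u[0], r_v[1] - r_u[1]
    fx, fy = field(r_u)
    return (ex * fx + ey * fy) / (math.hypot(ex, ey) * math.hypot(fx, fy))

lateral = lambda r: (1.0, 0.0)                                     # parallel field
radial = lambda r: (r[0] / math.hypot(*r), r[1] / math.hypot(*r))  # undefined at the origin

# An edge pointing in +x is perfectly aligned with the lateral field.
assert abs(alignment_cos((0.0, 0.0), (0.5, 0.0), lateral) - 1.0) < 1e-12
```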
The number of iterations T for each run was 4000. The probabilities of the three principles (p_d, p_w, p_f) and the probability of in-link rewiring p_in were kept fixed within each run but varied between simulations. For each combination of parameters (p_d, p_w, p_f, p_in), we ran 30 different instantiations of the rewiring algorithm, over which the mean and standard deviation of the measures were calculated.

Results
We first test whether a combination of adaptive and random rewiring can produce convergent-divergent units when the weights of the connections follow a normal distribution. Then we test whether the distance principle facilitates the formation of convergent-divergent units similarly to random rewiring. Finally, we probe the effect of the wave principle on the formation of convergent-divergent units. We found similar results when the weights of the connections followed a lognormal distribution; these are not shown here.

Convergent-divergent units on directed weighted networks
We run the 'functional + random' algorithm on directed weighted networks for various (p_rand, p_in).
As the name implies, the emergence of convergent-divergent units requires the formation of both convergent and divergent hubs, as well as the existence of communication pathways between them. We find that the number of connected node pairs increases with the proportion of random rewiring, p_rand, regardless of the proportion of in-link rewiring, p_in (Fig 4.A). The effect of p_rand on efficiency (Fig S2.A) is consistent with that on connectedness, implying that the increase in average efficiency results from the increased connectedness of the networks.
As expected, the number of convergent hubs decreases and the number of divergent hubs increases when p_in increases (Fig 4.B, C). When both in-links and out-links are rewired, i.e., 0 < p_in < 1, the numbers of convergent and divergent hubs peak at intermediate values of p_rand (Fig 4.B, C). We examine whether the convergent and divergent hubs in the network are connected in such a way that they form convergent-divergent units. The probability of rewiring in-links, p_in, is set to 0.5, so that equal proportions of convergent and divergent hubs can develop. The connectedness of the hub nodes also depends on the proportion of random rewiring. For a convergent-divergent unit, we refer to the nodes that can be reached from the divergent hub as the target nodes and the nodes that send information to the convergent hub as the source nodes (Fig 2). For each convergent-divergent unit, we refer to the nodes on directed paths from the convergent to the divergent hub as intermediate nodes, and to the subgraph consisting of these nodes as the intermediate subgraph. The intermediate subgraph processes the information collected by the convergent hub. The degree of its isolation from the rest of the network characterizes the context-sensitivity of its processing style. We calculated, for all convergent-divergent units, the size and density of the intermediate subgraph, provided this subgraph contained more than one node. For each combination of (p_rand, p_in), the sizes and densities of the subgraphs were pooled across the 30 instances.

Similar effects of distance-based rewiring as random rewiring
To see the effects of distance-based rewiring, we replace random rewiring with distance-based rewiring: we run the 'functional + spatial' algorithm without wave-based rewiring (p_w = 0). Note that the 'functional + spatial' algorithm does not include any random rewiring.
We found that the proportion of distance-based rewiring, p_d, has a similar effect on the connectedness, average efficiency, and hub formation of the rewired networks as random rewiring (Fig 6).

Discussion
Starting from a random graph, repeated adaptive rewiring leads to complex network structures.
Previous studies explored this phenomenon for directed binary and for undirected graphs; here we extended the scope of this principle to directed weighted graphs and considered their spatial embedding (Calvo Tapia et al., 2019). As in these studies, at each rewiring step we randomly chose, in set proportions, among three basic rewiring principles: the functional principle of adaptive rewiring according to ongoing network activity, represented here by generalized diffusion (i.e., advection and consensus dynamics; Rentzeperis et al., 2022), and two spatial principles, wiring distance minimization and vector field alignment. All three principles are deemed important in shaping the morphology of the nervous system (Calvo Tapia et al., 2019).
We found that the functional and spatial principles took complementary roles: whereas adaptive rewiring took the role of forming hubs, the distance minimization principle governed network connectedness and efficiency. Previous studies of adaptive rewiring without spatial principles found that adaptive rewiring alone, while effective in forming hubs and modules, tends to reduce connectivity and efficiency (Rentzeperis et al., 2022). In their seminal study, Watts & Strogatz (1998) showed that adding a small proportion of random connections strongly improves the efficiency and connectedness of a modular network. For this reason, all previous adaptive rewiring studies, from Gong & van Leeuwen (2003) to Rentzeperis et al. (2022), introduced a proportion of random rewiring into their network models, thus securing efficiency and connectedness.
As spatial rewiring principles are shown to play a similar role, they successfully substitute random rewiring in our model.A major difference between random and spatial rewiring is that the former benefits global connectivity (Watts & Strogatz, 1998), whereas the latter favors local connectivity (Cherniak, 1994).This discrepancy, however, was shown to be no obstacle to the formation of hubs in the network.In fact, applying the distance principle showed the network to evolve a modular structure.
A limitation of the current study is that in our models, the relative contribution of all three rewiring principles was fixed during the network evolution.We did not consider the possibility that different rewiring principles change in prominence over time.Early in the development of the brain, vector field alignment may play a rather prominent role, as brain activity around gestation shows massive bursts of action potentials that spread in a wave-like manner (Blankenship & Feller, 2010).
By contrast, the formation of hubs continues over a much longer period that extends into late adolescence (Oldham & Fornito, 2019).
In directed networks, we may distinguish convergent hubs and divergent hubs. Which of these is more prominent depends on the proportion of advection and consensus dynamics applied in the model (Rentzeperis et al., 2022). This feature may be useful to customize networks to processing requirements; e.g., divergence may be more useful in early processing regions, convergence in later ones (Gorban, Makarov, & Tyukin, 2019). When the advection and consensus dynamics are balanced, adaptive rewiring forms equal numbers of convergent and divergent hubs. These are the major constituents of convergent-divergent units. Convergent-divergent units collect information from pools of nodes through the convergent hubs, process the information in intermediate nodes, and broadcast the results to the network through the divergent hubs. In the brain, these units enable context-sensitive modulation of network activity. In the model, convergent-divergent units are formed when convergent and divergent hubs arise (due to adaptive rewiring) and when the network is efficiently connected (due to the distance minimization principle). We found that, as long as adaptive rewiring and the distance minimization principle are balanced in the evolving network, convergent-divergent units are successfully formed.
The distance minimization principle thus interacted constructively with adaptive rewiring in the formation of convergent-divergent units.Moreover, it contributed modularity to the network and established a rich club effect amongst the hubs.Because of this, the units jointly constitute the connective core (Shanahan, 2012) of the network.
An important feature of the distance minimization principle is that its prominence in rewiring determines the degree of encapsulation of the intermediate nodes in the convergent-divergent units. With lower proportions of distance-based rewiring, the intermediate nodes were relatively isolated from the rest of the network; with higher proportions, they were more interconnected with it. In other words, the relative contribution of the distance principle regulates the context-sensitivity of the computations performed in the convergent-divergent units. This feature might be used to establish whole-network differences in the context-sensitivity of processing style (Doherty et al., 2008; Phillips et al., 2004; Yamashita et al., 2016) or to tailor the convergent-divergent units of different subnetworks to their specific computational requirements (Field et al., 1993; Krause & Pack, 2014; Quian Quiroga et al., 2005).
Different types of convergent-divergent units could be assigned to different subnetworks.

Fig 1.
Fig 1. Principles of network rewiring. (A) Adaptive rewiring. The lightness of a node's color represents the intensity of its communication with the white node; the darker the color, the more intense the communication. (B) Minimization of wiring distance. (C) Alignment to an external vector field. The red and green arrows indicate the rewired link and the direction of the vector field, respectively.

Fig 2.
Fig 2. Schema of a convergent-divergent unit. In a convergent-divergent unit, a convergent hub collects inputs and passes the information to a divergent hub through a subnetwork of intermediate nodes. The nodes sending information to the convergent hub are referred to as source nodes, and those receiving information from the divergent hub as target nodes. Note that typically the source and target nodes overlap, i.e., a node can be both a source and a target node.


Fig 3.
Fig 3. Schema of the adjacency matrix. The elements of the adjacency matrix are the weights of the links. Each row of the adjacency matrix contains the weights of the in-links of the corresponding node, and the number of nonzero entries is the in-degree. Similarly, each column carries the weights of the out-links, and the number of nonzero entries is the out-degree.
Wave principle. The wave principle serves to optimize the spatial alignment between network connections. It removes the connection at the largest angle to a vector field F(r) and replaces it with a connection between two previously unconnected nodes whose direction is most closely aligned with the direction of the vector field. The cosine of the angle θ_uv between the edge u → v and the vector field at the edge's tail is

cos(θ_uv) = (r_v − r_u) · F(r_u) / (d_uv ‖F(r_u)‖),  θ_uv ∈ [0, π].

When rewiring an in-link of u, v is the node in N_in(u) such that (v, u) forms the largest angle with the field, i.e., v = argmax_{v ∈ N_in(u)} {θ_vu}, and w is the node in N_in^c(u) such that (w, u) forms the smallest angle with the field, i.e., w = argmin_{w ∈ N_in^c(u)} {θ_wu}. When rewiring an out-link of u, v = argmax_{v ∈ N_out(u)} {θ_uv} and w = argmin_{w ∈ N_out^c(u)} {θ_uw}.

Fig 4.
Fig 4. Random rewiring enhances connectedness and increases the number of hubs when rewiring includes both advection and consensus (0 < p_in < 1). (A) The proportion of connected node pairs as a function of the probability of random rewiring, p_rand, for different in-link rewiring probabilities, p_in. (B) The proportion of convergent hubs as a function of p_rand. (C) The proportion of divergent hubs as a function of p_rand.
Convergent-divergent units emerge in directed networks when adaptive and random rewiring are combined (Fig 5.A, B). The number of convergent-divergent units initially increases with p_rand, but then drops for p_rand > 0.6 (Fig 5.B). Although convergent and divergent hubs exist when p_rand = 0 (Fig 4.B, C), they hardly connect to each other, making the number of convergent-divergent units close to 0 (Fig 5.B).

The source nodes and target nodes are allowed to overlap (Fig 2). The proportions of source and target nodes relative to the whole network increase with the probability of random rewiring, p_rand (Fig 5.C). The proportion of their overlap also increases with p_rand (Fig 5.C).
We found that the size of the intermediate subgraph increases with p_rand (Fig 5.D), while the average density decreases until it reaches and stays at a floor for p_rand > 0.5, near the density of the whole digraph (Fig 5.E). This is because the intermediate subgraph then contains all of the nodes of the graph except for the convergent and divergent hubs.

Fig 5.
Fig 5. p_rand controls the formation, connectedness, and degree of isolation of convergent-divergent units. (A) Proportion of rewired networks forming convergent-divergent units as a function of p_rand. (B) Number of convergent-divergent units in rewired networks as a function of p_rand. (C) Proportion of source nodes, target nodes, and their overlap as a function of p_rand. (D) Proportion of nodes in intermediate subgraphs as a function of p_rand. (E) Density of intermediate subgraphs as a function of p_rand. The black horizontal line is the density of the whole digraph.

Fig 6.
Fig 6. Distance-based rewiring has similar effects on the connectedness and the number of hubs as random rewiring. (A) The proportion of connected node pairs as a function of the probability of distance-based rewiring, p_d, for different probabilities of in-link rewiring, p_in. (B) The proportion of convergent hubs as a function of p_d. (C) The proportion of divergent hubs as a function of p_d.

Fig 7.
Fig 7. p_d controls the formation, connectedness, and degree of isolation of convergent-divergent units. (A) Proportion of rewired networks forming convergent-divergent units as a function of p_d. (B) Number of convergent-divergent units in rewired networks as a function of p_d. (C) Proportion of source nodes, target nodes, and their overlap as a function of p_d. (D) Proportion of nodes in intermediate subgraphs as a function of p_d. (E) Density of intermediate subgraphs as a function of p_d. The black horizontal line represents the density of the whole digraph.

Fig 8.
Fig 8. The wave principle affects the formation of the convergent-divergent units. Proportion of rewired networks forming convergent-divergent units as a function of the proportion of the distance-based principle, p_d, (A) for the lateral field case and (B) for the radial field case. The logos next to the titles show the morphological view of the networks when the wave propagates laterally and radially. The proportion of in-link rewiring is 0.5, and (p_d, p_w, p_f) is (0.4, 0.3, 0.3). The green arrows indicate the direction of the underlying field.
Functional principle. When an in-link of u is rewired, v is the node in N_in(u) such that (v, u) has the lowest consensus kernel value, i.e., v = argmin_{v ∈ N_in(u)} {C(t)_uv} (the (v, u) connection is cut), and w is the node in N_in^c(u) such that (w, u) has the largest consensus kernel value, i.e., w = argmax_{w ∈ N_in^c(u)} {C(t)_uw}. In a similar fashion, when rewiring an out-link, v is the node in N_out(u) such that (u, v) has the lowest advection kernel value, i.e., v = argmin_{v ∈ N_out(u)} {A(t)_vu}, and w is the node in N_out^c(u) such that (u, w) has the largest advection kernel value, i.e., w = argmax_{w ∈ N_out^c(u)} {A(t)_wu}.
To instantiate the remaining two principles, the digraph is embedded in a two-dimensional Euclidean space, where the coordinates of node u are denoted r_u ∈ R².
Distance principle. According to the distance principle, the longest connection is removed and replaced by the spatially closest connection possible between two previously unconnected nodes. The distance between node u and node v is d_uv = ‖r_u − r_v‖, where ‖·‖ is the Euclidean norm. When an in-link of u is rewired, v is the node in N_in(u) with the longest distance from u, i.e., v = argmax_{v ∈ N_in(u)} {d_uv}, and w is the node in N_in^c(u) with the shortest distance from u, i.e., w = argmin_{w ∈ N_in^c(u)} {d_uw}. Rewiring an out-link of u is analogous, with v = argmax_{v ∈ N_out(u)} {d_uv} and w = argmin_{w ∈ N_out^c(u)} {d_uw}.