The Rich Get Richer: Brain Injury Elicits Hyperconnectivity in Core Subnetworks

Much remains unknown about how large-scale neural networks accommodate neurological disruption, such as moderate and severe traumatic brain injury (TBI). A primary goal of this study was to examine the alterations in network topology occurring during the first year of recovery following TBI. To do so, we examined 21 individuals with moderate and severe TBI at 3 and 6 months after resolution of posttraumatic amnesia, and 15 age- and education-matched healthy adults, using functional MRI and graph theoretical analyses. There were two central hypotheses: 1) physical disruption results in increased functional connectivity, or hyperconnectivity, and 2) hyperconnectivity occurs in the regions typically observed to be the most highly connected cortical hubs, or the "rich club". The current findings generally support the hyperconnectivity hypothesis: during the first year of recovery after TBI, neural networks show increased connectivity, and this change is disproportionately represented in brain regions belonging to the brain's core subnetworks. The selective increases in connectivity observed here are consistent with the preferential attachment model underlying scale-free network development. This study is the largest of its kind and provides a unique opportunity to examine how neural systems adapt to significant neurological disruption during the first year after injury.


Clinical network neuroscience
In the clinical neurosciences it remains an important goal to understand the basic brain changes associated with neurological disruption and the implications these changes have for behavioral deficits and recovery trajectories. Functional imaging methods have been widely used to examine task-related brain changes (e.g., mean signal differences) in localized regions of the brain, but there has been a recent shift toward exploring the covariance (i.e., connectivity) between brain regions in addition to fundamental signal-amplitude changes.
With this more recent emphasis on connectivity modeling in functional neuroimaging, there is an expanding literature documenting the network alterations associated with brain injury and degenerative processes (see [9]). Several studies to date have demonstrated that neurological disruption results in altered connectivity in large-scale neural networks [10][11][12][13][14][15], including evidence that even focal injury has widespread consequences for broader network functioning [16][17]. For example, both the focal and the diffuse injuries observed in TBI may disrupt distal connectivity, a distinct and crucial feature of the small-world topology required for efficient transmission of information in neural systems [12][13]. While it would appear a paradoxical consequence of physical network disruption, we have observed that a primary response to neurological disruption in dynamic systems is hyperconnectivity [12,18,19]. In this paper we aim not only to determine whether hyperconnectivity is observable during the first 6 months post injury, but also to determine the specific sites (if they exist) where hyperconnectivity is likely to be observed.
In determining which networks may account for hyperconnectivity after injury, work outside the neurosciences has demonstrated that small-world topologies are particularly resilient to nonselective, or "random", connectivity loss [20][21]. These authors also demonstrate, however, that a targeted "attack" on critical network hubs can have catastrophic consequences for network communication. Hubs thus provide a buffer against network disruption, and similar effects may also be expressed in biological systems. For example, the focused loss of anterior-posterior connectivity (e.g., frontal to PCC to hippocampal connections) in Alzheimer's disease has devastating consequences for memory, spatial navigation, and the maintenance of semantic associations [9]. By comparison, the pathophysiology occurring in TBI is selective for certain regions (e.g., the temporal and frontal poles) but does not function as a targeted attack on connections between essential subnetworks (e.g., the default mode network, DMN), thus permitting the opportunity for their greater integration. We hypothesize that injury-induced hyperconnectivity will be expressed in the brain's most highly connected regions, or the "rich club", a high-capacity but metabolically expensive network that forms the backbone for efficient information transfer among the brain's various subnetworks [22][23]. To examine the influence of TBI on network hubs, we make use of functional MRI and graph theory to examine whole-brain connectivity early after injury. In doing so, this is the first study to examine the effects of TBI on neural network hubs over the course of early recovery in moderate and severe TBI.

Network Analysis
Possibly the most important early decision in network modeling is determining the nodes, or brain regions, that will contribute to the model. In large-scale network analyses, the characterization of the network nodes has a direct influence on the graph properties observed [8,24]. Recent efforts to examine "small-world" properties in TBI have used 20-30 ROIs to create unweighted (i.e., binary) networks [14,17,25]. Anatomical ROIs are often used to avoid biased selection and circularity in data interpretation [26]; yet these approaches aggregate a number of functionally distinct signals within each ROI. For example, Brodmann's area 46 is one of the largest ROIs in anatomical atlases and plays critical roles in a number of functions, yet in the absence of additional parcellation, the hundreds of voxels that can be sampled from this region are averaged and treated as a single homogeneous signal. To address these concerns, we use a data-driven approach to ROI parcellation through spatial independent component analysis [27][28]. Each ROI is represented as a functional signature as opposed to an anatomically bounded average of many functional signals [29]. We anticipated that this approach would be sensitive to the network changes associated with the early recovery window in TBI. Moreover, in studies using fMRI to examine neurotrauma there is concern regarding the influence of brain lesions on the BOLD signal [30]; this is particularly problematic in local areas of hemorrhage, where blood products cause susceptibility artifact and local signal attenuation [31][32]. However, the ICA procedure implemented here can isolate the effects of local signal drop-out as a "component" and model these data or remove the signal during "denoising and nuisance" identification. This approach addresses basic differences in brain morphology and local signal drop-out due to the effects of TBI early after injury.

Study Goals and Hypotheses
There are two hypotheses in this study. First, we propose that a common response to moderate and severe TBI during the first few months post injury is hyperconnectivity, or increases in the magnitude and/or number of connections. We test this hypothesis by examining both the number and strength of connections in the TBI sample over time as compared to a healthy control (HC) sample. Second, we hypothesize that enhanced connectivity during recovery will occur in three of the most highly connected subnetworks, or the "rich club": the salience network (SN; e.g., anterior insula), the executive control network (ECN; e.g., dorsolateral prefrontal cortex and parietal cortex), and the DMN (e.g., PCC and medial frontal cortex). There are three sources of evidence for this. First, in work examining fMRI signal-amplitude change during task, the most common finding is increased involvement of the ECN, or the PFC and parietal regions, after TBI [33][34][35]. Second, there is recent evidence that the PCC, through its distinct roles within the DMN, has a critical function in integrating other subnetworks and facilitating information transfer across a broad spectrum of neurological disorders [36]. Finally, recent work has demonstrated that TBI results in increased connectivity to the insula, which maintains a central role in the salience network [37][38]. We tested this second hypothesis by examining the nodes most likely to show enhanced connectivity during recovery from TBI. Finally, given the relationship between the DMN and SN and cognitive performance [39], we also anticipated that hyperconnectivity in these networks would predict performance deficits on tests of processing speed and working memory, two critical areas of cognitive dysfunction after TBI [40][41].

Subjects
Study recruitment included 22 individuals with moderate and severe TBI between the ages of 18 and 53 years and 15 healthy adults of comparable age and education (see Tables 1 and 2 for demographic and clinical information). Due to significant frame-by-frame head motion identified via ArtRepair [42], one individual with TBI was removed from the study, leaving a total study sample of 36 individuals at two time points. All study participants underwent two MRI scanning sessions separated by approximately three months. For the TBI sample, initial data collection occurred at three months after emergence from posttraumatic amnesia (PTA), a period of confusion and amnesia following coma emergence, and the second scanning session followed three months later. These 3- and 6-month measurement windows are consistent with animal studies examining "very long" outcome [43][44][45][46][47][48] and with the TBI "outcome" literature based upon time points where significant behavioral change is expected [49][50][51][52][53][54][55]. TBI severity was defined using the Glasgow Coma Scale (GCS) in the first 24 hours after injury [87]; GCS scores of 3-8 were considered "severe" and scores of 9-12 were considered "moderate". In three cases, participants were included with a GCS score of 13-14 because acute neuroimaging findings were positive. Participants were excluded if they remained in treatment for concomitant spinal cord injuries, orthopedic injury, or other injury making it difficult to remain still in the MRI environment. So that findings would be generalizable to a typical moderate and severe TBI sample, patients with focal contusions and hemorrhagic injuries were included unless their injuries required neurosurgical intervention and removal of tissue resulting in gross derangement of neuroanatomy. Research was conducted with the approval of the institutional review board and the Office of Human Subject Protection at the Pennsylvania State University (PSU).
Informed written consent was obtained from all participants at the time of study enrollment. Because the current study includes individuals who may be cognitively impaired, capacity for enrollment was based upon how decisions were being made for medical treatment and for independent functioning. If an individual retained capacity to sign for medical procedures and functioned independently (i.e., lived alone, retained a driver's license), consent to participate was accepted; however, if a caregiver signature was required for medical procedures or the potential participant was not functionally independent, both the caregiver's signature and a signature of assent by the potential participant were required for study enrollment. PSU is positively and unequivocally committed to the promotion, encouragement, and facilitation of academic and clinical research in the broad area of general or specific measurements of human development, health, and performance. PSU is dedicated to the ethical treatment of human participants in all research activities conducted under the auspices of this institution and assumes responsibility for safeguarding their rights and welfare.

Cognitive Assessment
The most common cognitive deficits following TBI are in the areas of working memory and processing speed [56][57][58]. All participants completed a brief battery of tests assessing these areas of functioning to determine: 1) areas of cognitive deficit compared to a HC sample and 2) the relationship between connectivity changes and cognitive deficit. To assess working memory and processing speed we used the Visual Search and Attention Task [VSAT; 59], the Stroop task [60][61], the Trail Making Test (A & B) [62][63], and the Digit Span subtest from the Wechsler Adult Intelligence Scale - Fourth Edition [64]. Testing was completed at each data acquisition interval for the TBI sample and at Time 1 for the HC sample. Repeat testing has inherent problems with respect to practice effects; while prior exposure to the stimuli may have some small influence on Time 2 scores, the tests presented were chosen specifically because they show minimal practice effects (e.g., test-retest reliability of the VSAT in healthy adults with a 2-month delay is r = 0.95 [59]). Moreover, tests of rapid decision making and information processing have been shown to demonstrate negligible practice effects when repeated after several months [65]. One method for controlling for practice effects is to compare against a HC sample also tested twice. However, such comparisons assume equivalent learning/task acquisition between samples, yet there is a long history of research documenting slowed learning and task acquisition after TBI [40]. Therefore, it was not a goal to measure cognitive change in the HC sample over time, with the exception of the behavioral data collected during each of the fMRI tasks (i.e., the 1-back), used to verify stable cognitive status between time points.

Focal lesions
Whole-brain structural changes often occur even in cases of TBI where the primary injury is isolated (e.g., subdural hematoma) [66][67], and diffuse axonal injury (DAI) is a nearly universal finding [68]. Moreover, focal injuries can have widespread consequences for brain function, so focal injury was not an exclusionary criterion in the current study unless the injury was so severe as to require neurosurgical intervention (i.e., craniotomy) and/or result in gross derangement of neuroanatomy. Inclusion of cases with identifiable injury permitted direct examination of TBI as it naturally occurs, even in brain regions directly influenced by injury.

MRI procedure and Data acquisition
Data were acquired using a Philips Achieva 3T system (Philips Medical Systems, The Netherlands; n = 8) with a 6-channel head coil or a Siemens Magnetom Trio 3T system (Siemens Medical Solutions, Germany; n = 13) with an 8-channel head coil, both housed in the Department of Radiology, Hershey Medical Center, Hershey, PA, or a Siemens Allegra 3T MRI in the Department of Radiology at UMDNJ-NJMS in Newark, NJ (n = 15). Healthy and TBI samples were distributed between the MRI scanners, and all of a given subject's data were collected on the same scanner over time to maximize intra-subject reliability.
Subjects were made aware of the importance of minimizing head movement during MRI scanning, and trials containing significant motion were discontinued or repeated. High-resolution anatomical brain images with an isotropic spatial resolution of 1.2 mm × 1.2 mm × 1.2 mm were acquired using an MPRAGE sequence. Efforts were made to maintain consistency in parameters between MRI scanning sites (e.g., TR = 2000 ms), and investigators consulted one another during data collection to monitor for any changes in data acquisition. We made use of a single run of a working memory task, the n-back [69]. To maximize accuracy, each subject was exposed to the task prior to entering the MRI environment and permitted a practice trial to promote accurate and efficient performance. Each run was 135 or 142 volumes comprising eight "on" blocks of the 1-back, a low-load task requiring the subject to hold the preceding stimulus in mind and identify consecutive matches when presented a string of letters [69]. Greater detail regarding the task and data collection is available in previously published work [70].
Data processing and region parcellation
Figure 1 presents the processing stream for fMRI time series analysis. Initial steps involved preprocessing, including slice-timing correction, realignment of the functional time series to gather movement parameters for correction, coregistration of the EPI data with a high-resolution T1 image, and spatial normalization and smoothing [18,70]. ArtRepair was used to identify slice and volume movement effects using the recommended cut-offs as a heuristic (5% of slices and 25% of volumes) [42]. Based upon these criteria, 1 TBI subject showed significant frame-to-frame movement at Time 1 and was removed from the study.
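The motion-screening heuristic described above can be sketched as follows. This is an illustrative reimplementation, not ArtRepair itself: `flag_excess_motion` and its boolean-mask inputs are hypothetical names, and the sketch assumes an upstream artifact detector has already marked which slices and volumes are motion-corrupted.

```python
import numpy as np

def flag_excess_motion(bad_slices, bad_volumes,
                       slice_cutoff=0.05, volume_cutoff=0.25):
    """Flag a subject whose data exceed the heuristic motion cut-offs
    (5% of slices or 25% of volumes marked as motion-corrupted).

    bad_slices / bad_volumes: boolean arrays from a hypothetical upstream
    artifact detector (one entry per slice / per volume).
    """
    slice_frac = float(np.mean(bad_slices))
    volume_frac = float(np.mean(bad_volumes))
    return slice_frac > slice_cutoff or volume_frac > volume_cutoff
```

Under a rule of this form, a subject exceeding either fraction would be excluded before graph construction, as was done for the one TBI subject here.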

Independent component analysis
Group independent component analyses (ICA) were conducted using the Group ICA of fMRI Toolbox (GIFT). To achieve a detailed component structure, and because a higher-order decomposition was desirable for graph theoretical analyses, we chose a relatively high model order ICA (100 components) for all analyses [71][72][73][74][75]. Subject-specific data reduction via principal components analysis retained 120 principal components, and group data reduction retained 100 principal components. First, two separate ICAs were conducted to model the two separate task timings for the HRF-convolved time courses related to the influence of n-back performance. It was a goal to reduce the influence of task without removing relevant variance in the time series, so these initial ICAs removed only the components with the highest regression coefficients related to task (6-8 components). Then a second ICA was conducted including all subjects' residual time series in one group to provide the basis for back-reconstruction to the individual level. Group-level ICA was chosen at this step because it has been demonstrated to be sensitive to individual effects while providing a framework for comparing the component structure across subjects [76][77][78][79]. Visual inspection of components was conducted by two raters, and spurious components were removed using recommended guidelines for ICA [72,80]. A heuristic cut point was set at a dynamic range of 2.5 and a low-frequency to high-frequency power ratio of 3.0 [72]. To examine the consistency of this component rating, we conducted an inter-rater reliability check; agreement was very high for categorizing components as "retain", "equivocal", and "discard" (r > 0.95). Figure 2 illustrates the result of component selection based upon the frequency ratio and dynamic range and the range of values for the 52 retained components.
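The spectral screening heuristic (dynamic range and low- to high-frequency power ratio) can be sketched roughly as follows. This is an illustrative reimplementation, not the GIFT code: the band edges (0.10/0.15 Hz) and the lack of spectrum normalization are assumptions, so the published cut points (2.5 and 3.0) should be read as comparable in form rather than in scale.

```python
import numpy as np

def spectral_component_metrics(ts, tr=2.0, low_cut=0.10, high_cut=0.15):
    """Dynamic range and low/high frequency power ratio for one component
    time course `ts` sampled at repetition time `tr` (seconds).

    Band edges are illustrative, not the pipeline's exact values.
    """
    freqs = np.fft.rfftfreq(len(ts), d=tr)
    power = np.abs(np.fft.rfft(ts - np.mean(ts))) ** 2
    peak = int(np.argmax(power))
    # Dynamic range: drop from the spectral peak to the minimum beyond it.
    dynamic_range = power[peak] - power[peak:].min()
    low = power[freqs < low_cut].sum()
    high = power[freqs > high_cut].sum()
    ratio = low / high if high > 0 else float("inf")
    return dynamic_range, ratio

def retain_component(dynamic_range, ratio, dr_cut=2.5, ratio_cut=3.0):
    # Cut points from the text; their scale presumes GIFT's spectrum
    # normalization, which this sketch does not reproduce.
    return dynamic_range > dr_cut and ratio > ratio_cut
```

The intuition is that a plausible BOLD component concentrates its power at low frequencies (high ratio) with a well-defined spectral peak (high dynamic range), whereas noise components do not.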
In addition, to guarantee that component selection did not influence the results of graph theoretical analysis, we also conducted an analysis that included 8 ''equivocal'' components, resulting in an additional graph of 60 components (referred to as FDR-60, see Supplementary Table S1).
Finally, we used a spatially constrained ICA (scICA), a hybrid approach enabling us to focus on the specific subnetworks of interest in this paper (i.e., the rich club) by providing a set of masks or images to the algorithm while also allowing the data to refine the resulting components [81]. The scICA approach in GIFT estimates maximally independent spatial sources from the fMRI signal (see [72]) and maps the spatial extent of and labels each component without the need for user identification (e.g., anterior insula-anterior salience network). Overall, we anticipate that the approaches used here provide safeguards for conservative data analysis and interpretation while retaining optimal sensitivity to dynamic network effects over time in TBI.

Graph theory analysis
A representative network graph was created from the data parcellation described above, such that each node in the graph represented a resultant component of the whole-brain ICA [77][78][79][80][81][82], an approach previously used for connectivity modeling in clinical samples (see [83][84]). Pairwise correlations amongst all component time series were determined and, after thresholding using the false discovery rate (FDR) at p < 0.05, components with statistically significant correlations were joined by a weighted link in the network, where weights were set to the value of the corresponding correlations [85]. Thresholding is a critical issue in creating a graph and has been shown to influence connectivity [86]. To address this issue, a second graph was created setting the lower-bound threshold at the mean of the FDR-corrected connectivity values from the HC Time 1 data in the first analysis (Sparse Graph; threshold: r = 0.403). This second, sparse graph provided the opportunity to examine connectivity in a graph composed of only moderately to highly connected nodes. The results of this graph analysis were largely consistent with the initial analysis (see Supplementary Table S2). The original 52-component FDR-corrected graph was used in two primary sets of analyses. First, we tested Hypothesis 1 using whole-brain analyses of global graph properties. Graph metrics of interest included: a) the degree distribution, that is, the probability distribution of the number of links per node, b) the total number and sum total weight, or strength, of network links, c) the weighted clustering coefficient, and d) the average global path length. Second, we tested Hypothesis 2 by a) examining change in node degree over time and b) identifying network hubs at each time point. Network hubs were determined as nodes of highest degree, calculated for a weighted network by summing the weights on all links incident to a given node.
Based upon 1 and 2 standard deviation thresholds, we examined these most highly connected regions at the individual level in order to determine: 1) whether the most highly connected nodes, or the tail of the degree distribution, were disproportionately represented in the TBI sample and 2) which nodes, or components, most commonly appeared as hubs. Further, we examined the mean degree values for the most highly connected regions for each sample at each time point.
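A minimal sketch of the graph construction and hub definition just described, assuming a matrix of component time courses, might look as follows. A fixed correlation cutoff stands in for the FDR procedure, and all function names are illustrative rather than taken from the pipeline.

```python
import numpy as np

def weighted_graph(ts, r_thresh=0.3):
    """Build a weighted functional graph from component time series.

    ts: array of shape (n_timepoints, n_components). The paper thresholds
    via FDR at p < 0.05; a fixed correlation cutoff stands in for that here.
    Supra-threshold correlations become link weights; the rest are zeroed.
    """
    r = np.corrcoef(ts.T)
    np.fill_diagonal(r, 0.0)
    return np.where(r > r_thresh, r, 0.0)

def node_strength(w):
    # Weighted degree: sum of weights on all links incident to each node.
    return w.sum(axis=1)

def hubs(strength, reference_strength):
    """Nodes whose strength exceeds mean + 2 SD of a reference sample
    (in the paper, the HC Time 1 data)."""
    cutoff = reference_strength.mean() + 2 * reference_strength.std()
    return np.flatnonzero(strength > cutoff)
```

Keeping correlations as weights (rather than binarizing) is what lets the analysis track both the number and the strength of connections over time.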

Structural MRI analysis
To examine morphometric changes in the TBI sample at Time 1 and Time 2, voxel-based morphometry (VBM) analysis was conducted using the VBM8 toolbox (http://dbm.neuro.uni-jena.de/vbm/). The initial processing step in VBM quantified the white matter, gray matter, and CSF compartments for all subjects via segmentation. During this processing stream, in order to maintain sensitivity to volumetric changes, we used non-normalized original high-resolution T1 images for each subject and the TPM.nii tissue probability map (a modification of the ICBM Tissue Probabilistic Atlas), using a bias regularization of 0.001 (very light) and a full-width-at-half-maximum cutoff of 60 mm.

Demographic and Neuropsychological Data
Tables 1 and 2 provide the demographic and neuropsychological information for the two samples. The samples are comparable for age and education, and the gender difference between samples was non-significant. There are two important results in Table 2. First, this TBI sample shows classic deficits in working memory and processing speed compared to the HC sample at both Time 1 and Time 2. Second, the TBI sample shows significant improvements on tests of working memory and processing speed between measurements. With respect to the 1-back task performed in the scanner, there was little change in scores between time points, which we attribute to ceiling effects for RTs and elevated accuracy at the lowest n-back loads. Table 3 shows the group and time-point white matter, gray matter, and cerebrospinal fluid values resulting from segmentation within the VBM suite (SPM8). The results predictably revealed little volumetric change between groups and between time points. We anticipate that the 6-month window of this study is too early to observe the volumetric changes that are common in samples of chronic TBI [97]. The general consistency in brain volume between time points also indicates that gross volumetric changes are unlikely to account for the connectivity changes observed in the graph analysis between Time 1 and Time 2.

Graph Theoretical Results: Global graph metrics
The data in Table 4 provide the global graph metrics for each group and time point. The data generally support a hyperconnectivity hypothesis during this early window after injury, characterized by an increased number and strength of network links globally, with connectivity and clustering values greater in the TBI sample than in HCs. Primary graph measures include: 1) network strength (sum of weights over all edges in the graph), 2) number of network links, 3) path length (unweighted), 4) clustering coefficient, and 5) small-worldness (clustering/path length).
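Under the paper's simple definition of small-worldness as clustering divided by path length, these global metrics can be sketched as follows. This is a simplified, illustrative stand-in: it computes a binary clustering coefficient and an unweighted characteristic path length, whereas the paper's clustering measure is weighted.

```python
import numpy as np

def clustering_coefficient(a):
    """Mean binary clustering coefficient of an undirected graph.
    a: symmetric adjacency matrix (nonzero entries treated as links)."""
    a = (np.asarray(a) > 0).astype(float)
    np.fill_diagonal(a, 0.0)
    deg = a.sum(axis=1)
    triangles = np.diag(a @ a @ a) / 2.0        # closed triangles per node
    possible = deg * (deg - 1) / 2.0            # connected neighbor pairs
    with np.errstate(invalid="ignore", divide="ignore"):
        c = np.where(possible > 0, triangles / possible, 0.0)
    return c.mean()

def average_path_length(a):
    """Unweighted characteristic path length via Floyd-Warshall."""
    a = np.asarray(a) > 0
    d = np.where(a, 1.0, np.inf)
    np.fill_diagonal(d, 0.0)
    for k in range(d.shape[0]):
        d = np.minimum(d, d[:, [k]] + d[[k], :])
    finite = d[np.isfinite(d) & (d > 0)]        # reachable node pairs only
    return finite.mean()

def small_worldness(a):
    # The paper's simple ratio definition: clustering / path length.
    return clustering_coefficient(a) / average_path_length(a)
```

For a fully connected graph both component metrics equal 1, so the ratio is 1; sparser, more clustered graphs push the ratio in the directions the text describes.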
The mean correlation coefficient across the graph was also computed. Two-tailed independent-sample t-tests revealed significant or near-significant between-group differences at Time 1 for network strength (p = 0.05), number of links (p = 0.046), mean correlation coefficient between nodes (p = 0.04), clustering coefficient (p = 0.06), and small-worldness (clustering/path length) (p = 0.062). While the TBI sample retained relatively higher values on all indices at Time 2, the between-group differences were no longer statistically significant, nor was there an effect of time on TBI connectivity. Path length was comparable across samples at both time points. The hyperconnectivity observed in global metrics is interpreted as a broad indicator of effects occurring at the local level, as opposed to a global increase in connectivity (see below). The effect of these regional increases on global connectivity indices is modest (approximately one-half to 1 standard deviation difference between groups across metrics).

Graph Theoretical Results: Degree distribution
We examined the degree distribution for the entire sample (n = 36) for two reasons. First, we aimed to determine whether the heavy tail that is a defining characteristic of power-law distributions was evident in the current network data. Second, we aimed to determine the components that comprised the most highly connected nodes within the distribution. To do so, a histogram of the degree distributions of all nodes for all subjects was plotted for both samples (TBI n = 1092; HC n = 780) at both time points. Here, node degree is plotted against the probability that a randomly selected node from a given group at a given time point has the corresponding degree. Notice in Figure 3 that the degree distribution has the heavy right tail evident in the classic power-law degree distributions observed in many real-world networks, but drops off in the frequencies of very low degree nodes. The left side of the distribution has few very low-degree connections prior to peaking, which can be attributed to the thresholding used to create the representative functional network from the time series correlations. Specifically, pairwise correlations below the FDR threshold were discarded and did not appear as links in the graph. To determine the "hubs" of the graph, a lower bound for highly connected regions was set at 2 standard deviations above the mean for the Time 1 HC sample (degree = 15.66). The distribution reveals that nodes of the highest degree are more likely to be observed in the TBI sample. For example, nodes with degree > 15.66 make up 7.6% (119/1560) of all nodes in the HC sample and 15.1% (331/2184) of the nodes in the TBI sample.

Figure 2. Component separation based upon dynamic range and low to high frequency power ratio. Dynamic range and power ratio for 100 components during the inclusive ICA (all subjects). Rejected components (red) were determined by inspection of the low to high frequency ratio and spatial extent, consistent with Allen et al. (2011). In the primary analysis, "equivocal" components were discarded and the time series for the 52 remaining components composed the final graph. Supplementary materials include an additional analysis retaining the "equivocal" components in order to determine their influence on the graph; the results are nearly identical (see Table S1).

Graph Theoretical Results: Most highly connected nodes
In order to examine the functional networks contributing to hyperconnectivity in TBI, we performed two separate calculations. First, we calculated the average degree for all nodes within each group and each time point and sorted the data based upon degree. Second, we calculated the frequency of components comprising the heavy tail of the probability distribution in Figure 3. The most frequently observed components for the TBI sample were: 1) the anterior insula-ACC (salience network), 2) the right executive control network, 3) the PCC to medial frontal (dorsal DMN), and 4) the retrosplenial cortex-medial temporal lobe (ventral DMN) (see Figure 3 inset). The connectivity in these core subnetworks in the TBI sample was not reflected in the HC sample, where there was a more even distribution of high-degree nodes across the classically recognized subnetworks of the brain, with relatively equally high connectivity in the dDMN, sensorimotor, language, and auditory networks. Figures 4-6 illustrate several of the most common components represented in both Table 5 and the distribution tail from Figure 3.
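The tail analysis described above (pooling each subject's node degrees within a group and asking what fraction exceeds the hub cutoff) can be sketched as follows; the function names are illustrative.

```python
import numpy as np

def degree_distribution(degree_lists):
    """Empirical P(degree) pooled over all subjects in a group.

    degree_lists: iterable of per-subject degree arrays (52 nodes each
    in this study). Returns (unique_degrees, probabilities)."""
    pooled = np.concatenate([np.asarray(d) for d in degree_lists])
    vals, counts = np.unique(pooled, return_counts=True)
    return vals, counts / counts.sum()

def tail_fraction(degree_lists, cutoff):
    """Fraction of pooled nodes whose degree exceeds the hub cutoff
    (mean + 2 SD of the HC Time 1 degrees in the paper)."""
    pooled = np.concatenate([np.asarray(d) for d in degree_lists])
    return float((pooled > cutoff).mean())
```

Applied to the reported counts, this is the arithmetic behind the 7.6% (119/1560) HC and 15.1% (331/2184) TBI tail proportions.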

Behavioral performance and Hubs
We examined the relationship between the cognitive tests showing the greatest difference between the TBI and HC samples (see Table 2) and the most highly connected subnetworks (i.e., components) appearing at both Time 1 and Time 2 in TBI. To do so, the average scores for three cognitive measures showing significant decrements in TBI at Time 1 were computed: 1) Stroop Color-Word, 2) VSAT total score, and 3) Trails B. These three cognitive measures were correlated with the components most commonly appearing in Table 5, which lists the most highly connected nodes as determined by a cutoff of 9.9 (2 standard deviations above the mean degree for the HC data at Time 1).

Discussion
We used functional MRI and graph theory methods to examine whole-brain connectivity changes early after moderate and severe TBI. Using a weighted graph, we were able to track not only the influence of TBI on the number of connections but also on their strength, a decided advantage of this approach. The primary hypothesis, that brain connectivity increases after TBI, was generally supported: global metrics showed greater connectivity in the TBI sample, and targeted analysis revealed specific nodes where hyperconnectivity occurs after injury. This effect was evident when examining the mean degree across the 52 nodes, where connectivity was significantly greater in the TBI sample than in the HC sample. The between-group differences in global graph characteristics such as mean degree and mean number of links were modest (Table 4); this is not unexpected, given that we do not anticipate that all network nodes increase significantly during recovery. Instead, the highest-degree nodes were selectively observed in several core subnetworks, such as the SN, ECN, and parts of the DMN. The primary implication of these data is that physical disruption of networks results in increased connectivity in select nodes (see the inset to Figure 3). The strongest evidence for this is Figure 3, where the heavy power-law tail is disproportionately composed of nodes coming from TBI cases.
The hyperconnectivity hypothesis proposed here is generally consistent with the greater literature, although some qualification is required. In a recent cross-sectional study of TBI, Pandit and colleagues [13] found diminished connectivity in critical nodes and a loss of small-worldness. These findings are not consistent with the results here, and there are likely several reasons for this. First, the networks in Pandit et al. and the current work differ considerably in scale (15 vs. 52 nodes); the data in Pandit and colleagues may therefore capture local effects, or effects within a relatively constrained set of subnetworks. As we elaborate below, hyperconnectivity is not uniformly expressed, with examples of connectivity loss even within core subnetworks like the DMN. Second, the work by Pandit and colleagues examined chronic TBI, mostly greater than 2 years post injury, where we might expect maturation of the effects observed in the early sample presented here and possibly even connectivity loss in the case of older subjects. Finally, the current approach uses a weighted as opposed to a binary network, providing a richer representation of pairwise regional communication and sensitivity to changing connection strength. In general, the preponderance of this literature has established connectivity increases after moderate and severe injury [11,12,14,25,37,88], and the current data demonstrate that this general response is evident within the first few months after injury.

Connectivity in hubs after injury: the rich do get richer
It was also a goal to determine whether core subnetworks disproportionately accounted for changing connectivity after TBI. We hypothesized that hyperconnectivity would be expressed in three large-scale subnetworks: the SN, ECN, and DMN. This hypothesis was generally supported in several different ways.
First, when examining the subnetworks contributing to the heavy tail of the probability distribution, the subnetworks appearing with the highest frequency were the anterior insula of the SN and the ventral and dorsal DMN, which is consistent with two separate findings in the TBI literature. There is now a growing body of literature showing enhanced DMN connectivity after TBI [17,37,38,88,89]. The role of hyperconnectivity in the DMN after TBI remains uncertain, with several possible explanations, including a failure to deactivate the DMN as a source of interference during goal-directed behavior [17]. While this explanation has intuitive appeal, it may be that the hyperconnectivity response observed here in both goal-directed and internal-state networks poses a challenge to seamless transitions between these networks. Second, the finding that the anterior insular salience network may be heightened in brain injury has received recent support in longitudinal [37] and cross-sectional studies [38], and even in a study of the minimally conscious state [90]. This finding indicates that, of the most highly connected nodes (i.e., the tail of the probability distribution), the anterior salience network, including the insula and anterior cingulate cortex, accounted for the highest percentage of observations. Enhanced involvement of the salience network has been interpreted elsewhere as providing attentional control and operating as a conduit between internal brain states and external stimulation [91]. Finally, when examining the nodes with the highest average degree, multiple components within the right ECN showed degree >2 SDs above the HC mean in the TBI sample, and the RECN was the second most frequent component observed in the heavy tail of the degree distribution.
This finding is consistent with a literature emphasizing the role of the right prefrontal cortex in processing novelty [92,93] and the increased task load associated with neurological insult, including TBI [33,70]. Neurological insult creates an environment of reduced automaticity and increased supervisory demand at all levels of information processing, and the mean connectivity observable in the right ECN may play a role in this. Overall, the observation that "the rich get richer" is consistent with a preferential attachment model of growth in scale-free networks, where new connections are more likely to be collected by the most connected nodes [94]. It should be emphasized that the hyperconnectivity observed after injury may be dissociable within networks; while nodes within the DMN and SN were the most commonly represented among network hubs (the distribution "tail"), decreased involvement within these networks was also observed. For example, some components within the ventral and dorsal DMN showed connectivity decline over time in the TBI sample. This observation reveals the functional complexity of nodes within the DMN, which more recent work has demonstrated to have divergent roles [95,96]. While the ICA used here aids in determining the functional signatures (components) that are spatially consistent with the DMN, this approach also permits differential analysis of how the distinct connectivity between these signatures interacts both within and outside the DMN.
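The preferential attachment mechanism invoked above [94] can be illustrated with a short simulation. This is a generic Barabási–Albert-style growth sketch, not a model of the fMRI data: each new node attaches to existing nodes with probability proportional to their current degree, and the earliest, best-connected nodes accumulate a disproportionate share of new links, producing the heavy-tailed degree distribution described in the text. All parameters (500 nodes, 2 links per new node, the seed) are arbitrary choices for the demonstration.

```python
import random
from statistics import median

def preferential_attachment(n_nodes, m=2, seed=1):
    """Grow a network in which each new node links to m existing nodes
    chosen with probability proportional to their current degree."""
    rng = random.Random(seed)
    degree = {0: 1, 1: 1}    # seed network: a single edge between nodes 0 and 1
    pool = [0, 1]            # each node appears once per unit of degree,
                             # so sampling from `pool` is degree-biased
    for new in range(2, n_nodes):
        targets = set()
        while len(targets) < min(m, new):
            targets.add(rng.choice(pool))   # "rich get richer" sampling
        degree[new] = 0
        for t in targets:
            degree[t] += 1
            degree[new] += 1
            pool.extend([t, new])
    return degree

deg = preferential_attachment(500)
print("max degree:", max(deg.values()), "median degree:", median(deg.values()))
```

Under this growth rule the maximum degree far exceeds the median, a qualitative analogue of the observation that a small set of hub nodes dominates the tail of the degree distribution.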

Connectivity and clinical and demographic factors
In any study of TBI there is inherent concern about heterogeneity in the nature and severity of injury among the individuals comprising a group. Although this issue is certainly not unique to TBI, it is a concern universally echoed in this clinical sample. The current study holds significant control in one area often not considered: time post injury. In addition to the opportunity to examine network change during a critical recovery window, examining all individuals at 3 and 6 months post PTA resolution provides some equilibration of neurological status at the first measurement time point. However, several other critical factors require consideration to provide context for the current findings. First, while the location and severity of injury are difficult to hold constant in TBI work, there was no obvious relationship between injury factors and graph metrics in this sample. For example, when splitting the results by greater and lesser network degree, the TBI subgroups were equally likely to show evidence of diffuse injuries indicative of DAI, focal injury (subdural hematoma/contusion), or mixed injuries. This is entirely consistent with the signal amplitude literature in fMRI, in which pathophysiology was not a determinant of neural recruitment [33,34]. With respect to injury severity, GCS score was a poor predictor of connectivity indices (i.e., degree and number of connections), so gross indicators of injury severity may not reliably predict global brain response. While the age range in this sample is comparable to most of the connectivity work conducted in TBI to date [13,14,17,37], it was quite broad, spanning young adults to middle-aged participants. Increasingly nuanced investigation of brain connectivity following TBI will likely reveal potentially important differences in the brain response across the developmental spectrum.
For example, it may be the case that a common response to injury is hyperconnectivity but the degree to which this is expressed may be moderated by age or resource availability (for a critical review see [19]). Even given these considerations, it appears that increased connectivity in critical network nodes is a robust effect emerging even in the context of distinct injury severity and location of pathology.
While capturing the various elements of clinical recovery after TBI is difficult with a battery of cognitive tests, we have some indication of behavioral improvement between Time 1 and Time 2. Measures of connectivity, however, did not appear to mirror these changes in any straightforward way. When using the behavioral data to provide context for the imaging findings, the network × behavior relationships were low to modest (accounting for <10-20% of the total variance). These relationships may require more nuanced analysis, including examination of the specific clinical presentations associated with connectivity. It will also be important to model nonlinear relationships or examine "windows of benefit" for hyperconnectivity. As these relationships are established, network connectivity may begin to enhance our understanding of brain plasticity and its role in clinical recovery.

Summary and Future Directions
The findings here provide evidence that a common network response to traumatic brain injury is hyperconnectivity, observable over the first six months of recovery in this sample of moderate and severe TBI. There is also evidence that hyperconnectivity may be differentially observed in the "rich club," the nodes that form the backbone of information transfer in the brain. However, several areas of study refinement and future work remain to clarify the meaning of hyperconnectivity after neurological disruption. While the sample heterogeneity here matches the literature, there may be subtle but important influences of age on the connectivity results. Similarly, if the use of more than one MRI scanner added error to the data, it would reduce sensitivity to detect subtle effects in connectivity. With regard to MRI facility, analysis of data from separate facilities revealed no systematic bias (e.g., data from one MRI machine were no more likely to reveal increased/decreased connectivity in the TBI sample). In addition, the time series data used here are appropriately long for these analyses, but the approach does assume stationarity within each time series. While this assumption is adequate for our purpose of examining large-scale network change, a more nuanced understanding of network changes, including network modularity and flexibility, requires within-time-point analyses. There are also open questions regarding how the onset of goal-directed behavior (i.e., task) influences global and local connectivity. Overall, it should be a goal of future work to establish the contextual demands giving rise to hyperconnectivity and the physical resource thresholds that permit its expression.
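The stationarity caveat above is commonly addressed with time-resolved (sliding-window) connectivity, of which the following is a minimal sketch rather than the analysis used in this study. It assumes a toy time series of 200 samples over 52 nodes; the window width (30 samples) and step (5 samples) are arbitrary illustrative choices.

```python
import numpy as np

def sliding_window_connectivity(ts, width=30, step=5):
    """Time-resolved connectivity: a correlation matrix computed in each
    overlapping window, relaxing the stationarity assumption implicit in a
    single whole-run correlation matrix."""
    n_t, n_nodes = ts.shape
    mats = []
    for start in range(0, n_t - width + 1, step):
        window = ts[start:start + width]        # (width, n_nodes) segment
        mats.append(np.corrcoef(window.T))      # (n_nodes, n_nodes) matrix
    return np.stack(mats)                       # (n_windows, n_nodes, n_nodes)

rng = np.random.default_rng(0)
ts = rng.standard_normal((200, 52))             # toy stand-in for BOLD data
dyn = sliding_window_connectivity(ts)
```

Summaries of how these per-window matrices vary over time (e.g., changing modular structure) are one route to the network "flexibility" analyses the text identifies as future work.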

Supporting Information
Table S1 General graph properties in TBI and HC using an inclusive graph (components: 60). The current table includes "equivocal" components to demonstrate the robustness of the primary findings in Table 3. The TBI sample shows greater, but non-significant, increases in global metrics of connectivity (*p < 0.10, one-tailed independent-samples test). Note: no significant results survive correction for multiple comparisons at alpha = 0.05. (DOC)