Fig 1.

The end-to-end tractometry pipeline, beginning with raw dMRI data and proceeding through preprocessing (e.g., dcm2bids, QSIPrep) to pyAFQ-based tractometry.

pyAFQ accepts preprocessed data organized in BIDS, but does not require BIDS. Outputs can be visualized with tools such as AFQ-Browser, or integrated into machine learning workflows and statistical models. A typical tractometry pipeline flows from raw data, to the BIDS format, to QSIPrep, to pyAFQ. The suite of tools shown in the green boxes is then applied to the outputs of pyAFQ. These software tools are described in Table 1.
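As an illustration of the core computation this pipeline culminates in, the sketch below resamples streamlines to 100 equidistant nodes and averages a scalar map (e.g., FA) along them to form a tract profile. This is a simplified, self-contained illustration with synthetic data and illustrative function names, not pyAFQ's actual API (which also weights streamlines by their distance from the tract core).

```python
import numpy as np

def resample_streamline(points, n_nodes=100):
    """Resample a streamline to n_nodes equidistant points along its length."""
    seg = np.linalg.norm(np.diff(points, axis=0), axis=1)
    cum = np.concatenate([[0.0], np.cumsum(seg)])
    target = np.linspace(0.0, cum[-1], n_nodes)
    return np.column_stack([np.interp(target, cum, points[:, i]) for i in range(3)])

def tract_profile(streamlines, scalar_volume, n_nodes=100):
    """Mean scalar value (e.g., FA) at each of n_nodes positions along a tract."""
    samples = []
    for sl in streamlines:
        rs = resample_streamline(np.asarray(sl, dtype=float), n_nodes)
        # Nearest-voxel lookup, clipped to the volume bounds
        idx = np.clip(np.round(rs).astype(int), 0,
                      np.array(scalar_volume.shape) - 1)
        samples.append(scalar_volume[idx[:, 0], idx[:, 1], idx[:, 2]])
    return np.mean(samples, axis=0)

# Synthetic example: a constant-FA volume and two straight streamlines
fa = np.full((10, 10, 10), 0.5)
sls = [np.array([[1.0, 1.0, 1.0], [8.0, 1.0, 1.0]]),
       np.array([[1.0, 2.0, 1.0], [8.0, 2.0, 1.0]])]
profile = tract_profile(sls, fa)
print(profile.shape)  # (100,)
print(profile[0])     # 0.5
```

Downstream tools in the ecosystem (statistical models, visualization) operate on exactly this kind of node-by-node profile, one per tract per subject.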


Table 1.

Descriptions of the different elements of the ecosystem and related software. Their connections are shown in Fig 1. Many of these tools are hosted at https://tractometry.org/.


Table 2.

List of white matter tracts built into pyAFQ. Custom definitions can also be provided. The Reference column shows the paper used to create each definition, and the Set column shows how the tracts are grouped in the software; these groups can also be combined and modified.


Fig 2.

Extending the pyAFQ software.

Panels A and B show select tracts recognized using Baby pyAFQ, with MRtrix3 for tractography [37], in an example infant subject. Panels C and D show select tracts recognized using Recobundles [39] in the Stanford HARDI subject [11]. Tracts in panels A-D were selected for visual clarity. Panels E and F show the acoustic and optic radiations recognized in subject NDARAA948VFH from the Healthy Brain Network Processed Open Diffusion Derivatives dataset [54,55]. All of these tracts were recognized by Python scripts from pyAFQ's examples library, demonstrating pyAFQ's extensibility. Panels A, C, and E show the axial plane from above; panels B, D, and F show the sagittal plane from the left.


Fig 3.

Visualization tools available in the tractometry ecosystem.

All tools visualize subject NDARBZ913ZB9, randomly chosen from the HBN study [54,55]. (A) FURY visualization of five tracts (all in the left hemisphere): corticospinal tract (orange), arcuate fasciculus (blue), inferior fronto-occipital tract (brown), uncinate fasciculus (yellow), and inferior longitudinal fasciculus (pink). (B) Plotly visualization of the same tracts, using the same color scheme, which can be rendered into an HTML webpage for quality control. (C) Visualization of this subject and tracts in AFQ-Browser. The color scheme is defined on the left in the BUNDLES column, the anatomy is shown in the middle, and the subject's tract profiles are shown on the right. Tract names can be selected in the left column to highlight that tract's tissue properties and anatomy (a running example is available at https://yeatmanlab.github.io/AFQBrowser-demo). Additionally, if multiple subjects are provided, AFQ-Browser shows all of their tract profiles overlaid in the "Bundle Details" display on the right-hand side. (D) Web-based visualization of this subject and all of their tracts with Tractoscope. Tractometry results for all HBN subjects can be viewed at https://nrdg.github.io/tractoscope/.


Fig 4.

The Tractable software was used to fit a generalized additive model (GAM) to corticospinal tract profiles of fractional anisotropy (FA) in participants with amyotrophic lateral sclerosis (ALS; orange) and matched controls (gray).

The software includes functionality to visualize empirical data (top) and model fit (bottom).
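The idea behind this kind of analysis, fitting a smooth curve to node-wise FA values per group and comparing the groups along the tract, can be sketched with synthetic data. The moving-average smoother below is a deliberately crude stand-in for Tractable's spline-based GAM, and all names and values are purely illustrative.

```python
import numpy as np

def smooth_profile(profile, window=9):
    """Moving-average smoother: a crude stand-in for the spline smooth a
    GAM fits over the 100 nodes of a tract profile."""
    kernel = np.ones(window) / window
    # mode="same" keeps the output length equal to the number of nodes
    return np.convolve(profile, kernel, mode="same")

rng = np.random.default_rng(0)
nodes = np.linspace(0.0, 1.0, 100)

# Synthetic FA profiles: controls have a mid-tract bump; the patient
# group shows a uniform FA reduction of 0.05 (illustrative values only)
controls = 0.45 + 0.1 * np.sin(np.pi * nodes) + rng.normal(0, 0.02, 100)
patients = controls - 0.05 + rng.normal(0, 0.02, 100)

# Group difference after smoothing each profile, node by node
diff = smooth_profile(controls) - smooth_profile(patients)
print(diff.shape)
```

In the real analysis, the GAM additionally yields uncertainty estimates for the group difference at each node, which is what the bottom panel of the figure visualizes.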


Fig 5.

Performance of all brain age models increases with the number of subjects in the training set.

(a) PCR Lasso, the linear baseline model, starts at a relatively high R2 even at the smallest sample size used here, and increases from there. (b) MLP4, a fully connected network, has a similar learning curve, but starts at a much lower R2. (c) Convolutional neural networks (CNNs) start at an even lower R2 with small sample sizes, but improve steeply, reaching higher levels of performance at large sample sizes. (d) Recurrent neural networks (RNNs) show a variety of performance characteristics, but are also, overall, very data hungry.


Fig 6.

(A) MSMT-CSD computation time using Ray versus the number of CPU cores provided.

With 32 cores, the computation takes just under 3 minutes. (B) DIPY tractography timing comparison across three methods: serial (the DIPY default), Ray with 16 cores, and the GPU-accelerated version. Error bars show the standard deviation across 10 trials.
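The chunked, per-voxel parallelism that Ray enables here can be illustrated with Python's standard library alone. In this sketch, `ThreadPoolExecutor` stands in for Ray's task API (real CPU-bound fits would use processes or Ray workers rather than threads), and `fit_chunk` is a hypothetical placeholder for a per-voxel model fit such as MSMT-CSD.

```python
from concurrent.futures import ThreadPoolExecutor
import numpy as np

def fit_chunk(chunk):
    """Hypothetical stand-in for fitting a model (e.g., MSMT-CSD) to each
    voxel in a chunk of data; returns one scalar per voxel."""
    return np.sqrt((chunk ** 2).sum(axis=-1))

# Synthetic "data": 1,000 voxels x 30 diffusion measurements
data = np.random.default_rng(0).random((1000, 30))

# Split the voxels into one chunk per worker and fit chunks concurrently,
# mirroring how Ray distributes independent voxel fits across CPU cores
chunks = np.array_split(data, 8)
with ThreadPoolExecutor(max_workers=8) as pool:
    results = list(pool.map(fit_chunk, chunks))

fitted = np.concatenate(results)
print(fitted.shape)  # (1000,)
```

Because each voxel's fit is independent, throughput scales with the number of workers until other costs (I/O, scheduling) dominate, which is the saturation behavior panel A depicts.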
