Fig 1.
Dyslexia neural-biomarkers interpretability and classification mechanism.
In the above mechanism, multi-site neuroimaging datasets are smoothed using a Gaussian filter, followed by normalization with a modified histogram method, template registration, ROI segmentation, GM estimation, and DL classification. Additional pre-processing steps are applied during DL training and classification.
Fig 2.
Visual comparison of a neuroimage sample at different values of σ.
In (a), (b), and (c), the left side is the noisy image and the right side is the Gaussian-smoothed image. In (a), the GM area, which constitutes the main ROI of the study, is clearly distinguishable from the WM and background areas after smoothing; in (b), GM is fairly distinguishable after smoothing; in (c), the GM area becomes blurred when σ > 3. (d) shows the symmetry of the filter at σ = 1.25.
Fig 3.
Time behaviour of the Gaussian filter at different scale (σ) values on a neuroimage data sample.
This shows that every smoothing operation requires less than 1 second to complete regardless of the value of σ. The Sigma (SD) axis represents σ values.
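The Gaussian smoothing step described in Figs 2 and 3 can be sketched as follows. This is a minimal illustration, not the study's implementation: the random 128×128 array is a hypothetical stand-in for one neuroimage slice, and σ = 1.25 simply mirrors the symmetric-filter example in Fig 2(d).

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)
# Hypothetical stand-in for a noisy 2-D neuroimage slice.
noisy_slice = rng.normal(loc=100.0, scale=25.0, size=(128, 128)).astype(np.float32)

# Smooth with an isotropic Gaussian kernel; sigma is in voxel units.
smoothed = gaussian_filter(noisy_slice, sigma=1.25)
```

Smoothing suppresses high-frequency noise, which is why GM becomes easier to separate from WM and background at moderate σ, while large σ (> 3) blurs the GM boundary itself.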
Fig 4.
Mapping function curve for MHN.
The first mapping is from [P1, μi] to [S1, μs], and the second mapping is from [μi, P2] to [μs, S2]. The lower and upper boundaries of the standard scale are m1’ and m2’, respectively.
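Assuming each of the two mappings in Fig 4 is linear on its interval (the exact functional form is not restated here), the MHN transfer function can be sketched as a two-segment piecewise-linear map; the function name and parameter names below are illustrative only.

```python
import numpy as np

def mhn_map(x, p1, p2, mu_i, s1, s2, mu_s):
    """Assumed piecewise-linear MHN mapping:
    [p1, mu_i] -> [s1, mu_s] and [mu_i, p2] -> [mu_s, s2]."""
    x = np.asarray(x, dtype=float)
    lower = s1 + (x - p1) * (mu_s - s1) / (mu_i - p1)   # first segment
    upper = mu_s + (x - mu_i) * (s2 - mu_s) / (p2 - mu_i)  # second segment
    return np.where(x <= mu_i, lower, upper)
```

Under this form the input mean μi lands exactly on the standard-scale mean μs, which is the anchor point of the mapping curve.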
Fig 5.
Visual inspection of image histograms and CFCs for four randomly selected neuroimages before MHN.
(a) The first column shows image samples from both the control and dyslexic groups, (b) the second column shows their transformed histograms, and (c) the third column shows their CFCs. The first two rows are samples drawn from the control group, while the last two rows are samples drawn from the dyslexic group.
Fig 6.
Blocks in dotted lines represent modules that can be removed in this experiment. (a) is the Inception-V3 model, (b)-(f) are architectures of Inception modules A-E.
Fig 7.
A diagrammatic representation of the two-pathway cascaded CNN model.
Fig 8.
ResNet blocks are enclosed in red broken-line boxes. (a) is the architecture of ResNet50; (b) is the architecture of the identity block; (c) is the structure of the convolution block.
Table 1.
Parameter settings for the DL models utilized in the study.
Conv, convolutional layer; fc, fully connected layer; SGD, stochastic gradient descent.
Fig 9.
Summary of the neuroimaging dataset patching algorithm.
Fig 10.
Schematic illustration of the training and classification procedure for the DL models.
Fig 11.
Visualization of the pdf curves of the reference image, input image, and normalized images.
HMN, histogram matching normalization; MHN, modified histogram normalization. MHN shows better performance for the input image.
Table 2.
Mean DSC for segmentation of all 97 subjects (dyslexic and control) before and after smoothing and normalization using GM tissue overlaps (mean ± SD).
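The Dice similarity coefficient (DSC) used in Table 2 measures the overlap between two binary masks (e.g., a segmented GM mask and its reference). A minimal sketch, with illustrative function and variable names:

```python
import numpy as np

def dice(mask_a, mask_b):
    """Dice similarity coefficient between two binary masks:
    DSC = 2 * |A ∩ B| / (|A| + |B|)."""
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    intersection = np.logical_and(a, b).sum()
    total = a.sum() + b.sum()
    return 2.0 * intersection / total if total else 1.0
```

A DSC of 1 indicates perfect overlap and 0 indicates none, so higher post-smoothing/normalization values in Table 2 reflect better GM tissue agreement.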
Table 3.
Summary of MSE within and across the subject groups for dyslexic and control (mean ± SD).
Table 4.
Mean segmented brain region volume for all 97 subjects (dyslexic and control) before and after normalization (mean ± SD).
Fig 12.
Boxplot showing classification accuracy for the three DL models at the 95% CI level.
Table 5.
Performance evaluation of DL models for dyslexia neural-biomarker classification without and with the smoothed and normalized dataset (mean ± SD over 10 repetitions of 10-fold CV).
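The evaluation protocol in Table 5 (10 repetitions of 10-fold CV, reporting mean ± SD) can be sketched as below. This is a schematic only: the synthetic data and the logistic-regression classifier are stand-ins for the study's neuroimaging features and DL models.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import RepeatedStratifiedKFold, cross_val_score

# Synthetic two-class data standing in for dyslexic/control features.
X, y = make_classification(n_samples=200, n_features=20, random_state=0)

# 10-fold CV repeated 10 times -> 100 accuracy scores per model.
cv = RepeatedStratifiedKFold(n_splits=10, n_repeats=10, random_state=0)
scores = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=cv)
print(f"{scores.mean():.3f} ± {scores.std():.3f}")  # mean ± SD accuracy
```

Repeating the 10-fold split reduces the variance introduced by any single random partition, which is why mean ± SD over all 100 folds is reported.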
Fig 13.
10-fold cross-validation experimental performance for test accuracy.
Table 6.
10-fold cross-validation experimental performance for test accuracy (%).
Fig 14.
Performance comparison with a state-of-the-art DL model for fMRI-based dyslexia study.