
Fig 1.

Object categories in our images.

(A-C) The SCD pathogenetic pathway and the changes undergone by the diseased RBC. A: A healthy RBC with its characteristic biconcavity, which appears as a dimple when viewed from the top. B (i-iii): Partially sickled sRBCs at increasing stages of sickling. The biconcavity distends outward, giving a shallower dimple and an elongated profile. This is the category we identify as deformable sRBC (see Section 1.3). B (iv-vi): Additional representative image variants of this category. C (i-iii): Highly sickled sRBCs. The dimple has completely disappeared and the shape is highly elongated. We classify these into our non-deformable category. C (iv-vi): More variants in the non-deformable category. Factors like local flow patterns, applied shear forces, and environmental oxygen levels give rise to varied shapes (teardrop, star-like, amorphous) across different sRBCs. D: White blood cells (WBCs). E: Non-functionally adhered objects. F: Other unclassified objects, such as (i) platelet clusters, (ii-iii) lysed cells, and (iv-v) dirt and dust. In our workflow, types D-F are grouped together in the non-sRBC category.


Fig 2.

Overview of processing pipeline.

A: The SCD BioChip, with a cartoon illustration representing an in-vitro adhesion assay and the adhesive dynamics of sRBCs within a mimicked microvasculature. B: A generated input image is fed into the Phase I network. C: The Phase I segmentation network predicts pixels belonging to adhered sRBCs, shaded red in the images. D: Bounding boxes are drawn around segmented objects. E: Adhered objects are extracted into individual images. F: The input layer of the Phase II classifier network receives an image from the Phase I detection network, then performs a series of convolutions and nonlinear activations to finally output class predictions.
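Steps C-E of the pipeline (segmentation mask to bounding boxes to single-object crops) can be sketched as below. This is an illustrative reconstruction, not the paper's implementation: the `extract_objects` helper, the padding size, and the use of SciPy's connected-component labeling are all assumptions.

```python
import numpy as np
from scipy import ndimage


def extract_objects(mask, image, pad=2):
    """Label connected components in a Phase I segmentation mask,
    draw a (padded) bounding box around each, and crop every adhered
    object out of the channel image as its own single-cell image."""
    labels, _n = ndimage.label(mask)
    crops = []
    for rows, cols in ndimage.find_objects(labels):
        r0 = max(rows.start - pad, 0)
        r1 = min(rows.stop + pad, image.shape[0])
        c0 = max(cols.start - pad, 0)
        c1 = min(cols.stop + pad, image.shape[1])
        crops.append(image[r0:r1, c0:c1])
    return crops


# Toy example: two "cells" in an 8x8 binary mask.
mask = np.zeros((8, 8), dtype=int)
mask[1:3, 1:3] = 1
mask[5:7, 4:7] = 1
crops = extract_objects(mask, mask.astype(float))
```

Each crop would then be resized to the fixed input resolution expected by the Phase II classifier.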


Table 1.

Details of data sets used for training / validating the neural networks in the two phases of our workflow.

For both Phases I and II, we use k-fold cross-validation with k = 5, splitting each data set so that the training and validation sets correspond to approximately 80% and 20% of the whole data set for each fold.
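The split scheme can be sketched as a plain k-fold index generator; a minimal sketch (the function name, seed, and shuffling are illustrative assumptions, equivalent in spirit to standard library routines such as scikit-learn's `KFold`):

```python
import numpy as np


def kfold_indices(n_samples, k=5, seed=0):
    """Yield (train, val) index arrays for k-fold cross-validation.
    With k = 5, each fold uses ~80% of samples for training and
    ~20% for validation, as in Table 1."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_samples)
    folds = np.array_split(idx, k)
    for i in range(k):
        val = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        yield train, val


splits = list(kfold_indices(100, k=5))
```

Every sample appears in exactly one validation fold, so the 5 validation sets partition the data.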


Fig 3.

Phase II network.

A schematic of the transfer learning workflow used to train our classifier network. We employ the ResNet-50 architecture and start with weights pre-trained on the 1000-category ImageNet ILSVRC database. The final fully connected learnable classification layer is swapped out for a 3-class classification layer suited to our problem.


Fig 4.

Cell deformability analysis.

A: Schematic for estimating the change in cell aspect ratio (AR) between flow and no-flow conditions. (i-ii) show a deformable-type cell, and (iii-iv) a non-deformable one. B: Mapping deformability to morphology: cells visually identified as the deformable morphological subtype show a significantly higher percentage change in cell AR between flow and no-flow conditions than the non-deformable subtype.
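One common way to estimate an AR of this kind, sketched below under stated assumptions (the paper does not give its exact formula): take the ratio of major to minor axis lengths from the second moments of the segmented cell mask, then compute the percentage change relative to the no-flow value.

```python
import numpy as np


def aspect_ratio(mask):
    """Cell aspect ratio from a binary mask: sqrt of the ratio of the
    largest to smallest eigenvalue of the covariance matrix of the
    foreground pixel coordinates (major/minor axis lengths)."""
    ys, xs = np.nonzero(mask)
    evals = np.linalg.eigvalsh(np.cov(np.stack([ys, xs]).astype(float)))
    return float(np.sqrt(evals[-1] / evals[0]))


def percent_ar_change(ar_flow, ar_noflow):
    """Percentage change in AR between flow and no-flow conditions
    (assumed convention: change relative to the no-flow AR)."""
    return 100.0 * (ar_flow - ar_noflow) / ar_noflow


# Toy example: an elongated blob under flow vs. a rounder one at rest.
flow = np.zeros((20, 20)); flow[8:12, 2:18] = 1
rest = np.zeros((20, 20)); rest[6:14, 6:14] = 1
```

A deformable cell stretching under flow then shows a large positive percentage change, while a rigid (non-deformable) cell's AR barely moves.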


Fig 5.

Phase I network performance metrics.

(A) Two examples of typical input image tiles for the Phase I network, along with the corresponding manually labeled segmentation mask assigning each pixel in the image to one of three pixel classes (listed on the right). A(i) shows a tile with deformable sRBCs and non-functionally adhered / other objects, while A(ii) shows one with a non-deformable sRBC and another unclassified object. (B) (i) Training and validation history of the total cross-entropy / Jaccard loss function for the Phase I network. The solid curve corresponds to the loss averaged over 5 folds, while the same-colored light band denotes the spread (standard deviation) in the loss over these folds. Training history is shown in red and validation in blue (purple indicates overlap). (ii) Final 5-fold-averaged performance metric values for both training and validation reached by our Phase I network at the end of training over 50 epochs. Uncertainties indicate the spread around the mean of each metric over the 5 folds.
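A cross-entropy / Jaccard loss of the kind named above is commonly built from a per-pixel cross entropy plus a soft (differentiable) Jaccard term. The sketch below assumes an equal-weight sum and a (H, W, C) probability layout; the paper's exact weighting is not given here.

```python
import numpy as np


def soft_jaccard_loss(probs, onehot, eps=1e-7):
    """Soft Jaccard loss: 1 - |intersection| / |union| per class,
    computed from predicted pixel probabilities, then averaged."""
    inter = (probs * onehot).sum(axis=(0, 1))
    union = (probs + onehot - probs * onehot).sum(axis=(0, 1))
    return float(np.mean(1.0 - (inter + eps) / (union + eps)))


def cross_entropy_loss(probs, onehot, eps=1e-7):
    """Per-pixel categorical cross entropy, averaged over the tile."""
    return float(-np.mean(np.sum(onehot * np.log(probs + eps), axis=-1)))


def combined_loss(probs, onehot):
    # Equal-weight combination assumed for illustration.
    return cross_entropy_loss(probs, onehot) + soft_jaccard_loss(probs, onehot)


# Toy check: a perfect prediction drives both terms to (nearly) zero.
onehot = np.eye(3)[np.array([[0, 1], [2, 0]])]  # (2, 2, 3) one-hot mask
loss = combined_loss(onehot.copy(), onehot)
```

The Jaccard term directly penalizes poor overlap for the rarer pixel classes, which pure cross entropy can under-weight.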


Fig 6.

Phase II network performance metrics.

(A) Representative examples of single-cell images for each classifier category, the input for Phase II. (B) (i) Training and validation history of the loss function for the Phase II network. The solid curve corresponds to the loss averaged over 5 folds, while the same-colored light band denotes the spread (standard deviation) in the loss over these folds. Training history is shown in red and validation in blue (purple indicates overlap). (ii) Final 5-fold-averaged performance metric values for both training and validation reached by our Phase II network at the end of training over 30 epochs. Uncertainties indicate the spread around the mean of each metric over the 5 folds.


Table 2.

Results from the 5-fold cross-validation of the Phase II network.
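Per-class metrics of the kind tabulated here derive from a confusion matrix. A minimal sketch, assuming rows are true classes and columns predicted classes; the numbers below are a hypothetical 3-class matrix, not the paper's results.

```python
import numpy as np


def per_class_metrics(cm):
    """Per-class precision and recall from a confusion matrix
    (rows = true class, columns = predicted class)."""
    cm = np.asarray(cm, dtype=float)
    tp = np.diag(cm)
    precision = tp / cm.sum(axis=0)  # of all predicted-as-c, fraction correct
    recall = tp / cm.sum(axis=1)     # of all true-c, fraction recovered
    return precision, recall


# Hypothetical counts (deformable, non-deformable, non-sRBC).
cm = [[90, 5, 5],
      [4, 92, 4],
      [6, 3, 91]]
p, r = per_class_metrics(cm)
```

Averaging each array over the 5 cross-validation folds gives the fold-averaged figures reported with their spreads.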


Fig 7.

Interpreting the fine-tuned ResNet-50 model.

Class activation maps for representative cell types, highlighting the cell features that allow the Phase II network to classify each cell as either a deformable or a non-deformable sRBC. These heat maps are a measure of the model's attention [51, 52], with red corresponding to the highest activation, i.e. attention. Top rows show the original images, while bottom rows show the activation heat maps. (A-C) correspond to the deformable sRBC class, and (D-F) to the non-deformable sRBC class. Within each panel of cell images and class activation maps, the first column represents the original cell image with no data augmentation applied. The second column shows the same cell image with data augmentations such as reflection and rotation. The last two columns of each panel contain single-cell images intentionally modified to remove certain regions (black blocks) in order to confuse the network. The number in each panel is the probability assigned by the network that the cell is a deformable (A-C) or a non-deformable (D-F) sRBC. For the deformable sRBCs in (A-C), the network still classifies accurately when part of the dimple is blocked, but the probability drops when the entire dimple is blocked; hence for these cells the dimple is the key distinguishing feature. Analogously, for the non-deformable sRBCs in (D-F) the network needs to see at least one sharp endpoint or a majority of the edge to classify reliably.
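In the original class activation map formulation [51], the map for a class is the final convolutional feature maps weighted by that class's fully connected weights (possible here because ResNet-50 ends in global average pooling followed by a single FC layer). A toy sketch, with illustrative array shapes:

```python
import numpy as np


def class_activation_map(features, fc_weights, cls):
    """CAM for one class: weight the final conv feature maps (C, H, W)
    by the FC weights of the target class, then min-max normalize to
    [0, 1] for display as a heat map."""
    cam = np.tensordot(fc_weights[cls], features, axes=1)  # -> (H, W)
    cam -= cam.min()
    if cam.max() > 0:
        cam /= cam.max()
    return cam


# Toy features: 2 channels on a 4x4 grid; class 0 attends to channel 0.
feats = np.zeros((2, 4, 4))
feats[0, 1:3, 1:3] = 1.0           # a "dimple-like" hot region
w = np.array([[1.0, 0.0]])         # FC weights, shape (n_classes, C)
cam = class_activation_map(feats, w, cls=0)
```

For display, the low-resolution map is upsampled to the input image size and overlaid as the red-to-blue heat map shown in the figure.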


Fig 8.

Manual vs machine learning (ML) performance.

Counts from 19 whole-microchannel images processed through our automated two-part pipeline, pitted against manual characterization. Error bars along the manual axis are obtained from the variance in repeated manual counts on a set of test images. The red line is the line of perfect agreement. Error bars on ML counts are estimated from the precision rates reached by our Phase II classifier network in predicting true-positive outcomes in the relevant categories on a validation set (see Fig 6). R2 statistic values, indicating the goodness of agreement between manual and ML counts, are shown in each graph. A: Results for total sRBC (deformable + non-deformable) cell counts. This plot illustrates the high degree of accuracy achieved by our ML pipeline in identifying sRBCs. B and C: Results for the number of sRBCs in each channel image classified, manually and by ML, as deformable or non-deformable respectively. This measures the agreement reached in classifying the two morphological categories.
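An R2 against the line of perfect agreement (rather than a fitted regression line) can be computed as below; the toy counts are invented for illustration only.

```python
import numpy as np


def r_squared(manual, ml):
    """Coefficient of determination between manual and ML counts,
    with residuals measured against the line of perfect agreement
    (ml == manual), not a fitted line."""
    manual = np.asarray(manual, dtype=float)
    ml = np.asarray(ml, dtype=float)
    ss_res = np.sum((ml - manual) ** 2)
    ss_tot = np.sum((manual - manual.mean()) ** 2)
    return 1.0 - ss_res / ss_tot


# Hypothetical per-channel counts (manual vs. ML).
manual = [120, 80, 60, 150, 95]
ml = [118, 83, 58, 149, 97]
r2 = r_squared(manual, ml)
```

Values near 1 indicate the ML counts track the manual counts closely across channels.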
