
Fig 1.

Pipelines of peripheral leukocyte recognition.

(A) Leukocyte recognition as traditional feature engineering: manual segmentation, feature extraction and selection, followed by a classifier trained on the resulting feature matrix; (B) leukocyte recognition as object classification: patches containing leukocyte candidates are cropped from the original image, manually or by segmentation approaches, and fed into a CNN-based deep learning classifier that outputs the leukocyte type; (C) leukocyte recognition as object detection: the original image is fed into a CNN-based deep learning detector, which outputs the leukocyte types together with their locations.
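The three pipelines differ mainly in where manual engineering ends and the learned model begins: (A) is manual up to the final classifier, (B) hands cropped patches to a CNN classifier, and (C) is a single CNN detector end to end. A minimal sketch of the three interfaces; all function names here are illustrative, not the paper's code:

```python
def pipeline_a(image, segment, extract_features, classify):
    """Feature engineering: manual segmentation, hand-crafted features, then a classifier."""
    cells = segment(image)
    return [classify(extract_features(cell)) for cell in cells]

def pipeline_b(image, crop_candidates, cnn_classify):
    """Object classification: crop candidate patches, then a CNN classifies each patch."""
    return [cnn_classify(patch) for patch in crop_candidates(image)]

def pipeline_c(image, cnn_detect):
    """Object detection: one CNN maps the whole image to (type, bounding box) pairs."""
    return cnn_detect(image)
```

Pipeline (C) is the only one whose output includes locations alongside types, which is why the paper frames leukocyte recognition as an object-detection problem.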


Fig 2.

Selected samples of peripheral leukocytes.

(A) blast; (B) promyelocyte; (C) myelocyte; (D) metamyelocyte; (E) band neutrophil; (F) segmented neutrophil; (G) lymphocyte; (H) monocyte; (I) eosinophil; (J) basophil; (K) reactive lymphocyte.


Fig 3.

Dataset organization and category distribution.

(A) proportions of the training, detection test, and classification test sets; (B) distribution of the training set (14,700 images, 11 types of peripheral leukocytes); (C) distributions of the detection test set (reddish-brown chart, 1,120 images) and the classification test set (green chart, 7,868 images).


Fig 4.

SSD architecture.

To obtain multi-scale feature maps for detection, several feature layers (Conv6_2, Conv7_2, Conv8_2 and Conv9_2) were added to the end of the base network (VGG16). The larger feature maps, such as Conv4_3, were used to detect small leukocytes, while the smaller feature maps were used to detect large ones.
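The default-box sizes across these feature layers are typically spaced by the SSD paper's linear scale rule, s_k = s_min + (s_max − s_min)(k − 1)/(m − 1), with s_min the hyperparameter varied in Fig 7. A minimal sketch of this arithmetic, assuming the SSD defaults s_max = 0.9 and m = 6 feature layers:

```python
def default_box_scales(m=6, s_min=0.2, s_max=0.9):
    """Linearly spaced default-box scales for m feature maps (SSD rule)."""
    if m == 1:
        return [s_min]
    return [s_min + (s_max - s_min) * (k - 1) / (m - 1) for k in range(1, m + 1)]

scales = default_box_scales()
print([round(s, 3) for s in scales])  # → [0.2, 0.34, 0.48, 0.62, 0.76, 0.9]
```

Raising s_min shifts all scales toward larger boxes, which matters here because leukocytes and especially NRBCs (Fig 9) occupy only a small fraction of the image.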


Table 1.

Comparison of detection results using SSD and YOLOv3 series models.


Fig 5.

YOLOv3 architecture.

(A) YOLOv3 pipeline with input image size 416×416 and three feature maps (13×13×69, 26×26×69 and 52×52×69) as output; (B) the basic element of YOLOv3, Darknet_conv2D_BN_Leaky ("DBL" for short), composed of one convolution layer, one batch normalization layer and one leaky ReLU layer; (C) two "DBL" structures followed by one "add" layer form a residual-like unit ("ResUnit" for short); (D) several "ResUnit"s with one zero-padding layer and one "DBL" structure form a residual-like block ("ResBlock" for short), the module element of Darknet-53; (E) sample detection results for peripheral leukocytes using the YOLOv3 approach; the 732×574 images were resized to 416×416 as input.
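The three output shapes in panel (A) follow directly from the input size and YOLOv3's strides of 32, 16 and 8, with 3 anchors per grid cell and 5 + C channels per anchor (4 box offsets, 1 objectness score, C class scores); the 69 channels shown imply C = 18 classes under this convention. A minimal sketch of the arithmetic (the function name is illustrative):

```python
def yolo_output_shapes(input_size, num_classes, strides=(32, 16, 8), anchors_per_cell=3):
    """Grid size = input // stride; channels = anchors * (4 box + 1 objectness + classes)."""
    channels = anchors_per_cell * (5 + num_classes)
    return [(input_size // s, input_size // s, channels) for s in strides]

print(yolo_output_shapes(416, 18))  # → [(13, 13, 69), (26, 26, 69), (52, 52, 69)]
```

The coarse 13×13 map detects large cells, while the fine 52×52 map covers small ones, mirroring the multi-scale design of SSD in Fig 4.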


Fig 6.

YOLOv3-tiny architecture.

(A) the pipeline of YOLOv3-tiny with two branch outputs, y1 and y2; (B) the basic element ("DBL"), composed of one convolution layer, one batch normalization layer and one leaky ReLU layer; (C) one maxpool layer and one "DBL" structure form an "MDBL_Unit" for short.


Fig 7.

Convergence studies of training loss with respect to the number of iterations.

(A) Effect on SSD model convergence of different default-box scales (via the hyperparameter Smin) and input sizes; (B) effect on YOLOv3 model convergence of different input sizes and backbone networks.


Fig 8.

Precision versus recall.

(A) SSD300×300_Smin = 0.2 with a mAP of 0.931; (B) YOLOv3_320×320 with a mAP of 0.925.
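Each mAP value is the mean over per-class average precision (AP), where AP is the area under a precision–recall curve like those plotted here. A minimal sketch of all-point interpolated AP (function name illustrative, not the paper's evaluation code):

```python
def average_precision(recalls, precisions):
    """All-point interpolated AP: area under the monotone precision envelope."""
    # Add sentinels at recall 0 and 1, then make precision non-increasing from the right.
    r = [0.0] + list(recalls) + [1.0]
    p = [0.0] + list(precisions) + [0.0]
    for i in range(len(p) - 2, -1, -1):
        p[i] = max(p[i], p[i + 1])
    # Sum precision * recall-step over the curve.
    return sum((r[i + 1] - r[i]) * p[i + 1] for i in range(len(r) - 1))

# Toy curve: perfect precision up to recall 0.5, then precision 0.5 out to recall 1.0.
print(average_precision([0.5, 1.0], [1.0, 0.5]))  # → 0.75
```

Averaging such per-class AP values over all detected cell classes yields the 0.931 and 0.925 mAP figures quoted above.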


Table 2.

Comparison of time consumption across different models.


Fig 9.

Detection of small cells and dense scenes.

(A) NRBC detection, with the results of the SSD300×300_Smin = 0.2 model on top and YOLOv3_320×320 on the bottom; (B) detection in a leukocyte-dense scene, with the results of the SSD300×300_Smin = 0.2 model on top and YOLOv3_320×320 on the bottom.


Fig 10.

Confusion matrices for the selected top-2 models on the 7,868-image classification test set.

(A) confusion matrix for the SSD300×300_Smin = 0.2 model, with a mean accuracy of 90.09%; (B) confusion matrix for the YOLOv3_320×320 model, with a mean accuracy of 89.36%.
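"Mean accuracy" over a confusion matrix can denote either overall accuracy (diagonal sum over total) or the mean of per-class recalls; the caption does not say which definition the paper uses. A minimal sketch of both, with rows as true classes and columns as predictions:

```python
def overall_accuracy(cm):
    """Fraction of all samples on the diagonal (trace / total)."""
    total = sum(sum(row) for row in cm)
    correct = sum(cm[i][i] for i in range(len(cm)))
    return correct / total

def mean_per_class_accuracy(cm):
    """Average of per-class recall (diagonal entry / row sum)."""
    recalls = [cm[i][i] / sum(cm[i]) for i in range(len(cm))]
    return sum(recalls) / len(recalls)

# Toy 2-class matrix with unequal class sizes to show the two values can differ.
cm = [[90, 10],
      [10, 40]]
print(round(overall_accuracy(cm), 4))         # → 0.8667
print(round(mean_per_class_accuracy(cm), 4))  # → 0.85
```

With the class imbalance visible in Fig 3 (eosinophils and basophils are rare), the two definitions can diverge noticeably, so the choice matters when comparing the 90.09% and 89.36% figures.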
