Fig 1.
The features were extracted from the original radiographs by the backbone network. RoIs were screened out by the RPN, which produced two losses during training. The classification network and the mask network produced their losses and predictions, respectively. A softmax was used to output the probability of each RoI being a lesion area, and a per-pixel sigmoid was used to output a mask. RoI, rectangular region of interest; RPN, region proposal network.
Fig 2.
IoU, intersection over union; AP50, average precision at IoU = 0.50.
Fig 3.
ROC curves of CheXLocNets on the validation set.
Each plot illustrates the ROC curves of CheXLocNets on the validation set. The ROC curve of the algorithm is generated by varying the discrimination threshold (used to convert the output probabilities to binary predictions). ROC, receiver operating characteristic.
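The caption's description of generating an ROC curve by varying the discrimination threshold can be sketched in a few lines of Python. This is an illustrative re-implementation, not the authors' code; the function name `roc_curve_points` and the toy inputs are our own.

```python
import numpy as np

def roc_curve_points(probs, labels):
    """Sweep the discrimination threshold over the predicted probabilities
    and return the (FPR, TPR) points that trace out the ROC curve."""
    order = np.argsort(-np.asarray(probs))      # sort by descending probability
    labels = np.asarray(labels)[order]
    n_pos = labels.sum()                        # total positives
    n_neg = len(labels) - n_pos                 # total negatives
    fpr, tpr = [0.0], [0.0]
    tp = fp = 0
    # Lowering the threshold past each score flips one more prediction to positive.
    for y in labels:
        if y:
            tp += 1
        else:
            fp += 1
        tpr.append(tp / n_pos)
        fpr.append(fp / n_neg)
    return fpr, tpr
```

Each threshold converts the output probabilities into binary predictions; plotting TPR against FPR over all thresholds yields the curve shown in the figure.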
Fig 4.
The working procedure of six CheXLocNets.
We first trained and evaluated six CheXLocNets separately. We then selected the CheXLocNet with the best sensitivity and the one with the best specificity and combined them into an ensemble model.
Table 1.
The classification performance of CheXLocNets on the validation set.
Fig 5.
ROC curves of CheXLocNets on the testing set.
Each plot illustrates the ROC curves of CheXLocNets on the testing set. The ROC curve of the algorithm is generated by varying the discrimination threshold (used to convert the output probabilities to binary predictions). ROC, receiver operating characteristic.
Table 2.
The classification performance of CheXLocNets on the testing set.
Fig 6.
An example of a chest radiology report.
We highlight the location of the pneumothorax lesion in the chest radiograph (left). The segmentation probabilities output by CheXLocNet are presented in varying shades of red (right). CheXLocNet correctly detected the pneumothorax and roughly masked the lesion area.
Table 3.
Comparison of IoU and Dice scores on the testing set.
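For reference, the two overlap metrics compared in the table can be computed from a predicted and a ground-truth binary mask as follows. This is a minimal sketch (the function name `iou_dice` and the edge-case handling for empty masks are our assumptions, not taken from the paper).

```python
import numpy as np

def iou_dice(pred, target):
    """IoU and Dice score between two binary segmentation masks.

    IoU  = |A ∩ B| / |A ∪ B|
    Dice = 2|A ∩ B| / (|A| + |B|)
    """
    pred = np.asarray(pred).astype(bool)
    target = np.asarray(target).astype(bool)
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    total = pred.sum() + target.sum()
    # Convention assumed here: two empty masks count as a perfect match.
    iou = inter / union if union else 1.0
    dice = 2 * inter / total if total else 1.0
    return iou, dice
```

Both metrics range from 0 (no overlap) to 1 (perfect overlap); Dice weights the intersection more heavily, so it is always at least as large as IoU on the same pair of masks.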