Fig 1.
The structure of multi-edge features fusion attention network.
Fig 2.
Details of the multi-edge features fusion attention block.
The position features are derived from the pseudo-mask, while the image features and edge features are extracted from the image and the edge map, respectively.
Table 1.
Preliminary tuning of c1 and c2.
Table 2.
Details of generating the original pseudo-mask based on U-Net.
Fig 3.
Details of generating the enhanced pseudo-mask.
Fig 4.
Details of training the MEFFA-Net with the semi-supervised pseudo-mask augmentation strategy.
Table 3.
Details of the datasets used in our experiments.
Fig 5.
The training and validation curves on the MoNuSeg dataset.
Fig 6.
Qualitative results of our models with 100% training samples.
The first row is from MoNuSeg, the second row from CPM-17, and the last row from CoNSeP. Column (b), Ours (Overlaid), shows the overlay based on our prediction. Column (c), Prediction, shows the output of the MEFFA-Net. In the difference maps (d), following [38], blue, green and red areas indicate true positive, false positive and false negative segmentation, respectively.
Table 4.
Comparative experimental results on multiple datasets.
Fig 7.
Qualitative comparisons of data dependency.
The first, second and last rows are from the MoNuSeg, CPM-17 and CoNSeP datasets, respectively. In each row, element (a) is the original image, while elements (b), (c), (d), and (e) are difference maps of the semi-supervised and fully supervised methods, following [38]; the blue, green and red areas indicate true positive, false positive and false negative segmentation, respectively. Elements (f), (g), (h), and (i) are the segmentation results of the semi-supervised and fully supervised methods.
Table 5.
Comparative experimental results on multiple datasets.
Table 6.
Ablation study results on the MoNuSeg dataset.
Fig 8.
Overlay of a training sample and details of the segmentation results.
The first, second and last rows are from the MoNuSeg, CPM-17 and CoNSeP datasets, respectively. Columns (d) and (e) show the ground truth and our segmentation results. In the difference maps (f), blue, green and red areas indicate true positive, false positive and false negative segmentation, respectively.