
Fig 1.

The structure of the multi-edge features fusion attention network (MEFFA-Net).


Fig 2.

Details of the multi-edge features fusion attention block.

The position features are derived from the pseudo-mask, while the image features and edge features are derived from the image and the edge map, respectively.


Table 1.

Pre-experiment tuning of c1 and c2.


Table 2.

Details of generating the original pseudo-mask based on U-Net.


Fig 3.

Details of generating the enhanced pseudo-mask.


Fig 4.

Details of training the MEFFA-Net using the semi-supervised pseudo-mask augmentation strategy.


Table 3.

Details of the datasets used in our experiments.


Fig 5.

The training and validation graphs on the MoNuSeg dataset.


Fig 6.

Qualitative results of our models with 100% training samples.

The first row is from MoNuSeg, the second from CPM-17 and the last from CoNSeP. In Ours (Overlaid) (b), the overlay based on our prediction is shown. In Prediction (c), the output of the MEFFA-Net is shown. In the difference maps (d), following [38], blue, green and red areas indicate true positive, false positive and false negative segmentation, respectively.
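The TP/FP/FN coloring used in the difference maps above can be sketched for binary masks as follows (a minimal illustration; the function name and exact RGB values are assumptions, not the paper's implementation):

```python
import numpy as np

def difference_map(pred: np.ndarray, gt: np.ndarray) -> np.ndarray:
    """Color-code a binary prediction against its ground truth.

    Blue  = true positive  (predicted foreground that is also ground-truth foreground)
    Green = false positive (predicted foreground on ground-truth background)
    Red   = false negative (ground-truth foreground that was missed)
    """
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    out = np.zeros((*pred.shape, 3), dtype=np.uint8)
    out[pred & gt] = (0, 0, 255)    # blue: true positive
    out[pred & ~gt] = (0, 255, 0)   # green: false positive
    out[~pred & gt] = (255, 0, 0)   # red: false negative
    return out                      # true negatives stay black
```

Applied per image, this yields the three-color panels shown in column (d).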


Table 4.

Comparative experimental results on multiple datasets.


Fig 7.

Qualitative comparisons of data dependency.

The first, second and last rows are from the MoNuSeg, CPM-17 and CoNSeP datasets, respectively. In each row, (a) is the original image, while (b), (c), (d) and (e) are difference maps of the semi-supervised and fully supervised methods, following [38]: blue, green and red areas indicate true positive, false positive and false negative segmentation, respectively. Elements (f), (g), (h) and (i) are the segmentation results of the semi-supervised and fully supervised methods.


Table 5.

Comparative experimental results on multiple datasets.


Table 6.

Ablation study results on the MoNuSeg dataset.


Fig 8.

Overlays of training samples and details of the segmentation results.

The first, second and last rows are from the MoNuSeg, CPM-17 and CoNSeP datasets, respectively. Columns (d) and (e) show the ground truth and the segmentation results of our method. In the difference map (f), blue, green and red areas indicate true positive, false positive and false negative segmentation, respectively.
