Fig 1.

Training data representations used to train deep learning models.

Image (a) and ground truth (b) show a crop of the simulated Cell Tracking Challenge data set Fluo-N2DH-SIM+ [7, 9]. Generated boundaries (c) and borders (d) can be used to split touching cells. Many training data sets contain only a few touching cells, resulting in few training samples for borders and boundaries between cells. The combination of cell distances (e) with neighbor distances (f) aims to solve this problem, since models can also learn from close cells.
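
The cell distance representation (e) can be read as a per-cell normalized distance transform. A minimal Python/SciPy sketch, where the function name cell_distance_labels and the per-cell max-normalization are illustrative assumptions and not the authors' exact label-generation code:

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def cell_distance_labels(instance_mask):
    """Per-cell normalized Euclidean distance to the cell border.

    instance_mask: 2D integer array, 0 = background, 1..N = cell ids.
    """
    cell_dist = np.zeros(instance_mask.shape, dtype=np.float32)
    for cell_id in np.unique(instance_mask):
        if cell_id == 0:  # skip background
            continue
        cell = instance_mask == cell_id
        dist = distance_transform_edt(cell)  # distance to the cell border
        if dist.max() > 0:
            dist = dist / dist.max()         # normalize to [0, 1] per cell
        cell_dist[cell] = dist[cell]
    return cell_dist
```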

Fig 2.

Overview of the proposed segmentation method using distance predictions (adapted from [13]).

The CNN consists of a single encoder that is connected to both decoder paths. The network is trained to predict cell distances and neighbor distances that are used for the watershed-based post-processing. The input image shows a crop of the Cell Tracking Challenge data set Fluo-N2DH-GOWT1 [7, 9].
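
As a rough illustration of the single-encoder/two-decoder layout, a minimal PyTorch sketch with only two resolution levels could look as follows; the class name DualDecoderNet, the channel widths, and the plain bilinear upsampling are assumptions for illustration, not the architecture used in the paper:

```python
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
    )

class DualDecoderNet(nn.Module):
    """Shared encoder with two decoder paths: one regresses cell distances,
    the other neighbor distances (hypothetical minimal sketch)."""

    def __init__(self, in_ch=1, base_ch=32):
        super().__init__()
        self.enc1 = conv_block(in_ch, base_ch)
        self.enc2 = conv_block(base_ch, 2 * base_ch)
        self.pool = nn.MaxPool2d(2)
        self.up = nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False)
        # one decoder per output map, both fed by the shared encoder features
        self.dec_cell = conv_block(3 * base_ch, base_ch)
        self.dec_neigh = conv_block(3 * base_ch, base_ch)
        self.head_cell = nn.Conv2d(base_ch, 1, 1)
        self.head_neigh = nn.Conv2d(base_ch, 1, 1)

    def forward(self, x):
        s1 = self.enc1(x)              # full-resolution features
        s2 = self.enc2(self.pool(s1))  # half-resolution features
        up = torch.cat([self.up(s2), s1], dim=1)
        cell_dist = self.head_cell(self.dec_cell(up))
        neighbor_dist = self.head_neigh(self.dec_neigh(up))
        return cell_dist, neighbor_dist
```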

Fig 3.

Main steps of the neighbor distance creation.

After the automated selection of a cell (a), indicated in red, the selected cell and the background are converted to foreground (white in b), while the other cells are converted to background (black in b). Then, the distance transform is calculated (c), cut to the cell region, and normalized (d). After inversion (e), the steps are repeated for the remaining cells (f). Finally, the grayscale-closed neighbor distances (g) are scaled (h). Shown is a crop of the Broad Bioimage Benchmark Collection data set BBBC039v1 [26].
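
The steps (a)–(h) translate almost directly into code. A Python/SciPy sketch, where the closing size and the scaling exponent are illustrative parameters rather than the settings used in the paper:

```python
import numpy as np
from scipy.ndimage import distance_transform_edt, grey_closing

def neighbor_distance_labels(instance_mask, closing_size=3, scale_power=3):
    """Neighbor distances following the steps of Fig 3 (illustrative sketch).

    instance_mask: 2D integer array, 0 = background, 1..N = cell ids.
    """
    neighbor_dist = np.zeros(instance_mask.shape, dtype=np.float32)
    for cell_id in np.unique(instance_mask):
        if cell_id == 0:
            continue
        cell = instance_mask == cell_id
        # (b) selected cell and background -> foreground, other cells -> background
        foreground = (instance_mask == 0) | cell
        # (c) distance to the nearest other cell
        dist = distance_transform_edt(foreground)
        # (d) cut to the cell region and normalize
        dist_cell = dist[cell]
        if dist_cell.max() > 0:
            dist_cell = dist_cell / dist_cell.max()
        # (e) invert, so that pixels close to neighboring cells get high values
        neighbor_dist[cell] = 1.0 - dist_cell
    # (g) grayscale closing and (h) scaling
    neighbor_dist = grey_closing(neighbor_dist, size=(closing_size, closing_size))
    return neighbor_dist ** scale_power
```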

Fig 4.

Robustness of training data representations to annotation inconsistencies.

Small changes in the ground truth, simulated with morphological erosions and dilations, result in different boundaries and borders (first and second row). The difference images between the first and the second row show that the changes in the distance labels are smoother. Shown is a crop of the Cell Tracking Challenge data set Fluo-N2DH-SIM+ [7, 9].
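
Such annotation inconsistencies can be simulated by randomly eroding or dilating each labeled cell before the training representations are generated. A small scikit-image sketch; the structuring-element radius and the per-cell random choice are illustrative assumptions:

```python
import numpy as np
from skimage.morphology import binary_dilation, binary_erosion, disk

def perturb_annotations(instance_mask, radius=1, seed=0):
    """Randomly erode or dilate each labeled cell to mimic small
    annotation inconsistencies (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    footprint = disk(radius)
    perturbed = np.zeros_like(instance_mask)
    for cell_id in np.unique(instance_mask):
        if cell_id == 0:
            continue
        cell = instance_mask == cell_id
        op = binary_erosion if rng.random() < 0.5 else binary_dilation
        # note: dilated cells may overwrite neighbors where they overlap
        perturbed[op(cell, footprint)] = cell_id
    return perturbed
```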

Fig 5.

Overview of the watershed post-processing for segmentation.

The post-processing consists of a threshold-based seed extraction and mask creation, and a watershed. The predictions show a 2D crop of the Cell Tracking Challenge data set Fluo-N3DL-TRIC [7, 9].
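
A minimal version of this seed extraction, mask creation, and watershed could look as follows (Python/scikit-image); the thresholds and the seed definition via the difference of the two distance predictions are illustrative assumptions, not the tuned settings of the method:

```python
from scipy import ndimage
from skimage.segmentation import watershed

def distance_watershed(cell_dist, neighbor_dist, th_cell=0.1, th_seed=0.5):
    """Threshold-based seeds and mask followed by a watershed (sketch).

    cell_dist, neighbor_dist: predicted distance maps in [0, 1].
    """
    # mask: every pixel that belongs to some cell
    mask = cell_dist > th_cell
    # seeds: cell interiors, suppressed where neighboring cells are close
    seeds = (cell_dist - neighbor_dist) > th_seed
    markers, _ = ndimage.label(seeds)
    # flood from the seeds on the inverted cell distance, restricted to the mask
    return watershed(-cell_dist, markers=markers, mask=mask)
```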

Fig 6.

Graph construction steps, exemplified for four segmented objects.

Edges added in a construction step are black; edges added in previous steps are gray. The gray nodes (O) correspond to segmented objects. The segmented objects from {t − Δt, …, t} are the last matched objects of all active tracks, whereas the segmented objects at t + 1 are not yet matched to tracks. The blue node models the appearance of objects (A), the red node the disappearance of objects (D), and the green node split events (S). Split event nodes (S) are added for each pair of objects at t + 1. Therefore, a split event node (S) has exactly two outgoing edges but can have several incoming edges from object nodes (O). Source (S−) and sink (S+) nodes are added for the formulation as a coupled minimum cost flow problem.
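
For illustration only, the node and edge structure described above can be sketched with networkx; the node names, the restriction to a single matching step, and the omission of costs and capacities are simplifications of the coupled minimum cost flow formulation:

```python
import itertools
import networkx as nx

def build_matching_graph(objects_t, objects_t1):
    """Sketch of the graph structure of Fig 6 for one matching step.

    objects_t:  ids of the last matched objects of the active tracks.
    objects_t1: ids of the segmented objects at t + 1.
    """
    g = nx.DiGraph()
    g.add_node("source")  # S-
    g.add_node("sink")    # S+
    g.add_node("A")       # appearance node
    g.add_node("D")       # disappearance node
    g.add_edge("source", "A")
    g.add_edge("D", "sink")
    for p in objects_t:
        g.add_edge("source", ("O", "t", p))
        g.add_edge(("O", "t", p), "D")                  # a track may end
    for o in objects_t1:
        g.add_edge(("O", "t+1", o), "sink")
        g.add_edge("A", ("O", "t+1", o))                # an object may appear
        for p in objects_t:
            g.add_edge(("O", "t", p), ("O", "t+1", o))  # one-to-one match
    # one split node per pair of objects at t + 1
    for o1, o2 in itertools.combinations(objects_t1, 2):
        s = ("S", o1, o2)
        for p in objects_t:
            g.add_edge(("O", "t", p), s)                # several incoming edges
        g.add_edge(s, ("O", "t+1", o1))                 # exactly two outgoing edges
        g.add_edge(s, ("O", "t+1", o2))
    return g
```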

Fig 7.

Cell Tracking Challenge data set structure.

Since no ground truths are publicly available for the challenge sets, the two provided training sets need to be split into a set used for training and a set for evaluation.

Table 1.

Boundary and border information in the CTC training set.

Fig 8.

Segmentation results on the BF-C2DL-HSC test set.

Shown are raw predictions and segmentations of a 140 px × 140 px test image crop (a–g, best OPCSB models). For multi-channel outputs, channels are color-coded (cell/seed class: white, boundary/border/touching class: red, gap class: blue). The plot at the bottom shows the evaluation on the test set (h).

Fig 9.

Segmentation results on the Fluo-N3DH-CE test set.

Shown are raw predictions and segmentations of a 140 px × 140 px test image crop (a–g, best OPCSB models). For multi-channel outputs, channels are color-coded (cell/seed class: white, boundary/border/touching class: red, gap class: blue). The plot at the bottom shows the evaluation on the test set (h). Note: this is a 3D data set, and the erroneous merging of cells can result from any of the slices in which a cell appears.

Fig 10.

Segmentation results on the Fluo-N2DL-HeLa test set.

Shown are raw predictions and segmentations of a 140 px × 140 px test image crop (a–g, best OPCSB models). For multi-channel outputs, channels are color-coded (cell/seed class: white, boundary/border/touching class: red, gap class: blue). The plot at the bottom shows the evaluation on the test set (h).

Fig 11.

Segmentation results on the BF-C2DL-MuSC test set.

Shown are raw predictions and segmentations of a 360 px × 360 px test image crop (a–g, best OPCSB models). For multi-channel outputs, channels are color-coded (cell/seed class: white, boundary/border/touching class: red, gap class: blue). The plot at the bottom shows the evaluation on the test set (h).

Table 2.

Cell tracking benchmark and cell segmentation benchmark results (5th edition).

Top 3 rankings in the overall performances OPCSB and OPCTB are written in bold. The corresponding Cell Tracking Benchmark and Cell Segmentation Benchmark leaderboards are available on the Cell Tracking Challenge website.

Fig 12.

Segmentation result of the Fluo-N3DH-CE challenge data.

The maximum intensity projections of the raw data (left) and the segmentation (right) show that cells can be segmented well even on this challenging data set. S1 Video shows the tracked developing embryo.

Fig 13.

Tracking results on the Fluo-N2DL-HeLa challenge data.

The first raw image is overlaid with the tracks starting in the first frame. For better visibility, tracks starting in later frames are excluded. S2 Video shows the tracked cells.
