Fig 1.
Example of a tracked organoid.
Microscopy image slices in the xy and xz planes, using H2B-mCherry to visualize the cell nuclei. Blue arrows indicate that the two image slices in the same panel show the same pixels. Note that the resolution is lower in the z-direction. The images show that the nuclei both vary in intensity and visually overlap (orange arrows), especially in the z-direction. Scale bar is 10 μm.
Fig 2.
Overview of the tracking software.
Using ground-truth data of nucleus locations in microscopy images (obtained from manual annotation), a convolutional neural network is trained. The trained network is then used to detect cells in new images (step 1). The detections are linked over time (step 2), after which the user manually corrects the output with the help of an error checker (step 3). In principle, the manually corrected data can be used as additional ground truth to retrain the convolutional neural network, enabling efficient, iterative improvement of its performance (dashed line).
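The loop described above maps naturally onto a short program. The following is a minimal sketch of this workflow, with the four stages supplied as callables; any concrete stage implementation is hypothetical and not part of the actual software.

```python
# Minimal sketch of the iterative workflow of Fig 2. The stage functions
# (train, detect, link, correct) are passed in as callables because their
# real implementations are not specified here.

def track_iteratively(ground_truth, images, train, detect, link, correct, rounds=3):
    """Run train -> detect (step 1) -> link (step 2) -> correct (step 3),
    then feed the corrected tracks back as extra ground truth (dashed line)."""
    training_data = list(ground_truth)
    tracks = None
    for _ in range(rounds):
        network = train(training_data)        # train the CNN on current ground truth
        detections = detect(network, images)  # step 1: detect nuclei in new images
        tracks = link(detections)             # step 2: link detections over time
        tracks = correct(tracks)              # step 3: manual error correction
        training_data.extend(tracks)          # reuse corrections as training data
    return tracks
```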
Fig 3.
Automated detection of cell nuclei by a convolutional neural network.
(A) Schematic overview of the network. The network is a standard convolutional network with absolute pixel coordinates added as input values, which makes it possible to adapt its detections both to cells deep inside the organoid and to cells near the objective. (B) Example of training data. On the left, a slice of a 3D input image; on the right, Gaussian targets created from manual tracking data. The brightness of each pixel represents the likelihood of that pixel being a nucleus center. (C) Example of a network prediction, showing the probability of each pixel being a nucleus center, according to the neural network. (D) Accuracy of the neural network over time, compared to manual tracking data obtained in a subsection of a single organoid. For six time points (annotated with red circles), the tracking data was compared to manually annotated data for the entire organoid; in this case, we observed a lower precision. For this organoid, the recall is 0.97, while the precision is 0.98. (E) Same data as in Panel D, now plotted over z. The accuracy of the network drops in the deepest imaged parts, likely because part of each nucleus falls outside the image stack there.
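To make panels A and B concrete, the two ingredients, coordinate channels appended to the input and Gaussian training targets built from manual annotations, can be sketched in a few lines of numpy. The coordinate normalization and the fixed isotropic sigma are illustrative assumptions, not necessarily the choices made by the actual network.

```python
import numpy as np

def add_coordinate_channels(volume):
    """Append absolute (z, y, x) pixel coordinates as extra input channels
    (panel A), letting the network adapt its detections to imaging depth.
    Normalizing coordinates to [0, 1) is an assumption of this sketch."""
    coords = np.indices(volume.shape, dtype=np.float32)
    coords /= np.array(volume.shape, dtype=np.float32).reshape(3, 1, 1, 1)
    return np.concatenate([volume[np.newaxis].astype(np.float32), coords], axis=0)

def gaussian_targets(shape, centers, sigma=2.0):
    """Build the training target of panel B: a volume in which every
    annotated nucleus center (z, y, x) becomes a Gaussian peak, so pixel
    brightness reflects the likelihood of being a nucleus center."""
    zz, yy, xx = np.indices(shape, dtype=np.float32)
    target = np.zeros(shape, dtype=np.float32)
    for cz, cy, cx in centers:
        dist_sq = (zz - cz) ** 2 + (yy - cy) ** 2 + (xx - cx) ** 2
        target = np.maximum(target, np.exp(-dist_sq / (2.0 * sigma ** 2)))
    return target
```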
Fig 4.
Overview and results of the linking algorithm.
(A) Lineage trees obtained by linking with the nearest-neighbor method. (B) Raw lineage trees obtained by our linking algorithm. (C) Example of a network of links. There are many possibilities for linking the cell detections in different time points together. Of all possible links (displayed as arrows), the most likely ones (displayed as blue arrows) are selected using a scoring system. (D) Original microscopy image, the Gaussian fit, and the intensity profiles of both. The intensity profiles run along the marked row in the image. We chose a section that highlights that our approach can select nuclei and ignore cell debris, even though both have similar fluorescence intensity profiles. (E) Nucleus volume of three nuclei, extracted from a Gaussian fit. The volume, defined as V = Cov(x, x) ⋅ Cov(y, y) ⋅ Cov(z, z), is calculated for three non-dividing cells; the outliers are caused by errors in the Gaussian fit. (F) Accuracy of the linking algorithm presented here for a single organoid, over time and over the image depth. Note that the precision is higher than for nearest-neighbor linking.
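Two elements of this figure translate directly into code. Below is a sketch of scoring-based link selection between consecutive time points (panel C) using a standard assignment solver, together with the volume measure of panel E. The real scoring system additionally handles divisions and cells entering or leaving the image, which a plain one-to-one assignment cannot represent.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def select_links(scores):
    """Pick the most likely one-to-one links between detections in two
    consecutive time points from a (n_now, n_next) score matrix (panel C).
    A sketch only; divisions and appearing/disappearing cells need a
    richer scoring system than pure assignment."""
    rows, cols = linear_sum_assignment(-np.asarray(scores))  # maximize total score
    return list(zip(rows.tolist(), cols.tolist()))

def nucleus_volume(covariance):
    """Volume measure used in panel E, computed from the covariance
    matrix of a fitted 3D Gaussian:
    V = Cov(x, x) * Cov(y, y) * Cov(z, z)."""
    return covariance[0, 0] * covariance[1, 1] * covariance[2, 2]
```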
Fig 5.
Analysis of a fully tracked organoid.
(A) Single microscopy image slices of an intestinal organoid from a time-lapse movie of 65 hours, in steps of 16.25 hours. H2B-mCherry was used to visualize the nuclei. Scale bar is 40 μm. (B) Digital reconstruction of the same organoid as in Panel A. Cells of the same color originate from the same cell at the beginning of the experiment. (C) Selected lineage trees. The colors match the colors in the reconstruction of Panel B. (D) Number of cellular divisions over time in a single organoid. (E) Number of times each cell divided during the time lapse, from 0 (white) to 5 (red). For gray cells, we could not count the number of divisions because the cell was outside the image at an earlier point in time.
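A division count like the one in panel D can be derived directly from the lineage links. The sketch below assumes a simple mapping from each cell at one time point to its successors at the next; this data layout is hypothetical and chosen only for illustration, not the software's actual track format.

```python
from collections import Counter

def divisions_per_time_point(successors):
    """Count cell divisions per time point, as plotted in panel D.
    `successors` maps (time, cell_id) -> tuple of cell ids at time + 1;
    a cell with two or more successors divided at that time point."""
    counts = Counter()
    for (time, _cell_id), next_cells in successors.items():
        if len(next_cells) >= 2:  # two daughters at t+1 means a division at t
            counts[time] += 1
    return dict(sorted(counts.items()))

# Toy usage: cell "a" divides at t=0, cell "b" does not.
links = {(0, "a"): ("a1", "a2"), (0, "b"): ("b",), (1, "a1"): ("a1",)}
print(divisions_per_time_point(links))  # {0: 1}
```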