
DeepD3, an open framework for automated quantification of dendritic spines

Fig 1

Inter-rater variability is high in spine detection, dendrite tracing and pixel-wise spine annotation.

A, Workflow overview. Multiple experts annotated the data analyzed in this study. N = 7 experts annotated spines by identifying their center of mass, whereas N = 3 used pixel-wise annotations. N = 3 experts traced dendrites. This figure was created with the help of BioRender. B, The top panel shows a z-projection of the benchmark dataset with two regions of interest (ROIs), A and B. Scale bar indicates 50 µm. The two bottom panels enlarge the two ROIs and show the z-projection of the benchmark dataset stack together with single spine annotations, color-coded by the individual cluster size (left). Scale bar indicates 5 µm. The next three subpanels show the reconstruction of the traced dendrite (magenta) and the pixel-precise annotated dendritic spines (green) for three individual raters (U, V, and W). C, Left, inter-rater reliability across individual raters (N = 7) measured as recall. Each rater was tasked with identifying individual spines by clicking on the center of mass of each spine head (see left subpanels in panel B). Right, intra-rater reliability of N = 2 human experts in the benchmarking dataset. The two manual annotation sessions (1 and 2) of raters K and L were separated by at least 14 days. D, Intersection over union (IoU) score across individual human annotations for reconstructed dendrite tracings (left) and pixel-precise dendritic spine annotations (right). E, Overview of all pixel-wise human annotations across individual raters U, V, and W. Scale bar indicates 50 µm.
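As a point of reference for panel D, the IoU between two pixel-wise annotations is the number of pixels both raters marked divided by the number of pixels either rater marked. A minimal sketch of that computation for two binary masks (function and variable names here are illustrative, not from the DeepD3 codebase):

```python
import numpy as np

def iou(mask_a, mask_b):
    """Intersection-over-union between two boolean annotation masks."""
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    union = np.logical_or(a, b).sum()
    if union == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return np.logical_and(a, b).sum() / union

# Toy 2x3 masks from two hypothetical raters:
rater_u = np.array([[1, 1, 0],
                    [0, 1, 0]])
rater_v = np.array([[1, 0, 0],
                    [0, 1, 1]])
print(iou(rater_u, rater_v))  # 2 shared pixels / 4 total pixels = 0.5
```

An IoU of 1.0 indicates pixel-perfect agreement between two raters, while values well below 1.0, as reported in panel D, reflect the high inter-rater variability the figure documents.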

doi: https://doi.org/10.1371/journal.pcbi.1011774.g001