DeepD3, an open framework for automated quantification of dendritic spines
Fig 2
A, Raw microscopy data (pictogram left) is used as input for a deep neural network (center) to semantically segment dendrites (magenta) and dendritic spines (green) against background (black; right). This color code is used throughout the manuscript.
B, DeepD3 database generation for paired ground-truth data. Before training, dendritic spines (top center) and dendrites (top right) are annotated in raw microscopy data (top left) using pixel-wise and semi-automatic tracing approaches (magenta circles in top far right image), respectively. During training, tiles from the DeepD3 database are streamed, dynamically augmented to increase variability, and fed into the DeepD3 training pipeline.
C, The DeepD3 architecture features a dual-decoder structure that emerges from a common latent space ξ and receives skip connections from the encoder. Encoder modules combine residual layers with max-pooling operations, whereas decoder modules upsample, incorporate encoder features via the skip connections, and use conventional convolutional layers. Example network input (left) and output (right) are shown as a microscopy image tile and a localization probability map ranging from 0 (background) to 1 (foreground).
D, Overview of the DeepD3 open framework. DeepD3 consists of open datasets, a model zoo with a training environment for custom neural networks, and a graphical user interface.
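The tile streaming with dynamic augmentation described in panel B can be sketched as follows. This is a hypothetical minimal generator, not the actual DeepD3 data pipeline; the function name `stream_tiles`, the tile size, and the specific augmentations (random crop, 90-degree rotation, flip, intensity jitter) are illustrative assumptions. The key point it demonstrates is that image and both label maps must be transformed identically so the paired ground truth stays aligned.

```python
# Hypothetical sketch of on-the-fly tile streaming and augmentation
# (panel B); the real DeepD3 generator and augmentations may differ.
import numpy as np

def stream_tiles(image, dendrite_mask, spine_mask, tile=128, rng=None):
    """Yield randomly cropped, rotated, and flipped (image, labels) tiles.

    Geometric transforms are applied identically to the image and both
    label maps, preserving the pixel-wise ground-truth correspondence.
    """
    rng = rng or np.random.default_rng()
    h, w = image.shape
    while True:
        y = rng.integers(0, h - tile + 1)
        x = rng.integers(0, w - tile + 1)
        tiles = [a[y:y + tile, x:x + tile]
                 for a in (image, dendrite_mask, spine_mask)]
        k = rng.integers(0, 4)                 # random 90-degree rotation
        tiles = [np.rot90(t, k) for t in tiles]
        if rng.random() < 0.5:                 # random horizontal flip
            tiles = [np.fliplr(t) for t in tiles]
        img, dend, spine = tiles
        # Intensity jitter applies to the image only, never to the labels.
        img = img * rng.uniform(0.8, 1.2)
        yield img[None, ...], np.stack([dend, spine])

# Example: draw one augmented tile from synthetic data.
img = np.random.rand(512, 512).astype(np.float32)
dmask = (np.random.rand(512, 512) > 0.9).astype(np.float32)
smask = (np.random.rand(512, 512) > 0.95).astype(np.float32)
x, y = next(stream_tiles(img, dmask, smask))
print(x.shape, y.shape)  # (1, 128, 128) (2, 128, 128)
```

Because the generator loops indefinitely, the effective dataset size is limited only by the augmentation variability rather than by the number of annotated tiles.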
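The dual-decoder layout of panel C can be sketched in code: a shared encoder of residual blocks with max pooling leads to a common latent space, from which two decoders (one per target class) upsample back to input resolution, each receiving skip connections from the encoder and ending in a sigmoid so the outputs are localization probabilities in [0, 1]. This PyTorch sketch is a minimal illustration under assumed layer counts and filter sizes, not the published DeepD3 implementation.

```python
# Hypothetical minimal dual-decoder segmentation network (panel C);
# layer counts, channel widths, and framework are illustrative assumptions.
import torch
import torch.nn as nn

class ResBlock(nn.Module):
    """Residual block: two 3x3 convolutions with an identity shortcut."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.conv1 = nn.Conv2d(in_ch, out_ch, 3, padding=1)
        self.conv2 = nn.Conv2d(out_ch, out_ch, 3, padding=1)
        self.skip = nn.Conv2d(in_ch, out_ch, 1) if in_ch != out_ch else nn.Identity()
        self.act = nn.ReLU()

    def forward(self, x):
        h = self.act(self.conv1(x))
        return self.act(self.conv2(h) + self.skip(x))

class DecoderBlock(nn.Module):
    """Upsample, concatenate the matching encoder feature map, convolve."""
    def __init__(self, in_ch, skip_ch, out_ch):
        super().__init__()
        self.up = nn.Upsample(scale_factor=2, mode="nearest")
        self.conv = nn.Conv2d(in_ch + skip_ch, out_ch, 3, padding=1)
        self.act = nn.ReLU()

    def forward(self, x, skip):
        return self.act(self.conv(torch.cat([self.up(x), skip], dim=1)))

class DualDecoderNet(nn.Module):
    """Shared encoder -> common latent space -> two decoders."""
    def __init__(self, base=16):
        super().__init__()
        self.enc1 = ResBlock(1, base)
        self.enc2 = ResBlock(base, base * 2)
        self.pool = nn.MaxPool2d(2)
        self.latent = ResBlock(base * 2, base * 4)      # common latent space
        # Each decoder receives skip connections from the shared encoder.
        self.dec_d2 = DecoderBlock(base * 4, base * 2, base * 2)
        self.dec_d1 = DecoderBlock(base * 2, base, base)
        self.dec_s2 = DecoderBlock(base * 4, base * 2, base * 2)
        self.dec_s1 = DecoderBlock(base * 2, base, base)
        self.head_d = nn.Conv2d(base, 1, 1)             # dendrite map
        self.head_s = nn.Conv2d(base, 1, 1)             # spine map

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        z = self.latent(self.pool(e2))
        d = self.dec_d1(self.dec_d2(z, e2), e1)
        s = self.dec_s1(self.dec_s2(z, e2), e1)
        # Sigmoid maps logits to localization probabilities in [0, 1].
        return torch.sigmoid(self.head_d(d)), torch.sigmoid(self.head_s(s))

net = DualDecoderNet()
dend, spines = net(torch.rand(1, 1, 64, 64))
print(dend.shape, spines.shape)  # both (1, 1, 64, 64)
```

Sharing the encoder lets both tasks exploit common low-level features, while the separate decoders let dendrite and spine predictions specialize independently.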