
Fig 1.

An illustration of the shape-to-graph mapping.

Algorithm input is a binary image with the foreground (value 1) shown in white and the background (value 0) shown in black. Algorithm output is an image-scale graph structure. The part of the graph in the foreground (defined later in the text as in-graph) is shown in blue, while the part in the background (out-graph) is shown in orange.


Fig 2.

Sweep-circle Voronoi algorithm for graph construction.

In this algorithm, a sweep circle (grey circle) expands from the center of the image (purple dot). Each input point (red, green, and cyan dots) forms a bisector (red, green, and cyan ellipses) with the expanding sweep circle. The beachfront is a set of all outermost portions (solid elliptical arcs) of these bisectors. The intersections between the ellipses (black dots) trace out Voronoi edges (blue lines). When two intersection points merge, pinching out a beachfront arc, a new Voronoi vertex is formed.
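The elliptical bisector described above follows from a classic property: the points equidistant from an input point and from the sweep circle form an ellipse whose foci are the circle's center and the input point, with focal-distance sum equal to the sweep radius. A minimal pure-Python illustration of that property (not the paper's implementation; the point, center, and radius below are arbitrary):

```python
import math

def bisector_ellipse(center, p, r, n=16):
    """Sample n points on the bisector of input point p and a sweep circle
    of radius r centered at `center` (requires r > |p - center|).
    Every bisector point q satisfies |q - p| = r - |q - center|,
    i.e. it lies on the ellipse with foci `center` and p and focal sum r."""
    cx, cy = center
    px, py = p[0] - cx, p[1] - cy            # focus p relative to center
    d = math.hypot(px, py)                   # distance between the foci
    a = r / 2.0                              # semi-major axis
    b = math.sqrt(a * a - (d / 2.0) ** 2)    # semi-minor axis
    ux, uy = px / d, py / d                  # unit vector along the major axis
    mx, my = cx + px / 2.0, cy + py / 2.0    # ellipse midpoint
    pts = []
    for k in range(n):
        t = 2.0 * math.pi * k / n
        ex, ey = a * math.cos(t), b * math.sin(t)
        # rotate the axis-aligned ellipse point onto the focal axis
        pts.append((mx + ex * ux - ey * uy, my + ex * uy + ey * ux))
    return pts

# Each sampled point is equidistant from the point and the sweep circle:
center, p, r = (0.0, 0.0), (3.0, 1.0), 7.0
for q in bisector_ellipse(center, p, r):
    to_circle = r - math.hypot(q[0] - center[0], q[1] - center[1])
    to_point = math.hypot(q[0] - p[0], q[1] - p[1])
    assert abs(to_circle - to_point) < 1e-9
```

The beachfront in the algorithm is then the outermost envelope of such ellipses, one per input point inside the sweep circle.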


Fig 3.

Boundary tracing.

A. A simplified example of an input image. B. Conventional tracing of the boundary (as implemented in MATLAB) along the centers of the pixels at the edge of a foreground object. C. Our algorithm traces the boundary directly along the lines separating the foreground and background pixels. D. An illustration of how the algorithm eliminates all boundary self-crossings with a small, non-disruptive off-diagonal shift (exaggerated here for illustration purposes).
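Tracing along pixel borders rather than pixel centers can be illustrated by collecting the unit edges that separate foreground from background. This is only a sketch of the idea, not the paper's tracing routine, which additionally chains the edges into closed loops and applies the off-diagonal shift:

```python
def boundary_edges(img):
    """Collect unit-length edges separating foreground (1) pixels from
    background (0) pixels or from the outside of the image.
    Each edge is a pair of corner coordinates (row, col) on the pixel grid."""
    rows, cols = len(img), len(img[0])

    def fg(r, c):
        return 0 <= r < rows and 0 <= c < cols and img[r][c] == 1

    edges = []
    for r in range(rows):
        for c in range(cols):
            if not fg(r, c):
                continue
            if not fg(r - 1, c):          # edge above the pixel
                edges.append(((r, c), (r, c + 1)))
            if not fg(r + 1, c):          # edge below
                edges.append(((r + 1, c), (r + 1, c + 1)))
            if not fg(r, c - 1):          # edge to the left
                edges.append(((r, c), (r + 1, c)))
            if not fg(r, c + 1):          # edge to the right
                edges.append(((r, c + 1), (r + 1, c + 1)))
    return edges

# A 2x2 foreground block has a perimeter of 8 unit edges:
mask = [[0, 0, 0, 0],
        [0, 1, 1, 0],
        [0, 1, 1, 0],
        [0, 0, 0, 0]]
assert len(boundary_edges(mask)) == 8
```

Unlike pixel-center tracing, the resulting boundary encloses the full area of the foreground pixels, so even a single isolated pixel yields a well-defined closed contour of four edges.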


Fig 4.

A. Boundary annotation: exterior boundaries are shown in red, while interior boundaries are shown in blue. B. Overall graph annotation: the in-graph is shown in red, while the out-graph is shown in cyan.


Fig 5.

The key elements of the graph.

A. All bridges (red), hubs (green), and connectors (blue) of the in-graph. B. Partitioning of the in-graph into subgraphs (shown with unique colors). Each non-overlapping subgraph is associated with exactly one interior or exterior boundary.


Fig 6.

The primary graph metrics.

A. An example of paths along the graph edges from the root path (magenta) to the tips of object protrusions. The inscribed circles (green) provide a measure for the width profile. The parts of the paths (blue) outside the circles provide a measure for the normalized boundary profile. B. An illustration of path (blue) branching from a root (green) to the boundary, so that each boundary point has an associated root node and a shortest path to this node along the graph edges. C. The resulting width profile showing the inscribed circle radii for every node on the root path. D. The resulting boundary profile before subtracting the radii of the corresponding root nodes (red) and after subtracting (blue). The colored points at the local maxima of the boundary profile correspond to the protrusion tips in A.


Fig 7.

Boundary type identification.

We extract 40 metrics for each boundary from all the images in a given set and use k-means to assign each boundary to one of N classes (here N = 12).
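The clustering step is standard Lloyd's k-means on the per-boundary feature vectors. A minimal pure-Python sketch, with toy 2-D features standing in for the 40 metrics, k = 2 instead of 12, and deterministic seeding for reproducibility (real use would call a library implementation):

```python
def kmeans(points, k, iters=50):
    """Plain Lloyd's k-means; returns a class label per point.
    Seeding from the first and last points is a simplification for this sketch."""
    centroids = [points[0], points[-1]] if k == 2 else list(points[:k])
    labels = [0] * len(points)
    for _ in range(iters):
        # assignment step: nearest centroid by squared Euclidean distance
        labels = [min(range(k),
                      key=lambda j: sum((a - b) ** 2
                                        for a, b in zip(p, centroids[j])))
                  for p in points]
        # update step: mean of each cluster's members
        for j in range(k):
            members = [p for p, l in zip(points, labels) if l == j]
            if members:
                centroids[j] = tuple(sum(x) / len(members)
                                     for x in zip(*members))
    return labels

# Two obvious groups of (toy) boundary feature vectors:
feats = [(0.1, 0.2), (0.2, 0.1), (0.15, 0.15),   # small, compact boundaries
         (5.0, 4.8), (5.2, 5.1), (4.9, 5.0)]     # large, elongated boundaries
labels = kmeans(feats, k=2)
assert labels[0] == labels[1] == labels[2]
assert labels[3] == labels[4] == labels[5]
assert labels[0] != labels[3]
```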


Fig 8.

Per-image characterization.

For each image, we extract the counts of boundaries that belong to each of 12 boundary types, which were determined using k-means clustering on the 40 graph features.
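Once every boundary carries a class label, the per-image descriptor is simply a normalized count histogram over the boundary types. A minimal sketch with hypothetical labels:

```python
from collections import Counter

def boundary_histogram(labels, n_types=12):
    """Normalized counts of boundary types for one image."""
    counts = Counter(labels)
    total = len(labels)
    return [counts.get(t, 0) / total for t in range(n_types)]

# Hypothetical class labels for the boundaries found in one image:
hist = boundary_histogram([2, 2, 3, 10, 2], n_types=12)
assert abs(sum(hist) - 1.0) < 1e-9
assert hist[2] == 0.6    # three of the five boundaries are type 2
```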


Table 1.

Graph-based metrics used for creating boundary type classes.

The same 20 features were measured for the in-graphs (representing the cellular structure associated with each boundary) and for the out-graphs (representing the corresponding areas in the background), giving 40 metrics in total.


Fig 9.

Comparison of in-vitro tube formation assay structures with eight different phenotypes.

A. The eight phenotypes resulted from WT and the knockdown of three CCM proteins, all with and without treatment with the ROCK inhibitor. Knockdown of the CCM proteins is associated with the disruption of the otherwise connected mesh. The ROCK inhibitor leads to a more connected but still noticeably disorganized network. The scale bar is 200 μm. B. The first two principal components of each image’s boundary type histogram. Images of a similar type and appearance tend to have similar histograms. Here, the markers indicate the corresponding images in A. C. Two images from WTH1152 and CCM1H1152 that appear visually similar but have significantly different boundary type counts. Boundaries that are responsible for the difference are highlighted in blue and cyan. The scale bar is 200 μm. D. The difference in the normalized counts of boundaries of 12 types between the CCM1H1152 and WTH1152 images shown in C. Boundary types 2 and 3 (indicated with the blue and orange arrows in D and highlighted with the corresponding colors in C) represent small, isolated objects and small holes in wider locations in the network and appear significantly more often in CCM1H1152 formations than in otherwise similar WTH1152 structures. In contrast, WTH1152 structures tend to have more boundaries of type 10 (indicated with the green arrow in D and highlighted with the same color in C), which represent medium-sized holes with frequent bumps and protrusions extending into the hole.
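The 2-D embedding in panel B is an ordinary PCA of the histogram vectors. A pure-Python illustration using power iteration on the covariance matrix (not the pipeline's implementation; the 3-type toy histograms stand in for the 12-type ones, and real use would call a library PCA):

```python
import math

def matvec(M, v):
    return [sum(m * x for m, x in zip(row, v)) for row in M]

def top_eigvec(M, iters=200):
    """Power iteration: dominant eigenvector of a symmetric matrix M."""
    v = [float(i + 1) for i in range(len(M))]   # asymmetric start vector
    for _ in range(iters):
        w = matvec(M, v)
        norm = math.sqrt(sum(x * x for x in w))
        if norm < 1e-12:                        # matrix is (numerically) zero
            break
        v = [x / norm for x in w]
    return v

def pca_2d(data):
    """Project rows of `data` onto their top two principal components."""
    n, d = len(data), len(data[0])
    means = [sum(row[j] for row in data) / n for j in range(d)]
    X = [[row[j] - means[j] for j in range(d)] for row in data]
    C = [[sum(X[i][a] * X[i][b] for i in range(n)) / n
          for b in range(d)] for a in range(d)]
    v1 = top_eigvec(C)
    lam1 = sum(x * y for x, y in zip(v1, matvec(C, v1)))
    # deflate the first component, then find the second
    C2 = [[C[a][b] - lam1 * v1[a] * v1[b] for b in range(d)] for a in range(d)]
    v2 = top_eigvec(C2)
    return [(sum(x * e for x, e in zip(row, v1)),
             sum(x * e for x, e in zip(row, v2))) for row in X]

# Toy histograms: two image groups differing mainly in types 0 and 1
hists = [[0.8, 0.1, 0.1], [0.7, 0.2, 0.1],
         [0.1, 0.8, 0.1], [0.2, 0.7, 0.1]]
pc1 = [s[0] for s in pca_2d(hists)]
# PC1 separates the two groups, as in panel B
assert (pc1[0] > 0) == (pc1[1] > 0) and (pc1[0] > 0) != (pc1[2] > 0)
```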


Fig 10.

A. Nine representative images of multicellular formations out of 100 that were generated by varying two parameters: the strength of cell-ECM adhesion (vertical axis) and the stability of cell-cell contacts (horizontal axis). B. Variations of the two parameters result in visible changes in the boundary type histograms (normalized counts of boundaries of each of 12 types).


Fig 11.

A linear regression model was trained to predict log-transformed model parameters from the boundary type histograms.

The mean absolute error in predicting cell-cell adhesion was 0.2392, while the mean absolute error in predicting the strength of cell-ECM adhesion was 0.2782.


Fig 12.

A modified boundary tracing for individual cells in a tight cluster.

A. With the previously described boundary tracing, the boundaries of contacting cells would overlap. B. The tracing routine is modified to place boundary points halfway between the pixel center and the pixel border used in our original half-pixel tracing. This creates a half-pixel gap between bordering cells. C. Parts of the out-graph for each cell (orange) lie within this gap. Thus, the image out-graph will include out-graph nodes between all contacting cells, effectively encoding the spatial distribution of the cells in the image.


Fig 13.

Images from the U2OS dataset.

Red channel is phalloidin, blue is Hoechst 33342, and green is WGA. A. Example image from the untreated group. B. Image of cells treated with taxol from the tubulin modulators group. C. Image of cells treated with metoclopramide from the modulator of neuronal receptors group. D. Image of cells treated with digoxin from the structurally related cardenolide glycosides group.


Fig 14.

Held-out plates were classified with a decision tree trained on the remainder of the dataset.

The new metrics derived with our approach tend to give better classification accuracies, especially for the control class (DMSO) and the modulator of neuronal receptors class (NRM). The mean F1 score is 0.916 with the graph-derived metrics and 0.826 with the CellProfiler shape metrics.


Fig 15.

Sensitivity to image resolution, filtering of small imaging artifacts, and cell density.

A. Mean F1 score for classification of in-vitro tube formation images using lower (down to 25%) and higher (up to 175%) threshold values for filtering out small holes and debris. B. Mean absolute error in predicting cell-cell (red) and cell-ECM (blue) stability using the same set of simulated images re-rendered at lower resolutions (down to 50% of the original). C. Errors in parameter prediction increase if only in-graph features are used (compare with Fig 11). Using features from both in-graphs and out-graphs improves the pipeline accuracy when cell/object density is an important characteristic of the image content. D. Confusion chart for the classification of U2OS data without out-graph features. If in-graph features are sufficient to capture the image content, the pipeline is insensitive to the removal of out-graph features (compare with Fig 14A).


Fig 16.

Sensitivity to over- and under-thresholding.

The graph features capture variations in the pattern resulting from a suboptimal segmentation process, which emphasizes the importance of preserving the proper image content during pre-processing. A. A gray-scale image of endothelial cells forming an interconnected mesh. B. Boundary and width profiles for the subgraph associated with the boundary of the large hole in the middle of the image. C. All in-graphs of the pattern (top row) and a single subgraph associated with the hole boundary (bottom row) for threshold values that are too low (left), optimal (middle), and too high (right). Under-thresholding can expand the pattern and create artificial (non-existent) connections between cells or cell clusters, while over-thresholding can shrink the pattern and create artificial (non-existent) holes or break existing contacts.
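The effect is easy to reproduce: lowering the threshold grows the foreground (merging structures through dim bridges), while raising it shrinks the foreground (erasing dim structures). A minimal sketch on a toy gray-scale image (intensities and thresholds are illustrative):

```python
def segment(gray, threshold):
    """Binary segmentation: foreground = pixels at or above the threshold."""
    return [[1 if px >= threshold else 0 for px in row] for row in gray]

def foreground_area(mask):
    return sum(sum(row) for row in mask)

# Toy gray-scale image: two bright blobs joined by a dim bridge
gray = [[0, 90, 90, 40, 90, 90, 0],
        [0, 90, 90, 40, 90, 90, 0]]

under = segment(gray, 30)    # under-thresholding: the dim bridge survives,
                             # so the two blobs merge into one object
optimal = segment(gray, 60)  # the two blobs are kept, the bridge is dropped
over = segment(gray, 95)     # over-thresholding: the blobs vanish entirely

assert foreground_area(under) > foreground_area(optimal) > foreground_area(over)
```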


Fig 17.

Two Graphical User Interfaces for demonstrating the graph construction and analysis.

A. GUI illustrating the shape-to-graph approach and key concepts such as subgraphs, in- and out-graphs, and the width and boundary profiles. The user can cycle through the boundaries and view the 40 metrics extracted for each. B. GUI for processing multiple images. Boundaries are automatically clustered and colored according to a user-specified number of boundary types. The bottom plots show the frequency of boundary types in the current image, a t-SNE embedding of all boundaries computed from their features and colored by their assigned class, and a PCA plot of all images derived from their boundary type histograms.
