Fig 1.

Comparable CAR expression between patients #3 and #4.

PBMCs from patients #3 and #4 were transduced with the kappa-CAR retrovirus. The ratio and expression (MFI) of the CD8 and CD4 subsets were calculated. (A) Flow cytometry analysis of the CD8- and CD4-positive populations from patients #3 and #4. (B) The ratio of CD3- and CAR-positive subsets. (C) Summary of the different subsets of CD3+ T cells and their viability. Data are pooled from at least two independent experiments.

Fig 2.

Model of perforin and pZeta cluster formation and F-actin accumulation after initial contact between the CAR-T cell and the planar lipid bilayer.

(A) Upon initial contact of the CAR with the tumor antigen, microclusters form around the receptor and the cell starts to spread. (B) The cell spreads and multiple microclusters form. (C) After spreading, F-actin polymerizes at the cell periphery, and perforin and pZeta are transported toward the cell center along with F-actin. (D) Perforin and pZeta populate the actin-sparse center and form a cluster. In the experiment, each substance was labeled with a different color, and the separate image channels were acquired with different lasers. We used six single-cell samples in five channels, each at the best Z position.

Fig 3.

The overall model for CAR-T cell instance segmentation using the multi-scale cell instance segmentation framework.

(A) The training phase, in which CAR-T IS images are used as the training set. (B) The model in the evaluation phase. Each sample has five channels, four of which are applicable for evaluation. Channel 3 is used to select the best Z slice, and Channel 1 provides the best representation of the CAR-T IS. From Channel 1, the network produces bounding boxes, instance segmentations, and contours. The generated masks and contours are applied to all channels for statistical analysis.
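A minimal sketch of the evaluation flow in (B), assuming the sample is a NumPy array of shape (Z, C, H, W) and that the trained network exposes a hypothetical predict(image) -> (boxes, masks) interface (not the authors' actual API):

import numpy as np

def select_best_z(stack):
    # Channel 3 (tumor antigen, index 2) is used to pick the best
    # Z slice; highest mean intensity is an assumed focus criterion.
    return int(stack[:, 2].mean(axis=(1, 2)).argmax())

def evaluate_sample(stack, model):
    z = select_best_z(stack)
    image = stack[z, 0]                  # Channel 1: CAR-T IS
    boxes, masks = model.predict(image)  # hypothetical interface
    # Apply the masks generated from Channel 1 to every channel
    # for downstream statistics.
    per_cell_totals = np.array([[stack[z, c][m.astype(bool)].sum()
                                 for c in range(stack.shape[1])]
                                for m in masks])
    return boxes, masks, per_cell_totals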

Fig 4.

Comparison of generated instance segmentation masks in the evaluation phase with their ground truths.

We applied the 'Magenta', 'Green', and 'Yellow' colormaps for a better representation of the images. Different shades are used to separate the cells from each other. Four zoomed areas are selected for analysis; images with the same number point to the same area.
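For reference, this kind of shaded overlay can be reproduced in matplotlib roughly as follows. Note that matplotlib has no built-in 'Magenta', 'Green', or 'Yellow' colormaps, so 'RdPu' and 'Greens' are used here as stand-ins, and the label images are random placeholders:

import numpy as np
import matplotlib.pyplot as plt

# Placeholder integer label images: 0 is background, each positive
# value is one cell instance (shades then separate neighboring cells).
rng = np.random.default_rng(0)
ground_truth = rng.integers(0, 5, size=(256, 256))
prediction = rng.integers(0, 5, size=(256, 256))

fig, axes = plt.subplots(1, 2)
axes[0].imshow(ground_truth, cmap="Greens")
axes[0].set_title("ground truth")
axes[1].imshow(prediction, cmap="RdPu")
axes[1].set_title("prediction")
plt.show()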

Fig 5.

Comparison of the model's loss with different amounts of training data.

(A) A test image sample in the evaluation phase using the trained networks. (B) Four zoomed areas selected for analysis; images with the same number point to the same boxes in (A). These images show the effect of access to more training data and its role in removing discrepancies. (C) The training loss and (D) the validation loss from 0 to 100 training iterations with 100% of the training data. As expected, the training loss shows a more predictable pattern than the validation loss.
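As a minimal sketch of how training subsets of different sizes can be drawn for such a comparison (the dataset and the example fractions are placeholders; the authors' sampling scheme is not specified in the caption):

import random

def subsample(dataset, fraction, seed=0):
    # Draw a fixed random fraction of the annotated training images.
    rng = random.Random(seed)
    k = max(1, int(len(dataset) * fraction))
    return rng.sample(dataset, k)

dataset = list(range(120))  # placeholder list of training images
for fraction in (0.25, 0.5, 0.75, 1.0):
    subset = subsample(dataset, fraction)
    print(f"{int(fraction * 100)}% of the data -> {len(subset)} images")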

Table 1.

The evaluation accuracy (%) for bounding box generation and instance segmentation.

The upper part of the table relates to object detection (bounding boxes), and the lower part to instance segmentation. DCAN, CosineEmbedding, and Mask R-CNN were used as the other ANN architectures. A dash (-) marks entries that are not applicable.
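The caption does not spell out the scoring rule; a common choice for both tasks is intersection-over-union (IoU) with a match threshold, sketched below as an assumption rather than the authors' exact metric:

import numpy as np

def iou(a, b):
    # Intersection-over-union of two binary masks (a rasterized
    # bounding box works the same way).
    a, b = a.astype(bool), b.astype(bool)
    union = np.logical_or(a, b).sum()
    return np.logical_and(a, b).sum() / union if union else 0.0

def accuracy(pred_masks, gt_masks, threshold=0.5):
    # A prediction counts as correct when its best IoU against the
    # ground truth exceeds the (assumed) threshold.
    hits = sum(max(iou(p, g) for g in gt_masks) >= threshold
               for p in pred_masks)
    return 100.0 * hits / len(pred_masks)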

Table 2.

Characteristics of patients with NHL or CLL [31].

Fig 6.

The total intensity in four channels.

F-actin in row 1 (channel 1), perforin in row 2 (channel 2), tumor antigen in row 3 (channel 3), and pZeta in row 4 (channel 4). (A) One sample for each patient: the left side is patient #3 and the right side patient #4. In these images, regions that do not belong to any predicted mask from the ANNs are removed. Auto-contrast makes the cells visible to the human eye (it does not affect the actual analysis). (B)-(E) show the total intensity distribution and cumulative probability for the two patients, using the fully trained networks, across all channels and all counted cells from the evaluation phase. The figures also report the mean, variance, and number of detected cells for each channel.
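A minimal sketch of the per-cell total intensity and its empirical cumulative probability, assuming image is one channel of the selected Z slice and masks is the list of binary masks predicted by the ANNs (variable names are illustrative):

import numpy as np
import matplotlib.pyplot as plt

def total_intensities(image, masks):
    # Sum the pixel intensities inside each predicted cell mask.
    return np.array([image[m.astype(bool)].sum() for m in masks])

def plot_cumulative(totals, channel_name):
    # Empirical cumulative probability of the per-cell totals,
    # annotated with mean, variance, and cell count as in (B)-(E).
    x = np.sort(totals)
    cdf = np.arange(1, len(x) + 1) / len(x)
    plt.plot(x, cdf,
             label=f"{channel_name}: n={len(x)}, "
                   f"mean={x.mean():.1f}, var={x.var():.1f}")
    plt.xlabel("total intensity per cell")
    plt.ylabel("cumulative probability")
    plt.legend()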

Fig 7.

Successful image data extraction in a Python environment.

(A) A sample of 11 Z slices with five channels: F-actin in row 1 (channel 1), perforin in row 2 (channel 2), tumor antigen in row 3 (channel 3), pZeta in row 4 (channel 4), and the DIC of the cells in row 5 (channel 5). For a clear representation of the cells in the figure, colormap filters are added to the original grayscale images: F-actin receives the 'RdGy_r' colormap, perforin 'PRGn_r', tumor antigen 'RdBu_r', pZeta 'PuOr_r', and the DIC 'binary'. The colormaps [61] are used for representation purposes only and do not affect the evaluation of the IS. (B) The mean intensity values of the grayscale images through the Z slices for all channels.
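As a minimal sketch of this extraction and of the curves in (B), assuming the acquisition has been converted to a NumPy array of shape (Z, C, H, W) = (11, 5, height, width); the file name and loader are placeholders:

import numpy as np
import matplotlib.pyplot as plt

stack = np.load("sample_stack.npy")  # placeholder file name
channel_names = ["F-actin", "perforin", "tumor antigen", "pZeta", "DIC"]

# Mean grayscale intensity of every Z slice, per channel, as in (B).
mean_per_slice = stack.mean(axis=(2, 3))  # shape (Z, C)

for c, name in enumerate(channel_names):
    plt.plot(range(1, stack.shape[0] + 1), mean_per_slice[:, c], label=name)
plt.xlabel("Z slice")
plt.ylabel("mean intensity")
plt.legend()
plt.show()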

Fig 8.

The outputs of multi-scale cell instance segmentation.

For illustration, we use three different images in three rows: a sparsely populated image in the first row, a moderately populated image in the second, and a highly populated image in the third. The framework contains two modules: (a) a bounding box detection module and (b) an individual cell segmentation module. The bounding box detection module outputs a bounding box over each detected cell; a box localizes an object by its top-left, top-right, bottom-left, bottom-right, and center points. The bounding boxes are used to create patches of cells, which are then used for instance segmentation. The instance segmentation masks can in turn be used to create borders (contours) and borderless segmentations covering the interior of the segmented objects.
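A rough illustration of this post-processing with OpenCV (the authors' exact implementation may differ; the box and mask formats are assumptions):

import cv2
import numpy as np

def crop_patch(image, box):
    # Crop one detected cell from the image; box = (x, y, w, h).
    x, y, w, h = box
    return image[y:y + h, x:x + w]

def mask_to_contours(mask):
    # Borders (contours) of one binary instance mask.
    contours, _ = cv2.findContours(mask.astype(np.uint8),
                                   cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    return contours

def interior(mask):
    # Segmentation without borders: erode so that only pixels
    # strictly inside the contour remain.
    kernel = np.ones((3, 3), np.uint8)
    return cv2.erode(mask.astype(np.uint8), kernel, iterations=1)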
