Fig 1.
Custom-designed end-to-end deep learning-based image analysis pipeline.
The pipeline consists of six major blocks: preprocessing, detection, cropping, segmentation, post-processing, and measurement. Note: All black boxes were added later to obscure the protocol number and any animal-identifying information.
Fig 2.
Images were sometimes directly illuminated by a warm-tone light source (A) and other times they were not (B). The measuring tape is often placed outside the frame or occluded (C, D). The wound surface area can become significantly occluded by plastic coverings, reflections, debris, etc. (E, F). Because there are two wounds on each mouse, an image intended to capture only the left wound may also contain a clear instance of the right wound (G). It is also possible that the stitches holding the splint in place inflict secondary wounds around the wound of interest (H). Note: All black boxes were added later to obscure the protocol number and any animal-identifying information.
Fig 3.
Cartoon image showing the different wound regions.
“Region A” shows normal skin, “Region B” shows the regenerating epidermis, and “Region C” shows the wound center.
Fig 4.
Red lines delineate the wound and green lines delineate the splint. Early day-zero images (A, B). Midway images (days 4–8) with splint (C, D, E, F) and without splint (G, H).
Fig 5.
Details of detection and cropping process.
A) Manual bounding-box labeling, training, and validation. B) Testing (i.e., detecting the wound of interest and cropping the wound with the highest confidence score). Note: All black boxes were added later to obscure the protocol number and any animal-identifying information.
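The cropping step in (B) amounts to keeping the single detection with the highest confidence score. A minimal sketch of that selection logic (the function name, box format, and NumPy indexing here are illustrative assumptions, not the pipeline's actual code):

```python
import numpy as np

def crop_highest_confidence(image, detections):
    """Crop the image to the bounding box with the highest confidence.

    detections: list of (x1, y1, x2, y2, score) tuples, as an object
    detector might return for candidate wound regions.
    """
    # Keep only the single most confident detection.
    x1, y1, x2, y2, _ = max(detections, key=lambda d: d[4])
    return image[y1:y2, x1:x2]

# Example: two candidate boxes; the 0.9-confidence box wins.
image = np.arange(16).reshape(4, 4)
boxes = [(0, 0, 2, 2, 0.3), (1, 1, 3, 3, 0.9)]
print(crop_highest_confidence(image, boxes).shape)  # (2, 2)
```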
Fig 6.
The camera can be at an unknown angle relative to the surface of the wound: 90° (A) or an unknown angle (B). The camera is often not focused on the wounded area, or the Tegaderm dressings are still in place, introducing significant blurriness (C, D). Hair may grow back before the end of the experiment (i.e., before complete wound closure), occluding the wounded area (E, F). Initially, we believed the day-zero wound sizes to be the same across all mice, given that the same tool was used to produce the wounds, but close inspection shows variability (G, H). Note: the images shown here are direct outputs of our automatic cropper; no alterations have been made.
Fig 7.
Details of image segmentation process.
A) Manual annotation, training, and validation. B) Testing (i.e., mask prediction).
Fig 8.
Training and validation curves, including dice-based loss and IoU.
The blue plots show the training dice-based loss and training IoU, and the orange plots show the validation dice-based loss and validation IoU. The loss curves gradually decrease toward zero and then stabilize, while the IoU curves gradually increase toward 100% and then remain steady.
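For reference, the dice-based loss and IoU tracked in these curves can be computed from binary masks as follows. This is a minimal NumPy sketch; the function names and smoothing constant are illustrative, not taken from the pipeline's implementation:

```python
import numpy as np

def dice_loss(pred, target, eps=1e-7):
    """Dice-based loss: 1 - 2|A∩B| / (|A| + |B|)."""
    pred = pred.astype(float).ravel()
    target = target.astype(float).ravel()
    intersection = (pred * target).sum()
    dice = (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)
    return 1.0 - dice

def iou(pred, target, eps=1e-7):
    """Intersection over Union: |A∩B| / |A∪B|."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return (inter + eps) / (union + eps)

# A perfect prediction gives loss 0 and IoU 1.
mask = np.ones((4, 4), dtype=np.uint8)
print(dice_loss(mask, mask))  # 0.0
print(iou(mask, mask))        # 1.0
```

As training converges, the predicted masks overlap the annotations more completely, so the loss falls toward 0 while IoU rises toward 1 (100%), matching the curves shown.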
Table 1.
Comparing architectures.
Fig 9.
Automatic and manual measurements of wound periphery for all C1 (left-plot) and C2 (right-plot) mice.
The orange line shows the average wound closure percentage measurements based on the manual periphery annotations and the blue line shows the average wound closure percentages based on the automatic estimation using the developed pipeline. Training, validation, and test datasets are included.
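The wound closure percentage plotted in these comparisons is, in essence, the reduction in wound area relative to day 0. A minimal sketch (the function name is illustrative; the pipeline derives the areas from the predicted masks, and the manual values from the periphery annotations):

```python
def wound_closure_percentage(area_day_t, area_day_0):
    """Percent closure of the wound at day t relative to its day-0 area."""
    return (1.0 - area_day_t / area_day_0) * 100.0

# Example: a wound at half its original area is 50% closed.
print(wound_closure_percentage(50.0, 100.0))  # 50.0
```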
Fig 10.
Automatic and manual measurements of wound periphery for test C1(M-4 L) (left-plot) and C2(M-1 R) (right-plot) mice wounds.
The orange line shows the wound closure percentage measurements based on the manual periphery annotations and the blue line shows the wound closure percentages based on the automatic estimation using the developed pipeline.
Fig 11.
Automatic measurements of wound periphery using post-processing technique 3 for C1 (left-plot) and C2 (right-plot) mice when 75% (baseline), 60%, and 50% of the images have splints.
The blue line shows the average wound closure percentages based on the automatic estimation using the developed pipeline for the dataset in which 75% (baseline) of the images have splints; the orange line shows the same results for 60%, and the green line for 50%.
Fig 12.
Automatic measurements of wound periphery using post-processing technique 4 for C1 (left-plot) and C2 (right-plot) mice when 75% (baseline), 60%, and 50% of the images have splints.
The blue line shows the average wound closure percentages based on the automatic estimation using the developed pipeline for the dataset in which 75% (baseline) of the images have splints; the orange line shows the same results for 60%, and the green line for 50%.
Fig 13.
Automatic measurements of wound periphery using post-processing technique 4 for C1 (left-plot) and C2 (right-plot) mice dealing with a biased missing reference object.
The blue line shows the average wound closure percentages based on the automatic estimation using the developed pipeline for the dataset in which 75% (baseline) of the images have splints; the orange line shows the same results with all splints removed for days 0–7, the green line for days 4–11, and the red line for days 8–15.
Fig 14.
Comparison of the results of the automatic analysis of wound periphery for wounds whose camera angle was changed by 10–15 degrees with those in the original dataset.
(A) shows the RMSE for all mice in the angled-images dataset and in the original dataset, comparing Day 1 to Day 0. (B, C) show the raw image and its masks for the outlier in the RMSE of the angled images.
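The RMSE reported in (A) is the root-mean-square error between paired measurement series; a minimal sketch (the function name is illustrative, not the pipeline's code):

```python
import math

def rmse(estimates, references):
    """Root-mean-square error between paired measurement series."""
    assert len(estimates) == len(references)
    return math.sqrt(
        sum((e - r) ** 2 for e, r in zip(estimates, references)) / len(estimates)
    )

# Example: identical series give zero error.
print(rmse([10.0, 20.0, 30.0], [10.0, 20.0, 30.0]))  # 0.0
```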
Fig 15.
Automatic analysis of wound periphery for wounds with different initial sizes using developed pipeline.
The raw image and its masks for sample 1 (A, C) and sample 2 (B, D). The results show that the developed pipeline provides accurate masks for both the wounded area and the inner area.
Fig 16.
Expert vs. non-expert periphery annotations.
The left column (A, D) shows two sample tracings performed by expert #1, the middle column (B, E) shows two sample tracings performed by expert #2, and the right column (C, F) shows two sample tracings performed by a non-expert. (A, B, C) show similar tracing trends for the same wound (taken before day 3), while (D, E, F) show different tracing trends for the same wound (taken after day 3).
Fig 17.
Automatic and expert measurements of the wound for C1 (left-plot) and C2 (right-plot) mice.
The orange line shows the average wound closure percentage measurements based on expert #1 annotations, the green line shows the expert #2 annotation results, and the blue line shows the average wound closure percentages based on the automatic estimation using the developed pipeline. Training, validation, and test datasets are included.
Fig 18.
Automatic and expert measurements of the wound for test C1(M-4 L) (left-plot) and C2(M-1 R) (right-plot) mice wounds.
The orange line shows the wound closure percentage measurements based on expert #1 annotations, the green line shows the expert #2 annotation results, and the blue line shows the wound closure percentages based on the automatic estimation using the developed pipeline.
Table 2.
Correlation.