High-throughput widefield fluorescence imaging of 3D samples using deep learning for 2D projection image restoration

Fluorescence microscopy is a core method for visualizing and quantifying the spatial and temporal dynamics of complex biological processes. While many fluorescence microscopy techniques exist, widefield fluorescence imaging remains one of the most widely used owing to its cost-effectiveness and accessibility. To image 3D samples, conventional widefield fluorescence imaging entails acquiring a sequence of 2D images spaced along the z-dimension, typically called a z-stack. Oftentimes, the first step in an analysis pipeline is to project that 3D volume into a single 2D image, because 3D image data can be cumbersome to manage and challenging to analyze and interpret. Furthermore, z-stack acquisition is often time-consuming and may consequently induce photodamage to the biological sample; these are major barriers for workflows that require high throughput, such as drug screening. As an alternative to z-stacks, axial sweep acquisition schemes have been proposed to circumvent these drawbacks and offer the potential of 100-fold faster image acquisition for 3D samples compared to z-stack acquisition. Unfortunately, these acquisition techniques generate low-quality 2D z-projected images that require restoration with unwieldy, computationally heavy algorithms before the images can be interrogated. We propose a novel workflow that combines axial z-sweep acquisition with deep learning-based image restoration, ultimately enabling high-throughput and high-quality imaging of complex 3D samples using 2D projection images. To demonstrate the capabilities of our proposed workflow, we apply it to live-cell imaging of large 3D tumor spheroid cultures and find that we can produce high-fidelity images appropriate for quantitative analysis. We therefore conclude that combining axial z-sweep image acquisition with deep learning-based image restoration enables high-throughput and high-quality fluorescence imaging of complex 3D biological samples.
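For readers unfamiliar with z-projection, the short sketch below (not part of the manuscript; the file name and TIFF reader are illustrative assumptions) shows the conventional maximum-intensity projection that collapses a z-stack into the single 2D image referred to in the abstract.

```python
import numpy as np
import tifffile  # assumed available; any reader returning a (z, y, x) array works

# Hypothetical file: a widefield z-stack of a tumor spheroid, shape (z, y, x).
stack = tifffile.imread("spheroid_zstack.tif").astype(np.float32)

# Maximum-intensity projection: for every (y, x) pixel, keep the brightest
# value along z. This is the conventional step that collapses the 3D volume
# into a single 2D image for downstream analysis.
max_projection = stack.max(axis=0)

tifffile.imwrite("spheroid_max_projection.tif", max_projection)
```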

1 Point-by-point response to comments

1.1 Associate Editor's Comments
Associate Editor: Comments to the Author: When submitting your revision, we need you to address these additional requirements.

1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf and https://journals.plos.org/plosone/s/file?id=ba62/PLOSOne_formatting_sample_title_authors_affiliations.pdf

2. In your Methods section, please include information on the source or supplier of the cell lines used.

Author response: Thank you for this suggestion. We have added the information in bold in the paragraph directly under the Experiments heading: "To establish proof of concept across multiple biological protocols, we tested our proposed workflow on three common protocols for culturing live 3D tumor spheroids, each of which presents different imaging challenges. Four common immortalized cancer cell lines were used across the experiments: A549 (lung cancer), MCF-7 (breast cancer), MDA-MB-231 (breast cancer) and SK-OV-3 (ovarian cancer). Commercially available cell lines stably expressing a nuclear-restricted mKate2 FP were used for A549 and MCF-7, while standard SK-OV-3 (ECACC) and MDA-MB-231 (ATCC) cell lines were purchased and stably transfected with the Incucyte® NucLight Red lentivirus reagent (EF1 Alpha Promoter; Sartorius) per the manufacturer's instructions. Because the FP is nuclear restricted, it can be used to measure cell viability, i.e. a loss of viability compromises the nuclear membrane and causes a loss of fluorescence. The sections below describe these three protocols."

General questions
1. Is the manuscript technically sound, and do the data support the conclusions?
The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.
Reviewer 1: Yes Reviewer 2: Partly

2. Has the statistical analysis been performed appropriately and rigorously?
Reviewer 1: Yes Reviewer 2: No

Author response: In response to questions 1 and 2, we have added two additional metrics (MS-SSIM and MSE) to the results to further support our conclusions. While calculating these new metrics, we also found a bug in our metric calculation script for the single-spheroid data and have updated the reported values accordingly.
3. Have the authors made all data underlying the findings in their manuscript fully available?
The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data (e.g., participant privacy or use of data from a third party), those must be specified.

Reviewer 1: No Reviewer 2: No
Author response: Thank you for noticing this. While we previously only included the repository information in our submission letter, we have now added a sentence to the Experiment section where we point the reader to a link on figshare where the data is available (https://figshare.com/projects/Dataset_of_fluorecent_3D-samples_projected_to_2D/126629).

4. Is the manuscript presented in an intelligible fashion and written in standard English?
PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.
Reviewer 1: Yes Reviewer 2: No

Author response: We noted a number of grammatical mistakes and typos in our previous submission and have corrected them to the best of our ability. We hope that our manuscript now fulfills the language requirements of the journal.

Reviewer 1
1. Comparison study is not provided. Author response: We have added a number of additional references in the Introduction (see the response to comment 1 from Reviewer 2) to studies that also combine deep learning methods with fluorescence imaging. A direct comparison study is difficult to provide, since acquiring z-sweep images without hardware modifications or multiple z-slices has not been reported before. We have made this clearer by stating that this gap exists in the field of fluorescence imaging of complex 3D samples.
2. As you mentioned that 96 images were used for research. It is not available in public. Author response: Thank you for acknowledging this. We have added a sentence in the Experiment section where we point the reader to the figshare repository where the data are available: https://figshare.com/projects/Dataset_of_fluorecent_3D-samples_projected_to_2D/126629.
3. The deep learning model techniques, results need to compare. Author response: We have expanded the metrics in Table 1 to further compare the results of the deep learning models. Also, as mentioned in the response to comment 1 above, we have added references to similar work, although no direct comparison study could be found.

4. Provide in detail about OSA-CGAN. Author response: We thank you for this comment, as it made us realize that we did not properly define the OSA-CGAN architecture. This has been remedied and is now addressed in the last paragraph of the Conditional Generative Adversarial Network section. We hope this addition more clearly and explicitly defines both the OSA-CGAN and U-Net models: "We investigated the use of two different generators for our CGAN architectures, a U-Net generator and an OSA-U-Net variant, both described in the Image Generator section. The corresponding architectures are referred to as CGAN (U-Net generator) and OSA-CGAN (OSA-U-Net generator), respectively, and both versions use the PatchGAN discriminator described in the section above."

5. Overall the idea was good. Author response: Thank you!
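For context on the shared discriminator mentioned in the quoted passage, the sketch below shows a minimal pix2pix-style PatchGAN discriminator in PyTorch. It is not the manuscript's exact configuration: channel widths, depth, and the two-channel conditioning are assumptions, and the OSA (one-shot aggregation) generator blocks are not reproduced here.

```python
import torch
import torch.nn as nn

def _block(in_ch, out_ch, stride, norm=True):
    # Conv -> (BatchNorm) -> LeakyReLU, the standard PatchGAN building block.
    layers = [nn.Conv2d(in_ch, out_ch, kernel_size=4, stride=stride, padding=1)]
    if norm:
        layers.append(nn.BatchNorm2d(out_ch))
    layers.append(nn.LeakyReLU(0.2, inplace=True))
    return layers

class PatchGANDiscriminator(nn.Module):
    """Pix2pix-style PatchGAN discriminator, as used by both CGAN variants.
    It scores overlapping image patches as real or fake rather than the whole
    image. Channel widths and depth here are illustrative assumptions."""

    def __init__(self, in_channels=2, base=64):
        super().__init__()
        self.model = nn.Sequential(
            *_block(in_channels, base, stride=2, norm=False),
            *_block(base, base * 2, stride=2),
            *_block(base * 2, base * 4, stride=2),
            *_block(base * 4, base * 8, stride=1),
            nn.Conv2d(base * 8, 1, kernel_size=4, stride=1, padding=1),  # per-patch logit
        )

    def forward(self, zsweep, candidate):
        # Condition on the z-sweep input by concatenating it with either the
        # restored image (generator output) or the ground-truth projection.
        return self.model(torch.cat([zsweep, candidate], dim=1))

# Single-channel fluorescence images, 256x256 pixels.
disc = PatchGANDiscriminator(in_channels=2)
logits = disc(torch.randn(1, 1, 256, 256), torch.randn(1, 1, 256, 256))
print(logits.shape)  # torch.Size([1, 1, 30, 30]): one score per image patch
```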

Reviewer 2
Comments to the Author Widefield fluorescence microscopy is commonly used to study biological phenomena. However, fluorescent microscopic imaging of complex 3D samples, such as tumor spheroids, is burdensome since it often entails the collection of a stack of multiple optical sections along the z-dimension, which is time-consuming and has high risk of causing phototoxicity. To alleviate this problem, we propose a workflow combining axial z-sweep acquisition and deep learning-based image enhancement. This paper is novel and the contributions are good for a journal article. The revised version of the paper may be considered for publication in this journal.
1. The literature of the paper is poor, the authors may be consider the most recent articles for literature. Author response: We have added the information in bold in the Introduction: "Whereas traditional PSF-based image restoration is incredibly challenging for complex 3D samples, convolutional neural networks (CNNs) are now routinely used for such difficult computer vision problems. Across various 3D fluorescence imaging studies, CNNs are used for segmentation of nuclei [11,12] and cell bodies [13,14], image restoration [15,16], image super-resolution [17,18], as well as blind deconvolution in 2D widefield fluorescence imaging [19,20]. In the 3D fluorescence imaging space, recent studies have outlined methods that use CNNs for virtual refocusing to predict a user-defined 3D surface from a single-plane 2D fluorescence image [21,22], recurrent neural networks (RNNs) to reconstruct a complete 3D volume from far fewer optical sections than typically needed [23], and optimization of phase mask filters for PSF engineering to capture 2D extended depth-of-field images or 3D volumetric images [24,25]. While these notable advancements demonstrate the potential of CNNs to increase throughput for 3D fluorescence imaging, they have primarily been limited to high-resolution, subcellular imaging. These studies do not address the need to improve throughput for imaging large 3D models, such as tumor spheroids and organoids, which can be larger than a millimeter in diameter. Therefore, there exists a gap in the field whereby high-throughput fluorescence imaging of large, complex 3D samples can be accomplished without complicated hardware or acquiring multiple z-slices of samples."

2. The reasons to achieve the superior performance of the article may be included in the revised version of the article. Author response: We have clarified the advantage of our approach in the first paragraph of the Discussion: ". . . Notably, for our embedded tumor multi-spheroid samples, which required acquisition of data across a 1.5 mm volume, the conventional z-stack approach took 101 seconds for a single sample whereas the z-sweep only took 1.2 seconds, nearly achieving a 100-fold speedup. . . " We have also expanded on the reasons why neural networks are suitable for this task in the third paragraph of the Discussion: "Regardless of the neural network architecture used, images that result from a z-sweep acquisition are well-suited for a deep learning-based restoration workflow. While indeed appearing low in quality, z-sweep images effectively represent the fluorescent intensity of the sample integrated over the z-axis. Therefore, we speculate that the z-sweep provides a neural network with enough real information about the sample to accurately predict the restored, unintegrated fluorescent signal." (A minimal simulation sketch of this integration appears after the response to comment 5 below.)

3. List the limitations of the proposed work. Author response: The limitations of the proposed work are expanded on in the response to comment 2 above, where the acquisition speed is 1.2 seconds per sample. The almost 100-fold speedup compared with the z-stack approach is a big step in the right direction, but it still amounts to roughly two minutes per 96-well plate (1.2 s x 96 wells ≈ 115 s). This is enough for significantly higher throughput, but one can always aim for even faster acquisition in the future. Additionally, in the response to comment 4 below we expand on the post-processing of the images with the proposed work.
Furthermore, we have trained the model on one of the three common types of experimental preparations for culturing spheroids that we consider, and have observed transferability across these three different cell cultures. This is promising for further exploration of the z-sweep approach on other types of cell cultures, FPs, organoids, etc., but is not a guarantee. This is also discussed in the Discussion.

4. Discuss the complexity of the proposed model and how efficient it is over the existing ones. Author response: A z-sweep is acquired in 1.2 seconds, compared with the 101 seconds a z-stack takes to acquire, resulting in an almost 100-fold speedup. The main advantage of our approach therefore lies in image acquisition. However, the trained DL model does not cause a bottleneck either: an OSA-CGAN model evaluates a z-sweep image in 0.33 s on a laptop equipped with an NVIDIA Quadro RTX 3000 and requires 4015 MiB of VRAM. Running the same model on an Intel Core i7 CPU @ 2.30 GHz requires 10 s per image.

5. Provide the dataset details or citations. Author response: Thank you for acknowledging this. We have added a sentence in the Experiment section where we point the reader to the figshare repository where the data are available: https://figshare.com/projects/Dataset_of_fluorecent_3D-samples_projected_to_2D/126629.
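To make the intuition quoted in the response to comment 2 concrete, the sketch below (not part of the manuscript; the file name, normalization, and the use of a mean over z as a stand-in for a real z-sweep exposure are assumptions) simulates a z-sweep-like image from a conventional z-stack by integrating fluorescence over the z-axis and pairs it with a high-quality projection target of the kind a restoration network would be trained against.

```python
import numpy as np
import tifffile

# A z-sweep exposure keeps the shutter open while the focal plane moves through
# the sample, so the recorded image approximates the z-integrated fluorescence.
# Here that is approximated by averaging a conventional z-stack over z.
# "spheroid_zstack.tif" is a hypothetical example file, shape assumed (z, y, x).
stack = tifffile.imread("spheroid_zstack.tif").astype(np.float32)

zsweep_like = stack.mean(axis=0)   # integrated (averaged) intensity over z
target = stack.max(axis=0)         # high-quality max-intensity projection target

def rescale(img):
    # Rescale to [0, 1] so the images can be paired as (input, target)
    # for training a restoration network such as the CGANs described above.
    img = img - img.min()
    return img / max(img.max(), 1e-8)

pair = np.stack([rescale(zsweep_like), rescale(target)])
np.save("training_pair.npy", pair)
```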
6. There are several performance metrics in the literature, whereas the authors consider only few in the paper. Author response: We have added two additional metrics from the literature that we consider relevant: MS-SSIM, to extend the analysis to multiple scales and capture details at multiple resolutions, and MSE, since it is one of the most common metrics for assessing differences between individual pixels. These are added to Table 1 in the Results and are also discussed in the Discussion.
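As a point of reference for these two metrics (this is not the authors' evaluation script; the library choice, data range, and tensor shapes are assumptions), MSE and MS-SSIM between a restored image and its ground-truth projection can be computed along the following lines using PyTorch and the third-party pytorch-msssim package.

```python
import torch
from pytorch_msssim import ms_ssim  # third-party package, assumed installed

def evaluate_pair(restored, target):
    """Compute MSE and MS-SSIM for one image pair.

    restored, target: float tensors of shape (1, 1, H, W), scaled to [0, 1].
    H and W must be large enough (> 160 px) for the default 5-scale
    MS-SSIM pyramid.
    """
    mse = torch.mean((restored - target) ** 2).item()
    msssim = ms_ssim(restored, target, data_range=1.0, size_average=True).item()
    return mse, msssim

# Example with random data standing in for a restored z-sweep image and its
# ground-truth z-stack projection.
pred = torch.rand(1, 1, 512, 512)
gt = torch.rand(1, 1, 512, 512)
print(evaluate_pair(pred, gt))
```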