Abstract
Compressed fluorescence lifetime imaging (Compressed-FLIM) is a novel snapshot compressive imaging (SCI) method for single-shot widefield FLIM. This approach offers high temporal resolution and deep frame sequences, allowing the analysis of FLIM signals that follow complex decay models. However, the precision of Compressed-FLIM is limited by the reconstruction algorithm. To improve the reconstruction accuracy of Compressed-FLIM on large-scale FLIM problems, we developed a more effective combined prior model, 3DTGpV_net, based on the Plug-and-Play (PnP) framework. Extensive numerical simulations indicate that the proposed method eliminates the reconstruction artifacts caused by deep denoiser networks. Moreover, it improves the reconstruction accuracy by around 4 dB (peak signal-to-noise ratio; PSNR) over the state-of-the-art TV+FFDNet on the test data sets. We conducted single-shot FLIM experiments with different Rhodamine reagents, and the results show that, in practice, the proposed algorithm offers promising reconstruction performance and lower lifetime bias.
Citation: Ji C, Wang X, He K, Xue Y, Li Y, Xin L, et al. (2022) Compressed fluorescence lifetime imaging via combined TV-based and deep priors. PLoS ONE 17(8): e0271441. https://doi.org/10.1371/journal.pone.0271441
Editor: Li Zeng, Chongqing University, CHINA
Received: February 11, 2022; Accepted: June 30, 2022; Published: August 12, 2022
Copyright: © 2022 Ji et al. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Data Availability: All relevant data are within the manuscript and its Supporting Information files
Funding: The authors received no specific funding for this work.
Competing interests: The authors have declared that no competing interests exist.
1. Introduction
Widefield fluorescence lifetime imaging (FLIM) is widely used in biomedical diagnostics and flow quantitative measurements, such as cancer diagnosis and treatment monitoring [1, 2], identifying species concentration in reactive-flow systems [3], and understanding the transient evolutionary behavior of eddies in highly turbulent flames [4]. Most of these examples are non-repeatable transient events that demand a single-shot widefield measurement method. However, performing high-precision widefield lifetime measurements and quantitative analyses has always been a significant challenge in this field.
The traditional widefield FLIM approaches, including time-correlated single-photon counting (TCSPC) [5, 6], streak cameras [7], and single-photon avalanche diodes (SPAD) [8, 9], possess high temporal resolution. Nevertheless, they require repeated measurements to obtain the widefield fluorescence lifetime. Recently, a snapshot compressive imaging (SCI) method, compressed ultrafast photography (CUP), has emerged as a potential solution for snapshot widefield FLIM [10]. Compared to traditional methods, CUP is the only passive 2D technology with picosecond-to-femtosecond time resolution that can acquire a complete 2D transient process within a single snapshot.
The CUP system is a combination of a streak camera and compressive sensing methods. The typical CUP process maps 3D encoded data onto a 2D detection array and then restores the original information through compressed sensing algorithms. However, the data reconstruction step of CUP is a complex task. Significantly, the reconstruction quality of the image deteriorates rapidly with increasing sequence depth. To solve this issue, numerous algorithms have been designed by exploring underlying sparsity structures. Plug-and-Play (PnP) [11] is a typical SCI framework that allows the matching of flexible state-of-the-art forward models with advanced priors or denoising models. On this basis, GAP-TV has become a popular low-memory and fast SCI algorithm that combines generalized alternating projection (GAP) and Total Variation (TV) [12]. Denoisers based on block similarity, such as block-matching and 3D filtering (BM3D) [13] and weighted nuclear norm minimization (WNNM) [14], enjoy more effective sparsity representation than TV. However, these methods have high computational complexity and often take several hours, while the TV algorithm only takes a few minutes. As a result, BM3D and WNNM are rarely used in Compressed-FLIM, especially when real-time imaging is required.
In contrast to conventional denoisers, deep denoiser networks such as FFDNet [15] and FastDVDnet [16, 17] address the common sparsity representation problems of local similarity and motion compensation while enjoying fast computing speed. However, because their priors are limited by the training sets, deep denoiser networks tend to introduce artifacts in the reconstruction process, leading to misleading results. To take advantage of both the deep denoiser network and the TV model, H. Qiu et al. proposed a combined denoiser, TV+FFDNet, and achieved superior performance to previous algorithms [18].
Inspired by combined priors, we further explore a more effective combination of traditional denoisers and deep denoiser networks. In this paper, we devise a 3DTGpV denoiser by exploring the underlying space-time sparsity of signals and the advantages of the non-convex ℓp (0 < p < 1) norm in sparse minimization. Meanwhile, by further combining it with the video denoising network FastDVDnet, we develop a novel combined prior, named 3DTGpV_net.
We perform various simulations based on the CUP framework and determine that the proposed 3DTGpV_net prior offers a ~4 dB improvement in peak signal-to-noise ratio (PSNR) compared with the TV+FFDNet prior on the runner test set. Meanwhile, the reconstruction artifacts caused by deep denoiser networks are successfully eliminated. In addition, we conduct a widefield Compressed-FLIM experiment and obtain 70 consecutive high-resolution images within a single snapshot. Compared with the lifetime bias of data reconstructed with TV+FFDNet, our method provides higher lifetime evaluation accuracy.
2. Principle of compressed-FLIM
A schematic diagram of the compressed ultrafast photography-FLIM (Compressed-FLIM) system is illustrated in Fig 1. It comprises three parts: generation of widefield fluorescence signals, data acquisition, and data reconstruction. Unlike the previous scheme [10], we use a transmissive mask rather than a reflective digital micromirror device (DMD) as the spatial encoder.
Fig 1. M1: a pre-designed circular mask with a central cross; M2: the fixed binary mask; BS: beam splitter.
2.1 Generation of widefield fluorescence signals
A 515 nm femtosecond laser (200 fs) beam passes through a cylindrical lens to form a laser sheet. The laser sheet illuminates a Rhodamine water solution. Behind the Rhodamine solution, a 515 nm filter is positioned to block the excitation light. The fluorescence signals pass through a pre-designed circular mask (M1) cut with a central cross to aid spatial recognition, generating shaped fluorescence signals. The diameter of M1 is 35 mm.
2.2 Data acquisition
After passing through a lens, the shaped signals are divided into two beams by a beam splitter (BS). One sub-signal is directly detected with an external charge-coupled device (CCD) image sensor (Hamamatsu C11440). The other is spatially encoded by a binary mask M2 and recorded by a Streak Camera (XIOPM 5200). The layout of M2 is a random pattern with a pixel resolution of 250 × 250, and the size of a single pixel is 20 × 20 μm. To ensure complete imaging of the targets, the slit of the Streak Camera is fully open (~5 mm), and the image plane of the Streak Camera is adjusted at M2.
In the acquisition section of the Streak Camera, the encoded signal undergoes photoelectric conversion at the cathode; the electric signals at different times are then deflected by the ramp voltage to various positions on the fluorescent screen. Finally, photons are emitted and collected by the internal CCD (512 × 512 binned pixels; 4 × 4 binning). The size of the binned pixels is 26 × 26 μm.
A DMD is the typical encoder in the CUP system. However, for weak fluorescence acquisition, the fixed binary mask [19] significantly improves the signal-to-noise ratio (SNR) owing to its transmission characteristics. We randomly generated multiple groups of coding layouts in MATLAB and selected the best coding layout through simulation results. The signal transmittance of the mask is ultimately set to 25%.
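As a minimal illustration of this coding-layout selection (the original layouts were generated in MATLAB; the Python helper below, its function name, and the RNG seeds are our illustrative choices, while the 250 × 250 size and 25% transmittance follow the values stated above), one could generate candidate binary masks as follows:

```python
import numpy as np

def generate_mask(shape=(250, 250), transmittance=0.25, seed=None):
    """Generate a random binary coding mask with the given open-pixel ratio."""
    rng = np.random.default_rng(seed)
    # 1 = transparent pixel, 0 = opaque pixel; ~25% of pixels transmit light.
    return (rng.random(shape) < transmittance).astype(np.uint8)

# Generate several candidate layouts; the best one would then be chosen
# by running the reconstruction simulation with each candidate.
candidates = [generate_mask(seed=s) for s in range(10)]
print([m.mean() for m in candidates])  # empirical transmittance of each layout
```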
2.3 Data reconstruction
The fluorescence signals can be regarded as a data cube I(x, y, t). In the external CCD view, the cube is directly integrated along the time direction, and the measured data from the CCD can be expressed as Ec = ∫I(x, y, t)dt. From the perspective of the Streak Camera, operator T carries out spatial coding of the cube, and operator S executes the shearing of signals from the coding cube to the tilted coding cube. Ultimately, the accumulation of tilted coding cubes along the time direction is represented by operator C. The entire data acquisition process in the Streak Camera view can be described as Es = TSC I(x, y, t).
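For intuition, a minimal discrete sketch of this forward model is given below (NumPy, treating the data cube as a (t, y, x) array; the shearing of exactly one pixel row per frame and the array shapes are illustrative assumptions, not the exact system calibration):

```python
import numpy as np

def forward_model(cube, mask):
    """Simulate the two CUP measurements from a fluorescence data cube.

    cube: (T, H, W) space-time fluorescence intensity I(x, y, t)
    mask: (H, W) binary coding mask M2
    """
    T, H, W = cube.shape
    # External CCD view: direct integration along time, E_c = sum over t of I.
    E_c = cube.sum(axis=0)

    # Streak camera view: encode (T), shear (S), then accumulate (C).
    coded = cube * mask[None, :, :]        # operator T: spatial coding
    E_s = np.zeros((H + T - 1, W))
    for t in range(T):
        E_s[t:t + H, :] += coded[t]        # operators S and C: shift one row
    return E_s, E_c                        # per frame, then accumulate
```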
Data reconstruction is an ill-conditioned inverse problem. Adding sparsity constraints to the least-squares formulation enables stable reconstruction. The optimization problem of CUP-FLIM can be expressed as:
$$\hat{I} = \arg\min_{I}\ \frac{1}{2}\left\|E_s - \mathbf{TSC}\,I(x,y,t)\right\|_2^2 + \frac{\mu}{2}\left\|E_c - \int I(x,y,t)\,dt\right\|_2^2 + \lambda\,\varphi(I) \tag{1}$$
where the first and second terms are fidelity terms for the data collected by the Streak Camera and the external CCD, respectively. The last term φ(I) represents the prior used to impose sparsity on the signals, while μ and λ are weight parameters. In the next section, we describe the implementation of the proposed algorithm and the proposed combined prior.
3. Reconstruction algorithm
3.1 3DTGpV priors
The prior plays a key role in the reconstruction algorithms of compressed sensing. The ℓ0 norm prior is the sparsest representation, as it counts the number of nonzero entries in a signal. However, it is extremely challenging to process numerically. To resolve this non-convergence dilemma, Donoho et al. verified the approximate equivalence of the ℓ1 and ℓ0 norms [20].
Formally, the ℓ1 norm minimization can be expressed as
$$\min_{x}\ \|x\|_1 \quad \text{s.t.}\quad Ax = b \tag{2}$$
In image processing, by considering the spatial smoothness of natural signals, total variation (TV), a generalized form of the ℓ1 norm applied to the image gradient, has been proved to yield a far sparser representation, as shown in Fig 2A. The TV prior obeys
$$\|u\|_{TV} = \sum_{i,j}\sqrt{(D_x u)_{i,j}^2 + (D_y u)_{i,j}^2} \tag{3}$$

where Dx and Dy denote the horizontal and vertical finite-difference operators.
Fig 2. Associated elements among the different priors: (a) TV; (b) TGV; (c) 3DTV.
Next, we will briefly introduce three generalized forms (3DTV, TGV, TpV) based on the TV prior that strengthen sparsity representation.
3.1.1 Stretch in the spatial domain–TGV.
Total generalized variation (TGV) is a second-order gradient minimization proposed by Bredies et al. [21]. It incorporates more adjacent elements than TV and regards the second-order gradient of the image as the sparse coefficients, as Fig 2B illustrates. For mathematical imaging problems, TGV is an effective approach that enhances the details of high-frequency signals and eliminates staircasing effects [22]. It can be expressed as
$$TGV_{\alpha}^{2}(u) = \min_{w}\ \alpha_1\left\|\nabla u - w\right\|_1 + \alpha_0\left\|\mathcal{E}(w)\right\|_1 \tag{5}$$

where w is an auxiliary vector field, ℰ(w) is its symmetrized derivative, and α0, α1 are positive weights.
3.1.2 Stretch in the time domain–3DTV.
The TV prior merely considers image similarity in continuous 2D space but ignores the similarity of adjacent elements [23] in the time direction. 3DTV introduces the 3D sparsity constraint of fluorescence signals shown in Fig 2C, and can be represented as
$$\|u\|_{3DTV} = \sum_{i,j,k}\sqrt{(D_x u)_{i,j,k}^2 + (D_y u)_{i,j,k}^2 + (D_t u)_{i,j,k}^2} \tag{6}$$
Furthermore, given that the time-domain correlation decreases with the increase of motion scale, we add a time-domain weight parameter τ (0 ≤ τ ≤ 1) to flexibly balance the relevancy between motion scale and time-domain correlation. Therefore, Eq (6) can be rewritten as
$$\|u\|_{3DTV,\tau} = \sum_{i,j,k}\sqrt{(D_x u)_{i,j,k}^2 + (D_y u)_{i,j,k}^2 + \tau\,(D_t u)_{i,j,k}^2} \tag{7}$$
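A minimal sketch of this τ-weighted 3D total variation (assuming an isotropic form as in Eq (7), forward differences, and replication at the boundaries; the function name is ours):

```python
import numpy as np

def tv3d_weighted(u, tau=0.2):
    """tau-weighted 3D total variation of a (T, H, W) video cube (Eq (7))."""
    # Forward differences along time, vertical, and horizontal axes.
    dt = np.diff(u, axis=0, append=u[-1:, :, :])
    dy = np.diff(u, axis=1, append=u[:, -1:, :])
    dx = np.diff(u, axis=2, append=u[:, :, -1:])
    # Isotropic coupling of the three directions, with tau down-weighting
    # the temporal gradient when the motion scale is large.
    return np.sqrt(dx**2 + dy**2 + tau * dt**2).sum()
```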
3.1.3 ℓp (0 < p < 1) norm of gradient–TpV.
The ℓp (0 < p < 1) norm is defined as $\|x\|_p^p = \sum_i |x_i|^p$, which is closer to the ℓ0 norm than the ℓ1 norm in mathematical form, thus approaching the sparsest solution. Our previous work has proved that optimization based on the ℓp norm imposes stronger sparsity constraints, even though it leads to a non-convex optimization problem [24]. With its superior sparsity performance, the TpV prior eliminates artifacts and achieves superior reconstruction results [25, 26]. The TpV prior is expressed as
$$\|u\|_{T_pV} = \sum_{i,j}\left((D_x u)_{i,j}^2 + (D_y u)_{i,j}^2\right)^{p/2} \tag{8}$$
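The scalar ℓp proximal (generalized shrinkage) operator used to minimize such non-convex terms can be sketched as follows, following the generalized soft-thresholding scheme of Zuo et al. [26] (the fixed-point iteration count and the helper name are our illustrative choices):

```python
import numpy as np

def gst(y, lam, p, iters=5):
    """Generalized soft-thresholding for min_x 0.5*(x - y)^2 + lam*|x|^p,
    following the GISA scheme of Zuo et al. [26]."""
    # Threshold below which the minimizer is exactly zero.
    tau = (2 * lam * (1 - p)) ** (1 / (2 - p)) \
        + lam * p * (2 * lam * (1 - p)) ** ((p - 1) / (2 - p))
    y = np.asarray(y, dtype=float)
    x = np.zeros_like(y)
    big = np.abs(y) > tau
    # Fixed-point iteration x_{k+1} = |y| - lam*p*x_k^(p-1), started at |y|.
    xk = np.abs(y[big])
    for _ in range(iters):
        xk = np.abs(y[big]) - lam * p * xk ** (p - 1)
    x[big] = np.sign(y[big]) * xk
    return x
```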
By combining the different merits of the three TV-based priors, we propose the following 3DTGpV prior:
$$\|u\|_{3DTG_pV} = \min_{w}\ \alpha_1\left\|\nabla_{3D,\tau}\,u - w\right\|_p^p + \alpha_0\left\|\mathcal{E}(w)\right\|_p^p \tag{9}$$

where ∇3D,τ denotes the τ-weighted spatiotemporal gradient of Eq (7).
3.2 PnP-3DTGpV_net algorithm
In this section, we propose a novel PnP-framework algorithm, PnP-3DTGpV_net, which combines the 3DTGpV prior with a deep denoiser network.
We introduce the overall algorithm flow presented in Fig 3 and Algorithm 1. The sparse signals of interest u are reconstructed by solving the following constrained minimization problem
$$\min_{u,v}\ \frac{1}{2}\|b_s - A_s u\|_2^2 + \frac{\mu}{2}\|b_c - A_c u\|_2^2 + \lambda\,\varphi(v)\quad \text{s.t.}\quad u = v \tag{10}$$
where bs denotes the data measured by the Streak Camera, bc the data measured by the external CCD, As and Ac represent the corresponding projection matrices, v is an auxiliary variable, and λ is an added weight.
According to the generalized alternating projection (GAP) algorithm [12], we update the fidelity term and the prior term separately. Fig 3 displays the workflow of the PnP-3DTGpV_net algorithm. For each iteration stage, we first apply the Euclidean projection to update u(t):
$$u^{(t)} = v^{(t-1)} + A_s^{T}\left(A_s A_s^{T}\right)^{-1}\left(b_s - A_s v^{(t-1)}\right) + \mu\,A_c^{T}\left(A_c A_c^{T}\right)^{-1}\left(b_c - A_c v^{(t-1)}\right) \tag{11}$$
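In SCI systems with binary coding masks, AsAsᵀ is diagonal, so this projection can be computed element-wise. A minimal sketch of the streak-camera part of the projection (assuming the diagonal-AAᵀ property, that forward/adjoint are callables wrapping the streak-camera branch of the forward model above and its transpose, and an epsilon guard of our own):

```python
import numpy as np

def gap_projection(v, b_s, forward, adjoint, eps=1e-8):
    """One GAP Euclidean projection step u = v + A^T (A A^T)^-1 (b - A v)
    for the streak-camera measurement (streak-camera term of Eq (11)).

    forward/adjoint : callables implementing A_s and A_s^T
    """
    # For a binary coding mask, A A^T is diagonal and its entries equal
    # the forward projection of an all-ones cube.
    diag = forward(np.ones_like(v))
    residual = (b_s - forward(v)) / (diag + eps)
    return v + adjoint(residual)
```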
The update of v is a denoising problem; we execute the denoising process using 3DTGpV and FastDVDnet [16], respectively.
For the 3DTGpV update section
$$v_{TV}^{(t)} = \arg\min_{v}\ \frac{1}{2}\left\|v - u^{(t)}\right\|_2^2 + \lambda\,\|v\|_{3DTG_pV} \tag{12}$$

which is solved coefficient-wise by the generalized shrinkage fixed-point iteration

$$x^{(k+1)} = |y| - \lambda\,p\left(x^{(k)}\right)^{p-1},\qquad x^{(0)} = |y| \tag{13}$$
For the Deep denoiser network update section
$$v_{net}^{(t)} = \mathcal{D}_{\mathrm{FastDVDnet}}\left(u^{(t)},\,\sigma_t\right) \tag{15}$$

where 𝒟FastDVDnet denotes the pretrained FastDVDnet denoiser and σt is the assumed noise level at iteration t.
Algorithm 1. PnP-3DTGpV_net framework.
- Input: As, Ac, bs, bc; given p ∈ (0, 1)
- Initialize: v(0) = AsTbs, μ = 0.1, λ = 0.07
- for iteration t in range(0, 250):
- Update the Streak Camera fidelity correction from bs and As;
- Update the CCD fidelity correction from bc and Ac;
- Update u(t) by the Euclidean projection of Eq (11);
- 3DTGpV denoising: update vTV(t) by Eqs (12) and (13);
- Deep network denoising: update vnet(t) by Eq (15);
- Update v(t) by combining the two denoised estimates
- Obtain reconstruction result: u
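A compact sketch of this loop is given below (assuming the gap_projection helper above, a 3DTGpV-based denoiser, and a FastDVDnet wrapper are available as callables; averaging the two denoised estimates and the decreasing noise-level schedule are our illustrative choices, not the exact combination rule of the paper):

```python
import numpy as np

def pnp_3dtgpv_net(b_s, b_c, forward_s, adjoint_s, forward_c, adjoint_c,
                   denoise_tv, denoise_net, n_iters=250, mu=0.1, lam=0.07):
    """PnP reconstruction loop sketch following Algorithm 1."""
    v = adjoint_s(b_s)                     # initialize v(0) = A_s^T b_s
    for t in range(n_iters):
        # Fidelity update: Euclidean projections onto both measurement sets.
        u = gap_projection(v, b_s, forward_s, adjoint_s)
        u = u + mu * (gap_projection(u, b_c, forward_c, adjoint_c) - u)
        # Prior update: 3DTGpV denoising and deep video denoising.
        sigma = max(50 - t, 5) / 255.0     # hypothetical decreasing noise level
        v_tv = denoise_tv(u, lam)
        v_net = denoise_net(u, sigma)
        v = 0.5 * (v_tv + v_net)           # combine the two denoised estimates
    return v
```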
4. Simulation results
In the simulation, we compare the reconstruction performances of six priors (TV, 3DTGpV, BM3D, TV+FFDNet, TV+FastDVDnet, and 3DTGpV_net) on the widely used drop and runner datasets. Each dataset comprises 30 video clips. For initialization, we set v(0) = AsTbs, z(0) = 0, λ = 0.07, μ = 0.1, and τ = 0.2. Each algorithm performs 250 iterations independently. For the deep denoiser networks, we directly use the FFDNet model and parameters from https://github.com/cszn/KAIR, and the FastDVDnet model and parameters from https://github.com/m-tassano/fastdvdnet. The drop and runner datasets are from https://github.com/zsm1211/PnP-SCI/tree/master/dataset/simdata/benchmark.
Figs 4 and 5 present the reconstructed frames (see Visualization 1 and Visualization 2) restored with different priors on the two above-mentioned datasets. The 3DTGpV prior recovers finer detail than the TV prior. Although BM3D has stronger denoising ability than the TV-based methods, excessive smoothing leads to the loss of image detail. The combined priors based on TV and deep denoiser networks (TV+FFDNet and TV+FastDVDnet) provide better reconstruction contrast and detail than traditional priors, but they introduce unsatisfactory artifacts in the reconstructed images. Our proposed combined prior, 3DTGpV_net, succeeds in eliminating these artifacts, leading to more accurate representations of the original images.
We evaluate the quality of the reconstructed images by two indicators: peak signal-to-noise ratio (PSNR) and structural similarity (SSIM).
The PSNR can be calculated by
$$\mathrm{PSNR} = 10\log_{10}\frac{\mathrm{MAX}^2}{\dfrac{1}{mn}\sum_{i=1}^{m}\sum_{j=1}^{n}\left[x(i,j) - y(i,j)\right]^2} \tag{16}$$
where x and y represent the original image and the reconstructed image, respectively, m and n indicate the height and width of the image, and MAX denotes the maximum possible pixel value.
The SSIM can be calculated by
$$\mathrm{SSIM}(x,y) = \frac{\left(2\mu_x\mu_y + c_1\right)\left(2\sigma_{xy} + c_2\right)}{\left(\mu_x^2 + \mu_y^2 + c_1\right)\left(\sigma_x^2 + \sigma_y^2 + c_2\right)} \tag{17}$$
where μx and μy represent the mean values of the original and reconstructed images, σx² and σy² are the corresponding variances, σxy is the covariance, and c1 and c2 are small constants that stabilize the division.
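A minimal sketch of these two metrics for 8-bit images (the constants c1 and c2 follow the common convention c1 = (0.01·MAX)² and c2 = (0.03·MAX)², and this global-statistics SSIM omits the usual local windowing):

```python
import numpy as np

def psnr(x, y, max_val=255.0):
    """Peak signal-to-noise ratio between original x and reconstruction y (Eq (16))."""
    mse = np.mean((x.astype(float) - y.astype(float)) ** 2)
    return 10 * np.log10(max_val ** 2 / mse)

def ssim_global(x, y, max_val=255.0):
    """Global (un-windowed) structural similarity index (Eq (17))."""
    x, y = x.astype(float), y.astype(float)
    c1, c2 = (0.01 * max_val) ** 2, (0.03 * max_val) ** 2
    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(), y.var()
    cov_xy = ((x - mu_x) * (y - mu_y)).mean()
    return ((2 * mu_x * mu_y + c1) * (2 * cov_xy + c2)) / \
           ((mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2))
```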
Table 1 presents the average PSNR and SSIM results. We can conclude that the 3DTGpV_net prior outperforms the other priors in both PSNR and SSIM. Significantly, the proposed prior improves the reconstruction accuracy by approximately 4 dB (PSNR) over the state-of-the-art TV+FFDNet on the runner test set.
5. Experiments
In the experiments, we record widefield fluorescence data of Rhodamine 6G and Rhodamine B by CUP. The results are reconstructed using both the PnP-TV+FFDNet algorithm and the proposed PnP-3DTGpV_net algorithm. We set the CUP time resolution to 330 ps. The reconstruction process is implemented in Ubuntu 20.04 with an NVIDIA GeForce GTX 1650Ti GPU.
Fig 6A presents the streak camera measurement data for Rhodamine 6G. The scanning direction of the data is from top to bottom. Fig 6B and 6C show the widefield fluorescence data rebuilt by the PnP-TV+FFDNet and PnP-3DTGpV_net algorithms, respectively. The reconstructed movies are shown in Visualization 3 and Visualization 4. By comparing the two sets of data, it is apparent that our proposed algorithm achieves smoother reconstruction results with fewer artifacts.
Fig 6. Measurement and reconstruction data of Rhodamine 6G: (a) Streak Camera image; (b) Reconstructed frames using the PnP-TV+FFDNet algorithm; (c) Reconstructed frames using the PnP-3DTGpV_net algorithm.
In Fig 7A, the measured data of Rhodamine B are displayed. Rhodamine B has a shorter emission duration than Rhodamine 6G. The corresponding rebuilt results are shown in Fig 7B and 7C, and the movies are presented in Visualization 5 and Visualization 6. The results indicate that the proposed algorithm achieves better detail reconstruction in various fluorescence environments.
Fig 7. Measurement and reconstruction data of Rhodamine B: (a) Streak Camera image; (b) Reconstructed frames using the PnP-TV+FFDNet algorithm; (c) Reconstructed frames using the PnP-3DTGpV_net algorithm.
To further analyze the measurement accuracy of the widefield fluorescence lifetime, we perform exponential fitting with the least-squares method, based on a mono-exponential decay model for each pixel [27].
The measured decay h(t) can be expressed as
$$h(t) = A\,e^{-t/\tau} * \mathrm{irf}(t) + \varepsilon \tag{18}$$
where A represents the amplitude, τ denotes the lifetime, ε signifies noise, and irf(t) is the instrument response function (IRF) of the measurement system. Since the full width at half maximum (FWHM) of the laser pulse is 200 fs, irf(t) can be regarded as a delta function for fluorescence decays with nanosecond lifetimes.
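A per-pixel fitting sketch under this delta-IRF assumption (using scipy.optimize.curve_fit; the time axis, initial guesses, and fit window are illustrative choices):

```python
import numpy as np
from scipy.optimize import curve_fit

def fit_lifetime(decay, dt=0.33):
    """Least-squares mono-exponential fit of one pixel's decay trace.

    decay : 1D array of reconstructed intensities for one pixel
    dt    : frame interval in ns (330 ps time resolution)
    """
    t = np.arange(len(decay)) * dt
    # Fit from the peak onward, where the delta-IRF model h(t) = A*exp(-t/tau) holds.
    k0 = int(np.argmax(decay))
    model = lambda tt, A, tau: A * np.exp(-tt / tau)
    popt, _ = curve_fit(model, t[k0:] - t[k0], decay[k0:],
                        p0=(decay[k0], 2.0), maxfev=5000)
    return popt[1]  # fitted lifetime tau in ns

# Example: a lifetime map over a reconstructed cube of shape (T, H, W) could be
# built by applying fit_lifetime to cube[:, i, j] for every pixel (i, j).
```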
Fig 8A and 8B display the two groups of 2D lifetime images rebuilt using the PnP-TV+FFDNet and PnP-3DTGpV_net algorithms, respectively. In addition, Fig 9 shows the reconstruction lifetime bias. For Rhodamine 6G (R6G), the mean lifetime and standard deviation of the proposed algorithm are 3.91 ns and 0.57 ns, respectively, while the corresponding values for PnP-TV+FFDNet are 4.41 ns and 1.1 ns. For Rhodamine B (RB), the mean lifetime and standard deviation of the proposed algorithm are 1.68 ns and 0.52 ns, while for PnP-TV+FFDNet they are 1.72 ns and 0.54 ns.
In the slit-scanning mode of the Streak Camera, we re-acquire non-superimposed fluorescence lifetime data as a reference. The single-exponential fitting results for Rhodamine 6G and Rhodamine B are 3.62 ns and 1.51 ns, respectively. Relative to these references, the proposed PnP-3DTGpV_net algorithm yields lifetime biases of 0.29 ns and 0.17 ns, lower than the 0.79 ns and 0.21 ns biases of the PnP-TV+FFDNet algorithm.
6. Conclusion
In this study, we propose 3DTGpV_net, a highly effective combined prior for Compressed-FLIM. Results from numerous simulations and experiments confirm that our proposed method has better reconstruction performance than existing algorithms and presents higher evaluation accuracy for widefield FLIM. This study further confirms that combined priors can effectively complement the advantages of traditional priors and deep denoiser networks to improve the reconstruction performance of compressive video imaging technology. Lastly, we note that our algorithm is a general framework that can be applied to related SCI systems.
Acknowledgments
The authors would also like to thank Prof. Jinshou Tian and Prof. Liang Sheng for their valuable help and contribution.
References
- 1. Ouyang Y, Liu Y, Wang ZM, Liu Z, Wu M. FLIM as a Promising Tool for Cancer Diagnosis and Treatment Monitoring. Nano-Micro Lett. 2021;13: 133. pmid:34138374
- 2. Marcu L. Fluorescence Lifetime Techniques in Medical Applications. Ann Biomed Eng. 2012;40: 304–331. pmid:22273730
- 3. Wang Z, Stamatoglou P, Li Z, Aldén M, Richter M. Ultra-high-speed PLIF imaging for simultaneous visualization of multiple species in turbulent flames. Opt Express. 2017;25: 30214. pmid:29221053
- 4. Zhou B, Brackmann C, Li Q, Wang Z, Petersson P, Li Z, et al. Distributed reactions in highly turbulent premixed methane/air flames. Combustion and Flame. 2015;162: 2937–2953.
- 5. Fruhwirth GO, Ameer-Beg S, Cook R, Watson T, Ng T, Festy F. Fluorescence lifetime endoscopy using TCSPC for the measurement of FRET in live cells. Opt Express. 2010;18: 11148. pmid:20588974
- 6. Isbaner S, Karedla N, Ruhlandt D, Stein SC, Chizhik A, Gregor I, et al. Dead-time correction of fluorescence lifetime measurements and fluorescence lifetime imaging. Opt Express. 2016;24: 9429. pmid:27137558
- 7. Krishnan RV, Saitoh H, Terada H, Centonze VE, Herman B. Development of a multiphoton fluorescence lifetime imaging microscopy system using a streak camera. Review of Scientific Instruments. 2003;74: 2714–2721.
- 8. Ulku AC, Bruschini C, Antolovic IM, Kuo Y, Ankri R, Weiss S, et al. A 512 × 512 SPAD Image Sensor With Integrated Gating for Widefield FLIM. IEEE J Select Topics Quantum Electron. 2019;25: 1–12. pmid:31156324
- 9. Zickus V, Wu M-L, Morimoto K, Kapitany V, Fatima A, Turpin A, et al. Fluorescence lifetime imaging with a megapixel SPAD camera and neural network lifetime estimation. Sci Rep. 2020;10: 20986. pmid:33268900
- 10. Ma Y, Lee Y, Best-Popescu C, Gao L. High-speed compressed-sensing fluorescence lifetime imaging microscopy of live cells. Proc Natl Acad Sci USA. 2021;118: e2004176118. pmid:33431663
- 11. Venkatakrishnan SV, Bouman CA, Wohlberg B. Plug-and-Play priors for model based reconstruction. 2013 IEEE Global Conference on Signal and Information Processing. Austin, TX, USA: IEEE; 2013. pp. 945–948. https://doi.org/10.1109/GlobalSIP.2013.6737048
- 12. Yuan X. Generalized alternating projection based total variation minimization for compressive sensing. 2016 IEEE International Conference on Image Processing (ICIP). Phoenix, AZ, USA: IEEE; 2016. pp. 2539–2543. https://doi.org/10.1109/ICIP.2016.7532817
- 13. Ehret T, Arias P. Implementation of the VBM3D Video Denoising Method and Some Variants. arXiv:200101802 [cs]. 2020 [cited 22 Jan 2022]. Available: http://arxiv.org/abs/2001.01802
- 14. Liu Y, Yuan X, Suo J, Brady DJ, Dai Q. Rank Minimization for Snapshot Compressive Imaging. IEEE Trans Pattern Anal Mach Intell. 2019;41: 2990–3006. pmid:30295611
- 15. Zhang K, Zuo W, Zhang L. FFDNet: Toward a Fast and Flexible Solution for CNN based Image Denoising. IEEE Trans on Image Process. 2018;27: 4608–4622. pmid:29993717
- 16. Tassano M, Delon J, Veit T. FastDVDnet: Towards Real-Time Deep Video Denoising Without Flow Estimation. arXiv:190701361 [cs, eess]. 2020 [cited 24 Oct 2021]. Available: http://arxiv.org/abs/1907.01361
- 17. Yuan X, Liu Y, Suo J, Dai Q. Plug-and-Play Algorithms for Large-Scale Snapshot Compressive Imaging. 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Seattle, WA, USA: IEEE; 2020. pp. 1444–1454. https://doi.org/10.1109/CVPR42600.2020.00152
- 18. Qiu H, Wang Y, Meng D. Effective Snapshot Compressive-spectral Imaging via Deep Denoising and Total Variation Priors. 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Nashville, TN, USA: IEEE; 2021. pp. 9123–9132. https://doi.org/10.1109/CVPR46437.2021.00901
- 19. Yao J, Qi D, Yang C, Cao F, He Y, Ding P, et al. Multichannel-coupled compressed ultrafast photography. J Opt. 2020;22: 085701.
- 20. Donoho DL. Neighborly Polytopes and Sparse Solution of Underdetermined Linear Equations.
- 21. Bredies K, Kunisch K, Pock T. Total Generalized Variation. SIAM J Imaging Sci. 2010;3: 492–526.
- 22. Zhang H, Wang L, Yan B, Li L, Cai A, Hu G. Constrained Total Generalized p-Variation Minimization for Few-View X-Ray Computed Tomography Image Reconstruction. Zeng L, editor. PLoS ONE. 2016;11: e0149899. pmid:26901410
- 23. Le Montagner Y, Angelini E, Olivo-Marin J-C. Video reconstruction using compressed sensing measurements and 3d total variation regularization for bio-imaging applications. 2012 19th IEEE International Conference on Image Processing. Orlando, FL, USA: IEEE; 2012. pp. 917–920. https://doi.org/10.1109/ICIP.2012.6467010
- 24. Ji C, Tian J, Sheng L, He K, Xin L, Yan X, et al. Reconstruction of compressed video via non-convex minimization. AIP Advances. 2020;10: 115207.
- 25. Cai S, Liu K, Yang M, Tang J, Xiong X, Xiao M. A new development of non-local image denoising using fixed-point iteration for non-convex ℓp sparse optimization. Hatt M, editor. PLoS ONE. 2018;13: e0208503. pmid:30540797
- 26. Zuo W, Meng D, Zhang L, Feng X, Zhang D. A Generalized Iterated Shrinkage Algorithm for Non-convex Sparse Coding. 2013 IEEE International Conference on Computer Vision. Sydney, Australia: IEEE; 2013. pp. 217–224. https://doi.org/10.1109/ICCV.2013.34
- 27. Li Y, Natakorn S, Chen Y, Safar M, Cunningham M, Tian J, et al. Investigations on Average Fluorescence Lifetimes for Visualizing Multi-Exponential Decays. Front Phys. 2020;8: 576862.