
Guided-deconvolution for correlative light and electron microscopy

  • Fengjiao Ma,

    Roles Data curation, Formal analysis, Investigation, Methodology, Software, Validation, Visualization, Writing – original draft, Writing – review & editing

    Affiliations Institute of Physical Chemistry and Abbe Center of Photonics, University of Jena, Jena, Thuringia, Germany, Leibniz Institute of Photonic Technology, Jena, Thuringia, Germany, Jena Center for Soft Matter, University of Jena, Jena, Thuringia, Germany

  • Rainer Kaufmann,

    Roles Data curation, Resources, Writing – review & editing

    Affiliations Centre for Structural Systems Biology, Hamburg, Germany, Department of Physics, University of Hamburg, Hamburg, Germany

  • Jaroslaw Sedzicki,

    Roles Data curation, Resources, Writing – review & editing

    Affiliation Biozentrum, University of Basel, Basel, Switzerland

  • Zoltán Cseresnyés,

    Roles Data curation, Methodology, Resources, Software, Writing – review & editing

    Affiliation Applied Systems Biology, Leibniz Institute for Natural Product Research and Infection Biology - Hans Knöll Institute, Jena, Thuringia, Germany

  • Christoph Dehio,

    Roles Data curation, Resources, Writing – review & editing

    Affiliation Biozentrum, University of Basel, Basel, Switzerland

  • Stephanie Hoeppener,

    Roles Conceptualization, Data curation, Funding acquisition, Investigation, Resources, Supervision, Visualization, Writing – review & editing

    Affiliations Jena Center for Soft Matter, University of Jena, Jena, Thuringia, Germany, Laboratory of Organic Chemistry and Macromolecular Chemistry, University of Jena, Jena, Thuringia, Germany

  • Marc Thilo Figge,

    Roles Conceptualization, Data curation, Funding acquisition, Software, Visualization, Writing – review & editing

    Affiliations Jena Center for Soft Matter, University of Jena, Jena, Thuringia, Germany, Applied Systems Biology, Leibniz Institute for Natural Product Research and Infection Biology - Hans Knöll Institute, Jena, Thuringia, Germany, Institute of Microbiology, Faculty of Biological Sciences, University of Jena, Jena, Thuringia, Germany

  • Rainer Heintzmann

    Roles Conceptualization, Funding acquisition, Methodology, Project administration, Software, Supervision, Writing – review & editing

    Affiliations Institute of Physical Chemistry and Abbe Center of Photonics, University of Jena, Jena, Thuringia, Germany, Leibniz Institute of Photonic Technology, Jena, Thuringia, Germany, Jena Center for Soft Matter, University of Jena, Jena, Thuringia, Germany


Abstract

Correlative light and electron microscopy is a powerful tool to study the internal structure of cells, combining the mutual benefits of light microscopy (LM) and electron microscopy (EM) information. EM images contain only contrast information, so some detailed structures cannot be identified from these images alone, especially where different cell organelles are in contact. However, the classical approach of overlaying LM onto EM images to assign functional information to structural detail is hampered by the large discrepancy in structural detail visible in the LM images. This paper investigates an optimized approach, which we call EM-guided deconvolution. It applies to structures in living cells imaged before fixation as well as to previously fixed samples. It attempts to automatically assign fluorescence-labeled structures to structural details visible in the EM image, bridging the gaps in both resolution and specificity between the two imaging modes. We tested our approach on simulations, correlative data of multi-color beads and previously published data of biological samples.


Introduction

Electron microscopy (EM) of biological samples provides the opportunity to image cellular structures down to the level of detail of a single membrane. Yet its low specificity for sample structures provides little functional information: EM images typically contain only unspecific contrast, in contrast to fluorescence microscopy, where dyes can easily be targeted to specific molecules. Several technologies have been developed to add specificity to EM structures [1–3]. One approach is to label these structures with a fluorescent dye and measure the same region of interest using light microscopy [4–7], a technique called correlative microscopy. The LM imaging of this correlative approach is applied to living structures shortly before fixation or to previously fixed samples. Yet, even with super-resolution light microscopy techniques, the resolution of the light microscopy images is still far from the EM structural resolution. If the correlative images are simply overlaid, it is often hard to directly identify corresponding objects and assign functional LM information to structural EM features.

Light microscopy deconvolution [8–13] is a method that exploits knowledge of the imaging process, modeled as a convolution of the sample with a point spread function (PSF), to mathematically restore the sample information. Many deconvolution methods and regularization schemes have been proposed in the past [14], yet due to the ill-posed nature of the problem [15], the restoration of very high spatial frequency information is still very limited.

In this manuscript, we investigate an algorithm, which we term ‘EM-guided deconvolution’, to link the LM image to its correlated EM image in a model-based approach. Our goal is an algorithm capable of using both the EM and LM data with mutual benefit for a joint LM/EM reconstruction preserving both specificity and structural detail. The specific information only comes from the LM data, yet the idea is that the EM data serves as a high-resolution template defining where LM-specific emission can possibly be generated. Our method is based on the theory of maximum likelihood deconvolution. As detailed below, we observe a resolution improvement in EM-guided reconstructions of simulated data compared to deconvolution results based on LM images only combined with classical priors.

To assess the practical accuracy of our approach, we validated the algorithm on a sample of multi-color beads of the same size and on previously published EM datasets. We processed each color channel of a mixture of differently colored 40 nm beads individually using the EM-guided deconvolution approaches. Closely neighboring beads were identified with different colors, provided that image alignment was performed with great care. For the preparation details of the multi-color beads, see the Supplemental information. We then applied the EM-guided deconvolution algorithms to correlated data of biological samples. Cryo-fixed resin-embedded HEK293T cells [16] were imaged, as well as PHEM-fixed Brucella abortus-infected HeLa cells [17], the latter by structured illumination microscopy with subsequent cacodylate fixation for FIB-SEM. For detailed preparation methods regarding these images, see [16, 17]. The results exhibit a realistic appearance of fluorescence-labeled membrane structures.

All simulation and experimental data sets in this article were processed using MATLAB, with the help of the DIPimage toolbox [18] and CUDA [19] acceleration. The L-BFGS (quasi-Newton) method of the minFunc package [20] is used to minimize the loss function.


Materials and methods

The detected light microscopy image I can be described as:

I = N(f ⊗ h), (1)

where f is the sample, h is the point spread function (PSF), and N denotes noise following a Poisson distribution whose mean is the ideal image f convolved with h. From the theory of maximum a posteriori (MAP) deconvolution [8], we know that the MAP loss function is given by connecting the data term, the negative log-likelihood L(f, I), with the prior R(f) through Bayes’ rule, which yields for the total negative log-likelihood loss:

L_total(f) = L(f, I) + λR(f), (2)

where L(f, I) contains the forward and noise model comparing the simulated measurement with the detected image, R(f) is a penalty function that accounts for the known properties of the reconstructed sample, and λ is the coefficient controlling the strength of the penalty function. The reconstructed sample is calculated by minimizing this loss function. Prior knowledge of the sample being all-positive could be implemented by a penalty term, but we chose to implement it as part of the forward model, optimizing for an auxiliary variable g with f = g², mapping all real numbers to the set of non-negative numbers, rather than optimizing for f directly [8].
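To make the forward and noise model concrete, the following NumPy sketch implements the Poisson negative log-likelihood data term with positivity enforced through the squared auxiliary variable, and minimizes it by plain gradient descent. This is an independent toy illustration, not the authors' MATLAB/DIPimage implementation (which uses L-BFGS via minFunc); all array sizes and parameter values are arbitrary choices for the demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: a 32x32 sample with a 1-photon background, blurred by a
# Gaussian PSF and corrupted by Poisson noise.
n = 32
y, x = np.mgrid[:n, :n]
psf = np.exp(-((x - n // 2) ** 2 + (y - n // 2) ** 2) / (2 * 2.0 ** 2))
psf /= psf.sum()
h_ft = np.fft.fft2(np.fft.ifftshift(psf))            # transfer function of h

truth = np.ones((n, n))
truth[10:14, 16:20] = 50.0
mu0 = np.real(np.fft.ifft2(np.fft.fft2(truth) * h_ft))
I = rng.poisson(np.maximum(mu0, 0.0)).astype(float)  # detected image I

def loss_and_grad(g, eps=1e-9):
    """Poisson negative log-likelihood L(f, I), with positivity enforced by
    optimizing the auxiliary variable g where f = g**2."""
    f = g ** 2
    mu = np.real(np.fft.ifft2(np.fft.fft2(f) * h_ft)) + eps
    L = float(np.sum(mu - I * np.log(mu)))           # up to f-independent terms
    dL_df = np.real(np.fft.ifft2(np.fft.fft2(1.0 - I / mu) * np.conj(h_ft)))
    return L, 2.0 * g * dL_df                        # chain rule through f = g**2

# Plain gradient descent stands in for the L-BFGS optimizer used in the paper.
g = np.sqrt(np.full((n, n), I.mean()))               # uniform initial estimate
history = []
for _ in range(300):
    L, grad = loss_and_grad(g)
    history.append(L)
    g = g - 5e-4 * grad

f_hat = g ** 2
assert history[-1] < history[0]                      # the loss decreases
assert f_hat.min() >= 0.0                            # positivity by construction
```

Note how the squared-variable trick guarantees a non-negative reconstruction without any explicit constraint in the optimizer.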

The basic idea of the EM-guided deconvolution is to introduce the preprocessed and registered EM images as a position-dependent parameter in the penalty function, whereas in regular MAP deconvolution only the reconstructed image is used together with the global penalty weight λ.

Intensity-guided deconvolution

In the intensity-guided deconvolution (IG) method, we directly use the intensity information of the EM image as local guidance. To make the EM guidance more clear-cut and to be able to include expert knowledge of structural details, the EM image is preprocessed to a binary image (EM0) containing all the structures of interest which could possibly correlate with a fluorescent label. If the guided deconvolution generates sample emission density within EM0, a much smaller penalty value has to be paid than for emission density outside of EM0. Note that the measured input fluorescence data to all deconvolution algorithms is not affected by the EM data. We used the following penalty term:

R_IG(f) = Σ_x f(x) / (EM0(x) + ε), (3)

where ε is a small value adjusting the contrast of the EM guidance. This means that a small ε enforces zero fluorescence intensity in non-segmented EM regions, whereas larger values are less stringent in enforcing darkness.
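Assuming a penalty of the form f/(EM0 + ε), which matches the behavior described in the text (emission inside EM0 is cheap, emission outside is expensive) but is our reconstruction rather than necessarily the authors' exact formula, its effect can be illustrated in a few lines of NumPy:

```python
import numpy as np

def ig_penalty(f, em0, eps=1e-3):
    """Intensity-guided penalty (assumed form): emission inside the binary EM
    segmentation (em0 == 1) costs ~f, emission outside (em0 == 0) costs ~f/eps."""
    return float(np.sum(f / (em0 + eps)))

em0 = np.zeros((8, 8)); em0[2:6, 2:6] = 1.0       # binary EM segmentation EM0
inside = np.zeros((8, 8)); inside[3, 3] = 1.0     # unit emission inside EM0
outside = np.zeros((8, 8)); outside[0, 0] = 1.0   # unit emission outside EM0

# identical emission is penalized far more strongly outside the EM structures
assert ig_penalty(outside, em0) > 100 * ig_penalty(inside, em0)
```

With ε = 10⁻³, a photon placed outside the mask costs roughly a thousand times more than the same photon inside, which is exactly the guidance effect described above.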

This ad hoc definition is a simple way to introduce the EM brightness information into the deconvolution algorithm. However, we observed an overly strong dependence on the parameter ε. By introducing an entropy-based penalty:

R_EG(f) = Σ_x f(x) ln( f(x) / (e (EM0(x) + ε)) ), (4)

we obtained a less critical dependence. Here ε is a small value adjusting the weight of the EM guidance and e is Euler’s number. We term this approach entropy-guided deconvolution (EG).
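An entropy-style guidance penalty consistent with this description, e.g. the maximum-entropy form Σ f ln(f/(e(EM0+ε))) (our assumption, not necessarily the published expression), penalizes misplaced emission only logarithmically in ε, which is why the parameter dependence is much milder than for the direct intensity guidance:

```python
import numpy as np

def eg_penalty(f, em0, eps=1e-3, tiny=1e-12):
    """Entropy-guided penalty (assumed maximum-entropy form):
    sum f * ln(f / (e * (em0 + eps)))."""
    ref = np.e * (em0 + eps)
    return float(np.sum(f * np.log((f + tiny) / ref)))

em0 = np.zeros((8, 8)); em0[2:6, 2:6] = 1.0     # binary EM segmentation EM0
spot = np.zeros((8, 8)); spot[0, 0] = 1.0       # unit emission outside EM0

# shrinking eps by three orders of magnitude changes the penalty only
# logarithmically, in contrast to the ~1000x change of the direct IG form
p_coarse = eg_penalty(spot, em0, eps=1e-3)
p_fine = eg_penalty(spot, em0, eps=1e-6)
assert p_fine > p_coarse
assert p_fine < 3 * p_coarse
```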

Gradient-guided deconvolution (GG)

A less direct way is to exploit the boundary information of objects in the EM images. That boundary information can be quantified by calculating the pixel-wise spatial gradient of the preprocessed EM image EM: EM1 = |∇xyz EM|. The gradient image is then normalized to [0, 1]: EMG = EM1 / max(EM1). With the image EMG we define the penalty function of the gradient-guided deconvolution (GG) method as:

R_GG(f) = Σ_x |∇f(x)|² / (EMG(x)^n + ε), (5)

where the small value ε controls the strength of the EM guidance. The power n is used to balance the uneven strength of the guidance inside the guidance image. We set n = 2 as default, so the denominator has the same form as the numerator.
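A sketch of the gradient guidance under the assumed form |∇f|²/(EMGⁿ + ε) (reconstructed from the description; the published formula may differ): edges in the reconstruction are cheap only where the normalized EM gradient image also has edges.

```python
import numpy as np

def grad_mag(a):
    """Magnitude of the finite-difference spatial gradient."""
    gy, gx = np.gradient(a.astype(float))
    return np.hypot(gx, gy)

def gg_penalty(f, em, eps=1e-3, n_pow=2):
    """Gradient-guided penalty (assumed form): gradients of f are cheap only
    where the normalized EM gradient image EMG also has gradients."""
    em_g = grad_mag(em)
    em_g = em_g / em_g.max()                 # normalize EMG to [0, 1]
    return float(np.sum(grad_mag(f) ** 2 / (em_g ** n_pow + eps)))

em = np.zeros((16, 16)); em[4:12, 4:12] = 1.0   # preprocessed EM image
aligned = em.copy()                              # fluorescence edges on EM edges
misaligned = np.roll(em, 3, axis=1)              # edges where the EM is flat

# misplaced edges pay roughly 1/eps, edges on EM boundaries pay ~1
assert gg_penalty(aligned, em) < gg_penalty(misaligned, em)
```

The same toy also illustrates the registration sensitivity discussed later: a shift of only a few pixels moves fluorescence edges off the EM boundaries and inflates the penalty drastically.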


Simulation results

Here we use a Siemens star as the ground-truth sample to simulate the EM image (Fig 1b). For the corresponding LM emission we used the same Siemens star, inverted and with two spokes removed (Fig 1c), and further introduced smooth variations in emission intensity over each spoke. The maximum expected number of photons per pixel in the simulated LM image (Fig 1a) is 1000. To avoid problems caused by high-frequency noise, we started our iterative deconvolution with a uniform sample estimate equal to the mean value of the LM image.
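A Siemens-star phantom of the kind used here can be generated in a few lines. This NumPy sketch is illustrative only: the paper's phantom additionally has two spokes removed and smooth intensity variations, and the spoke count is our arbitrary choice.

```python
import numpy as np

def siemens_star(n=256, spokes=18):
    """Binary Siemens-star pattern: alternating angular sectors."""
    y, x = np.mgrid[:n, :n] - n / 2.0
    theta = np.arctan2(y, x)
    return (np.floor(theta * spokes / np.pi) % 2).astype(float)

em_star = siemens_star()          # stands in for the simulated EM image
lm_truth = 1.0 - em_star          # inverted star, as for the LM ground truth

assert em_star.shape == (256, 256)
assert set(np.unique(em_star)) <= {0.0, 1.0}
```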

Fig 1. Deconvolution with the simulated example.

A comparison of the results of different deconvolution methods. a) The simulated LM image with Poisson noise correlates with the EM image. b) The simulated EM image correlates with the LM image. c) The ground truth of the LM emission. d)—h) Results of modified total variation, intensity-guided, entropy-based intensity-guided, combined total variation with intensity-guided and gradient-guided deconvolution.

We observed a significant improvement in the quality of the EM-guided deconvolution results (Fig 1e–1h) compared to regular deconvolution (Fig 1d). Here, we use the modified total variation (TV) deconvolution [21] as an example of regular deconvolution. For more details on the dependence of the results on the adjustable parameters, see S1–S4 Figs; the corresponding parameters are listed in the tables below these figures. With the guidance, the borders of the large structures become clear. The EM-guided deconvolution also restores the small structures that are already well represented by conventional deconvolution. From the plot of the normalized cross-correlation (and the mean square error) comparing the restored image to the ground truth (Fig 2), we see that the classical approaches quickly reach a steady state, whereas the EM-guided methods continue to increase (decrease), improving similarity. The EG method stopped at slightly more than 50 iterations, since the L-BFGS algorithm reached its smallest possible step size (step size below ProgTol). The small decrease (increase) of IG in normalized cross-correlation (NCC) after hundreds of iterations indicates slightly too little regularization (over-fitting).

Fig 2. Error analysis of deconvolution methods.

The restored image is compared to the ground truth after each iteration. Shown on the left is the plot of the normalized cross-correlation. The right graph shows the mean square error, normalized to the value obtained by comparing the ground truth to the initial image. A larger (smaller) value of the normalized cross-correlation (mean square error) means a higher similarity. The parameters are the same as shown in the table in Fig 1; with these parameters, the algorithms generate the visually best restorations.
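The two similarity metrics plotted in Fig 2 can be written compactly; the following NumPy sketch uses our own variable names and toy arrays purely for illustration:

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation between a restoration and the ground truth."""
    a = a - a.mean()
    b = b - b.mean()
    return float(np.sum(a * b) / np.sqrt(np.sum(a ** 2) * np.sum(b ** 2)))

def nmse(restored, truth, initial):
    """Mean square error, normalized to the MSE of the initial estimate."""
    return float(np.mean((restored - truth) ** 2) / np.mean((initial - truth) ** 2))

truth = np.linspace(0.0, 1.0, 64).reshape(8, 8)
initial = np.full_like(truth, truth.mean())       # uniform starting estimate
better = 0.5 * truth + 0.5 * initial              # halfway to the ground truth

assert abs(ncc(truth, truth) - 1.0) < 1e-12       # perfect similarity -> NCC = 1
assert nmse(better, truth, initial) < 1.0         # improvement -> normalized MSE < 1
```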

The result of IG deconvolution depends strongly on the selected range of parameters. If λ is much larger than ε, the deconvolution provides a solution that overemphasizes the EM guidance (see S1c Fig). If λ is slightly larger than ε, the result describes the intensity distribution very well (Fig 1e). If λ approximates ε, the algorithm blackens the high-frequency information at the center (see the center of S1a Fig), leading to lower NCC values for a large number of iterations (dark blue curve in S1e Fig). If λ is smaller than ε, the algorithm cannot properly restore the brightness variations within each spoke (S1b Fig). If λ is much smaller than ε, the effect of the penalty fades and it yields a restoration similar to that without any constraint.

The result using IG regularization is very sensitive to the precise choice of λIG and ε. To obtain less sensitive results, we recommend using it together with another classical penalty term, for instance adding a weak IG regularization to the TV deconvolution:

L_total(f) = L(f, I) + λTV R_TV(f) + λIG R_IG(f). (6)

This yields a merge of the TV and IG penalties, enforcing constant areas, yet being efficiently guided by small structural detail. This combination forces the algorithm to propagate the flatness along the non-labeled spoke, removing it also at locations of dense structural detail (compare the location indicated by the red arrow in Fig 1e with the corresponding parts in Fig 1g and S2b Fig).
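The merged TV & IG regularizer can be sketched as follows; both the smoothed-TV form and the IG form f/(EM0+ε) are our assumptions, standing in for the modified TV of [21] and the IG penalty of the paper:

```python
import numpy as np

def tv_penalty(f, beta=1e-8):
    """Smoothed total variation: sum of (regularized) gradient magnitudes."""
    gy, gx = np.gradient(f)
    return float(np.sum(np.sqrt(gx ** 2 + gy ** 2 + beta)))

def tv_ig_penalty(f, em0, lam_tv, lam_ig, eps=1e-3):
    """Merged TV & IG regularizer: flat solutions that stay inside EM0 are cheap."""
    ig = float(np.sum(f / (em0 + eps)))        # assumed IG form
    return lam_tv * tv_penalty(f) + lam_ig * ig

em0 = np.zeros((16, 16)); em0[4:12, 4:12] = 1.0
inside = np.zeros((16, 16)); inside[8, 8] = 1.0     # emission inside the mask
outside = np.zeros((16, 16)); outside[0, 0] = 1.0   # emission outside the mask

assert tv_penalty(np.zeros((16, 16))) < tv_penalty(em0)   # TV prefers flat images
assert tv_ig_penalty(outside, em0, 1.0, 1.0) > tv_ig_penalty(inside, em0, 1.0, 1.0)
```

The TV term rewards piecewise-flat solutions while the IG term steers the remaining intensity into the segmented EM structures, which is the division of labor described above.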

The EG deconvolution is less sensitive to the parameter settings than IG. The algorithm reaches a similar result, since it relies on the same underlying EM guidance data and approach. Yet, due to the logarithmic term in the penalty function, the parameters of the EG method do not have such a dramatic influence on the final restored image. The value of λ mostly influences the speed of convergence, if roughly in the right range (S3f Fig).

The GG deconvolution has more freedom in which regions the fluorescence is assigned to, since only spatial boundaries, but no preferred assignments to specific segmented EM structures, are enforced. If the boundary information of the object is accurately provided by the EM image, the restored image can recover the morphological characteristics of the fluorescence-labeled structures (the missing spoke in Fig 1h) with surprisingly high resolution. Compared to the intensity-based algorithms (IG and EG), GG is more precise in restoring the high-frequency structures. To obtain a good reconstruction, the algorithm requires λ to be smaller than ε. It takes more iterations than the other EM-guided methods to reach convergence.

Experimental results

Beads sample

To check the performance on samples of known structure, we applied our algorithm to CLEM images of fluorescent beads. The sample was a mixture of orange (565/580 nm) and red (639/720 nm) beads (S1 Methods). They are not identifiable from the EM image alone because all of them have roughly the same diameter and are made of the same material. If we overlap the CLEM images, the colors of the dispersed beads can easily be determined. However, the determination becomes difficult if the two types of beads are too close to each other, especially in clusters (Cluster A), as seen in Fig 3a. We use these multi-color beads as a test by processing each color channel individually and comparing the results. Note that algorithms which exploit the prior knowledge that multiple colors should be assigned to different EM structural detail have not been used here and remain part of future research. Note also that a pair of two adjacent beads of different colors would not by itself form a useful test sample, as deconvolution algorithms can, depending on the settings, reconstruct single emitters down to essentially point-like objects even without any guidance information.

Fig 3. Overlay of CLEM images of fluorescent beads and its total variation deconvolution.

a) The identification of each bead’s color is difficult in the overlay of the CLEM image. b) The result of modified total variation deconvolution after 100 iterations, at λ = 10−7, ε = 10−10. c) The binary mask created from the EM image with watershed segmentation; masking can improve the image resolution but is not sufficient to distinguish the colors of clustered beads. d) Multiplication of (b) and (c). Morphology is visible but colors cannot clearly be identified. We clipped values above 40% of the maximum brightness for better visualization of small signals.

From the result of the TV deconvolution, using the LM image as input, we see an improvement in the image resolution: the individual beads become clearer. However, it fails to distinguish the beads in the clusters (Clusters A and B in Fig 3b). Even though multiplication with a mask (Fig 3c) created from the EM image can visualize the shape of the beads (Fig 3d), there is no improvement in assigning the LM color information (Cluster A in Fig 3d). Moreover, this direct multiplication might mistakenly remove fluorescent signals if the CLEM images are not well registered (Cluster B in Fig 3d). A measured PSF was used for the deconvolution to minimize artefacts caused by a disagreement between a theoretically calculated PSF and the ground-truth PSF.

Considering the aberration differences between the LM and EM images, we performed non-rigid registration of the CLEM images. The alignment of the EM image was based on matching the TV-deconvolved data to the EM brightness information using the software BigWarp [22], adding landmark points manually on the correlative images. The beads in Cluster B are not very well registered because the TV deconvolution result could not provide sufficient information for a precise registration in this area.

Fig 4 shows the results of various EM-guided deconvolution methods on the same region of interest as shown in Fig 3. If the LM image does not perfectly match the EM guidance information, the restored images can contain disturbing spike pixels. To avoid such spike pixels, we combined Tikhonov regularization with the EG and GG methods, respectively. The EG result (Fig 4a) is quite close to the result of TV deconvolution multiplied by the binary mask (Fig 3d). However, the EG deconvolution accounts for structural and functional information simultaneously. This decreases the risk of accidentally removing the functional information due to a small registration error. Such accidental removal of the deconvolved signal (Fig 3b) can be observed in Cluster B of Fig 3d after multiplication with the misaligned mask.

Fig 4. EM-guided deconvolution of the bead sample.

The restored LM images from the same region as shown in Fig 3 using a) EG & Tikhonov, b) combined TV with IG and c) GG & Tikhonov deconvolution. Respective overlays with the aligned EM images are presented in the panels below. The window on the top-left corner of each image shows the details of the cluster (scale bar is 100 nm). The orange beads were color coded in cyan and we stretched the lookup table to 40% of the maximum value for a better visualization of dim structures.

The combined TV & IG method (Fig 4b) has its advantages. Although the uniformity of structure identification may be below that of the other EM-guided methods, it requires less precise image registration. It clearly shows both beads as labeled with similar brightness (Cluster B in Fig 4b), whereas the GG & Tikhonov combination (Fig 4c) shows only one of the beads.

The GG deconvolution (Fig 4c) can more precisely separate the colors of the beads because the GG scheme allows more flexibility in the reassignment of photons during deconvolution. If the EM image is perfectly aligned, the algorithm can precisely detect that the beads at the top are cyan and that the rest of the beads in Cluster A are red. The white color in the result has two causes:

  • The segmentation of the beads is not perfect because the beads are in contact with each other;
  • The TEM image only shows a projection of the beads, so beads stacked on top of each other in a 3D cluster can appear as a single bead.

If the EM image is not well aligned, the algorithm is incapable of performing a sensible assignment: we observe only one reconstructed fluorescent bead as well as a distributed cloud of fluorescence in Cluster B of Fig 4c’.

EM-guided deconvolution of 2D biological samples

We applied the algorithm to correlative in-resin super-resolution fluorescence and electron microscopy imaging data [16]. The fluorescence microscopy image displays EphA2-mVenus-labeled membrane structures (plasma membrane, endoplasmic reticulum (ER)) of a HEK293T cell embedded in resin [16].

We use the wide-field image (Fig 5a), i.e. the sum of the intensity values of the raw data frames, as the input image for the deconvolution. The PSF, accounting for the experimental parameters, is calculated using the Richards & Wolf method [9]. The correlative LM/EM images were aligned with the ImageJ plugin BigWarp. The deconvolution algorithms are applied to the whole data set, but we use only the cropped regions marked by the white squares (ROI1 and ROI2) to analyze the restorations. The image registration accuracy in ROI1 is higher than in ROI2 because the membrane density in ROI1 is lower, allowing us to extract more structural information from the LM image as landmarks for the CLEM image registration. To better extract the useful information from the EM image as guidance for these complex biological structures, we used the Trainable Weka Segmentation software [23] on an image preprocessed with DenoisEM [24]. The segmented image is shown in Fig 5d.

Fig 5. CLEM data of the EphA2-mVenus labeled membrane structures.

a) The overlay of the wide-field fluorescence image and the correlative TEM image remains unspecific with respect to the fluorescently labeled double-layer membranes. The LM image (b) and the TEM image (c) show the same region of interest as (a). d) Membrane information extracted by machine learning with the Trainable Weka Segmentation software.

The EM-guided deconvolution shows its robustness in the restoration of the fluorescence-labeled membranes (Fig 6d–6d1). The segmentation result from machine learning contains much more structural detail than needed; such redundant information can be eliminated by the EM-guided deconvolution. The EM-guided deconvolution can restore the double membranes from the wide-field image of ROI1 (the blue circles in Fig 6). The membrane information is clearly enhanced in the GG deconvolution. It shows a very high similarity to the restored image obtained by single-molecule localization microscopy (SMLM, Fig 6a), validating the EM-guided deconvolution method. The EG deconvolution shows its benefit in restoring the structures where the membranes are dense (Fig 6d1). In this case, the CLEM image registration is less precise due to the lack of correlative detail. Where other deconvolutions (Fig 6c1, Fig 6d1 and Fig 6f1) create large patches of color, the TV & IG deconvolution (Fig 6e1) still convincingly assigns the fluorescence to membranes.

Fig 6. Restored images of ROI1 and ROI2.

a)—a1’) The restored images of single-molecule localization microscopy. b)—b1’) The wide-field LM image used as input for all deconvolutions in this figure. c)—c1’) The results of modified total variation deconvolution without guidance. d)—d1’) The results of EG & Tikhonov deconvolution. e)—e1’) The results of the combined TV & IG deconvolution. f)—f1’) The restored images of GG & Tikhonov deconvolution. The results of the EM-guided deconvolution show more information than the regular LM deconvolution and are in good agreement with the single-molecule localization microscopy data. *L-BFGS method reached the minimum update step size (step size below ProgTol).

The digitized EM image does not necessarily need to be binarized; however, a sharp edge describing the outline of the objects is required. The intensity values of the EM image can be sufficient (S5 Fig, top row). We can also create the guidance by labeling more than two classes (S5 Fig, bottom row). As long as the guidance is effective, the GG deconvolution can generate a convincing result. The restoration of the detailed structures is highly dependent on the EM guidance; thus, there is some difference between the deconvolution results shown in S5 Fig and Fig 6f. Here we observe that a strong LM out-of-focus background signal may generate artefacts in the guided deconvolution, since the algorithm is forced to make this background compatible with the EM guidance information. This underlines that the advantages of guided deconvolution come with the danger of misinterpretation, which arises whenever the LM data does not closely agree with the underlying assumptions or the EM data is inappropriately preprocessed.

EM-guided deconvolution of a biological 3D sample

We then applied the EM-guided deconvolution algorithms to 3D-CLEM images of HeLa cells infected with Brucella abortus. The data were published in reference [17]. The membranes of the endoplasmic reticulum (ER) as well as of Brucella-containing vacuoles (BCVs) in the host cells were labeled with the GFP-Sec61β fusion protein. Structured illumination microscopy (SIM) was used to obtain high-resolution images of the labeled structures. Labeled structures residing outside the bacteria could be identified (Fig 7f and 7f1, 7g and 7g1). However, the resolution improvement by SIM is not sufficient to clearly identify the ER markers (see Fig 7b and 7b1).

Fig 7. CLEM images of GFP-labeled endoplasmic reticulum (ER) and Brucella-containing vacuoles.

a) The wide-field fluorescence image. b) The SIM restoration image. c) The intensity guidance map obtained via Weka segmentation. d) The gradient guidance generated by calculating the absolute spatial gradient of the intensity guidance map. e) The restored image of the modified total variation deconvolution (brightness -20%, contrast +40%). f) The restoration of the combined TV & IG deconvolution from the wide-field fluorescence microscopy image (brightness +40%, contrast -40%). g) The restored images of the gradient-guided deconvolution (brightness +60%). a1), b1), e1), f1), g1): The overlays of the CLEM image. Labeled membrane structures enclosing the bacteria are identified as Brucella-containing vacuoles (BCVs) and labeled membrane structures outside the bacteria as ER.

We used the wide-field image (Fig 7a and 7a1), i.e. the sum of all phases of the SIM dataset, as the input image for the guided deconvolution, such that the SIM reconstruction can serve as validation data. The SEM image was aligned using eC-CLEM [25] based on the positions of the bacteria. Since the data set was aligned rigidly and the distortion of the LM image may differ from that of the focused ion beam/scanning electron microscopy (FIB/SEM) tomography image, there may be some disagreement in the details. The alignment accuracy in the center area is higher than in the rest of the image. The data set was processed in 3D using a theoretically calculated 3D PSF. Due to its large size, the image was processed in segments, which were recombined afterwards.

The membranes were segmented with the Weka plugin in Fiji (Fig 7c). The segmentation can be used directly as the intensity guidance, even though it contains additional information, including details of the mitochondrial membranes. The guidance for the GG deconvolution is generated by taking the gradient of the segmented image, as shown in the bottom row of Fig 7d.

The EM-guided deconvolution (Fig 7f and 7g) shows ER markers at a level of detail far beyond what can be achieved by regular deconvolution (Fig 7e). S1 Video shows the restored images for the TV, the TV & IG and the GG deconvolution methods. From the results of the EM-guided deconvolution, we see that the GFP-labeled membrane covers the bacterial cell body and some of the ER markers outside the bacteria in the host cells, which agrees well with the SIM results (see Fig 7b). Here, we only show the result of the TV & IG deconvolution as representative of the intensity-guided deconvolutions because it consistently provided better performance than the other intensity-based deconvolutions. The ER membranes could be restored at high quality. Compared with the IG method, the GG deconvolution is more dependent on the accuracy of the image registration. The guided deconvolution did not convincingly represent the details of the ER structures where the disagreement between the EM and the LM information does not guide the deconvolution in the expected direction (see the blue arrows in Fig 7). The dot indicated by the second yellow arrow is not visible in the reconstructed SIM image. This may be a guided-deconvolution artefact due to poor image registration. However, it is also possible that this weak feature was clipped in the SIM reconstruction, if the PSF model was imperfect.

Conclusion and outlook

In this paper, we investigated the algorithm ‘EM-guided deconvolution’ for CLEM images, which automates the assignment of the fluorescence information to the registered EM image. Both the intensity-guided and the gradient-guided deconvolution outperformed state-of-the-art deconvolution of the LM data alone. The intensity-guided deconvolution requires less accurate image registration than the gradient-guided deconvolution. If the images are precisely registered, the gradient-guided deconvolution yields a better result than the intensity-guided deconvolution, at the cost of being more susceptible to alignment errors.

There are still challenges to overcome for the EM-guided deconvolution. For example, how to integrate super-resolution fluorescence microscopy data, such as STED, SIM or single-molecule localization microscopy data, into our framework is still an open question, because these methods often cannot be approximated by a convolution with a well-known, spatially invariant point spread function. Furthermore, there are other ways to define a quality metric comparing the reconstructed object to the EM data, such as the mutual information or the structural similarity index. Especially when dealing with multiple fluorescence labels, there is still room for improvement by exploiting the assignment to separate EM structures, possibly with support from deep-learning approaches. We hope that EM-guided deconvolution will be developed further and become part of the standard tool-set of correlative light and electron microscopy imaging.

Supporting information

S1 Fig. Restorations of IG deconvolution with different combinations of parameters.

a) The restoration becomes dim if λ ≃ ε, resulting in a loss of high-frequency structural information. b) The algorithm does not restore the intensity distribution well if λ < ε. c) If λ ≫ ε, too much weight is put on the EM image, which leads to an incomplete removal (red arrow) of the non-fluorescent spokes. d) The EM guidance does not contribute if λ is too small. The IG deconvolution generates good results if λ > ε (see Fig 1e). f) The NCC curves for the parameter combinations shown in this figure.


S2 Fig. Restorations of the TV & IG method at varying strength of the IG part.

a) At low IG strength, the TV regularization dominates. b) Balanced TV and IG strengths yield a good recovery of both functional and morphological information. c) A strong weight on the IG part overemphasizes the EM structural detail, which overwrites the functional information.


S3 Fig. Restorations of EG deconvolution with parameters in different ranges.

The images in the left column are the restorations with the same λ at various ε. The images in the top row are the restorations with the same ε but at different λ. The final restoration of the EG deconvolution is less dependent on the parameters. Unless λ is too large, which enforces the EM information too strongly, the restored images are very similar to the ground truth in terms of NCC. A larger λ can lead to faster convergence. The influence of ε is so small that there is no perceivable difference in the final result, as indicated by the overlapping NCC curves.


S4 Fig. Restorations of GG deconvolution at different parameter settings.

a) The gradient guidance. b) The restored image reconstructs only the low-frequency structures well if λ is slightly smaller than ε. c) The algorithm does not perform a restoration if λ ≥ ε. d) The EM information does not provide sufficient strength for the guidance if λ is too small. The best restoration is obtained when λ < ε (see also Fig 1h).
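The gradient-guidance idea can likewise be caricatured with a single quadratic penalty that pulls the gradients of the estimate toward those of an EM-derived prior. The sketch below is a simplified Fourier-domain illustration under our own assumptions (periodic forward differences, one weight `lam`), not the GG functional with λ and ε used in the paper:

```python
import numpy as np

def gradient_guided_deconv(y, psf, em_prior, lam=1e-2):
    """Caricature of gradient-guided deconvolution (illustration only):
    argmin_f ||h*f - y||^2 + lam * ||grad f - grad e||^2,
    solved per frequency with periodic forward-difference gradients."""
    H = np.fft.fft2(np.fft.ifftshift(psf), s=y.shape)
    Y = np.fft.fft2(y)
    E = np.fft.fft2(em_prior)
    # transfer functions of the forward-difference operators along each axis
    dx = np.zeros(y.shape); dx[0, 0] = -1.0; dx[0, 1] = 1.0
    dy = np.zeros(y.shape); dy[0, 0] = -1.0; dy[1, 0] = 1.0
    D2 = np.abs(np.fft.fft2(dx)) ** 2 + np.abs(np.fft.fft2(dy)) ** 2
    # closed-form minimizer; at DC (D2 = 0) the data term fixes the mean
    F = (np.conj(H) * Y + lam * D2 * E) / (np.abs(H) ** 2 + lam * D2)
    return np.real(np.fft.ifft2(F))
```

Because the penalty acts on gradients, the guidance is strongest at edges, which illustrates why this variant is more sensitive to registration errors than intensity guidance: a misaligned prior pulls the edges to the wrong positions.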


S5 Fig. Restored images of GG deconvolution with different EM guidance.

a) The preprocessed EM images. b) The GG-restored images (λGG = 10−9, εGG = 10−4, λTikhonov = 10−8). c) The overlay of the restored images and the EM images. The preprocessing of the EM image was based on the Isodata algorithm [26]. The result in the bottom row was generated when the EM image was segmented into 3 classes with the Trainable Weka Segmentation in Fiji.
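The Isodata preprocessing step can be sketched as follows; this is a minimal NumPy reimplementation of the classic iterative algorithm [26] for illustration, not the exact code used to produce the figure:

```python
import numpy as np

def isodata_threshold(img, tol=0.5, max_iter=100):
    """ISODATA thresholding: iteratively set the threshold to the mean of
    the two class means until it converges."""
    t = float(img.mean())                 # initial guess: global mean
    for _ in range(max_iter):
        lo = img[img <= t]                # background class
        hi = img[img > t]                 # foreground class
        if lo.size == 0 or hi.size == 0:  # degenerate split, stop
            break
        t_new = 0.5 * (lo.mean() + hi.mean())
        if abs(t_new - t) < tol:          # converged
            return float(t_new)
        t = t_new
    return float(t)
```

The returned threshold can then be used to binarize the EM image into the guidance mask, analogous to the preprocessing shown in panel a).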


S1 Video. Restorations of the bacterial sample from the stack of the wide-field images.

The video shows the whole stack of images corresponding to the ROI shown in Fig 7. It compares the wide-field image (a) with the SIM reconstruction, the total variation restoration, the gradient-guided restoration, and the combined total variation with intensity-guided restoration. The bottom row shows the overlay of the LM images and the EM images.


S1 Methods. Fluorescence beads sample preparation and image acquisition.



We thank the Microverse Imaging Center (Aurélie Jost / Patrick Then) for providing microscope facility support for data acquisition. We also thank Martin Reifarth, Daniel Friedrich, Jennifer Lippincott-Schwartz, Harald F. Hess, Wanda Kukulski, Shen Han, Ingo Lieberwirth and Lucy Collinson for providing the experimental data sets to test the algorithms.


  1. Chklovskii DB, Vitaladevuni S, Scheffer LK. Semi-automated reconstruction of neural circuits using electron microscopy. Current Opinion in Neurobiology. 2010;20(5):667–675. pmid:20833533
  2. Lebbink MN, Geerts WJ, van der Krift TP, Bouwhuis M, Hertzberger LO, Verkleij AJ, et al. Template matching as a tool for annotation of tomograms of stained biological structures. Journal of Structural Biology. 2007;158(3):327–335. pmid:17270464
  3. Xiao C, Chen X, Li W, Li L, Wang L, Xie Q, et al. Automatic mitochondria segmentation for EM data using a 3D supervised convolutional network. Frontiers in Neuroanatomy. 2018;12:92. pmid:30450040
  4. Ando T, Bhamidimarri SP, Brending N, Colin-York H, Collinson L, De Jonge N, et al. The 2018 correlative microscopy techniques roadmap. Journal of Physics D: Applied Physics. 2018;51(44):443001. pmid:30799880
  5. Reifarth M, Preußger E, Schubert US, Heintzmann R, Hoeppener S. Metal–Polymer Hybrid Nanoparticles for Correlative High-Resolution Light and Electron Microscopy. Particle & Particle Systems Characterization. 2017;34(10):1700180.
  6. Spiegelhalter C, Tosch V, Hentsch D, Koch M, Kessler P, Schwab Y, et al. From dynamic live cell imaging to 3D ultrastructure: novel integrated methods for high pressure freezing and correlative light-electron microscopy. PLoS ONE. 2010;5(2):e9014. pmid:20140253
  7. Ader NR, Hoffmann PC, Ganeva I, Borgeaud AC, Wang C, Youle RJ, et al. Molecular and topological reorganizations in mitochondrial architecture interplay during Bax-mediated steps of apoptosis. eLife. 2019;8:e40712. pmid:30714902
  8. Verveer PJ, Gemkow MJ, Jovin TM. A comparison of image restoration approaches applied to three-dimensional confocal and wide-field fluorescence microscopy. Journal of Microscopy. 1999;193(1):50–61. pmid:12558687
  9. Richards B, Wolf E. Electromagnetic diffraction in optical systems, II. Structure of the image field in an aplanatic system. Proceedings of the Royal Society of London. Series A, Mathematical and Physical Sciences. 1959;253(1274):358–379.
  10. Tikhonov AN, Arsenin VI. Solutions of ill-posed problems. vol. 14. Winston, Washington, DC; 1977.
  11. Gao Q, Eck S, Matthias J, Chung I, Engelhardt J, Rippe K, et al. Bayesian joint super-resolution, deconvolution, and denoising of images with Poisson-Gaussian noise. In: 2018 IEEE 15th International Symposium on Biomedical Imaging (ISBI 2018). IEEE; 2018. p. 938–942.
  12. Richardson WH. Bayesian-based iterative method of image restoration. JOSA. 1972;62(1):55–59.
  13. Medyukhina A, Figge MT. DeconvTest: Simulation framework for quantifying errors and selecting optimal parameters of image deconvolution. Journal of Biophotonics. 2020;13(4):e201960079. pmid:31957214
  14. Sibarita JB. Deconvolution microscopy. Microscopy Techniques. 2005; p. 201–243. pmid:16080270
  15. Holmes TJ, Bhattacharyya S, Cooper JA, Hanzel D, Krishnamurthi V, Lin WC, et al. Light microscopic images reconstructed by maximum likelihood deconvolution. In: Handbook of biological confocal microscopy. Springer; 1995. p. 389–402.
  16. Johnson E, Seiradake E, Jones EY, Davis I, Grünewald K, Kaufmann R. Correlative in-resin super-resolution and electron microscopy using standard fluorescent proteins. Scientific Reports. 2015;5(1):1–9. pmid:25823571
  17. Sedzicki J, Tschon T, Low SH, Willemart K, Goldie KN, Letesson JJ, et al. 3D correlative electron microscopy reveals continuity of Brucella-containing vacuoles with the endoplasmic reticulum. Journal of Cell Science. 2018;131(4). pmid:29361547
  18. Hendriks CL, van Vliet LJ. DIPimage user manual: a scientific image processing toolbox; 2001.
  19. Heintzmann R. CudaMat; 2009.
  20. Schmidt M. minFunc: unconstrained differentiable multivariate optimization in Matlab; 2005. Available from: https://www.cs.ubc.ca/~schmidtm/Software/minFunc.html
  21. Soulez F, Denis L, Tourneur Y, Thiébaut É. Blind deconvolution of 3D data in wide field fluorescence microscopy. In: 2012 9th IEEE International Symposium on Biomedical Imaging (ISBI). IEEE; 2012. p. 1735–1738.
  22. Bogovic JA, Hanslovsky P, Wong A, Saalfeld S. Robust registration of calcium images by learned contrast synthesis. In: 2016 IEEE 13th International Symposium on Biomedical Imaging (ISBI). IEEE; 2016. p. 1123–1126.
  23. Arganda-Carreras I, Kaynig V, Rueden C, Eliceiri KW, Schindelin J, Cardona A, et al. Trainable Weka Segmentation: a machine learning tool for microscopy pixel classification. Bioinformatics. 2017;33(15):2424–2426. pmid:28369169
  24. Roels J, Vernaillen F, Kremer A, Gonçalves A, Aelterman J, Luong HQ, et al. An interactive ImageJ plugin for semi-automated image denoising in electron microscopy. Nature Communications. 2020;11(1):1–13. pmid:32034132
  25. Paul-Gilloteaux P, Heiligenstein X, Belle M, Domart MC, Larijani B, Collinson L, et al. eC-CLEM: flexible multidimensional registration software for correlative microscopies. Nature Methods. 2017;14(2):102. pmid:28139674
  26. Velasco FR. Thresholding using the ISODATA clustering algorithm. 1979.