Photophysical image analysis for sCMOS cameras: Noise modelling and estimation of background parameters in fluorescence-microscopy images

  • Dibyajyoti Mohanta ,

    Roles Conceptualization, Data curation, Formal analysis, Investigation, Methodology, Software, Writing – original draft, Writing – review & editing

    dibyajyoti.mohanta@cec.lu.se

    Affiliations Centre for Environmental and Climate Science, Lund University, Lund, Sweden, Department of Chemistry, The State University of New York at Buffalo, New York, United States of America

  • Radhika Nambannor Kunnath ,

    Contributed equally to this work with: Radhika Nambannor Kunnath, Erik Clarkson

    Roles Methodology, Resources, Writing – review & editing

    Affiliation Department of Life Sciences, Chalmers University of Technology, Gothenburg, Sweden

  • Erik Clarkson ,

    Contributed equally to this work with: Radhika Nambannor Kunnath, Erik Clarkson

    Roles Conceptualization, Formal analysis, Writing – review & editing

    Affiliation Centre for Environmental and Climate Science, Lund University, Lund, Sweden

  • Albertas Dvirnas,

    Roles Software, Writing – review & editing

    Affiliations Centre for Environmental and Climate Science, Lund University, Lund, Sweden, Department of Life Sciences, Chalmers University of Technology, Gothenburg, Sweden

  • Fredrik Westerlund,

    Roles Funding acquisition, Writing – review & editing

    Affiliation Department of Life Sciences, Chalmers University of Technology, Gothenburg, Sweden

  • Tobias Ambjörnsson

    Roles Conceptualization, Formal analysis, Funding acquisition, Project administration, Software, Supervision, Writing – original draft, Writing – review & editing

    Affiliation Centre for Environmental and Climate Science, Lund University, Lund, Sweden

Abstract

Fluorescence microscopy is an effective tool for imaging biological samples, yet captured images often contain noise, including photon shot noise and camera read noise. To analyze biological samples accurately, separating background pixels from signal pixels is crucial. This separation would ideally be guided by knowledge of a parameter called the Poisson parameter, λ_bg, representing the mean number of photons collected in a background pixel (for the case when the quantum efficiency = 1 and the dark current is negligible).

This study introduces a method for estimating λ_bg from an image which contains both background and signal pixels, using probabilistic noise modeling for an sCMOS camera. The approach incorporates Poisson-distributed photon shot noise and sCMOS camera read noise modelled with a Tukey-Lambda distribution. We apply a chi-square test and a truncated fit technique to estimate λ_bg directly from a general sCMOS image, with camera parameters determined through calibration experiments.

We validate our method by comparing λ_bg estimates in images captured by sCMOS and EMCCD cameras for the same field of view. Our analysis shows strong agreement for low to moderate exposure images, where the estimated values of λ_bg align well between the sCMOS and EMCCD images. Based on the estimated λ_bg, we perform image thresholding and segmentation using our previously introduced procedure.

Our publicly available software provides a platform for photophysical image analysis for sCMOS camera systems.

Introduction

Fluorescence imaging of biological samples is fundamental in studying cellular and molecular components. Fundamentally, a fluorescence microscope operates on the principle of distributing a limited photon budget across the spatial dimensions of a detector or an image plane. Fluctuations due to the limited number of photons are referred to as photon shot noise [1]. In addition, the image recorded in fluorescence microscopy is affected by camera read noise, the random electronic noise introduced by the sensor's readout electronics [2,3].

Scientific complementary metal oxide semiconductor (sCMOS) cameras have gained significant popularity in recent years for imaging dynamic biological samples [4,5]. Their advantages, such as higher frame rates (images captured per second), larger sensor areas, high quantum efficiency (photon-to-photoelectron conversion rate), and low effective read noise, make them an increasingly popular alternative to EMCCD cameras, which are limited by multiplicative noise (noise multiplied along with the signal by the electron-multiplying gain) [6-9]. Processing sCMOS fluorescence images (e.g. image thresholding and image segmentation) still poses challenges, as each pixel may exhibit independent response characteristics, such as noise and offset, due to the independent readout units [2,7]. This also differs from EMCCD cameras, where the incoming photoelectrons are shifted serially through a gain-multiplication register [10].

A common challenge in fluorescence microscopy image analysis is image thresholding, which distinguishes signal pixels (emitted photons from fluorophores) from background pixels (dominated by non-specific photons). Thresholding methods are categorized into supervised and unsupervised approaches. Supervised methods require manual parameter adjustments tailored to each image or extensive labeled training data, while unsupervised techniques automatically classify pixels based on heuristic criteria or intrinsic image properties such as photon statistics and intensity distributions or calibration data [10]. In low-intensity images, distinguishing signal from background becomes especially difficult due to minimal contrast and overlapping photon distributions [9]. However, incoming photons follow probability distributions, and understanding these statistical patterns can allow for improved image classification. It is the overarching purpose of this study to introduce a new probabilistic photophysics-based method for unsupervised image thresholding for low-intensity sCMOS images.

The gold standard for unsupervised image thresholding is the Otsu method, which has been widely used for decades. Otsu's approach is a heuristic method that finds an optimal intensity threshold by minimizing the weighted sum of intra-class variances (or, equivalently, maximizing the inter-class variance) [11]. The Otsu method does not explicitly account for the noise characteristics inherent in imaging systems, nor does it provide misclassification rates - issues that are especially important in low-intensity image analysis.

Probabilistic unsupervised image thresholding methods include the Poisson-Gaussian (PG) framework, which seeks to separate photon shot noise (Poisson) from stationary noise sources (Gaussian) [12], Generalized Anscombe Transformation (GAT)-based denoising [13], and pixel-wise MLE calibration [14]. However, these methods oversimplify the noise sources by modeling them (including read noise) collectively as Gaussian with pixel-dependent variances. Recently, K. Wei et al. [15] addressed the non-Gaussian nature of sCMOS read noise, which exhibits heavy-tailed behavior in low-light images. They subsequently used a convolutional neural network (CNN)-based image-enhancement algorithm, which requires large training datasets.

Despite advances in camera-specific noise modeling, a universal physics-driven framework for unsupervised probabilistic thresholding, with a priori control over misclassification rates, is lacking. To remedy this, we adapt the EMCCD photophysical image analysis pipeline by J. Krog et al. [10] to sCMOS images. Following [15], we model the sCMOS readout noise using a Tukey-Lambda distribution to deal with low-light conditions. Our framework automatically estimates the background Poisson parameter (λ_bg) without user intervention, directly from an image that contains both background and signal pixels (at arbitrary ratios). Using this estimate, we subsequently perform unsupervised probabilistic sCMOS image thresholding and segmentation.

The paper is structured as follows: In "Materials and methods" we outline the method, including noise modeling at the different sCMOS imaging stages, the probabilistic model for the intensity distribution, and techniques for estimating the camera parameters. In "Results" we present our results, detailing noise model parameter estimation, estimation of the Poisson parameter λ_bg for background pixels, and probabilistic image thresholding and segmentation. We also demonstrate the robustness of our approach by comparing estimates of λ_bg in images acquired with both sCMOS and EMCCD cameras for the same field-of-view (FOV) under identical experimental settings. In the "Summary and outlook" section, we conclude with key findings and future directions.

Materials and methods

The photophysical image analysis pipeline integrates three components. First, modeling the sCMOS imaging cascade, from photon-to-photoelectron conversion, charge transfer, and amplification to digitization, with stage-specific noise sources (shot noise, read noise, quantization noise); see the subsection "Theory" below. The model yields an expression for the probability mass function (PMF) of the recorded image counts. Second, empirically estimating the sCMOS camera parameters (gain, offset and the Tukey-Lambda distribution parameters) to align the model with experimental data; see the subsections "Experiments" and "Camera parameter estimation". Third, equipped with the camera parameters, a probabilistic method for estimating λ_bg and performing photophysical image thresholding and segmentation; see the last subsections "Estimating λ_bg" and "Unsupervised probabilistic image thresholding and image segmentation".

Experiments

Two types of cameras were used in this study: 1) sCMOS (Photometrics Prime 95B 22 mm) and 2) EMCCD (Andor iXon Ultra). The sCMOS camera has a pixel size of 11 μm and a sensor format of 1412 x 1412 pixels. The camera has three software-controllable gain modes: sensitive, balanced, and full well [16]. Throughout this study, we consistently used the balanced gain setting mode for all sCMOS camera experiments. The EMCCD camera has a pixel size of 13 μm and a sensor format of 1024 x 1024 pixels [16]. The EM gain setting of the camera is variable from 2 to 300.

All the images were captured using a Zeiss Axio Observer Z1 microscope which has two side ports to mount the cameras and a software option to switch between the camera ports. The microscope is coupled to a Colibri 5 multicolor LED light source that can emit four wavelengths. A 100X oil immersion microscope objective lens was used to capture all the signal images at a scaling of 110 nm per pixel.

In the following, we categorize the experiments performed in this study into three subsections: i) calibration experiments, ii) fluorescent DNA and bacterial cell imaging experiments, and iii) dual camera same FOV experiments.

sCMOS calibration experiments.

We performed two types of sCMOS calibration experiments:

In the Cap-on experiment, the illumination was turned off and the camera shutter was kept closed to capture a dark-field image with the sCMOS camera at a fixed exposure time. We estimated the offset and read noise parameters by analyzing this image (see "Camera parameter estimation").

In the Illuminated white wall experiment, bright field movies were captured by focusing the sCMOS camera on a uniformly illuminated white wall and varying the illumination intensity from 10% to 100% in increments of 10 percentage points. We captured 1000 time frames at each intensity, keeping the exposure time fixed at 400 ms. We used these stacks of bright images of different intensities to determine the gain by a mean-variance analysis (see "Camera parameter estimation").

sCMOS images of fluorescent DNA and bacterial cells.

Fluorescently stained DNA was stretched on a silanized glass slide [17]. The DNA was stained with YOYO-1 dye, which has an absorption maximum of 489 nm and an emission maximum of 509 nm. In the Fluorescent DNA experiments, these DNA molecules were imaged with an sCMOS camera with blue light excitation (469/38 nm bandpass excitation filter) at 8% illumination intensity and exposure times varying from 1 ms to 1000 ms [18].

For the cellular imaging experiments, Bacillus subtilis bSS82 cells (genotype trpC2 amyE::spc PrpsD-gfp), which overexpress the green fluorescent protein (GFP), were used [19]. Live bacterial cells were fixed on agarose pads on microscopy glass slides. During imaging, the light intensity was fixed at 8% and the images were captured using the sCMOS camera at the following exposures: 10, 100, and 400 ms.

Dual camera imaging of the same FOV.

We mounted the sCMOS and the EMCCD cameras on the two mounting ports (left and right) of the microscope. Using the fluorescently stained DNA sample on glass (prepared as described above), we captured an image using one camera, then switched ports and captured the same field of view (FOV) using the other camera. The camera port was switched using the microscope software. We recorded the images with both cameras under identical experimental conditions (same light source intensity and exposure time). In these experiments, the EMCCD camera had a gain setting of 100 and the sCMOS camera used the balanced-gain setting. To minimize the effects of photobleaching, a different FOV on the glass slide was imaged for each exposure setting and the videos were limited to 20 frames. Also, the order of the cameras was alternated, in the sense that if an image was captured at one exposure setting first with the sCMOS and then the EMCCD, the image for the next exposure setting was captured first with the EMCCD and then the sCMOS.

In addition, to estimate the camera model parameters of the EMCCD camera (required for analysis of the same-FOV EMCCD images), we performed a set of calibration experiments (Cap-on and Illuminated white wall) similar to the sCMOS camera calibration [10]. To this end, the dark field image was captured at the minimum EM gain of 2 and at a fixed exposure time. We recorded 100 bright field time frames with the EMCCD camera at each intensity (10%, 20%, ..., 100%), keeping the exposure time and EM gain fixed at 500 ms and 100, respectively.

Theory

The output from each image pixel is a digital image count (n_ic), which can be expressed as a sum of independent random contributions,

n_ic = g · n_oe + N_read + N_q + Δ,     (1)

where n_oe is the number of output photoelectrons generated in each pixel area, N_read represents the cumulative noise (in ADU) generated in the readout circuit during the conversion of photoelectrons to voltage signals, g is a constant representing the overall gain of the system in units of ADU/e−, i.e. the conversion of electrons to digital counts, N_q represents the quantization error (in digital counts, or ADU) during conversion to a digital number, and Δ is the offset, a constant (in ADU) added to each pixel to prevent any undesirable negative output. The quantities n_oe (due to the quantum nature of light), N_read (due to electronic fluctuations in the read-out circuitry), and N_q (due to rounding to the nearest integer value) are random numbers, and as a consequence, n_ic is also a random variable. The aim of the modelling is to derive an expression for the probability mass function (PMF) of n_ic. To this end, we start by briefly discussing the generation of each of these components and the underlying processes, before deriving an explicit formula for the PMF.

Photons hit the sensor region.

In an imaging process, photons arriving at the camera sensor are typically assumed to be Poisson-distributed [10,15]. This underpins the principal noise source in sCMOS imaging: photon shot noise, which is non-deterministic due to the quantum nature of light, and varies randomly about a mean number of photons (Λ) directly proportional to the camera's exposure time. These photons then undergo photoelectric conversion at the sensor, producing photoelectrons with mean number λ. The conversion factor is known as the quantum efficiency (QE). The resulting random variable n_oe has the PMF:

p(n_oe) = Pois_{λ+c}(n_oe) = e^{−(λ+c)} (λ + c)^{n_oe} / n_oe!,     (2)

where Pois_λ denotes the PMF of the Poisson distribution with parameter λ, and c is a constant corresponding to the mean number of photoelectrons in the absence of incoming photons (e.g. dark currents) [20].

Transfer of photoelectrons through readout circuitry.

After the photoelectric conversion of photons to electrons within each pixel area of the sensor region, these electrons are amplified and converted into voltage signals by the output readout circuitry inside each pixel [2,16]. The conversion of electrons (e−) to voltage signals at each pixel is mediated by a multiplication factor called the analog conversion gain. Usually, the voltage signal and the analog conversion gain are expressed in units of μV and μV/e−, respectively.

The readout process in sCMOS sensors introduces read noise (N_read), which arises from electronic perturbations during charge-to-voltage conversion in the pixel's readout circuitry. Such noise also includes contributions from thermal noise and source follower noise [15]. Under certain conditions (e.g., high readout speeds), source follower noise can exhibit non-Gaussian outliers, leading to a long-tailed distribution that deviates from purely Gaussian behavior [15]. Recent work argues that the Tukey-Lambda (TL) distribution family can effectively model the long-tail behavior of the read noise (N_read) in sCMOS cameras. The TL distribution is defined by its quantile function for a uniformly distributed random variable u [21]:

N_read = μ + σ_TL F(u; λ_TL),     (3)

with

F(u; λ_TL) = [u^{λ_TL} − (1 − u)^{λ_TL}] / λ_TL,   λ_TL ≠ 0,     (4)

where λ_TL is the shape parameter that determines the type of distribution; for example, λ_TL ≈ 0.14 approximates a Gaussian distribution and λ_TL = 0 corresponds to a logistic distribution. The location parameter, μ, is set to zero following the zero-mean noise assumption [15], and σ_TL is the scale parameter.

Each pixel in a row is then connected to the appropriate column voltage bus, where the on-chip analog-to-digital conversion (ADC) process in sCMOS cameras maps a pixel's voltage signal V (the voltage converted from the charge of the electrons in a given pixel) to a digital value (in ADU) using the relation ADU = V · 2^{n_b}/V_ref, where n_b is the bit depth (resolution) of the ADC, and V_ref is the ADC's reference voltage (full-scale range) [16,22]. Therefore, the overall gain g of the sCMOS camera equals the analog conversion gain (charge-to-voltage conversion factor) multiplied by the transfer function of the ADC, 2^{n_b}/V_ref, giving overall units of ADU/e− [23]. We operated in balanced mode with a 12-bit ADC, which balances sensitivity and noise [16,24].
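
As a purely illustrative numerical example (the analog conversion gain and reference voltage below are hypothetical, not the calibrated values of the camera used in this study): an analog conversion gain of 20 μV/e− combined with a 12-bit ADC spanning a full-scale range of V_ref = 80 mV would give g = 20 μV/e− × 2^12 / 80 000 μV ≈ 1.02 ADU/e−.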

The conversion of the overall voltage signal within a pixel to the nearest integer ADU value introduces a quantization noise N_q (in units of ADU), which approximately follows a uniform distribution [10],

N_q ~ U(−q/2, q/2),     (5)

where q = 1 ADU is the quantization step [15].

After the conversion processes described above, we have g·n_oe + N_read + N_q for each pixel, in units of ADU. To this number, the camera adds the offset (Δ) to form the final output image count (n_ic) given by Eq (1).

PMF for the image counts

In the above section, we have seen that the random variables n_oe, N_read and N_q have distinct sources of origin and are characterized by PDFs/PMFs of known form. In such cases, the characteristic function (CF), which is the Fourier transform of the PMF/PDF, given by φ_X(k) = ⟨e^{ikX}⟩, is a useful construct. The CF of a sum of independent random variables factorizes into the product of the CFs of the individual random variables [25]. Hence, we may first calculate the CF and then Fourier-invert it back to its corresponding PMF. The characteristic function of the recorded image count n_ic (see Eq (1)) in a pixel is thus given by,

φ_{n_ic}(k) = ⟨e^{ik n_ic}⟩ = φ_{n_oe}(gk) · φ_{N_read}(k) · φ_{N_q}(k) · e^{ikΔ},     (6)

where k is the Fourier variable associated with n_ic. As the number of incoming photons and the number of outgoing photoelectrons (with means Λ and λ, respectively) both follow Poisson distributions, the characteristic function of the photoelectron count n_oe has a closed form [26], represented by,

φ_{n_oe}(k) = exp[(λ + c)(e^{ik} − 1)],     (7)

with λ and c as before.

The CF of the read noise, Eq (3), is:

φ_{N_read}(k) = ⟨e^{ik N_read}⟩ = ∫_0^1 exp[ik σ_TL F(R; λ_TL)] dR,     (8)

where the choice of integration limits follows from the fact that the expectation value on the right-hand side is taken with respect to a uniform distribution over the range [0,1]. The integral above cannot be solved analytically. Thus, we approximate it by discretizing R into n equidistant points, R_l = (l − 1)/(n − 1) for l = 1, ..., n. We evaluate the integrand at each R_l:

y_l = exp[ik σ_TL F(R_l; λ_TL)],     (9)

and, lastly, we apply the trapezoidal rule:

φ_{N_read}(k) ≈ (1/(n − 1)) [y_1/2 + y_2 + ... + y_{n−1} + y_n/2].     (10)

Next, the CF of the quantization noise N_q (Eq 5) is given by:

φ_{N_q}(k) = sin(k/2)/(k/2).     (11)

Combining all independent components, the total CF becomes:

φ_{n_ic}(k) = e^{ikΔ} · exp[(λ + c)(e^{igk} − 1)] · [sin(k/2)/(k/2)] · (1/(n − 1)) [y_1/2 + y_2 + ... + y_{n−1} + y_n/2],     (12)

where y_l is given in Eq (9).

To compute the PMF p(n_ic), we apply the Gil-Pelaez Fourier inversion [27] to the CF φ_{n_ic}(k):

p(n_ic) = (1/π) ∫_0^π Re[e^{−ik n_ic} φ_{n_ic}(k)] dk,     (13)

where Re[·] denotes the real part of the integrand. The upper limit is π because the CF of the discrete variable n_ic is periodic with period 2π, so the inversion integral can be restricted to [0, π]. We again discretize the integral by using a trapezoidal quadrature similar to Eq (10).

The cumulative distribution function CDF(n_ic) is calculated by summation of the PMF p(n_ic) [10]:

CDF(n_ic) = Σ_{m ≤ n_ic} p(m).     (14)

We use this CDF in the image thresholding algorithm, which discriminates background from signal pixels based on their probability distributions.

The sCMOS-PMF given by Eq (13) differs from the EMCCD-PMF [10] in two fundamental ways. First, here the gain step does not include electron multiplication (the sCMOS camera does not amplify the converted output electrons). Second, the read noise for sCMOS cameras is a Tukey-Lambda distributed random number, instead of a Gaussian random number as for the EMCCD [10].
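
To make the numerical construction above concrete, the following Python sketch evaluates the CF and inverts it to a PMF. It assumes the model of Eq (1) as reconstructed here (read noise expressed in ADU), and all function and parameter names are illustrative rather than taken from the published software.

```python
import numpy as np
from scipy.integrate import trapezoid

def tl_quantile(u, lam_tl, sigma_tl):
    """Tukey-Lambda quantile function, Eqs (3)-(4), zero location, lam_tl != 0."""
    return sigma_tl * (u**lam_tl - (1.0 - u)**lam_tl) / lam_tl

def cf_nic(k, lam, g, delta, lam_tl, sigma_tl, c=0.0, n_grid=201):
    """Characteristic function of the image count n_ic (cf. Eqs (6)-(12))."""
    k = np.atleast_1d(k).astype(float)
    # CF of the gain-scaled Poisson photoelectron count: Eq (7) evaluated at g*k.
    cf_poisson = np.exp((lam + c) * (np.exp(1j * g * k) - 1.0))
    # CF of the Tukey-Lambda read noise, Eqs (8)-(10), by trapezoidal quadrature
    # (endpoints nudged away from 0 and 1 to keep the quantile finite).
    R = np.linspace(1e-6, 1.0 - 1e-6, n_grid)
    y = np.exp(1j * np.outer(k, tl_quantile(R, lam_tl, sigma_tl)))
    cf_read = trapezoid(y, R, axis=1)
    # CF of the uniform quantization noise on [-1/2, 1/2], Eq (11): sin(k/2)/(k/2).
    cf_quant = np.sinc(k / (2.0 * np.pi))
    return np.exp(1j * k * delta) * cf_poisson * cf_read * cf_quant

def pmf_nic(n_values, **params):
    """PMF of (integer) image counts by Fourier inversion, Eq (13)."""
    k = np.linspace(1e-9, np.pi, 2000)
    phi = cf_nic(k, **params)
    integrand = np.real(np.exp(-1j * np.outer(np.asarray(n_values), k)) * phi)
    return trapezoid(integrand, k, axis=1) / np.pi
```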

Camera parameter estimation

We here demonstrate how to estimate the camera model parameters, chipParams = (g, Δ, λ_TL, σ_TL), through theoretical analysis of the calibration experiments. In the "Results" section, we will use chipParams in the estimation of λ_bg.

Estimation of the gain parameter, g.

We estimate the sCMOS camera gain, g, using a mean-variance approach, where g is obtained as the slope of the experimental pixel-based mean-variance relationship.

For the mean-variance analysis we use data from the Illuminated white wall experiment (see subsection "Experiments"), at all the different illumination intensities. For each pixel, the experiments provide a time series of image counts from which we estimate the mean and the variance. This procedure yields experimental mean-variance data, i.e., pairs (⟨n_ic⟩_j, Var[n_ic]_j), where j labels the pixels across all experiments (j = 1, ..., m, where m is the total number of pixels).

To match the experimental mean-variance data to theory, we need to derive an expression for the relation between the mean image count and its variance. Taking the expectation value of Eq (1), we obtain the mean image count at a pixel as

⟨n_ic⟩ = g⟨n_oe⟩ + ⟨N_read⟩ + ⟨N_q⟩ + Δ.     (15)

Since n_ic is a sum of independent random variables (discussed above), its mean is simply the sum of the individual means. Furthermore, using the facts that ⟨N_read⟩ = 0, ⟨N_q⟩ = 0 (since N_q is uniformly distributed over [−q/2, q/2]) and ⟨n_oe⟩ = λ + c, we obtain

⟨n_ic⟩ = g(λ + c) + Δ.     (16)

Similarly, the variance of n_ic, see Eq (1), is obtained using the fact that the variances of independent random variables add up:

Var[n_ic] = g² Var[n_oe] + Var[N_read] + Var[N_q] = g²(λ + c) + Var[N_read] + 1/12,     (17)

where we used that for a Poisson distribution the mean is equal to the variance, i.e. Var[n_oe] = λ + c. The variance of a Tukey-Lambda distributed random number is given by,

Var[N_read] = σ_TL² (2/λ_TL²) [1/(1 + 2λ_TL) − Γ(λ_TL + 1)²/Γ(2λ_TL + 2)],     (18)

which is valid for λ_TL > −1/2 [21], where Γ is the Gamma function. Above we also used the fact that the variance of the rounding error N_q is q²/12 = 1/12 [10,28].

Solving Eq (16) for λ and plugging the result into Eq (17), we obtain our final (offset-subtracted) mean-variance relation:

Var[n_ic] = g(⟨n_ic⟩ − Δ) + d,     (19)

where the constant ("y-intercept") is

d = Var[N_read] + 1/12,     (20)

with Var[N_read] given by Eq (18).

Inspecting Eq (19), we notice that the gain, g, appears as the slope if the variance is plotted as a function of the offset-subtracted mean, ⟨n_ic⟩ − Δ. In the mean-variance analysis, we therefore fit (using the least squares method) our experimental mean-variance data (see above) to a straight line. The slope of this line serves as our estimate of g. The fitted value of the constant d is not used here for parameter estimation; we will, however, later show how to use it as a consistency check.
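
A minimal Python sketch of this mean-variance analysis is given below. It assumes the white-wall movies have been loaded as 3-D arrays (frames, rows, columns); the function and variable names are ours, not those of the published software.

```python
import numpy as np

def estimate_gain(stacks):
    """Estimate the gain g (ADU/e-) as the slope of the per-pixel
    variance vs. mean relation, Eq (19).

    `stacks` is a list of 3-D arrays (frames, rows, cols), one per
    illumination intensity of the Illuminated white wall experiment.
    """
    means, variances = [], []
    for stack in stacks:
        means.append(stack.mean(axis=0).ravel())             # per-pixel time mean
        variances.append(stack.var(axis=0, ddof=1).ravel())  # per-pixel variance
    x, y = np.concatenate(means), np.concatenate(variances)
    # Least-squares straight line: variance = g * mean + intercept.  Subtracting
    # the offset Delta from the means only shifts the intercept, not the slope g.
    g, intercept = np.polyfit(x, y, deg=1)
    return g, intercept
```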

Offset estimation.

The offset, Δ, is estimated using the dark-frame image acquired during the Cap-on experiment (see the "Experiments" section). In this experiment there are no input photons, and we can therefore set n_oe to 0 in Eq (1). Further taking the expectation value of this equation, we obtain ⟨n_ic⟩ = Δ, following Eq (15). Hence, Δ is the average image count in the Cap-on experiment, and we estimate Δ as the empirical mean,

Δ = (1/m) Σ_{j=1}^{m} n_ic,j,     (21)

where n_ic,j is the recorded image count for pixel j (j = 1, ..., m) in the cap-on image.

Estimation of the read noise parameters (λ_TL, σ_TL).

To estimate the shape parameter (λ_TL) and the scale parameter (σ_TL) of the Tukey-Lambda distribution we again make use of the Cap-on experiment. Again setting the first term in Eq (1) to 0 (no incoming photons in the cap-on experiment), we find that the image count in this experiment is a random variable n_ic = N_read + N_q + Δ. In the following, we also neglect the rounding error, i.e., we set N_q = 0, and assume Δ known (i.e. estimated as in the previous subsubsection). With these approximations, we have n_ic − Δ = N_read, i.e., the statistics of the recorded image count in the Cap-on experiment (with the offset subtracted) is described by the Tukey-Lambda distribution.

To estimate the parameters λ_TL and σ_TL, we use a technique called the probability plot correlation coefficient (PPCC) [21,29], which we here recapitulate for completeness. The PPCC method, as applied to our data, is divided into the following steps (a code sketch is given after this list):

  • Collect the image counts from a cap-on sCMOS image into a one-dimensional array. Sort this array to yield x_(1) ≤ x_(2) ≤ ... ≤ x_(M), where x_(i) is the i-th smallest image count and M is the number of pixels.
  • Write the theoretical quantiles for the Tukey-Lambda distribution on the form Q_i = σ_TL F_i, (22)
  • for i = 1, ..., M, with unscaled theoretical quantiles F_i = [u_i^{λ_TL} − (1 − u_i)^{λ_TL}]/λ_TL, (23)
  • where u_i ∈ (0, 1) are the uniform order statistic medians (plotting positions) [29].
    For a fixed set of parameters, the quantile function of the Tukey-Lambda distribution can be graphically illustrated by plotting Q(u) = σ_TL F(u; λ_TL) as a function of u, where u ∈ [0, 1] (by instead plotting u as a function of Q(u), one could illustrate the cumulative distribution function).
  • Compute the empirical quantiles as the offset-subtracted sorted image counts, q_i = x_(i) − Δ.
  • For each λ_TL within the range [−1, 1], compute the Pearson correlation coefficient (PCC) between the q_i and the F_i: ρ = Σ_i (q_i − q̄)(F_i − F̄) / √[Σ_i (q_i − q̄)² Σ_i (F_i − F̄)²], (24)
    where q̄ and F̄ are the respective means. The PCC takes values between −1 and 1, and perfect agreement (up to a scale factor) gives PCC = 1. For a "good fit" (ρ close to 1), a quantile-quantile (QQ) plot of q_i versus F_i should ideally be a straight line, where the slope of this line gives σ_TL (see Eq 22) [29]. We therefore choose the value of λ_TL that maximizes ρ [29] (the "optimal" λ_TL).
  • For the optimal λ_TL from above, we then estimate the slope by fitting a linear function using the least squares method; the value of the fitted slope serves as our estimate of σ_TL [29].
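
SciPy ships PPCC utilities for the Tukey-Lambda family that follow the same recipe as the NIST procedure cited above [29] (Filliben-type uniform order statistic medians as plotting positions). The sketch below uses them as a stand-in for the steps above; it assumes the offset Δ has already been estimated, and the names are ours.

```python
import numpy as np
from scipy import stats

def estimate_read_noise_params(dark_frame, offset):
    """PPCC-based estimation of the Tukey-Lambda shape (lambda_TL) and
    scale (sigma_TL) parameters from a cap-on image (sketch)."""
    x = dark_frame.astype(float).ravel() - offset
    # Shape parameter maximising the probability-plot correlation coefficient,
    # bracketed on [-1, 1] as in the step list above.
    lam_tl = stats.ppcc_max(x, brack=(-1.0, 1.0), dist='tukeylambda')
    # QQ plot of empirical vs. unscaled theoretical quantiles: the least-squares
    # slope of the fitted line is the scale parameter sigma_TL, Eq (22).
    (_osm, _osr), (sigma_tl, intercept, r) = stats.probplot(
        x, sparams=(lam_tl,), dist=stats.tukeylambda, fit=True)
    return lam_tl, sigma_tl
```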

Consistency check of estimated camera parameters.

After estimating all the camera parameters, chipParams = (g, Δ, λ_TL, σ_TL), we plug these estimates, as a consistency check, into Eqs (18) and (20) to obtain an expected value for the y-intercept, d. This value can then be compared to the actual y-intercept obtained in the fitting procedure of the mean-variance analysis in Fig 1.
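
In code, this consistency check is a one-liner; the sketch below relies on the form of Eq (20) as reconstructed above and on SciPy's built-in Tukey-Lambda variance.

```python
from scipy import stats

def expected_y_intercept(lam_tl, sigma_tl):
    """Expected y-intercept d of the mean-variance line (Eq 20, as
    reconstructed above): read-noise variance, Eq (18), plus the
    quantization-noise variance 1/12."""
    var_read = sigma_tl**2 * stats.tukeylambda.var(lam_tl)   # Eq (18)
    return var_read + 1.0 / 12.0
```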

Fig 1. Estimating the gain parameter, g, of the sCMOS camera: Variance vs mean plot of bright field images acquired during the Illuminated white wall experiment (see ‘Experiments’ section).

Intensity levels were incremented in 10% steps from 10% to 100%. The gain is calculated from the slope of the plot using Eq (19); the linear fit is shown as the solid cyan line. The y-intercept of the linear fit is 6.4. This value is consistent with the value 4.7 obtained using Eq (20).

https://doi.org/10.1371/journal.pone.0335310.g001

Estimating λ_bg

Using the estimated sCMOS camera chip parameters, chipParams, we estimate the Poisson parameter λ_bg describing background pixels in sCMOS images containing both background and signal regions. Following Krog et al. [10], we fit a truncated version of the sCMOS-PMF, Eq (13), to the lower tail of the image count histogram (where background dominates), with λ_bg as a fit parameter. Note that we deliberately restrict the fit to the lower tail of the image count histogram, so that any contribution from signal photons (described by a separate, larger Poisson parameter) does not enter our truncated PMF fit. The fitting procedure involves the truncated likelihood L_trunc = ∏_j p(n_ic,j | λ_bg) / CDF(n_trunc), where the product runs over all pixels j with image counts at or below the truncation point n_trunc. As in [10], we iteratively adjust the truncation point and perform maximum likelihood estimation (MLE) until the fit satisfies a goodness-of-fit test with a significance level set by pGoF.
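
A minimal sketch of the truncated MLE step is shown below; the iterative adjustment of the truncation point and the goodness-of-fit test at significance pGoF are omitted, and `pmf_model` stands for any callable evaluating the sCMOS-PMF of Eq (13) with the calibrated chipParams.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def fit_lambda_bg(counts, n_trunc, pmf_model):
    """Truncated maximum-likelihood estimate of lambda_bg (sketch).

    `counts` are integer image counts (ADU) from one tile, `n_trunc` the
    truncation point, and `pmf_model(n, lam_bg)` evaluates the sCMOS-PMF
    of Eq (13) for an array of counts n.
    """
    counts = np.asarray(counts)
    n_min = counts.min()
    tail = counts[counts <= n_trunc]        # lower-tail (background-dominated) pixels
    n = np.arange(n_min, n_trunc + 1)

    def neg_log_lik(lam_bg):
        p = pmf_model(n, lam_bg)
        p_trunc = p / p.sum()               # renormalised truncated PMF
        return -np.sum(np.log(p_trunc[tail - n_min] + 1e-300))

    res = minimize_scalar(neg_log_lik, bounds=(1e-3, 1e4), method='bounded')
    return res.x
```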

Unsupervised probabilistic image thresholding and image segmentation

With all camera parameters and λ_bg estimated, we can perform unsupervised probabilistic image thresholding and segmentation, similar to [10], but here for an sCMOS imaging system instead of an EMCCD setup.

For image thresholding, we follow the procedure formalized in [10]. We first estimate an image count threshold based on a p-value binarization threshold, pbinarize. Here, pbinarize is our a priori value for the fraction of false positives that we accept (i.e., an acceptable value for the fraction of white pixels in background regions). The p-value is turned into an image count threshold by inverting the CDF (Eq 14). With this threshold in hand, we binarize (threshold) the image by turning pixels with an image count above the threshold white, and those below the threshold black. The strength of this method is that, by using pbinarize, we know a priori the expected fraction of white pixels in background regions.
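
A sketch of the thresholding step, under the assumption that the background CDF of Eq (14) has been tabulated as an array indexed by image count (names are ours):

```python
import numpy as np

def binarize_image(image, cdf_values, p_binarize=0.01):
    """Unsupervised probabilistic thresholding (sketch).

    `cdf_values[n]` is the background CDF, Eq (14), at image count n,
    computed with the estimated lambda_bg and chipParams.  A pixel is
    turned white when its count exceeds the smallest count whose
    background tail probability (1 - CDF) falls below p_binarize.
    """
    idx = int(np.searchsorted(cdf_values, 1.0 - p_binarize))
    n_threshold = min(idx, len(cdf_values) - 1)
    return image > n_threshold, n_threshold
```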

For segmentation, we use the binarized image and apply the method from [10] to identify the connected components of white/black pixels. Segmentation quality is controlled via a p-value (see [10] for details), calculated using the sCMOS-PMF for the summed counts in each segmented region.
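
The connected-component step can be sketched with a standard labelling routine; the region-level p-value filtering of [10] is not reproduced here.

```python
from scipy import ndimage

def segment_binary_image(binary_image):
    """Group the white pixels of the thresholded image into connected
    components (sketch)."""
    labels, n_regions = ndimage.label(binary_image)   # 4-connectivity by default
    return labels, n_regions
```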

Results

The Results section is organized as follows: First, we calibrate the sCMOS camera parameters (gain, offset, and read noise parameters) using the calibration experiments. Next, we estimate the background Poisson parameter (λ_bg) from images containing both signal and background pixels. We then perform automated, unsupervised thresholding and segmentation with a priori error estimation, requiring no user intervention. Finally, we validate the sCMOS-PMF framework by comparing λ_bg estimates derived from sCMOS and EMCCD detectors under identical imaging conditions, confirming the accuracy of our algorithm.

Camera parameter estimation

Using a set of calibration experiments combined with the parameter estimation procedure described in Materials and Methods, we estimate the model parameters for the sCMOS camera used herein.

We first estimate the gain parameter of the camera using data from the Illuminated white wall experiment recorded at increasing intensities (10%, 20%, ..., 100%), see Fig 1. Through the mean-variance analysis (see "Materials and methods"), the slope of this relationship yields the camera's gain estimate under the balanced conversion gain setting.

From the mean dark-frame intensity in the Cap-on experiment (Eq 21), we estimate the offset Δ (in ADU), which is in close agreement with the factory default bias of 100 ADU [16].

We next seek to estimate the camera parameters associated with the read noise. From our Cap-on experiment, we acquired dark frame images, which are used to estimate the shape parameter (λ_TL) and the scale parameter (σ_TL) of the Tukey-Lambda distribution family, following the steps discussed in "Materials and methods" (see Fig 2). Our final estimates of λ_TL and σ_TL are the means of the per-frame estimates over all dark frames of our sCMOS camera.

Fig 2. Estimation of the read noise parameters, λ_TL and σ_TL, for the sCMOS camera.

Panel (a) shows the Pearson correlation coefficient, Eq (24), between the empirical quantiles q_i, i = 1, ..., M, from the dark frame image (acquired during the Cap-on experiment) and the unscaled theoretical quantiles, F, for a range of shape parameters (λ_TL). The dashed line in the figure marks the λ_TL corresponding to the maximum PCC score. Panel (b) plots the dark-frame empirical quantiles against the theoretical unscaled quantiles at the optimal λ_TL from panel (a). The slope of this curve gives the estimate of the scale parameter (σ_TL). The y-intercept of the fitted line is 0.009 (close to the expected value of 0).

https://doi.org/10.1371/journal.pone.0335310.g002

Estimating λ_bg in an image with background and signal pixels

Using the calibrated camera parameters (g, λ_TL, σ_TL, Δ), we can estimate the Poisson parameter λ_bg describing background pixels in sCMOS images containing both background and signal regions, using the procedure described in Materials and methods. We estimate the background Poisson parameter of a fluorescence image of DNA on glass (Fig 3), acquired with a 100 ms exposure time and the balanced gain setting.

Fig 3. (a) An sCMOS image of fluorescently labelled single DNA molecules deposited on a glass slide.

The image is acquired using the procedure described in the subsection Experiments in Materials and methods. The image is split into tiles of size 64x64 pixels, where each tile is given a label {row,column}, with row,column = 1,...,16 in this example. (b) A histogram of the image counts for a single tile, here tile {7,13} (yellow-bordered tile in Fig 3(a)). The blue bars represent pixels regarded as true background, while the orange bars represent the outliers (pixels not regarded as true background). The background Poisson parameter λ_bg and the image count threshold were estimated as described in Materials and methods; the threshold separates the blue and orange bars and was determined using a p-value threshold, pGoF = 0.01, for the goodness-of-fit tests. The dashed black curve shows the fitted PMF for the estimated background, extended to the full range of image counts (in our method, we fit a truncated PMF to the blue bars). (c) To show the contrast across the tiles in the image, we estimate λ_bg for another tile, {8,8} (green-bordered tile in Fig 3(a)). Thresholded and segmented versions of the image from panel (a) are found in the Supporting information, S1 Fig.

https://doi.org/10.1371/journal.pone.0335310.g003

To deal with non-uniform illumination, as in [10], we partition the image into tiles (64x64 pixels each). Focusing on tile {7,13}, we compute λ_bg using our truncated PMF fitting procedure described in Materials and methods. In Fig 3(b), the histogram's blue bars represent pixel intensities below the estimated image count threshold, identified as true background, while the orange bars denote uncertain (mixed background/signal) pixels. Our analysis yields λ_bg ≈ 102, corresponding to an average of approximately 102 photoelectrons (or, equivalently, the number of incident photons multiplied by the QE) generated in the sensor. To illustrate the robustness of our algorithm, we also fit the image counts from tile {8,8} in Fig 3(c). Here, the estimated λ_bg remains similar to that of tile {7,13}; however, the threshold increases to 197, highlighting the variation across the image. The sCMOS-PMF, Eq (13) (black dashed line), fits the full image count histogram well.
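
The tile-wise estimation can be sketched as a simple loop over non-overlapping tiles; `estimate_lambda_bg` stands for any callable implementing the truncated-PMF fit of Materials and methods (e.g. wrapping the fit_lambda_bg sketch above), and the tile labels {row,column} start at 1 as in Fig 3.

```python
def lambda_bg_per_tile(image, estimate_lambda_bg, tile=64):
    """Estimate the background Poisson parameter tile by tile (sketch)."""
    rows, cols = image.shape
    lam_map = {}
    for r in range(0, rows - tile + 1, tile):
        for c in range(0, cols - tile + 1, tile):
            counts = image[r:r + tile, c:c + tile].ravel()
            lam_map[(r // tile + 1, c // tile + 1)] = estimate_lambda_bg(counts)
    return lam_map
```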

One of the key advantages of our algorithm is its performance on very low-exposure images (e.g., 1 ms). To demonstrate this, we plot the sCMOS-PMF fit on the image count histograms of the sCMOS camera for three low exposure times, 1 ms, 8 ms, and 20 ms, in Fig 4. For the 1 ms image, the estimated average background Poisson parameter λ_bg indicates that, on average, each pixel records less than 1 photoelectron.

Fig 4. Performance of the sCMOS-PMF algorithm for low exposure time images: (a) 1 ms, (b) 8 ms, and (c) 20 ms (single tile).

The sample being imaged is identical to the one in Fig 3. Black dashed curves represent the fitted PMFs. Estimated background Poisson parameters (λ_bg) and threshold counts are given in the figure legends for each case.

https://doi.org/10.1371/journal.pone.0335310.g004

Finally, to further validate the robustness of our sCMOS-PMF fitting procedure, we calculated λ_bg from DNA-on-glass images across a wide range of exposure times, from 1 ms to 1000 ms. Ideally, the algorithm should provide a background Poisson parameter that follows a linear relationship with exposure time (a 10 times longer exposure results in 10 times more photons hitting the sensor). Fig 5 shows this expected behavior: λ_bg indeed displays a linear relationship with exposure time.

Fig 5. Relationship of the estimated background Poisson parameter (λ_bg) with the exposure time of the sCMOS camera.

We show the average background Poisson parameter (over all tiles) for images with exposure times of 1-1000 ms. The inset shows a zoomed-in version of the plot for 1-16 ms. Notice that λ_bg increases linearly with exposure time, as it should (since the number of collected photons increases linearly with exposure time). The error bars are the standard deviations of λ_bg across the tiles in the image.

https://doi.org/10.1371/journal.pone.0335310.g005

Probabilistic image thresholding and image segmentation

With the camera parameters and λ_bg estimated, we next apply our probabilistic image thresholding and segmentation methods, see "Materials and methods". Fig 6(a) shows an example of a bacterial cell image, acquired using the procedure described in the subsection Experiments. Fig 6(b) displays a thresholded (binarized) version of this image, where white pixels ideally represent non-background regions (given a significance level set by pbinarize). From the binarized image we perform image segmentation; Fig 6(c) displays the detected regions (yellow boundaries) obtained using the method described in "Materials and methods".

Fig 6. The photophysical sCMOS image processing pipeline applied to bacteria cells overexpressing GFP.

(a) A 100 ms exposure time image of fluorescently stained cells (balanced gain setting). (b) Binarized image produced by our unsupervised thresholding algorithm with a p-value threshold of pbinarize = 0.01. Our algorithm is designed so that for this choice of threshold we expect approximately 1% false positives, which from visual inspection appears roughly to be the case (no ground truth is available here). (c) Output of our segmentation approach, with the yellow pixels forming the boundaries of the "objects" identified by our unsupervised segmentation method. Example images at lower and higher exposure times are found in the Supporting information, S2-S3 Figs.

https://doi.org/10.1371/journal.pone.0335310.g006

We notice that, visually, our unsupervised thresholding and segmentation procedure works very well for the above example. To further evaluate the robustness and performance of our image processing pipeline, we applied it to a few more datasets, including bacterial cell images acquired at other exposure times and fluorescently stained DNA on glass substrates (Fig 3); see the Supporting information, S2-S3 Figs.

Comparison of λ_bg estimates for sCMOS and EMCCD cameras focusing on the same FOV

To test the robustness of our algorithm across different fluorescence cameras, we conducted the Dual camera same FOV experiment (see the "Experiments" subsection in "Materials and methods"), where both cameras were mounted on the same microscope and focused on the same field of view (FOV) containing a DNA sample on glass. This approach ensures that, under identical experimental conditions (same light source intensity and exposure time), both cameras should ideally record the same number of background photons per unit area after offset and noise parameter corrections.

We compared the sCMOS-derived estimates of λ_bg per pixel area with their counterparts from the EMCCD camera, since the pixel size of the sCMOS camera (11 μm) differs from that of the EMCCD (13 μm). For the EMCCD setup, we applied the EMCCD-PIA algorithm by Krog et al. [10]. Prior to analysis, the EMCCD calibration was performed following the protocol of Krog et al. [10], with the estimated parameters listed in the caption of Fig 7.

Fig 7. Comparison of the mean background Poisson parameter per pixel area (λ_bg per pixel area) between the sCMOS and EMCCD cameras.

Images were recorded by the sCMOS and EMCCD cameras focusing on the same field of view (see the Dual camera same FOV experiment in the "Experiments" subsection). For each camera, λ_bg was first calculated in individual image tiles; the plotted values represent the mean over all tiles, with error bars showing the standard deviation of λ_bg across tiles. The Poisson parameter is scaled by the pixel area since the pixel size differs between the two cameras (see the "Experiments" section). The calibration parameters for the sCMOS camera are the same as calculated in the parameter estimation section, while for the EMCCD camera the EM gain knob on the camera was set to 100. The EMCCD calibration parameters, gain/AD factor (g/f), offset (Δ), and read noise (r0), are 4.518, 483.77, and 7.81, respectively, calculated following the procedure discussed for EMCCD cameras in [10].

https://doi.org/10.1371/journal.pone.0335310.g007

We find that the sCMOS algorithm produces an estimate of λ_bg per pixel area which aligns closely with that from the EMCCD setup, see Fig 7. Minor differences in background counts were observed between the two cameras, potentially due to differences in their quantum efficiencies (QE). Additionally, at very low exposure times, EMCCD images exhibit pixel bleeding [30]. This effect, together with EM gain-amplified spurious charge, could cause a mixing of signal and background, potentially leading to an overestimation of λ_bg for EMCCD cameras at low exposure times, in agreement with the findings in Fig 7 [31,32]. Furthermore, EMCCDs exhibit lower dark current due to their deeper cooling, which reduces thermal noise accumulation in long-exposure images [33]. This lower dark current may contribute to a reduced effective photon count in high-exposure scenarios for EMCCD compared to sCMOS cameras [20,34].

This diligent setup and comparison confirm that, while slight variations exist, our algorithms are robust across different fluorescence camera systems.

Summary and outlook

We developed a probabilistic image analysis framework for sCMOS cameras, leveraging the statistical properties of photon emission and detection. By exploiting the multiplicative property of the characteristic functions of independent random variables, we derived an expression for the sCMOS-specific probability mass function (sCMOS-PMF) through numerical inversion. This PMF enables estimation of the background Poisson parameter, λ_bg, representing the average number of photoelectrons in background pixels.

Our algorithm excels in low-intensity regimes, where traditional thresholding methods often fail due to overlapping signal and noise distributions. To validate the method, we adapted the EMCCD-PIA pipeline for comparative analysis of EMCCD and sCMOS images under equal experimental conditions. The experiments demonstrated agreement in the estimation of λ_bg, supporting the robustness of the algorithm.

The framework is not restricted to fluorescence imaging; it generalizes to any sCMOS-acquired image where camera noise parameters (readout noise, gain, offset) can be pre-calibrated and proves particularly valuable for limited photon budget experiments, such as imaging photo-sensitive biological specimens or astronomical observations.

We provide public implementations (GUI and non-GUI versions) enabling experimentalists to: a) directly process raw sCMOS images through automated calibration pipelines and robustly estimate background Poisson parameters (λ_bg) for precise signal-background separation, and b) generate thresholded and segmented outputs without any user intervention. The software is available at https://github.com/dibyajyoti41.

Supporting information

Acknowledgments

We thank Ann-Brit Schafer from the Division of Chemical Biology, Department of Life Sciences, Chalmers University of Technology for providing the bacterial cells.

References

  1. 1. Nakamura J. Basics of image sensors. Image Sensors and Signal Processing for Digital Still Cameras. Boca Raton: CRC Press; 2006. p. 66–77.
  2. 2. Liu S, Mlodzianoski MJ, Hu Z, Ren Y, McElmurry K, Suter DM, et al. sCMOS noise-correction algorithm for microscopy images. Nat Methods. 2017;14(8):760–1. pmid:28753600
  3. 3. Mandracchia B, Hua X, Guo C, Son J, Urner T, Jia S. Fast and accurate sCMOS noise correction for fluorescence microscopy. Nat Commun. 2020;11(1):94. pmid:31901080
  4. 4. Hua X, Han K, Mandracchia B, Radmand A, Liu W, Kim H, et al. Light-field flow cytometry for high-resolution, volumetric and multiparametric 3D single-cell analysis. Nat Commun. 2024;15(1):1975. pmid:38438356
  5. 5. Cardenas-Benitez B, Hurtado R, Luo X, Lee AP. Three-dimensional isotropic imaging of live suspension cells enabled by droplet microvortices. Proc Natl Acad Sci U S A. 2024;121(44):e2408567121. pmid:39436653
  6. 6. Baker M. Faster frames, clearer pictures. Nat Methods. 2011;8(12):1005–9.
  7. 7. Zhang Z, Wang Y, Piestun R, Huang Z-L. Characterizing and correcting camera noise in back-illuminated sCMOS cameras. Opt Express. 2021;29(5):6668–90. pmid:33726183
  8. 8. Huang Z-L, Zhu H, Long F, Ma H, Qin L, Liu Y, et al. Localization-based super-resolution microscopy with an sCMOS camera. Opt Express. 2011;19(20):19156–68. pmid:21996858
  9. 9. Baradez M-O, McGuckin CP, Forraz N, Pettengell R, Hoppe A. Robust and automated unimodal histogram thresholding and potential applications. Pattern Recognition. 2004;37(6):1131–48.
  10. 10. Krog J, Dvirnas A, Ström OE, Beech JP, Tegenfeldt JO, Müller V, et al. Photophysical image analysis: unsupervised probabilistic thresholding for images from electron-multiplying charge-coupled devices. PLoS One. 2024;19(4):e0300122. pmid:38578724
  11. 11. Otsu N. A threshold selection method from gray-level histograms. IEEE Trans Syst, Man, Cybern. 1979;9(1):62–6.
  12. 12. Foi A, Trimeche M, Katkovnik V, Egiazarian K. Practical Poissonian-Gaussian noise modeling and fitting for single-image raw-data. IEEE Trans Image Process. 2008;17(10):1737–54. pmid:18784024
  13. 13. Mäkitalo M, Foi A. A closed-form approximation of the exact unbiased inverse of the Anscombe variance-stabilizing transformation. IEEE Trans Image Process. 2011;20(9):2697–8. pmid:21356615
  14. 14. Huang F, Hartwich TMP, Rivera-Molina FE, Lin Y, Duim WC, Long JJ, et al. Video-rate nanoscopy using sCMOS camera-specific single-molecule localization algorithms. Nat Methods. 2013;10(7):653–8. pmid:23708387
  15. 15. Wei K, Fu Y, Zheng Y, Yang J. Physics-based noise modeling for extreme low-light photography. IEEE Trans Pattern Anal Mach Intell. 2022;44(11):8520–37. pmid:34375279
  16. 16. Teledyne Photometrics. Prime 95B Scientific CMOS Camera Handbook. Teledyne Photometrics. 2019. https://www.photometrics.com/wp-content/uploads/2019/11/95B-Handbook_e-book_11072019.pdf
  17. 17. Müller V, Dvirnas A, Andersson J, Singh V, Kk S, Johansson P, et al. Enzyme-free optical DNA mapping of the human genome using competitive binding. Nucleic Acids Res. 2019;47(15):e89. pmid:31165870
  18. 18. Sasanian N, Sharma R, Lubart Q, Kk S, Ghaeidamini M, Dorfman KD, et al. Probing physical properties of single amyloid fibrils using nanofluidic channels. Nanoscale. 2023;15(46):18737–44. pmid:37953701
  19. 19. Schäfer A-B, Sidarta M, Abdelmesseh Nekhala I, Marinho Righetto G, Arshad A, Wenzel M. Dissecting antibiotic effects on the cell envelope using bacterial cytological profiling: a phenotypic analysis starter kit. Microbiol Spectr. 2024;12(3):e0327523. pmid:38289933
  20. 20. Teledyne. Dark current. 2025. https://www.teledynevisionsolutions.com/learn/learning-center/imaging-fundamentals/dark-current/
  21. 21. Joiner BL, Rosenblatt JR. Some properties of the range in samples from Tukey’s symmetric lambda distributions. Journal of the American Statistical Association. 1971;66(334):394–9.
  22. 22. Oshana R. Overview of digital signal processing algorithms, part II - Part 10 in a series of tutorials in instrumentation and measurement. IEEE Instrum Meas Mag. 2007;10(2):53–8.
  23. 23. Cherniak G, Nemirovsky A, Nemirovsky Y. Revisiting the modeling of the conversion gain of CMOS image sensors with a new stochastic approach. Sensors (Basel). 2022;22(19):7620. pmid:36236717
  24. 24. Cloudy Nights. Have I understood CMOS camera gain correctly? 2023. https://www.cloudynights.com/topic/610724-have-i-understood-cmos-camera-gain-correctly/
  25. 25. Lukacs E. Characteristic functions. 2nd ed. Griffin; 1970.
  26. 26. Hu H. Poisson distribution and application in photon-limited imaging. Knoxville, TN: University of Tennessee; 2008.
  27. 27. Gil-Pelaez J. Note on the inversion theorem. Biometrika. 1951;38(3–4):481–2.
  28. 28. Casella G, Berger R. Statistical inference. Chapman and Hall/CRC; 2024.
  29. 29. NIST/SEMATECH. Probability plot correlation coefficient (PPCC) plot. Electronic Handbook of Statistical Methods. 2024. https://www.itl.nist.gov/div898/handbook/eda/section3/ppccplot.htm
  30. 30. Kim TJ, Tuerkcan S, Ceballos A, Pratx G. Modular platform for low-light microscopy. Biomed Opt Express. 2015;6(11):4585–98. pmid:26601020
  31. 31. Harpsøe KBW, Jørgensen UG, Andersen MI, Grundahl F. High frame rate imaging based photometry. A&A. 2012;542:A23.
  32. 32. Sage D, Pham T-A, Babcock H, Lukes T, Pengo T, Chao J, et al. Super-resolution fight club: assessment of 2D and 3D single-molecule localization microscopy software. Nat Methods. 2019;16(5):387–95. pmid:30962624
  33. 33. Daigle O, Djazovski O, Dupuis J, Doyon R, Artigau É. Astronomical imaging with EMCCDs using long exposures. In: SPIE Proceedings. 2014. p. 91540D. https://doi.org/10.1117/12.2056617
  34. 34. Kaur S, Tang ZF, McMillen DR. A framework to enhance the signal-to-noise ratio for quantitative fluorescence microscopy. PLoS One. 2025;20(9):e0330718. pmid:40906681