## Abstract

The misalignment between recorded in-focus and out-of-focus images in the Phase Diversity (PD) algorithm leads to a dramatic decline in wavefront detection accuracy and image recovery quality for segmented active optics systems. This paper demonstrates, for the first time, the theoretical relationship between image misalignment and the tip-tilt terms in the Zernike polynomials of the wavefront phase, and proposes an efficient two-step alignment correction algorithm to eliminate these misalignment effects. The algorithm first computes a spatial 2-D cross-correlation of the misaligned images, reducing the offset to 1 or 2 pixels and narrowing the search range for alignment. It then eliminates the need for subpixel fine alignment, achieving adaptive correction by adding additional tip-tilt terms to the Optical Transfer Function (OTF) of the out-of-focus channel. The experimental results demonstrate the feasibility and validity of the proposed correction algorithm in improving the measurement accuracy during the co-phasing of segmented mirrors. With this alignment correction, the reconstructed wavefront is more accurate, and the recovered image is of higher quality.

**Citation:** Yue D, Xu S, Nie H, Wang Z (2016) An Efficient Correction Algorithm for Eliminating Image Misalignment Effects on Co-Phasing Measurement Accuracy for Segmented Active Optics Systems. PLoS ONE 11(3): e0148872. doi:10.1371/journal.pone.0148872

**Editor:** Feng Shao, Ningbo University, CHINA

**Received:** May 10, 2015; **Accepted:** January 23, 2016; **Published:** March 2, 2016

**Copyright:** © 2016 Yue et al. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

**Data Availability:** All relevant data are within the paper.

**Funding:** This work was supported by the National Natural Science Foundation of China, Grant No. 61205143, http://www.ciomp.cas.cn/.

**Competing interests:** The authors have declared that no competing interests exist.

## Introduction

Segmented active optics (SAO) systems can meet the demands that next-generation space telescopes be lighter, larger and foldable [1]. A segmented primary mirror stitches an array of sub-mirrors together to approach the capabilities of a monolithic mirror. This type of space telescope can be folded for launch and deployed autonomously after reaching orbit [2]. However, high-quality images equivalent to those of a monolithic mirror can only be achieved if the segmented mirrors are co-phased. Co-phasing removes the relative piston aberrations between segments and the tip-tilt aberrations of each segment, making it one of the core technologies for the practical application of segmented telescopes.

Many methods have been proposed for co-phasing segmented mirrors to obtain nearly diffraction-limited performance from the total aperture, such as Mach-Zehnder interferometer sensing [3–4], modified Shack-Hartmann wavefront sensing (WFS) [5–6], curvature sensing [7–8], pyramid sensing [9–10], Zernike phase contrast sensing [11] and phase diversity WFS (PD WFS) [12–15]. PD WFS stands out among co-phasing technologies for SAO systems because traditional wavefront reconstruction methods such as Shack-Hartmann WFS tend to break down at the segment edges. Additionally, PD WFS requires no new instrumentation in the already complex optical system and is sensitive to both relative piston and tip-tilt aberrations for continuous and discontinuous distorted input wavefronts.

The basic principle of the PD algorithm is the simultaneous acquisition of a pair of short-exposure images separated by a known out-of-focus distance, from which an iterative optimization model is constructed based on maximum likelihood estimation theory. The distorted wavefront is then reconstructed, and an unknown object in the field can be recovered from the intensity distributions of the acquired images. PD WFS requires not only the synchronous acquisition of the images but also their rigorous alignment. Many refinements of the PD technique have been proposed, such as modifying the optimization algorithm to improve accuracy and noise tolerance [16] or establishing a linear relationship between the detected images and the unknown aberrations to improve speed [17–18]. All these methods rely on at least two images that are ideally free of misalignment. When there are horizontal and/or vertical position offsets between an in-focus-defocused image pair, the wavefront sensing accuracy and the quality of the recovered image decline severely. The rigorous theoretical demonstration presented in this paper proves that this image misalignment error corresponds precisely to the tip-tilt aberrations of the wavefront phase. Because co-phasing of segmented mirrors aims to eliminate exactly the piston and tip-tilt errors, strict alignment of the image pair is a mandatory step in PD-based wavefront detection and object recovery for SAO systems.

This paper analyzes the reasons for the appearance of image misalignment errors and provides a rigorous theoretical proof of the correspondence between these misalignment errors and the tip-tilt aberrations in the Zernike polynomials of the wavefront phase. To eliminate the undesirable effects of image misalignment on the wavefront detection accuracy and the object image recovery quality, an efficient two-step alignment correction algorithm is proposed based on the characteristics of PD WFS for SAO systems. This algorithm utilizes a spatial 2-D cross-correlation [19–21] of the image pair for coarse alignment, restricting the offset to 1 or 2 pixels and narrowing the search range for the next step. Then, adaptive correction is realized without subpixel fine alignment by adding additional tip-tilt terms to the Optical Transfer Function (OTF) of the out-of-focus channel as alignment parameters. A comparison of the reconstructed wavefronts and recovered objects with and without alignment correction demonstrates the effectiveness and feasibility of the proposed algorithm and reveals that strict image alignment is indispensable in the co-phasing of segmented mirrors to obtain nearly diffraction-limited performance.

## Materials and Methods

### Theory demonstration

The basic theory of the PD algorithm demands acquiring multi-channel images of the same target simultaneously, including images recorded in the in-focus plane and in the out-of-focus plane. The additional channel contains only the defocus aberration introduced by the known out-of-focus distance, without any other aberrations. Ideally, there should be no position offsets between the recorded images. However, misalignment may occur when fixing the camera or the optical platform in the optical setup, leading to relative offsets within the image pair. Moreover, the CCD camera target surface is often larger than the object image; thus, regions of interest (ROIs) of the in-focus and out-of-focus images must be cropped separately to reduce the subsequent computational load, which can also introduce misalignment.

Without loss of generality, single-frame images from two channels are considered here. Consider the image *i*_{1} recorded in the in-focus plane as a reference; then, the image collected in the out-of-focus plane has relative offsets Δ*u* and Δ*v* in the horizontal and vertical directions, respectively, given by Eq (1):

$$i_2'(u,v) = i_2(u-\Delta u,\,v-\Delta v) \tag{1}$$

where *i*_{2} is the ideal out-of-focus image without any misalignment errors. According to the Fourier transform properties, the formula above can be rewritten as Eq (2) in the frequency domain:

$$I_2'(\xi,\eta) = I_2(\xi,\eta)\,e^{-j2\pi(\xi\Delta u + \eta\Delta v)} \tag{2}$$
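The shift property in Eq (2) is easy to verify numerically. The sketch below (NumPy; the test image and the pixel offsets are arbitrary illustrative choices, not values from the paper) shifts an image in the spatial domain and reproduces the same result by multiplying its spectrum with the linear phase ramp of Eq (2):

```python
import numpy as np

# Build a simple test "image": a Gaussian blob off-center.
N = 64
y, x = np.mgrid[0:N, 0:N]
img = np.exp(-((x - 20)**2 + (y - 28)**2) / 30.0)

du, dv = 3, 5  # integer misalignment in pixels

# Spatial-domain shift (circular, to match the DFT convention).
shifted = np.roll(np.roll(img, du, axis=1), dv, axis=0)

# Frequency-domain shift: multiply the spectrum by the linear phase
# ramp exp(-j*2*pi*(xi*du + eta*dv)) of Eq (2), then invert.
xi = np.fft.fftfreq(N)[np.newaxis, :]    # horizontal frequencies
eta = np.fft.fftfreq(N)[:, np.newaxis]   # vertical frequencies
ramp = np.exp(-2j * np.pi * (xi * du + eta * dv))
shifted_fd = np.fft.ifft2(np.fft.fft2(img) * ramp).real

assert np.allclose(shifted, shifted_fd, atol=1e-10)
```

Circular shifts are used so that the DFT convention matches exactly; for the recorded images, the offsets are genuine translations, but the same phase-ramp relation holds.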

By taking the frequency spectrum of the out-of-focus image with misalignment into the PD cost function, we obtain Eq (3):

$$E = \sum_{\xi,\eta}\left[\,|I_1|^2 + |I_2'|^2 - \frac{\left|I_1 S_1^{*} + I_2' S_2^{*}\right|^2}{|S_1|^2 + |S_2|^2}\right] \tag{3}$$

where *S*_{1} and *S*_{2} are the OTFs of the in-focus and out-of-focus channels, respectively. The misalignment error of the images affects only the numerator of the third term; expanding it yields Eq (4):

$$\left|I_1 S_1^{*} + I_2' S_2^{*}\right|^2 = \left|I_1 S_1^{*} + I_2\left[S_2\,e^{\,j2\pi(\xi\Delta u + \eta\Delta v)}\right]^{*}\right|^2 \tag{4}$$

The purpose of the derivation above is to transfer the misalignment of the images onto the wavefront phase, thus allowing a deeper analysis of the OTF of the out-of-focus channel. Eq (5) is obtained using the Fourier phase-shift theorem:

$$S_2'(\xi,\eta) = S_2(\xi,\eta)\,e^{\,j2\pi(\xi\Delta u + \eta\Delta v)} \;\Longleftrightarrow\; s_2'(u,v) = s_2(u+\Delta u,\,v+\Delta v) \tag{5}$$

Utilizing the relationships among the point spread function (PSF), the impulse response function and the generalized pupil function in incoherent imaging systems, Eqs (6) and (7) can be obtained as

$$s_2'(u,v) = \left|h_2'(u,v)\right|^2,\qquad h_2'(u,v) = \mathcal{F}^{-1}\!\left\{P_2'(\rho,\theta)\right\} \tag{6}$$

$$P_2'(\rho,\theta) = A(\rho,\theta)\exp\left\{j\left[\phi(\rho,\theta) + \phi_d(\rho,\theta) + 2\pi\left(\Delta u\,\rho\cos\theta + \Delta v\,\rho\sin\theta\right)\right]\right\} \tag{7}$$

where *s*_{2}′ and *h*_{2}′ are the PSF and impulse response function of the out-of-focus channel with misalignment, respectively, *P*_{2}′ and *P*_{2} are the generalized pupil functions with and without misalignment errors, respectively, *A*(*ρ*,*θ*) is the aperture amplitude function (in normalized pupil coordinates), and *ϕ*(*ρ*,*θ*) and *ϕ*_{d}(*ρ*,*θ*) denote the co-phase aberrations and the constant out-of-focus aberration, respectively. Since *ρ*cos*θ* = *x* and *ρ*sin*θ* = *y* by the transformation between Cartesian and polar coordinates, a significant conclusion can be drawn here: the image misalignment errors correspond precisely to the tip-tilt terms in the Zernike polynomials of the wavefront phase. Thus, through the derivation of the above formulas, image misalignment errors have been mapped to the wavefront aberration of the generalized pupil function, which is the sound theoretical foundation for the proposed correction algorithm.

The image misalignment errors introduce no additional blur; that is, they have no influence on the 4th- or higher-order Zernike aberrations. However, in the co-phasing of SAO systems, the main aberrations to be removed are precisely the relative piston aberrations between segments and the tip-tilt aberrations of each segment. Thus, for wavefront detection and image restoration of segmented telescopes based on the PD algorithm, the image pair from the in-focus and out-of-focus planes must be strictly aligned.

### Alignment correction algorithm

According to the theoretical analysis above, the images collected in the in-focus and out-of-focus planes require alignment correction. The in-focus image is typically set as a reference, and the out-of-focus image is aligned to it. However, the introduced out-of-focus aberrations and the unavoidable noise make conventional alignment methods inefficient. Tailored to the specific features of PD-based SAO systems and to the theoretical relationship between misalignment and tip-tilt errors, a two-step alignment correction method is proposed in this paper. First, a spatial 2-D cross-correlation of the in-focus and out-of-focus images is computed to locate a coarse alignment position and narrow the search range. Then, the offsets remaining after coarse alignment in the vertical and horizontal directions are used as search parameters, so that adaptive correction is realized within the optimization of the PD cost function without subpixel fine alignment.

#### Coarse alignment correction.

Many conventional alignment approaches, such as phase correlation [22] and cross-correlation in the frequency domain [23], cannot yield accurate results here because of the introduced out-of-focus aberrations and the cut-off frequency in the OTFs of the segmented optical system, which produces zero-valued points in the frequency spectra of the degraded images. This paper instead applies a spatial 2-D cross-correlation directly to the pixel matrices of the in-focus and out-of-focus images and then locates the position of its maximum; there is a direct correspondence between this coordinate position and the image offset [24–26].

The spatial 2-D cross-correlation of an *M* × *N* matrix X and a *P* × *Q* matrix H is a matrix C of size (*M* + *P* − 1) × (*N* + *Q* − 1) given by Eq (8):

$$C(k,l)=\sum_{m=0}^{M-1}\sum_{n=0}^{N-1}X(m,n)\,\overline{H}(m-k,\,n-l) \tag{8}$$

where −(*P*−1) ≤ *k* ≤ *M*−1, −(*Q*−1) ≤ *l* ≤ *N*−1 and the bar over H denotes complex conjugation. Assume that matrix X is the template and that (*x*_{peak}, *y*_{peak}) is the lag coordinate (*k*, *l*) at which *C*(*k*, *l*) peaks; the coarse pixel offsets of H relative to X are then given by Eq (9):

$$\Delta u = -x_{peak},\qquad \Delta v = -y_{peak} \tag{9}$$

With coarse alignment correction, the misalignment between the image pair can be limited to 1 or 2 pixels, which narrows the search range for subsequent adaptive alignment correction and improves the computation efficiency and accuracy.
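The coarse step can be sketched in a few lines. The function below is a hypothetical illustration (NumPy/SciPy), not the authors' code; `scipy.signal.correlate2d` implements the cross-correlation of Eq (8), and the integer offsets are read off from the peak position as in Eq (9):

```python
import numpy as np
from scipy.signal import correlate2d

def coarse_align(in_focus, out_focus):
    """Estimate the integer-pixel offset of out_focus relative to in_focus
    from the peak of their spatial 2-D cross-correlation (Eqs 8-9)."""
    # Remove the mean so a bright, flat background does not dominate the peak.
    x = in_focus - in_focus.mean()
    h = out_focus - out_focus.mean()
    c = correlate2d(x, h, mode='full')          # size (M+P-1) x (N+Q-1)
    peak = np.unravel_index(np.argmax(c), c.shape)
    # Zero offset puts the peak at stored index (P-1, Q-1); the offset of
    # the out-of-focus image is the negated lag of the peak.
    dv = (h.shape[0] - 1) - peak[0]             # vertical offset (rows)
    du = (h.shape[1] - 1) - peak[1]             # horizontal offset (columns)
    return du, dv
```

The out-of-focus image can then be shifted back by the returned integer offsets, e.g. with `np.roll`, before the adaptive step handles the remaining 1- or 2-pixel residual.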

#### Adaptive alignment correction.

After coarse alignment, a residual offset of 1 or 2 pixels remains between the in-focus and out-of-focus images; this offset still seriously affects the wavefront detection accuracy and image restoration quality. According to the theoretical proof, image misalignment errors can be mapped to the wavefront phase of the generalized pupil function as tip-tilt terms in the Zernike polynomials. Therefore, additional tip-tilt terms are added to the Zernike polynomials of the original wavefront phase to match the offset between the image pair, without requiring subpixel fine alignment. Adaptive correction is then achieved by the optimization algorithm that solves the PD cost function.

The cost function of the segmented optics system based on the PD algorithm is expressed as Eq (10):

$$E = \sum_{\xi,\eta}\left[\,|I_1|^2 + |I_2|^2 - \frac{\left|I_1 S_1^{*} + I_2 S_2^{*}\right|^2}{|S_1|^2 + |S_2|^2}\right] \tag{10}$$

where *I*_{1} and *I*_{2} represent the frequency spectra of the in-focus and out-of-focus images with misalignment errors, respectively. In the practical iteration process, the in-focus image is set as the reference by default, and no tip-tilt matching terms are added to its channel. The modified OTFs are then computed from the generalized pupil functions whose wavefront phases carry the matching terms, as given in Eq (11):

$$\phi_1 = \sum_{n=1}^{N}\left(E_n Z_0 + T_{xn} Z_1 + T_{yn} Z_2 + D_n Z_3\right),\qquad
\phi_2 = \phi_d + \sum_{n=1}^{N}\left[E_n Z_0 + \left(T_{xn}+\Delta T_{xn}\right) Z_1 + \left(T_{yn}+\Delta T_{yn}\right) Z_2 + D_n Z_3\right] \tag{11}$$

where *n* is the index of the sub-aperture, *N* is the total number of sub-mirrors, and *Z*_{0}, *Z*_{1}, *Z*_{2} and *Z*_{3} correspond to the piston, tip, tilt and defocus aberrations in the Zernike polynomials, respectively. *E*_{n}, *T*_{xn}, *T*_{yn} and *D*_{n} are the corresponding Zernike polynomial coefficients of the *n*th sub-mirror, *ϕ*_{d} is the known diversity defocus, and Δ*T*_{xn} and Δ*T*_{yn} are the additional tip-tilt matching terms. During each iteration of the search process, the OTFs of the in-focus and out-of-focus channels obtained from Eq (11) are used to evaluate the cost function. Thus, the tip-tilt errors introduced by the misalignment between images are eliminated automatically by the PD optimization algorithm without subpixel fine alignment, and the real aberration coefficients are obtained.
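To make the idea concrete, the sketch below implements a heavily simplified version of this adaptive correction for a single circular aperture rather than the six-segment pupil (an illustrative assumption; `pd_cost`, the grid sizes and the aberration values are all hypothetical). The two extra search parameters `dTx` and `dTy` play the role of the matching terms ΔT_{xn} and ΔT_{yn} in Eq (11):

```python
import numpy as np

# Pupil grid: a single circular aperture stands in for the segmented pupil.
N = 64
yy, xx = (np.mgrid[0:N, 0:N] - N / 2) / (N / 4)   # normalized pupil coords
rho2 = xx**2 + yy**2
pupil = (rho2 <= 1.0).astype(float)
defocus = 3.0 * (2.0 * rho2 - 1.0) * pupil        # known diversity phase (rad)

def otf(phase):
    """OTF of one channel: FFT of the normalized PSF of the aberrated pupil."""
    h = np.fft.fft2(pupil * np.exp(1j * phase))   # amplitude impulse response
    psf = np.abs(h)**2
    return np.fft.fft2(psf / psf.sum())

def pd_cost(params, I1, I2):
    """Paxman-style PD metric; the last two parameters are the tip-tilt
    matching terms (dTx, dTy) that absorb residual image misalignment."""
    a_tip, a_tilt, dTx, dTy = params
    phi = (a_tip * xx + a_tilt * yy) * pupil      # unknown aberration
    S1 = otf(phi)
    S2 = otf(phi + defocus + (dTx * xx + dTy * yy) * pupil)
    denom = np.abs(S1)**2 + np.abs(S2)**2 + 1e-12
    num = np.abs(I1 * np.conj(S1) + I2 * np.conj(S2))**2
    return np.sum(np.abs(I1)**2 + np.abs(I2)**2 - num / denom)

# Noiseless forward simulation: a tip-tilt aberration plus an extra tilt on
# the defocused channel, standing in for the residual misalignment.
true_params = np.array([0.5, -0.3, 0.8, -0.6])
obj = np.random.default_rng(1).random((N, N))
F = np.fft.fft2(obj)
phi_true = (true_params[0] * xx + true_params[1] * yy) * pupil
I1 = F * otf(phi_true)
I2 = F * otf(phi_true + defocus
             + (true_params[2] * xx + true_params[3] * yy) * pupil)
```

Running `scipy.optimize.minimize(pd_cost, np.zeros(4), args=(I1, I2), method='L-BFGS-B')` would then search the aberration and matching terms jointly, mirroring the paper's use of L-BFGS; for noiseless data the metric is essentially zero at the true parameters and much larger elsewhere.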

The OTFs used to recover the object image also need the additional matching tip-tilt terms to obtain the correct result. Substituting the final searched coefficients of the actual aberrations and the matching tip-tilt terms into Eq (11), the OTFs of the corresponding channels are obtained, and the recovery of the object is achieved by Eq (12):

$$\hat{F}(\xi,\eta) = \frac{I_1 S_1^{*} + I_2 S_2^{*}}{|S_1|^2 + |S_2|^2},\qquad \hat{f}(u,v) = \mathcal{F}^{-1}\{\hat{F}\} \tag{12}$$
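Once the OTFs are known, the object recovery of Eq (12) is a closed-form, Wiener-like estimate and needs only a few lines. A minimal sketch (the function name and the small regularization constant `eps`, which guards frequencies beyond the OTF cutoff, are assumptions, not from the paper):

```python
import numpy as np

def recover_object(I1, I2, S1, S2, eps=1e-6):
    """Closed-form object estimate from the two image spectra and their
    OTFs (Eq 12); eps regularizes frequencies where both OTFs vanish."""
    F_hat = (I1 * np.conj(S1) + I2 * np.conj(S2)) / (
        np.abs(S1)**2 + np.abs(S2)**2 + eps)
    return np.real(np.fft.ifft2(F_hat))
```

For ideal OTFs (S1 = S2 = 1) the estimate reduces to the original object, which makes a convenient sanity check.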

The flowchart for the alignment correction method is shown in Fig 1.

## Results

In this section, the effectiveness and accuracy of the proposed alignment correction algorithm are demonstrated using several numerical simulations.

The parameters of the optical system used in the simulation are as follows. The segmented primary mirror consists of 6 hexagonal sub-mirrors; their arrangement and numbering are shown in Fig 2. Each hexagonal sub-mirror has diameter *d* and occupies 43×43 pixels in the pupil plane. The diameter of the primary mirror is *D*, corresponding to 128×128 pixels. To satisfy the Nyquist sampling theorem, the entire pupil plane is set to 256×256 pixels. The *F*^{#} of the optical system is 8, the monochromatic wavelength is 570 nm and the out-of-focus distance is set to 400*λ*.
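A hexagonal sub-aperture such as the 43×43-pixel segments above can be rasterized with a simple point-in-hexagon test. The sketch below (NumPy) is a hypothetical helper, assuming a regular pointy-top hexagon of circumradius R; the paper does not specify the segment orientation:

```python
import numpy as np

def hexagon_mask(n, R):
    """Boolean mask of a regular pointy-top hexagon with circumradius R
    (in pixels), centered on an n x n grid."""
    y, x = np.mgrid[0:n, 0:n] - (n - 1) / 2.0
    # Inside iff |x| <= sqrt(3)/2 * R and sqrt(3)*|y| + |x| <= sqrt(3) * R.
    return (np.abs(x) <= np.sqrt(3) * R / 2.0) & \
           (np.sqrt(3) * np.abs(y) + np.abs(x) <= np.sqrt(3) * R)
```

Six such masks, shifted to the segment centers of Fig 2 and embedded in the 256×256 pupil array, would reproduce a segmented aperture of this kind.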

The 1st sub-mirror is set as the standard mirror; then, a set of random piston and tip-tilt errors listed in Table 1 are applied to all sub-apertures with co-phase errors restricted within ±0.5*λ*. The resulting phase distribution of the distorted wavefront is shown in Fig 3.

First, consider the case in which the object image occupies only part of the CCD target surface; specifically, the object is located in the middle of the target surface and is distinct from the background. Panoramic images will be discussed later.

Take the resolution test chart commonly used in the laboratory, shown in Fig 4(A), as the observed object; the in-focus image and the out-of-focus image with misalignment errors obtained by the SAO simulation system are given in Fig 4(B) and 4(C), respectively. The coarsely aligned out-of-focus image is shown in Fig 4(D), with the misalignment error limited to 1–2 pixels. Adaptive alignment correction is then performed using the in-focus image in Fig 4(B) and the coarsely aligned out-of-focus image in Fig 4(D). This paper utilizes L-BFGS as the nonlinear optimization algorithm to solve the PD cost function. The reconstructed wavefront aberration coefficients are listed in Table 2. The reconstructed wavefront phase distribution, residual phase distribution and recovered object with alignment correction are shown in Fig 5(A), 5(B) and 5(C), respectively. For contrast, the experimental results without alignment correction are given in Fig 6: Fig 6(A) and 6(B) show the reconstructed wavefront phase distribution and residual phase distribution, respectively, and Fig 6(C) presents the recovered object without correction. These contrast experiments show that without alignment correction, the reconstructed wavefront deviates greatly from the actual wavefront, and the recovered object image has lower contrast, more blurred edges and an inclination that follows the misalignment errors.

An actual image of a lunar eclipse taken by a Maca telescope is used as another observed object for jointly estimating the wavefront and the object image under the larger aberrations of the SAO simulation system. The experimental results are shown in Fig 7.

For panoramic images, a window function should be applied to the images before alignment correction. The goal is to attenuate the misaligned image edges, improve the alignment accuracy and suppress the Fourier periodic edge effect. The size and type of the chosen window function depend on the object image and the PSF. A modified 2-D Hanning window, shown in Fig 8, is used in this paper for panoramic images and is applied to the observed images. The experimental results for a satellite map of a military base are shown in Fig 9. Another experiment, using a satellite map of an urban scene with dense texture, is also conducted to verify the alignment algorithm; the results are given in Fig 10.
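The windowing step can be sketched as follows. This is the plain separable 2-D Hanning window; the exact shape of the paper's modified window is not specified, so this form is an assumption:

```python
import numpy as np

def hanning2d(shape):
    """Separable 2-D Hanning window that tapers an image to zero at its
    borders, suppressing wrap-around (Fourier periodic) edge effects."""
    wy = np.hanning(shape[0])
    wx = np.hanning(shape[1])
    return np.outer(wy, wx)

# Both channels are multiplied by the same window before alignment:
# i1_w = i1 * hanning2d(i1.shape);  i2_w = i2 * hanning2d(i2.shape)
```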

To evaluate the wavefront detection accuracy and the image restoration quality, the following evaluation indicators are used:

- The root-mean-square error (*RMSE*) for wavefront detection:

  $$RMSE = \sqrt{\frac{1}{MN}\sum_{i=1}^{M}\sum_{j=1}^{N}\left[\phi_{0}(i,j)-\phi(i,j)\right]^{2}} \tag{13}$$

  where *ϕ*_{0} is the simulated phase distribution to be measured, *ϕ* is the phase distribution reconstructed by the PD algorithm, and *M* and *N* denote the sampling numbers. A smaller *RMSE* value indicates a higher wavefront detection accuracy.
- The *PV* value: the peak-to-valley value of the phase. A smaller *PV* value means a smaller residual error.
- The mean-square error (*MSE*) for image restoration:

  $$MSE = \frac{1}{MN}\sum_{i=1}^{M}\sum_{j=1}^{N}\left[f(i,j)-\hat{f}(i,j)\right]^{2} \tag{14}$$

  where *f*(*i*,*j*) and $\hat{f}(i,j)$ represent the pixel values at point (*i*,*j*) of the reference image and the image to be evaluated, respectively. A smaller *MSE* value indicates a better image quality.
- The similarity measurement (*SM*):

  $$SM = \frac{\sum_{i,j} f(i,j)\,\hat{f}(i,j)}{\sqrt{\sum_{i,j} f(i,j)^{2}\,\sum_{i,j}\hat{f}(i,j)^{2}}} \tag{15}$$

  *SM* uses the similarity of the gray values of the two images to indirectly assess the restoration effect. A value close to 1 indicates that the recovered image better approximates the original object, i.e., a better image quality.

The evaluation results of the wavefront detection accuracy and image restoration quality according to the defined indicators are listed in Table 3. Table 3 shows that with alignment correction, the *RMSE* and *PV* values of the residual phase are considerably smaller than those without correction and are within the acceptable range for SAO systems. For image restoration, the *MSE* value with alignment correction is smaller, indicating that the recovered image is of better quality. The *SM* value is close to 1, implying that the restored image has a higher similarity degree and lower deviation from the ideal image than that without correction.

## Discussion

For the co-phasing of SAO systems, the main task is to eliminate piston and tip-tilt errors. The PD algorithm is typically used to detect the discontinuous wavefront; however, misalignment between the images recorded in the in-focus and out-of-focus planes leads to a serious decline in wavefront detection accuracy and image restoration quality. To solve this problem, this paper rigorously demonstrated the theoretical relationship between image misalignment and the tip-tilt terms in the Zernike polynomials of the wavefront phase and then proposed an efficient two-step alignment correction algorithm. The algorithm applies a spatial 2-D cross-correlation to the misaligned image pair to achieve coarse alignment, which narrows the search range for the subsequent correction. Then, additional tip-tilt terms are added to the OTF of the out-of-focus channel as search parameters to realize adaptive correction without subpixel fine alignment. The experimental results, for both an object distinct from the background and panoramic images, demonstrate the effectiveness and accuracy of the proposed alignment correction algorithm: the reconstructed wavefront is determined more accurately, and the recovered image is closer to the ideal object and of higher quality.

## Acknowledgments

We would like to give special thanks to Prof. Shuyan Xu at the Chinese Academy of Sciences (CAS) for providing access to the CSA software.

## Author Contributions

Conceived and designed the experiments: DY HTN ZYW. Performed the experiments: DY HTN. Analyzed the data: DY HTN ZYW. Contributed reagents/materials/analysis tools: DY SYX. Wrote the paper: DY HTN.

## References

- 1. Lightsey P, Atkinson C, Clampin M, Feinberg L (2012) James Webb Space Telescope: large deployable cryogenic telescope in space. Optical Engineering 51(1), 011003:1–19. doi: 10.1117/1.OE.51.1.011003.
- 2. Lillie C F, Dailey D, Polidan R (2010) Large aperture telescopes for launch with the Ares V launch vehicle. Acta Astronautica 66(3): 374–381. doi: 10.1016/j.actaastro.2009.07.025.
- 3. Martinez L, Dohlen K, Yaitskova N, Dierickx P (2003) Mach-Zehnder wavefront sensor for phasing of segmented telescopes. Astronomical Telescopes and Instrumentation. International Society for Optics and Photonics: 564–573. doi: 10.1117/12.458012.
- 4. Yaitskova N, Dohlen K, Dierickx P, Montoya L (2005) Mach–Zehnder interferometer for piston and tip–tilt sensing in segmented telescopes: theory and analytical treatment. JOSA A, 22(6): 1093–1105. doi: 10.1364/JOSAA.22.001093. pmid:15984482
- 5. Chanan G (1989) Design of the Keck Observatory alignment camera. OPTCON'88 Conference, Applications of Optical Engineering. International Society for Optics and Photonics: 59–71. doi: 10.1117/12.950971.
- 6. Chanan G, Ohara C, Troy M (2000) Phasing the mirror segments of the Keck telescopes II: the narrow-band phasing algorithm. Applied Optics 39(25): 4706–4714. doi: 10.1364/AO.39.004706. pmid:18350062
- 7. Orlov V, Cuevas S, Garfias F, Voitsekhovich V V, Sanchez L (2000) Co-phasing of segmented mirror telescopes with curvature sensing. Astronomical Telescopes and Instrumentation. International Society for Optics and Photonics: 540–551. doi: 10.1117/12.393930.
- 8. Chanan G, Pintó A (2004) Wavefront curvature sensing on highly segmented telescopes. Appl. Opt. 16: 3279–3286. doi: 10.1117/12.459856.
- 9. Esposito S, Devaney N, Pinna E, Tozzi A, Stefanini P (2003) Cophasing of segmented mirrors using the pyramid sensor. Optical Science and Technology, SPIE's 48th Annual Meeting. International Society for Optics and Photonics: 72–78. doi: 10.1117/12.511507.
- 10. Esposito S, Pinna E, Puglisi A, Tozzi A, Stefanini P (2005) Pyramid sensor for segmented mirror alignment. Optics Letters 30(19): 2572–2574. doi: 10.1364/OL.30.002572. pmid:16208903
- 11. Surdej I, Yaitskova N, Gonte F (2010) On-sky performance of the Zernike phase contrast sensor for the phasing of segmented telescopes. Applied Optics 49(21): 4052–4062. doi: 10.1364/AO.49.004052. pmid:20648188
- 12. Lofdahl M G, Kendrick R L, Harwit A, Mitchell K E, Duncan A (1998) Phase diversity experiment to measure piston misalignment on the segmented primary mirror of the Keck II telescope. Astronomical Telescopes & Instrumentation. International Society for Optics and Photonics 3356:1190–1201. doi: 10.1117/12.324519.
- 13. Paxman R, Fienup J (1988) Optical misalignment sensing and image reconstruction using phase diversity. JOSA A 5(6): 914–923. doi: 10.1364/JOSAA.5.000914.
- 14. Li C, Zhang S (2012) Co-phasing of the segmented mirror based on the generalized phase diversity wavefront sensor. SPIE Astronomical Telescopes Instrumentation. International Society for Optics and Photonics: 84500B-84500B-6. doi: 10.1117/12.923645.
- 15. Meimon S, Delavaquerie E, Cassaing F, Fusco T, Mugnier L M, Michau V (2008) Phasing segmented telescopes with long-exposure phase diversity images. SPIE Astronomical Telescopes Instrumentation. International Society for Optics and Photonics: 701214-701214-10. doi: 10.1117/12.787835.
- 16. Yue D, Xu S Y, Nie H T (2015) Co-phasing of the segmented mirror and image retrieval based on phase diversity using a modified algorithm. Applied Optics 54(26):7917–7924. doi: 10.1364/AO.54.007917. pmid:26368964
- 17. Mocoeur I, Mugnier L M, Cassaing F (2009) Analytical solution to the phase-diversity problem for real-time wavefront sensing. Optics Letters 34(22): 3487–3489. doi: 10.1364/OL.34.003487. pmid:19927186
- 18. Smith C S, Marinica R, Verhaegen M (2013) Real-time wavefront reconstruction from intensity measurements. Adaptive Optics for the Extremely Large Telescopes (AO4ELT). doi: 10.12839/AO4ELT3.13243.
- 19. Guan T, He Y F, Duan L Y, Yu J Q (2014) Efficient BOF Generation and Compression for On-Device Mobile Visual Location Recognition. IEEE Multimedia 21(2): 32–41. doi: 10.1109/MMUL.2013.31.
- 20. Gao Y, Wang M, Zha Z J, Shen J L, Li X L, Wu X D (2013) Visual-Textual Joint Relevance Learning for Tag-Based Social Image Search. IEEE Transactions on Image Processing 22(1): 363–376. doi: 10.1109/TIP.2012.2202676. pmid:22692911
- 21. Ji R, Duan L Y, Chen J, Yao H, Yuan J, Rui Y, et al (2012) Location discriminative vocabulary coding for mobile landmark search. International Journal of Computer Vision 96(3): 290–314. doi: 10.1007/s11263-011-0472-9.
- 22. Foroosh H, Zerubia J B, Berthod M (2002) Extension of phase correlation to subpixel registration. IEEE Transactions on Image Processing 11(3): 188–200. doi: 10.1109/83.988953.
- 23. Guizar-Sicairos M, Thurman S T, Fienup J R (2008) Efficient subpixel image registration algorithms. Optics Letters 33(2): 156–158. doi: 10.1364/OL.33.000156. pmid:18197224
- 24. Guan T, He Y, Gao J, Yang J, Yu J (2013) On-Device Mobile Visual Location Recognition by Integrating Vision and Inertial Sensors. IEEE Transactions on Multimedia 15(7): 1688–1699. doi: 10.1109/TMM.2013.2265674.
- 25. Wei B C, Guan T, Yu J Q (2014) Projected Residual Vector Quantization for ANN Search. IEEE Multimedia 21(3): 41–51. doi: 10.1109/MMUL.2013.65.
- 26. Gao Y, Wang M, Tao D C, Ji R R, Dai Q H (2012) 3D Object Retrieval and Recognition with Hypergraph Analysis. IEEE Transactions on Image Processing 21(9): 4290–4303. doi: 10.1109/TIP.2012.2199502. pmid:22614650