
An Efficient Correction Algorithm for Eliminating Image Misalignment Effects on Co-Phasing Measurement Accuracy for Segmented Active Optics Systems

  • Dan Yue ,

    danzik3@126.com

    Affiliations Changchun Institute of Optics, Fine Mechanics and Physics, Chinese Academy of Sciences, Changchun, China, University of Chinese Academy of Sciences, Beijing, China

  • Shuyan Xu,

    Affiliation Changchun Institute of Optics, Fine Mechanics and Physics, Chinese Academy of Sciences, Changchun, China

  • Haitao Nie,

    Affiliations Changchun Institute of Optics, Fine Mechanics and Physics, Chinese Academy of Sciences, Changchun, China, University of Chinese Academy of Sciences, Beijing, China

  • Zongyang Wang

    Affiliation Changchun Institute of Optics, Fine Mechanics and Physics, Chinese Academy of Sciences, Changchun, China

Abstract

The misalignment between the in-focus and out-of-focus images recorded for the Phase Diversity (PD) algorithm leads to a dramatic decline in wavefront detection accuracy and image recovery quality for segmented active optics systems. This paper demonstrates, for the first time, the theoretical relationship between image misalignment and the tip-tilt terms in the Zernike polynomials of the wavefront phase, and an efficient two-step alignment correction algorithm is proposed to eliminate these misalignment effects. The algorithm first applies a spatial 2-D cross-correlation to the misaligned images, reducing the offset to 1 or 2 pixels and narrowing the search range for alignment. It then achieves adaptive correction, without the need for subpixel fine alignment, by adding additional tip-tilt terms to the Optical Transfer Function (OTF) of the out-of-focus channel. The experimental results demonstrate the feasibility and validity of the proposed correction algorithm for improving measurement accuracy during the co-phasing of segmented mirrors. With this alignment correction, the reconstructed wavefront is more accurate and the recovered image is of higher quality.

Introduction

Segmented active optics (SAO) systems can meet the demands of next-generation space telescopes to be lighter, larger and foldable [1]. A segmented primary mirror stitches an array of sub-mirrors together to approach the capabilities of a monolithic mirror. This type of space telescope can be folded for launch and deployed autonomously after reaching orbit [2]. However, image quality equivalent to that of a monolithic mirror can only be achieved if the segmented mirrors are co-phased. Co-phasing of segmented mirrors removes the relative piston aberrations between segments and the tip-tilt aberrations of each segment, making it one of the core technologies for the practical application of segmented telescopes.

Many methods have been proposed for co-phasing segmented mirrors to obtain nearly diffraction-limited performance from the total aperture, such as Mach-Zehnder interferometer sensing [3,4], modified Shack-Hartmann wavefront sensing (WFS) [5,6], curvature sensing [7,8], pyramid sensing [9,10], Zernike phase contrast sensing [11] and phase diversity WFS (PD WFS) [12–15]. PD WFS stands out among co-phasing technologies for SAO systems because traditional wavefront reconstruction methods such as Shack-Hartmann WFS tend to break down at the mirror segment edges. Additionally, PD WFS does not require new instrumentation to be added to the already complex optical system and is sensitive to both relative piston and tip-tilt aberrations for continuous and discontinuous distorted input wavefronts.

The basic principle of the PD algorithm is the simultaneous acquisition of a pair of short-exposure images with a known out-of-focus distance, from which an iterative optimization model is constructed based on maximum likelihood estimation theory. The distorted wavefront is then reconstructed, and the unknown object in the field can be recovered from the intensity distributions of the acquired images. PD WFS requires not only the synchronous acquisition of images but also rigorous alignment of each image. Many refinements of the PD technique have been proposed, such as modifying the optimization algorithm to improve accuracy and noise tolerance [16] or establishing a linear relationship between the detected images and the unknown aberrations to improve the speed of the algorithm [17,18]. All of these methods rely on at least two images that are ideally free of misalignment. When there are horizontal and/or vertical position offsets between a pair of in-focus and out-of-focus images, the accuracy of wavefront sensing and the quality of the recovered image decline severely. The theoretical demonstration presented in this paper proves that this image misalignment error corresponds precisely to the tip-tilt aberrations of the wavefront phase. Because the co-phasing of segmented mirrors must eliminate the piston and tip-tilt errors, strict alignment of the image pair is a compulsory step during wavefront detection and the recovery of unknown objects based on the PD algorithm for SAO systems.

This paper analyzes the causes of image misalignment errors and gives a strict theoretical proof of the correspondence between these misalignment errors and the tip-tilt aberrations in the Zernike polynomials of the wavefront phase. To eliminate the undesirable effects of image misalignment on wavefront detection accuracy and object image recovery quality, an efficient two-step alignment correction algorithm is proposed based on the characteristics of PD WFS for SAO systems. This algorithm uses a spatial 2-D cross-correlation [19–21] of the image pair to perform a coarse alignment correction and restricts the offset to 1 or 2 pixels, which narrows the search range for correction in the next step. Adaptive correction is then realized, without the need for subpixel fine alignment, by adding additional tip-tilt terms to the Optical Transfer Function (OTF) of the out-of-focus channel as alignment parameters. A comparison of the reconstructed wavefronts and recovered objects with and without alignment correction demonstrates the effectiveness and feasibility of the proposed algorithm and shows that strict image alignment is indispensable for co-phasing segmented mirrors to obtain nearly diffraction-limited performance.

Materials and Methods

Theory demonstration

The basic theory of the PD algorithm requires acquiring multi-channel images of the same target simultaneously, including images recorded in the in-focus plane and in the out-of-focus plane. The additional channel contains only the defocus aberration introduced by the known out-of-focus distance, with no other aberrations. Ideally, there should be no position offsets between the recorded images. However, misalignment may occur when fixing the camera or the optical platform in the optical setup, leading to relative offsets in the image pair. Moreover, the CCD target surface is often larger than the object image; thus, the regions of interest (ROIs) of the in-focus and out-of-focus images must be cropped to reduce the subsequent computational load, which can also introduce misalignment between the images.

Without loss of generality, single-frame images from the two channels are considered here. Taking the image i1 recorded in the in-focus plane as a reference, the image collected in the out-of-focus plane has relative offsets Δu and Δv in the horizontal and vertical directions, respectively, as given by Eq (1):

i2′(u,v) = i2(u − Δu, v − Δv)    (1)

where i2 is the ideal out-of-focus image without any misalignment errors and i2′ is the recorded, misaligned image. According to the Fourier shift property, the formula above can be rewritten in the frequency domain as Eq (2):

I2′(ξ,η) = I2(ξ,η)·exp[−j2π(ξΔu + ηΔv)]    (2)

where I2 and I2′ are the corresponding frequency spectra and (ξ,η) are the spatial frequency coordinates.
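
As an illustration (not part of the original paper), the equivalence between a pixel shift and a linear phase ramp in the frequency domain can be checked numerically; the function and variable names below are assumptions of this sketch.

import numpy as np

def shift_via_phase_ramp(img, du, dv):
    # Shift an image by (du, dv) pixels using the Fourier phase-shift relation of Eq (2).
    ny, nx = img.shape
    XI, ETA = np.meshgrid(np.fft.fftfreq(nx), np.fft.fftfreq(ny))
    ramp = np.exp(-2j * np.pi * (XI * du + ETA * dv))
    return np.real(np.fft.ifft2(np.fft.fft2(img) * ramp))

# For integer offsets this reproduces a circular pixel shift exactly:
img = np.random.rand(64, 64)
assert np.allclose(shift_via_phase_ramp(img, 3, 5), np.roll(img, (5, 3), axis=(0, 1)))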

Substituting the frequency spectrum of the misaligned out-of-focus image into the PD cost function yields Eq (3): (3) where S1 and S2 are the OTFs of the in-focus and out-of-focus channels, respectively. The misalignment error of the images affects only the numerator of the third term, which expands to Eq (4): (4)
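
For readers without access to the displayed equations, the widely used Gonsalves-Paxman form of the PD metric is sketched below; this is an assumption about the form of Eq (3), and the paper's exact normalization may differ:

$$ L = \sum_{\xi,\eta}\left[\,|I_1|^2 + |I_2'|^2 - \frac{\left|I_1 S_1^{*} + I_2' S_2^{*}\right|^{2}}{|S_1|^{2} + |S_2|^{2}}\,\right] $$

Its third term has the numerator |I1S1* + I2′S2*|²; substituting I2′ = I2·exp[−j2π(ξΔu + ηΔv)] from Eq (2) shows that the phase ramp can be grouped with S2*, which is the kind of expansion carried out in Eq (4).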

The purpose of the above derivation is to transfer the image misalignment to the wavefront phase, which requires a deeper analysis of the OTF of the out-of-focus channel. Eq (5) is obtained using the Fourier phase shift theorem: (5)

Utilizing the relationships among the point spread function (PSF), the impulse response function and the generalized pupil function in incoherent imaging systems, Eqs (6) and (7) can be obtained as (6) (7) where s2 and h2 are the PSF and impulse response function of the out-of-focus channel, respectively, P2′ and P2 are the generalized pupil functions with and without misalignment errors, respectively, and ϕ(ρ,θ) and ϕd(ρ,θ) denote the co-phasing aberrations and the constant defocus aberration, respectively. According to the transformation relationship between Cartesian and polar coordinates, a significant conclusion can be drawn: the image misalignment errors correspond precisely to the tip-tilt terms in the Zernike polynomials of the wavefront phase. Thus, through the derivation above, the image misalignment errors have been mapped to the wavefront aberration of the generalized pupil function, which is the theoretical foundation for the proposed correction algorithm.
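
The correspondence can be stated compactly as follows; this is a hedged restatement of the reasoning behind Eqs (5)-(7), with the constants c1 and c2 introduced here only for illustration (their signs and scale depend on the Fourier convention and the pupil sampling):

$$ S_2'(\xi,\eta) = S_2(\xi,\eta)\,e^{-j2\pi(\xi\Delta u + \eta\Delta v)} \;\Longleftrightarrow\; P_2'(\rho,\theta) = P_2(\rho,\theta)\,e^{\,j\left(c_1\,\rho\cos\theta + c_2\,\rho\sin\theta\right)}, \qquad c_1 \propto \Delta u,\; c_2 \propto \Delta v $$

because a linear phase across the pupil multiplies the OTF (the normalized autocorrelation of the pupil) by a linear phase ramp in the frequency plane. Since ρcosθ and ρsinθ are exactly the Zernike tip and tilt polynomials, the extra pupil phase is a pure tip-tilt term.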

The image misalignment errors do not degrade the intrinsic image quality; that is, they have no influence on the 4th- or higher-order Zernike aberrations. However, in the co-phasing of SAO systems, the main aberrations to be removed are precisely the relative piston aberrations between segments and the tip-tilt aberrations of each segment. Thus, for wavefront detection and image restoration of segmented telescopes based on the PD algorithm, the image pair from the in-focus and out-of-focus planes must be strictly aligned.

Alignment correction algorithm

The images collected in the in-focus and out-of-focus planes require alignment correction according to the theoretical analysis above. The in-focus image is typically set as the reference, and the out-of-focus image is aligned to it. However, the introduced defocus aberration and the unavoidable noise make conventional alignment methods inefficient. Targeting the specific features of SAO systems based on the PD algorithm and exploiting the theoretical relationship between misalignment and tip-tilt errors, a two-step alignment correction method is proposed in this paper. First, a spatial 2-D cross-correlation of the in-focus and out-of-focus images is computed to obtain a coarse alignment and narrow the search range. Then, the offsets remaining after coarse alignment in the vertical and horizontal directions are treated as search parameters, and adaptive correction is realized within the optimization process that solves the PD cost function, without subpixel fine alignment.

Coarse alignment correction.

Many conventional alignment correction approaches, such as phase correlation [22] and cross-correlation in the frequency domain [23], cannot yield accurate results here because of the introduced defocus aberration and the cut-off frequency of the segmented optical system's OTFs, which produces zero-valued points in the frequency spectra of the degraded images. This paper therefore applies a spatial 2-D cross-correlation directly to the pixel matrices of the in-focus and out-of-focus images and then locates the position of its maximum. There is a direct correspondence between this coordinate position and the image offset [24–26].

The spatial 2-D cross-correlation of an M × N matrix X and a P × Q matrix H is a matrix C of size (M + P − 1) × (N + Q − 1) given by Eq (8):

C(k,l) = Σm Σn X(m,n)·H̄(m − k, n − l)    (8)

with the sums taken over all valid indices of X, where −(P − 1) ≤ k ≤ M − 1, −(Q − 1) ≤ l ≤ N − 1 and the bar over H denotes complex conjugation. Assume that matrix X is the template and that (xpeak, ypeak) is the position of the peak of C(k, l); then, the coarse number of offset pixels can be obtained from Eq (9): (9)
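
A minimal sketch of the coarse alignment step is given below; the function name, the mean subtraction and the use of scipy.signal.correlate2d are choices of this sketch rather than of the paper.

import numpy as np
from scipy.signal import correlate2d

def coarse_offset(template, image):
    """Estimate the integer pixel offset of `image` relative to `template` (coarse step of Eqs (8)-(9))."""
    t = template - template.mean()
    m = image - image.mean()
    c = correlate2d(m, t, mode='full')              # size (M+P-1) x (N+Q-1)
    ypeak, xpeak = np.unravel_index(np.argmax(c), c.shape)
    dy = ypeak - (template.shape[0] - 1)            # zero offset -> peak at (P-1, Q-1)
    dx = xpeak - (template.shape[1] - 1)
    return dx, dy

# The estimated (dx, dy) is then removed with a simple integer shift, e.g. np.roll.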

With coarse alignment correction, the misalignment between the image pair can be limited to 1 or 2 pixels, which narrows the search range for subsequent adaptive alignment correction and improves the computation efficiency and accuracy.

Adaptive alignment correction.

After coarse alignment, there remains a 1- or 2-pixel offset between the in-focus and out-of-focus images; this offset still has a serious effect on wavefront detection accuracy and image restoration quality. According to the theoretical proof, image misalignment errors can be mapped to the wavefront phase of the generalized pupil function and correspond to the tip-tilt terms in the Zernike polynomials. Therefore, additional tip-tilt terms are added to the Zernike polynomials of the original wavefront phase to match the offset between the image pair, without requiring subpixel fine alignment. Adaptive correction is then achieved by the optimization algorithm that solves the PD cost function.

The cost function of the segmented optics system based on the PD algorithm is expressed as Eq (10): (10) where I1 and I2 represent the frequency spectra of the in-focus and out-of-focus images with misalignment errors, respectively. In the practical iteration process, the in-focus image is set as the reference by default, so no tip-tilt terms are added to its channel. The modified OTFs with matching terms are then given by Eq (11): (11) where n is the index of the sub-aperture, N is the total number of sub-mirrors, and Z0, Z1, Z2 and Z3 correspond to the piston, tip, tilt and defocus terms of the Zernike polynomials, respectively. En, Txn, Tyn and Dn are the corresponding Zernike coefficients of the nth sub-mirror, and ΔTxn and ΔTyn are the additional tip-tilt matching terms. During each iteration of the search process, the OTFs of the in-focus and out-of-focus channels in Eq (11) are used to compute the cost function within the nonlinear optimization. Thus, the tip-tilt errors introduced by the misalignment between the images are eliminated automatically by the PD optimization algorithm without subpixel fine alignment, and the true aberration coefficients are obtained.
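
A minimal sketch of how the matching terms of Eq (11) can enter the pupil phase of the out-of-focus channel is given below; the segment masks, the normalized pupil coordinates X and Y, the defocus polynomial normalization and all function and variable names are assumptions of this sketch, not code from the paper.

import numpy as np

def pupil_phase(coeffs, match, masks, X, Y, defocus=0.0):
    """coeffs[n] = (En, Txn, Tyn, Dn) for segment n; match[n] = (dTxn, dTyn) matching
    terms (all zeros for the in-focus channel); masks[n] is the boolean support of
    segment n on the normalized pupil grid (X, Y)."""
    phase = np.zeros_like(X, dtype=float)
    for (E, Tx, Ty, D), (dTx, dTy), m in zip(coeffs, match, masks):
        phase[m] = (E                                               # piston (Z0)
                    + (Tx + dTx) * X[m]                             # tip (Z1) plus matching term
                    + (Ty + dTy) * Y[m]                             # tilt (Z2) plus matching term
                    + (D + defocus) * (2.0 * (X[m]**2 + Y[m]**2) - 1.0))  # defocus (Z3), normalization assumed
    return phase

# The OTF of each channel is then obtained from the generalized pupil function
# P = mask * exp(1j * phase) (e.g. as its normalized autocorrelation); during the
# search, the matching terms are included in the parameter vector for the
# out-of-focus channel only.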

The OTFs used to recover the object image also require the additional matching tip-tilt terms to obtain a correct result. Substituting the final searched coefficients of the actual aberrations and the matching tip-tilt terms into Eq (11), the OTFs of the corresponding channels are obtained, and the recovery of the object is achieved by Eq (12): (12)
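
As a hedged sketch of the object estimate of Eq (12), the usual phase-diversity combination of both channels is shown below; the function name, the regularizer eps and the exact form are assumptions, since the paper's equation is not reproduced here.

import numpy as np

def recover_object(I1, I2, S1, S2, eps=1e-8):
    """I1, I2: spectra of the in-focus / out-of-focus images; S1, S2: corrected OTFs."""
    num = I1 * np.conj(S1) + I2 * np.conj(S2)
    den = np.abs(S1)**2 + np.abs(S2)**2 + eps   # eps avoids division by zero beyond the cut-off frequency
    return np.real(np.fft.ifft2(num / den))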

The flowchart for the alignment correction method is shown in Fig 1.

Fig 1. Flowchart of the proposed alignment correction algorithm.

https://doi.org/10.1371/journal.pone.0148872.g001

Results

In this section, the effectiveness and accuracy of the proposed alignment correction algorithm are demonstrated using several numerical simulations.

The parameters of the optical system used in the simulation are as follows. The segmented primary mirror consists of 6 hexagonal sub-mirrors; their arrangement and numbering are shown in Fig 2. The hexagonal sub-mirror diameter d occupies 43×43 pixels in the pupil plane. The diameter of the primary mirror is D, corresponding to 128×128 pixels. To satisfy the Nyquist sampling theorem, the entire pupil plane is set to 256×256 pixels. The F-number of the optical system is 8, the monochromatic wavelength is 570 nm and the out-of-focus distance is set to 400λ.
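
For convenience, the simulation parameters above can be collected in a small configuration structure; the key names below are illustrative only.

sim_params = {
    "n_segments": 6,                 # hexagonal sub-mirrors
    "segment_diameter_px": 43,       # sub-mirror diameter d in pupil-plane pixels
    "primary_diameter_px": 128,      # primary-mirror diameter D in pixels
    "pupil_grid_px": 256,            # full pupil-plane grid (Nyquist sampling)
    "f_number": 8,
    "wavelength_nm": 570,
    "defocus_distance_waves": 400,   # out-of-focus distance of 400 lambda
}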

Fig 2. Construction of primary mirror and dimension of segmented sub-aperture.

https://doi.org/10.1371/journal.pone.0148872.g002

The 1st sub-mirror is set as the standard mirror; then, a set of random piston and tip-tilt errors listed in Table 1 is applied to the sub-apertures, with co-phasing errors restricted to within ±0.5λ. The resulting phase distribution of the distorted wavefront is shown in Fig 3.

Fig 3. Phase distribution of original distorted wavefront of the resolution test panel experiment.

https://doi.org/10.1371/journal.pone.0148872.g003

First, consider the case in which the object image occupies only part of the CCD target surface; specifically, the object is located in the middle of the target surface and is distinct from the background. Panoramic images will be discussed later.

The resolution test panel commonly used in the laboratory, shown in Fig 4(A), is taken as the observed object; the in-focus and out-of-focus images with misalignment errors obtained by the SAO simulation system are given in Fig 4(B) and 4(C), respectively. The coarsely aligned out-of-focus image is shown in Fig 4(D), in which the misalignment error is limited to the range of 1–2 pixels. Adaptive alignment correction is then performed using the in-focus image in Fig 4(B) and the coarsely aligned out-of-focus image in Fig 4(D). This paper uses L-BFGS as the nonlinear optimization algorithm to solve the PD cost function. The reconstructed wavefront aberration coefficients are listed in Table 2. The reconstructed wavefront phase distribution, residual phase distribution and recovered object with alignment correction are shown in Fig 5(A), 5(B) and 5(C), respectively. For comparison, the experimental results without alignment correction are given in Fig 6. Fig 6(A) and 6(B) show the reconstructed wavefront phase distribution and residual phase distribution, respectively, and Fig 6(C) presents the recovered object without correction. The comparison shows that without alignment correction, the reconstructed wavefront deviates greatly from the actual wavefront, and the recovered object image has lower contrast, more blurred edges and an inclination consistent with the misalignment errors.
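
A minimal sketch of this optimization step is given below, assuming a callable pd_cost that evaluates the metric of Eq (10) from a parameter vector of per-segment piston/tip/tilt coefficients plus the matching terms; all names are illustrative.

import numpy as np
from scipy.optimize import minimize

def solve_pd(pd_cost, n_segments):
    """Minimize the PD cost with L-BFGS; pd_cost maps a coefficient vector to a scalar."""
    x0 = np.zeros(3 * n_segments + 2)   # (E, Tx, Ty) per segment + (dTx, dTy) matching terms
    res = minimize(pd_cost, x0, method="L-BFGS-B", options={"maxiter": 500})
    return res.x, res.fun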

Fig 4. Relevant object images of the resolution test panel experiment.

https://doi.org/10.1371/journal.pone.0148872.g004

Fig 5. Experimental results of the resolution test panel with alignment correction.

https://doi.org/10.1371/journal.pone.0148872.g005

Fig 6. Experimental results of the resolution test panel without alignment correction.

https://doi.org/10.1371/journal.pone.0148872.g006

Table 2. Reconstructed wavefront aberration coefficients and residual errors.

https://doi.org/10.1371/journal.pone.0148872.t002

An actual image of a lunar eclipse taken by a Maca telescope is used as another observed object for jointly estimating the wavefront and the object image under larger aberrations of the SAO simulation system. The experimental results are shown in Fig 7.

Fig 7. Experimental results of an actual image of the lunar eclipse.

https://doi.org/10.1371/journal.pone.0148872.g007

For panoramic images, a window function should be applied to the images before alignment correction. The goal is to attenuate the image edges affected by misalignment, improve alignment accuracy and suppress the periodic edge effect of the Fourier transform. The size and type of the window function depend on the object image and the PSF. A modified 2-D Hanning window, shown in Fig 8, is used in this paper and is applied to the observed panoramic images. The experimental results for a satellite map of a military base are shown in Fig 9. Another experiment using a satellite map of an urban scene with dense texture is also carried out to verify the alignment algorithm; the results are given in Fig 10.
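
A minimal sketch of a plain separable 2-D Hanning window is given below; the paper uses a modified version, so this is only an illustration, and the function name is assumed.

import numpy as np

def apply_hanning(img):
    """Multiply an image by a separable 2-D Hanning window to suppress edge effects."""
    wy = np.hanning(img.shape[0])
    wx = np.hanning(img.shape[1])
    return img * np.outer(wy, wx)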

Fig 9. Experimental results of a satellite map of military base.

https://doi.org/10.1371/journal.pone.0148872.g009

Fig 10. Experimental results of a satellite map of urban scene.

https://doi.org/10.1371/journal.pone.0148872.g010

To evaluate the wavefront detection accuracy and the image restoration quality, the following evaluation indicators are used (a short computational sketch of these indicators follows the list):

  1. The root-mean-square error (RMSE) for wavefront detection:

     RMSE = sqrt( (1/(M·N)) Σi Σj [ϕ0(i,j) − ϕ(i,j)]² )    (13)

     where ϕ0 is the simulated phase distribution to be measured and ϕ is the phase distribution reconstructed by the PD algorithm. M and N denote the numbers of samples. A smaller RMSE value indicates higher wavefront detection accuracy.
  2. PV value: the peak-to-valley value of the phase. A smaller PV value means a smaller residual error.
  3. The mean-square error (MSE) for image restoration:

     MSE = (1/(M·N)) Σi Σj [f(i,j) − f̂(i,j)]²    (14)

     where f(i,j) and f̂(i,j) represent the pixel values at point (i,j) of the reference image and the image to be evaluated, respectively. M and N denote the numbers of samples. A smaller MSE value indicates better image quality.
  4. Similarity measurement (SM): (15) SM uses the similarity of the gray values of the two images to indirectly assess the image restoration effect. A value close to 1 indicates that the recovered image better approximates the original object, i.e., that the image quality is better.
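
The sketch referred to above is given here; the SM expression is one common normalized-correlation form and is an assumption, since Eq (15) is not reproduced.

import numpy as np

def rmse(phi0, phi):
    """Root-mean-square error between the simulated and reconstructed phase maps."""
    return np.sqrt(np.mean((phi0 - phi)**2))

def pv(residual):
    """Peak-to-valley value of a residual phase map."""
    return residual.max() - residual.min()

def mse(f, f_hat):
    """Mean-square error between the reference image and the image to be evaluated."""
    return np.mean((f - f_hat)**2)

def sm(f, f_hat):
    """Similarity measurement (assumed normalized-correlation form)."""
    return np.sum(f * f_hat) / np.sqrt(np.sum(f**2) * np.sum(f_hat**2))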

The evaluation results of the wavefront detection accuracy and image restoration quality according to the defined indicators are listed in Table 3. Table 3 shows that with alignment correction, the RMSE and PV values of the residual phase are considerably smaller than those without correction and are within the acceptable range for SAO systems. For image restoration, the MSE value with alignment correction is smaller, indicating that the recovered image is of better quality. The SM value is close to 1, implying that the restored image has a higher similarity degree and lower deviation from the ideal image than that without correction.

Table 3. Evaluation results of the wavefront detection accuracy and image restoration quality.

https://doi.org/10.1371/journal.pone.0148872.t003

Discussion

For the co-phasing of SAO systems, the main task is to eliminate piston and tip-tilt errors. The PD algorithm is typically used to detect the discontinuous wavefront; however, misalignment of the images recorded in the in-focus and out-of-focus planes leads to a serious decline in wavefront detection accuracy and image restoration quality. To solve this problem, this paper rigorously demonstrated the theoretical relationship between image misalignment and the tip-tilt terms in the Zernike polynomials of the wavefront phase and then proposed an efficient two-step alignment correction algorithm. This algorithm applies a spatial 2-D cross-correlation to the misaligned image pair to achieve a coarse alignment correction, which narrows the search range for the subsequent correction. Then, additional tip-tilt terms are added to the OTF of the out-of-focus channel as search parameters to realize adaptive correction without the need for subpixel fine alignment. The experimental results for both an object distinct from the background and panoramic images demonstrate the effectiveness and accuracy of the proposed alignment correction algorithm: the reconstructed wavefront is determined more accurately, and the recovered image is closer to the ideal object and of higher quality.

Acknowledgments

We would like to give special thanks to Prof. Shuyan Xu at the Chinese Academy of Sciences (CAS) for providing access to the CSA software.

Author Contributions

Conceived and designed the experiments: DY HTN ZYW. Performed the experiments: DY HTN. Analyzed the data: DY HTN ZYW. Contributed reagents/materials/analysis tools: DY SYX. Wrote the paper: DY HTN.

References

  1. Lightsey P, Atkinson C, Clampin M, Feinberg L (2012) James Webb Space Telescope: large deployable cryogenic telescope in space. Optical Engineering 51(1), 011003: 1–19.
  2. Lillie C F, Dailey D, Polidan R (2010) Large aperture telescopes for launch with the Ares V launch vehicle. Acta Astronautica 66(3): 374–381.
  3. Martinez L, Dohlen K, Yaitskova N, Dierickx P (2003) Mach Zender wavefront sensor for phasing of segmented telescopes. Astronomical Telescopes and Instrumentation. International Society for Optics and Photonics: 564–573.
  4. Yaitskova N, Dohlen K, Dierickx P, Montoya L (2005) Mach–Zehnder interferometer for piston and tip–tilt sensing in segmented telescopes: theory and analytical treatment. JOSA A 22(6): 1093–1105. pmid:15984482
  5. Chanan G (1989) Design of the Keck Observatory alignment camera. OPTCON'88 Conferences-Applications of Optical Engineering. International Society for Optics and Photonics 1989: 59–71. https://doi.org/10.1117/12.950971
  6. Chanan G, Ohara C, Troy M (2000) Phasing the mirror segments of the Keck telescopes II: the narrow-band phasing algorithm. Applied Optics 39(25): 4706–4714. pmid:18350062
  7. Orlov V, Cuevas S, Garfias F, Voitsekhovich V V, Sanchez L (2000) Co-phasing of segmented mirror telescopes with curvature sensing. Astronomical Telescopes and Instrumentation. International Society for Optics and Photonics: 540–551.
  8. Chanan G, Pintó A (2004) Wavefront curvature sensing on highly segmented telescopes. Appl. Opt. 16: 3279–3286.
  9. Esposito S, Devaney N, Pinna E, Tozzi A, Stefanini P (2003) Cophasing of segmented mirrors using the pyramid sensor. Optical Science and Technology, SPIE's 48th Annual Meeting. International Society for Optics and Photonics: 72–78. https://doi.org/10.1117/12.511507
  10. Esposito S, Pinna E, Puglisi A, Tozzi A, Stefanini P (2005) Pyramid sensor for segmented mirror alignment. Optics Letters 30(19): 2572–2574. pmid:16208903
  11. Surdej I, Yaitskova N, Gonte F (2010) On-sky performance of the Zernike phase contrast sensor for the phasing of segmented telescopes. Applied Optics 49(21): 4052–4062. pmid:20648188
  12. Lofdahl M G, Kendrick R L, Harwit A, Mitchell K E, Duncan A (1998) Phase diversity experiment to measure piston misalignment on the segmented primary mirror of the Keck II telescope. Astronomical Telescopes & Instrumentation. International Society for Optics and Photonics 3356: 1190–1201.
  13. Paxman R, Fienup J (1988) Optical misalignment sensing and image reconstruction using phase diversity. JOSA A 5(6): 914–923.
  14. Li C, Zhang S (2012) Co-phasing of the segmented mirror based on the generalized phase diversity wavefront sensor. SPIE Astronomical Telescopes + Instrumentation. International Society for Optics and Photonics: 84500B.
  15. Meimon S, Delavaquerie E, Cassaing F, Fusco T, Mugnier L M, Michau V (2008) Phasing segmented telescopes with long-exposure phase diversity images. SPIE Astronomical Telescopes + Instrumentation. International Society for Optics and Photonics: 701214.
  16. Yue D, Xu S Y, Nie H T (2015) Co-phasing of the segmented mirror and image retrieval based on phase diversity using a modified algorithm. Applied Optics 54(26): 7917–7924. pmid:26368964
  17. Mocoeur I, Mugnier L M, Cassaing F (2009) Analytical solution to the phase-diversity problem for real-time wavefront sensing. Optics Letters 34(22): 3487–3489. pmid:19927186
  18. Smith C S, Marinica R, Verhaegen M (2013) Real-time wavefront reconstruction from intensity measurements. Adaptive Optics for the Extremely Large Telescopes (AO4ELT).
  19. Guan T, He Y F, Duan L Y, Yu J Q (2014) Efficient BOF Generation and Compression for On-Device Mobile Visual Location Recognition. IEEE Multimedia 21(2): 32–41.
  20. Gao Y, Wang M, Zha Z J, Shen J L, Li X L, Wu X D (2013) Visual-Textual Joint Relevance Learning for Tag-Based Social Image Search. IEEE Transactions on Image Processing 22(1): 363–376. pmid:22692911
  21. Ji R, Duan L Y, Chen J, Yao H, Yuan J, Rui Y, et al. (2012) Location discriminative vocabulary coding for mobile landmark search. International Journal of Computer Vision 96(3): 290–314.
  22. Foroosh H, Zerubia J B, Berthod M (2002) Extension of phase correlation to subpixel registration. IEEE Transactions on Image Processing 11(3): 188–200.
  23. Guizar-Sicairos M, Thurman S T, Fienup J R (2008) Efficient subpixel image registration algorithms. Optics Letters 33(2): 156–158. pmid:18197224
  24. Guan T, He Y, Gao J, Yang J, Yu J (2013) On-Device Mobile Visual Location Recognition by Integrating Vision and Inertial Sensors. IEEE Transactions on Multimedia 15(7): 1688–1699.
  25. Wei B C, Guan T, Yu J Q (2014) Projected Residual Vector Quantization for ANN Search. IEEE Multimedia 21(3): 41–51.
  26. Gao Y, Wang M, Tao D C, Ji R R, Dai Q H (2012) 3D Object Retrieval and Recognition with Hypergraph Analysis. IEEE Transactions on Image Processing 21(9): 4290–4303. pmid:22614650