Abstract
Image registration has demonstrated its significance as an essential tool for target recognition, classification, tracking, and damage assessment during natural catastrophes. The image registration process relies on the identification of numerous reliable features; thus, low resolution, poor lighting conditions, and low image contrast substantially diminish the number of dependable features available for registration. Contrast stretching enhances image quality, facilitating the object detection process. In this study, we propose a hybrid binary differential evolution and BAT optimization model that enhances contrast stretching by optimizing the decision variables of the transformation function. To validate its efficiency, the proposed approach is utilized as a preprocessor before feature extraction in image registration. Cross-comparison of the features detected in the enhanced images versus the original images during image registration validates the improvements in the image registration process.
Citation: Akram MA, Akram T, Javed U, Rafiq M, Naz M, He D (2024) DeBo: Contrast enhancement for image registration using binary differential evolution and bat optimization. PLoS ONE 19(12): e0315902. https://doi.org/10.1371/journal.pone.0315902
Editor: Nouman Ali, Mirpur University of Science and Technology, PAKISTAN
Received: October 20, 2023; Accepted: November 28, 2024; Published: December 26, 2024
Copyright: © 2024 Akram et al. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Data Availability: The dataset is freely available to download from https://bupt-ai-cz.github.io/LLVIP/ and https://www.cvlibs.net/datasets/kitti/index.php.
Funding: This work was supported by the National Natural Science Foundation of China (NSFC) under Grant No. 61971278 and Grant No. 62231010.
Competing interests: The authors have declared that no competing interests exist.
Introduction
Image registration involves the alignment of multiple images (two or more) taken from the same scene under different conditions, for instance from different sensors, from different viewpoints, under different illumination conditions, or at different times. The alignment process is a fundamental requirement in diverse applications, including image stitching, environmental monitoring [1], change detection [2], precision agriculture [3, 4], map updating [5], and a number of others.
Image registration is broadly categorized into intensity-based and feature-based approaches [6]. In the intensity-based registration process, intensity is used to relate the reference and target images, for example through mutual information [7] or the correlation coefficient [8, 9]. The selection of the transformation model and the similarity function are the critical choices that affect the performance of intensity-based approaches. However, a large number of resources are required to handle a substantial number of transformation parameters. Moreover, the similarity function involves calculations between the pixels of each image, which is not a computationally economical task. Furthermore, intensity-based image registration algorithms are more susceptible to noise, scale variations, illumination changes, and contrast [6, 10]. They also cannot be used for multi-spectral image registration because intensity changes non-linearly in multi-spectral images [11].
Feature-based techniques perform image registration by matching distinguishable features extracted from both images. Corners are considered the most reliable features in feature-based image registration [1], since they are invariant to image geometry and can be observed by the human eye [12]. Therefore, feature-based image registration methods have received increasing attention from researchers over time. The most important aspect of feature-based image registration is the extraction of features such as corners, lines, etc. Several feature extraction algorithms have been proposed in computer vision, including SIFT (Scale Invariant Feature Transform) [13], ORB (oriented FAST and rotated BRIEF) [14], BRISK (binary robust invariant scalable keypoints) [15], and accelerated (A)-KAZE [16].
The feature-based image registration process consists of five key steps, shown in Fig 1. Keypoint descriptor matching is the most crucial step in the image registration process, and additional methods are commonly adopted to enhance its performance.
Based on the choice of image enhancement technique, the feature-matching image registration process is classified into two categories: intensity-invariant extraction and outlier removal. Outlier removal excludes outlying features by exploiting the relations between features, thus reducing the transformation errors in image registration. LWM (local weighted mean) and RANSAC are used along with SIFT to remove outlying features from multimodal image features. However, LWM-aided SIFT is not applicable to images with high outlier rates, especially optical and SAR images [32]. Another outlier removal model is proposed in [17], where a spatial constraint is used to increase the number of inliers, resulting in better feature-matching accuracy. However, outlier removal highly relies on the number of features: if the number of detected keypoints is small, the outlier removal technique fails to improve image registration.
Although numerous image registration techniques have been proposed [18–22], a key question remains regarding image quality and illumination conditions. In this paper, a novel contrast-enhancement technique is proposed to improve the feature-based image registration process. The key idea is to improve the image contrast [23] using a hybrid binary differential evolution and BAT optimization model for low-contrast/low-illumination images. The contrast improvement thus increases the number of inlier keypoints and features for image registration. In this work, we demonstrate the proposed contrast-enhancement methodology along with the modeling of the hybrid binary differential evolution and BAT optimization. The feature-based image registration process using ORB is also explained. The performance of the proposed methodology on low-contrast and low-illumination images is shown in the experimental section. Details of the current and future aspects of the proposed research are discussed in the conclusion.
Proposed methodology
The proposed methodology for the image registration system, accompanied by contrast stretching, is illustrated in Fig 2. The original target and reference images undergo preprocessing for contrast enhancement with hybrid binary differential evolution and BAT optimization (DeBo). The preprocessed images were subsequently utilized to identify features for image registration.
The introduced contrast stretching DeBo algorithm comprises the following steps:
- Convert the original RGB images to the HSI color space.
- Apply the contrast enhancement transformation to intensity pixels only.
- Combine the transformed I (intensity) pixels with the original H (hue) and S (saturation) channels to create the HSI image.
- Transform the new HSI image back to an RGB image for further processing.
The flow of the DeBo algorithm for contrast transformation is shown in Fig 3.
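To make these steps concrete, the following minimal Python sketch (an illustration, not the authors' implementation) applies an enhancement function to the intensity channel only and rebuilds the RGB image. Here the intensity channel is taken as the mean of the RGB channels, as in the HSI model, and debo_enhance_intensity is a hypothetical placeholder for the DeBo-optimized transformation described in the next subsection.

import numpy as np

def enhance_with_debo(rgb, debo_enhance_intensity):
    # rgb: uint8 image of shape (M, N, 3).
    # debo_enhance_intensity: placeholder callable implementing the optimized
    # pixel-level transform (see the contrast transformation cost function).
    rgb_f = rgb.astype(np.float64)
    intensity = rgb_f.mean(axis=2)                 # I channel of the HSI model
    enhanced = debo_enhance_intensity(intensity)   # contrast-stretched intensity
    # Scaling every RGB channel by the intensity ratio leaves hue and
    # saturation (which depend only on channel ratios) unchanged, so only
    # the I component of the HSI representation is modified.
    ratio = enhanced / np.maximum(intensity, 1e-6)
    out = np.clip(rgb_f * ratio[..., None], 0.0, 255.0)
    return out.astype(np.uint8)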
Contrast transformation cost function
The intensity channel’s contrast is augmented using local area enhancement models, including local mean and standard deviation. The subsequent function represents the contrast enhancement transfer function.
(1)
In the aforementioned equation, Tf denotes the transformation function that enhances the original image intensity Ψ, which comprises M rows and N columns. While traditional techniques like histogram equalization yield superior results, they are computationally intensive. In contrast, we employed a modified statistical method, as outlined in [24], which necessitates fewer computations than the original method proposed in [25]. The subsequent function illustrates enhancements in pixel-level intensity.
(2)
In the aforementioned equation, κ(i, j) represents the original pixel intensity level, l(i, j) denotes the enhanced version, M signifies the global mean, and (i, j) indicates the central pixel upon which the operation is executed. ϑ(i, j) and ξ(i, j) represent the estimated local mean and standard deviation, respectively, computed from the neighboring pixels of a k × k region. The non-zero value of ε prevents division by zero when the local standard deviation is zero, while γ allows a fraction of the local mean to be subtracted. The decision variables, ε and γ, remain constant throughout the processing of an individual image, and the contrast enhancement transformation relies on their automatic estimation. This function modifies the intensity channel at the pixel level.
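As an illustration only, one widely used local-statistics form of the transform described above (following the style of [24, 25]; the exact expression in Eq (2) may differ in detail) can be written in Python as:

import numpy as np
from scipy.ndimage import uniform_filter

def local_contrast_transform(intensity, eps, gamma, k=3):
    # intensity: 2-D float array (the I channel); eps and gamma are the
    # decision variables optimized by DeBo; k is the side of the k x k window.
    global_mean = intensity.mean()                        # global mean M
    local_mean = uniform_filter(intensity, size=k)        # local mean (vartheta)
    local_sq = uniform_filter(intensity ** 2, size=k)
    local_std = np.sqrt(np.maximum(local_sq - local_mean ** 2, 0.0))  # local std (xi)
    gain = global_mean / (local_std + eps)                # eps avoids division by zero
    return gain * (intensity - gamma * local_mean) + local_mean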
(3)
Here, R × S represents the dimensions of the image. The objective function is mainly based on the entropy H(Ψ*) and the edge intensity E(Ψ*) of the enhanced intensity channel. A Sobel edge filter is used for the detection of edges and the edge intensity [32], using the following equations.
(4)
(5)
(6)
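A hedged sketch of the fitness evaluation is given below; it combines the entropy of the enhanced channel with the Sobel edge intensity, the two ingredients named above, although the exact combination used in Eqs (3)-(6) may differ.

import numpy as np
from scipy.ndimage import sobel

def enhancement_fitness(enhanced):
    # Shannon entropy H of the enhanced intensity channel.
    hist, _ = np.histogram(enhanced, bins=256, range=(0, 255), density=True)
    hist = hist[hist > 0]
    entropy = -np.sum(hist * np.log2(hist))
    # Sobel edge intensity E: sum of the gradient magnitudes.
    gx = sobel(enhanced.astype(np.float64), axis=1)
    gy = sobel(enhanced.astype(np.float64), axis=0)
    edge_intensity = np.sum(np.hypot(gx, gy))
    rows, cols = enhanced.shape                           # R x S
    # One plausible combination (higher is better); the paper's exact
    # formula may weight these terms differently.
    return entropy * np.log(np.log(edge_intensity + np.e)) / (rows * cols)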
Differential evolution
Differential Evolution (DE), introduced in [26], is an evolutionary method that utilizes global search parameters to identify optimal solutions. DE operates in two stages: 1) initialization; 2) evolution. During initialization, the population is produced randomly. Throughout the evolutionary process, the created population undergoes mutation, crossover, and selection procedures. These stages are repeated iteratively until the selection criteria are fulfilled [27]. The fundamental flowchart of differential evolution is illustrated in Fig 4.
Initialization: Initialization involves the generation of a uniformly distributed population over the search space ΞG, where each member of the population is a D-dimensional vector of generation G and the population size is p. The following expression represents the uniform generation of the population:
(7)
where {Φl, Φu} represent the lower and upper bounds of the search space, respectively.
Mutation: In the mutation step, a mutation vector is generated for every target vector using the following expression:
(8)
where α ∈ [0, 1] is the scaling factor and {r1, r2, r3} ∈ {1, 2, …, p} are randomly selected but mutually different indices.
Crossover: After mutation, a new vector, known as the trial vector, is generated in the crossover step. Crossover is performed between the mutant vector and the target vector using the crossover probability Cp ∈ [0, 1].
(9)
where i ∈ {1, 2, …, D}.
Selection: During the selection process, a comparison is performed between trial vector and target vector with respect to fitness criteria. The operation is performed with the following expression:
(10)
In this paper, we adopted the binary differential evolution proposed in [28].
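The canonical DE/rand/1/bin loop described by Eqs (7)-(10) can be sketched as follows; this is a generic continuous-valued illustration, whereas the paper adopts the binary, self-learning variant of [28], which adds encoding and learning steps not shown here.

import numpy as np

def differential_evolution(fitness, lower, upper, pop_size=20, alpha=0.5,
                           cp=0.9, max_gen=100):
    lower, upper = np.asarray(lower, dtype=float), np.asarray(upper, dtype=float)
    dim = lower.size
    # Eq (7): uniform initialization between the lower and upper bounds.
    pop = lower + np.random.rand(pop_size, dim) * (upper - lower)
    fit = np.array([fitness(x) for x in pop])
    for _ in range(max_gen):
        for i in range(pop_size):
            # Eq (8): mutation with three mutually different random indices.
            candidates = [j for j in range(pop_size) if j != i]
            r1, r2, r3 = np.random.choice(candidates, 3, replace=False)
            mutant = np.clip(pop[r1] + alpha * (pop[r2] - pop[r3]), lower, upper)
            # Eq (9): binomial crossover with probability cp.
            mask = np.random.rand(dim) < cp
            mask[np.random.randint(dim)] = True       # keep at least one mutant gene
            trial = np.where(mask, mutant, pop[i])
            # Eq (10): greedy selection (maximization of the enhancement fitness).
            f_trial = fitness(trial)
            if f_trial >= fit[i]:
                pop[i], fit[i] = trial, f_trial
    best = int(np.argmax(fit))
    return pop[best], fit[best]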
BAT optimization
BAT optimization is a metaheuristic algorithm [29]. The term "metaheuristic" combines two Greek words, meaning "to investigate a solution at a higher level". In other words, metaheuristic techniques are computational models that iteratively try to improve candidate solutions toward a near-optimal level with respect to a given quality measure. The key advantage of metaheuristics is that they can explore very large spaces of candidate solutions while making few assumptions about the optimization problem. However, the limitation of metaheuristics is their inability to guarantee an optimal solution. Most metaheuristics are nature-inspired and population-based. The following are the key characteristics of metaheuristic algorithms:
- Metaheuristics are approaches that lead to exploring the search space to extract the near-optimal solution.
- Meta-heuristic-based techniques range broadly from simple local search models to complex learning procedures.
- Meta-heuristic algorithms are non-deterministic approximations.
- Metaheuristics are not problem-specific.
BAT optimization is a metaheuristic algorithm inspired by the hunting behavior of bats. The model is based on the inherent traits of micro-bats, specifically their echolocation and biosonar capabilities. Before discussing the details of the BAT algorithm, we first provide some details related to the echolocation process of micro-bats.
Echo-location process of micro-bats.
Approximately 1,000 bat species exist in nature, varying from the diminutive bumblebee bat, which weighs about 2 grams, to larger species that can weigh up to 1 kilogram and possess a wingspan of approximately 2 meters. Echolocation is a natural phenomenon shown by all bats to varying degrees. Nevertheless, among all species, microbats predominantly utilize echolocation.
Micro-bats use echolocation, a type of sonar, to find prey, detect and avoid obstacles, and locate their roosting crevices at night. In echolocation, a bat emits a loud, pulse-like sound that bounces back from neighboring objects, and the bat listens to these echoes. Different bat species build their hunting strategies on these echo pulses, since the pulses have different properties. Most bats emit short, frequency-modulated sound pulses in the range of 25 kHz-150 kHz, each lasting a few milliseconds. Micro-bats emit 10-20 sound bursts per second; however, this rate increases up to 200 per second when they are homing in on their prey. At a constant frequency and at the speed of sound in air, v = 340 m/s, the wavelength ranges from 2 mm to 14 mm, of the same order as their prey size.
BAT inspired optimization model.
Xin-She Yang proposed the BAT optimization algorithm in [30] based on the echolocation behavior of micro-bats. The proposed algorithm was the first of its form in terms of natural computational intelligence applied to optimization problems. In the BA algorithm, every bat is represented by a velocity and a location at every iteration i in the g-dimensional search space. The bat location represents a solution of the problem to be optimized. If the population contains n bats, the current near-optimal solution S* is updated at every step of the iterative process.
The mathematical formulation of original BAT, from [30] by Yang, is represented in the following equations.
(11)
(12)
(13)
The above equations represent the updating process of the bat location and velocity in every iteration, respectively. ξ ∈ [0, 1] is a random vector obtained from a uniform distribution. Although the sound pulse rate and loudness can vary during the iterative process, the following equations provide a simple relation for the sound wave emission rate and loudness:
(14)
(15)
where 0 < k < 1 and r > 0 are constants.
Algorithm 1 BAT Optimization Algorithm
Input: Bat population vector and the number of iterations
Output: The near-optimal solution Sbest along fmin = min(f(S))
1: init_bat(),
2: eval = evaluate_the_new_population,
3: fmin = find_the_best_solution,
4: while termination condition not met do
5: for i = 1 to d do
6: v = generate_new_solution(Si)
7: if rand(0, 1)>ri then
8: v = improve_the_best_solution(Sbest),
9: end if {local search step}
10: fnew = evaluate_the_new_solution(v)
11: eval = eval+ 1
12: if fnew ≤ fi and N(0, 1)<Ai then
13: Si = v; fi = fnew
14: end if {save_the_best_solution_conditionally}
15: fmin = find_the_best_solution(Sbest)
16: end for,
17: end while
Algorithm 1 represents the basic BAT optimization pseudocode that is summarized in the following steps:
- Initialization (lines 1:3): This is the first step; it involves the initialization of the population and the algorithm parameters. The initial best solution over the given population is also calculated.
- New Solution Generation (line 6): This step involves the movement of virtual bats in the search space based on the BA updating rules.
- Local Search (lines 7-9): In this step, the best solution is improved using random walks.
- New Solution Evaluation (line 10): The new solution is evaluated in this step.
- Best Solution Update (lines 12-14): The best solution is conditionally saved in this step based on the given conditions.
- Discover best Solution (line 15): The best solution is updated for the current iteration.
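For reference, the following Python sketch mirrors Algorithm 1 and Eqs (11)-(15); the parameter values (frequency range, loudness and pulse-rate constants) are illustrative assumptions, not the settings used in the paper.

import numpy as np

def bat_optimization(fitness, lower, upper, n_bats=20, max_iter=100,
                     f_min=0.0, f_max=2.0, k=0.9, r=0.1):
    lower, upper = np.asarray(lower, dtype=float), np.asarray(upper, dtype=float)
    dim = lower.size
    S = lower + np.random.rand(n_bats, dim) * (upper - lower)   # bat locations
    V = np.zeros((n_bats, dim))                                  # bat velocities
    A = np.ones(n_bats)                                          # loudness A_i
    pulse = np.full(n_bats, 0.5)                                 # pulse rate r_i
    r0 = pulse.copy()
    fit = np.array([fitness(s) for s in S])
    best = S[np.argmin(fit)].copy()
    f_best = float(fit.min())
    for t in range(1, max_iter + 1):
        for i in range(n_bats):
            # Eqs (11)-(13): frequency, velocity and location updates.
            freq = f_min + (f_max - f_min) * np.random.rand()
            V[i] = V[i] + (S[i] - best) * freq
            cand = np.clip(S[i] + V[i], lower, upper)
            # Local search step: random walk around the current best solution.
            if np.random.rand() > pulse[i]:
                cand = np.clip(best + 0.01 * A.mean() * np.random.randn(dim),
                               lower, upper)
            f_new = fitness(cand)
            # Conditional acceptance, then loudness/pulse-rate updates (Eqs 14-15).
            if f_new <= fit[i] and np.random.rand() < A[i]:
                S[i], fit[i] = cand, f_new
                A[i] *= k
                pulse[i] = r0[i] * (1.0 - np.exp(-r * t))
            if f_new <= f_best:
                best, f_best = cand.copy(), f_new
    return best, f_best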
Image registration
As mentioned earlier, feature detection is the first step toward the final image registration. In this work, we used the ORB algorithm for the feature detection step. The ORB algorithm was proposed in [31] by E. Rublee et al. ORB is a combination of two algorithms, FAST and BRIEF, where FAST (Features from Accelerated Segment Test) [17] is used for feature detection and a modified version of BRIEF (Binary Robust Independent Elementary Features) [32] is adopted as the feature descriptor. The ORB-based image feature extraction and matching process consists of three steps:
- Extraction of feature points
- Generation of feature point descriptor
- Matching feature points
Extraction of feature points
Feature points in an image are points with a significant presence, such as bright spots in a dark area or dark spots in a bright area. The ORB algorithm uses FAST for feature point extraction. In FAST, an image pixel is considered a corner point if it is significantly different from its neighborhood pixels. We divide the complete feature point extraction process into the following sub-processes.
- At first, a pixel P is selected from the image with brightness Ip. Considering a brightness threshold T, the gray values of sixteen neighboring pixels on a circle are compared with the pixel P, as shown in Fig 5.
If the difference for N consecutive points on the circle is greater than εth or less than εtl, the pixel P is considered a feature point.
(16)
where Ix is the gray value of a point on the circle. N is usually defined as three-quarters of the total points on the circle; for example, if at least 12 points around the circle exceed the threshold, the center point is considered a corner, otherwise it is rejected as a feature point.
- Feature Point Screening: In the original FAST algorithm, a feature point is selected by comparing the brightness value between the pixels; therefore, a large number of feature points are detected with no information related to direction. The modification of the original FAST algorithm in ORB is based on the Harris response value. The following equations calculate the Harris response for all features detected by the original FAST.
(17)
(18)
In the above equations, R is the Harris response, M is a 2×2 matrix, w(x, y) is the image window function, and k is a constant ranging between [0.04, 0.06]. Ix and Iy are the feature point variations in the horizontal and vertical directions, respectively. However, the whole process is time consuming when applied to all points. Therefore, a prediction can be made based on the outcomes of four candidate points surrounding the central pixel, separated at 90-degree intervals: unless at least three of these four points exhibit a sufficient gray level difference, the feature point is rejected without evaluating all of its neighboring points.
- Creation of Image Scale Pyramid: The original FAST algorithm used in ORB does not detect direction-related information of feature points. However, the algorithm is modified for rotation invariance and scale invariance by using the grey level intensity centroid technique and by establishing an image Gaussian pyramid, respectively. At the start, the moment of an image block, i.e., the orientation of the corner in a small image block, can be defined as:
(19)
where α, β are the coordinates of an image pixel in the neighborhood of the feature point and λα,β represents the grey level intensity of the corresponding pixel. The centroid of the image block is calculated by the following relation:
(20)
where υ00 is the 0th moment and represents the image block mass, while the centroid of the image block is given by the 1st moments of the image block, i.e., (υ10, υ01). Finally, the orientation of the corner feature point is calculated with the following equation, which includes a correction if the feature point is dark with respect to its background.
(21)
The above relations make the FAST feature points rotation and scale invariant.
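The intensity-centroid orientation of Eqs (19)-(21) reduces to a few lines of NumPy; the sketch below assumes a square patch centred on the feature point and is given for illustration only.

import numpy as np

def patch_orientation(patch):
    h, w = patch.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # Coordinates (alpha, beta) relative to the patch centre (the feature point).
    alpha = xs - (w - 1) / 2.0
    beta = ys - (h - 1) / 2.0
    m00 = patch.sum()                     # 0th moment: the image block "mass"
    m10 = (alpha * patch).sum()           # 1st moments define the centroid
    m01 = (beta * patch).sum()
    # atan2 performs the quadrant correction for dark-on-bright corners.
    return np.arctan2(m01, m10)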
Feature points descriptor
The feature descriptor’s goal is to characterize and distinguish the detected feature points. A feature descriptor is numerical information extracted from the image to distinguish one feature point from the others. In the ORB algorithm, an improved version of BRIEF [32] is used to calculate feature descriptors of the detected points. The BRIEF algorithm’s fundamental principle is to calculate a binary feature vector for every feature point detected by FAST. BRIEF designates each feature point with a binary feature vector, or feature descriptor, a string of 0s and 1s consisting of 128–512 bits. BRIEF’s core idea is that the image neighborhood can be expressed using a small number of intensity comparisons.
(22)
where λx is the gray level intensity of the pixel at position x around the feature point, while λy is the gray level intensity at position y around the feature point. Moreover, Gaussian filtering is applied at the start to reduce the effect of noise. Let us randomly select a set of N point pairs from a k×k neighborhood window centered on the feature point; normally N is taken as 128, 256, or 512. Lastly, an N-dimensional vector consisting of N binary values is calculated as:
(23)
In the BRIEF algorithm, only a single-pixel comparison is adopted in the image neighborhood, which is susceptible to noise. To solve this problem, an m×m pixel sub-window is used in ORB within a neighborhood window of size k×k, where the selection of the sub-window obeys a Gaussian distribution. The original BRIEF is undirected and is not invariant to rotation. A 2×n matrix is defined for the set of n binary test locations (xi, yi):
(24)
A corrected matrix Qϑ is constructed using the neighborhood orientation ϑ and the corresponding rotation matrix Υϑ, i.e., Qϑ = ΥϑQ. The final steered BRIEF descriptor is:
(25)
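To illustrate the binary tests behind Eqs (22)-(25), the following hedged sketch computes a steered-BRIEF style descriptor for a smoothed patch; the sampling pairs and their Gaussian selection are assumed to be given, and the pattern is assumed to fit inside the patch after rotation.

import numpy as np

def steered_brief(patch, pairs, theta=0.0):
    # patch: smoothed image patch centred on the feature point.
    # pairs: (N, 2, 2) array of point pairs (x, y) relative to the patch centre.
    # theta: patch orientation from the intensity-centroid step.
    c, s = np.cos(theta), np.sin(theta)
    rot = np.array([[c, -s], [s, c]])              # rotation matrix Upsilon_theta
    cy, cx = (np.array(patch.shape) - 1) / 2.0
    bits = []
    for x, y in pairs:                             # each binary test tau(p; x, y)
        xr, yr = rot @ x, rot @ y                  # steer the test locations
        lx = patch[int(round(cy + xr[1])), int(round(cx + xr[0]))]
        ly = patch[int(round(cy + yr[1])), int(round(cx + yr[0]))]
        bits.append(1 if lx < ly else 0)
    return np.array(bits, dtype=np.uint8)          # N-bit binary descriptor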
Feature matching
Feature point matching entails the assessment of similarity between the feature points of two distinct images. After calculating the feature point descriptors, the subsequent step is to match the feature points. Numerous feature matching strategies exist, including brute force matching and Hamming distance. The Hamming distance is utilized with BRIEF to quantify the number of differing characters between two strings of the same length. The brute force approach is simpler: it calculates the distance between each feature descriptor of image It and every feature descriptor of image It+1, and matches features based on the shortest distance. The FLANN (Fast Library for Approximate Nearest Neighbors) method is utilized alongside ORB in certain visual SLAM models [33].
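In practice, the whole ORB detection, description, and matching pipeline described above is available in OpenCV; the sketch below shows one minimal way to obtain matched keypoints and, optionally, RANSAC inliers (as used later in the paper), assuming at least four good matches are found.

import cv2
import numpy as np

def orb_match(img_ref, img_tgt, n_features=1000):
    orb = cv2.ORB_create(nfeatures=n_features)
    kp1, des1 = orb.detectAndCompute(img_ref, None)   # FAST keypoints + steered BRIEF
    kp2, des2 = orb.detectAndCompute(img_tgt, None)
    # Brute-force matching with Hamming distance, as used for binary descriptors.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
    # Optional RANSAC step to keep only the inlier matches.
    src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    inliers = [m for m, keep in zip(matches, mask.ravel()) if keep]
    return kp1, kp2, inliers, H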
Experimental results and analysis
The proposed model’s performance is assessed using various pairs of infrared and optical (RGB) images. The testing images vary in resolution, lighting conditions, incident angles, and other factors. The quantity of inlier features is significant, as is the count of accurate matches. Consequently, the quantity of inlier features is assessed prior to and following the application of contrast enhancement. RMSE (root mean square error) [34] is computed to quantitatively assess the registration performance for the matching keypoints. The following mathematical model is employed to compute the RMSE:
(26)
In the above equation, the first term represents the ith matched feature point from the target image and the second term represents the corresponding matched feature point from the reference image.
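Given the matched coordinate lists, the RMSE of Eq (26) can be computed with a small helper such as the following (for illustration):

import numpy as np

def registration_rmse(pts_target, pts_reference):
    # pts_target, pts_reference: (n, 2) arrays holding the i-th matched
    # feature locations in the target and reference images, respectively.
    diff = np.asarray(pts_target, dtype=float) - np.asarray(pts_reference, dtype=float)
    return float(np.sqrt(np.mean(np.sum(diff ** 2, axis=1))))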
Fig 6 shows the image enhancement results on the intensity channel (d) and the enhanced optical image (b). The enhancement has a significant impact on the number of detected features. The histogram comparison for Fig 6 is shown in Fig 7, where the left-side histogram is prior to enhancement. From the figure, it can be seen that the enhancement model improves the distribution of pixels, as shown in the right-side image of Fig 7. In Fig 8, the inlier features after RANSAC correction are shown.
Figure (a) shows the original optical image under low-light conditions. Figure (b) represents the contrast-enhanced optical image. Figures (c) and (d) represent the original intensity channel of the HSI (hue, saturation, intensity) image and the enhanced one, respectively.
The left image represents the histogram of the optical image before contrast enhancement, while the right histogram is the representation after contrast enhancement.
Table 1 presents the number of features detected before and after applying the proposed enhancement model under low-light conditions. In this condition, the number of features is increased for both optical and thermal images, both before and after applying the RANSAC algorithm. The RANSAC algorithm is used for the detection and removal of outlier features in the image registration process.
Fig 9 presents the image enhancement results for a high-quality optical image, and Table 2 presents the corresponding quantitative analysis. Although the total number of detected features in the optical image is reduced after applying enhancement, there is a significant improvement in the number of registered features (features found in both images).
Image registration examples with optical and thermal image pairs are presented in Fig 10, and the quantitative analysis of Fig 10 in terms of RMSE is shown in Table 3. In the first two pairs, the number of registration points is significantly increased, along with a significant reduction in RMSE per pixel. However, for the third pair, taken under bright light, the RMSE per pixel is not reduced significantly, although the number of registration points is significantly improved.
Three optical and thermal image pairs under different environmental conditions. Image pairs (a) and (b) are taken as examples of low-light conditions, and image pair (c) is taken in bright light.
Conclusion
In this paper, we introduced an image contrast enhancement step in the image registration process based on a novel hybrid binary differential evolution and BAT optimization model. The results demonstrated improvements in the image registration process, especially for low-light optical images. The fundamental idea of the contrast enhancement is to improve the distribution of pixel intensities through the application of the BAT optimization model and differential evolution. The population size plays a crucial role in the optimization model, as the processing time directly correlates with the size of the BAT population. Although an increase in population also increases the processing time, it has minimal effect on the performance results, a topic not covered in this paper. The performance results showed that the proposed model is also effective for bright images, and it helps to increase the number of detected and matched features in the image registration process. Despite the results proving the significance of the model, its performance was significantly affected by the population size during contrast enhancement: a very small population size results in quick computation at the cost of small improvements in image quality, whereas a very large population requires a long computation time. In this work we used a fixed population size; in the future, we will develop a model focusing on the dynamic selection of the population size according to image requirements.
References
- 1. Yang K, Pan A, Yang Y, Zhang S, Ong S, Tang H. Remote sensing image registration using multiple image features. Remote Sensing 2017; 9(6):581.
- 2. Patel M, Thakar V, Shah S. Image registration of satellite images with varying illumination level using HOG descriptor based SURF. Procedia computer science 2016; 93:382–388.
- 3. Saleem S, Bais A, Khawaja Y. Registering aerial photographs of farmland with satellite imagery. IEEE International Conference on Image Processing; 2010:945–948.
- 4. Saleem S, Bais A, Sablatnig R. Towards feature points based image matching between satellite imagery and aerial photographs of agriculture land. Computers and Electronics in Agriculture 2016; 126:12–20.
- 5. Ma J, Zhou H, Zhao J, Gao Y, Jiang J, Tian J. Robust feature matching for remote sensing image registration via locally linear transforming. IEEE Transactions on Geoscience and Remote Sensing 2015; 53(12):6469–6481.
- 6. Lemmens M. A survey on stereo matching techniques. International Archives of Photogrammetry and Remote Sensing 1988; 27(B8):11–23.
- 7. Viola P, Wells W III. Alignment by maximization of mutual information. International journal of computer vision 1997; 24(2):137–154.
- 8. Kim J, Fessler J. Intensity-based image registration using robust correlation coefficients. IEEE transactions on medical imaging 2004; 23(11):1430–1444. pmid:15554130
- 9. Khan M, Akram T, Sharif M, Awais M, Javed K, Ali H, Saba T. CCDF: Automatic system for segmentation and recognition of fruit crops diseases based on correlation coefficient and deep CNN features. Computers and electronics in agriculture 2018; 155:220–236.
- 10. Zitova B, Flusser J. Image registration methods: a survey. Image and vision computing 2003; 21(11):977–1000.
- 11. Yi Z, Zhiguo C, Yang X. Multi-spectral remote image registration based on SIFT. Electronics Letters 2008; 44(2):1.
- 12. Zitova B, Flusser J. Image registration methods: a survey. Image and Vision Computing 2003; 21(11):977–1000.
- 13. Lowe D. Distinctive image features from scale-invariant keypoints. International journal of computer vision 2004; 60:91–110.
- 14. Rublee E, Rabaud V, Konolige K, Bradski G. ORB: An efficient alternative to SIFT or SURF. International Conference on Computer Vision; 2011.
- 15. Leutenegger S, Chli M, Siegwart R. BRISK: Binary robust invariant scalable keypoints. International Conference on Computer Vision; 2012.
- 16. Alcantarilla P, Nuevo J, Bartoli A. Fast explicit diffusion for accelerated features in nonlinear scale spaces. Proceedings of the British Machine Vision Conference (BMVC); 2013.
- 17. Rosten E, Drummond T. Machine learning for high-speed corner detection. 9th European Conference on Computer Vision, Graz, Austria, Proceedings, Part I; 2006:430–443.
- 18. Roche A, Malandain G, Pennec X, Ayache N. The correlation ratio as a new similarity measure for multimodal image registration. Medical Image Computing and Computer-Assisted Intervention—MICCAI'98; 1998:1115–1124.
- 19. Ma W, Wen Z, Wu Y, Jiao L, Gong M, Zheng Y, et al. Remote sensing image registration with modified SIFT and enhanced feature matching. IEEE Geoscience and Remote Sensing Letters 2016; 14(1):3–7.
- 20. Bermejo E, Chica M, Damas S, Salcedo-Sanz S, Cordon O. Coral reef optimization with substrate layers for medical image registration. Swarm and Evolutionary Computation 2018; 42:138–159.
- 21. Klein S, Pluim J, Staring M, Viergever M. Adaptive stochastic gradient descent optimisation for image registration. International journal of computer vision 2009; 81:227–239.
- 22. Jenkinson M, Bannister P, Brady M, Smith S. Improved optimization for the robust and accurate linear registration and motion correction of brain images. Neuroimage 2002; 17(2):825–841. pmid:12377157
- 23. Khan S, Nazir M, Khan M, Saba T, Javed K, Rehman A, et al. Lungs nodule detection framework from computed tomography images using support vector machine. Microscopy research and technique 2019; 82(8):1256–1266. pmid:30974031
- 24. Suresh S, Lal S. Modified differential evolution algorithm for contrast and brightness enhancement of satellite images. Applied soft computing 2017; 61:622–641.
- 25. Munteanu C, Rosa A. Towards automatic image enhancement using genetic algorithms. Proceedings of the 2000 Congress on Evolutionary Computation, CEC00 (Cat. No. 00TH8512); 2000; 2:1535–1542.
- 26. Storn R, Price K. Differential evolution–a simple and efficient heuristic for global optimization over continuous spaces. Journal of global optimization 1997; 11:341–359.
- 27. Malik S, Akram T, Ashraf I, Rafiullah M, Ullah M, Tanveer J. A hybrid preprocessor de-abc for efficient skin-lesion segmentation with improved contrast. Diagnostics 2022; 12(11):2625. pmid:36359469
- 28. Zhang Y, Gong Dw, Gao Xz, Tian T, Sun Xy. Binary differential evolution with self-learning for multi-objective feature selection. Information Sciences 2020; 507:67–85.
- 29. Malik S, Akram T, Awais M, Khan M, Hadjouni M, Elmannai H, et al. An improved skin lesion boundary estimation for enhanced-intensity images using hybrid metaheuristics. Diagnostics 2023; 13(7):1285. pmid:37046503
- 30. Yang XS, Hossein Gandomi A. Bat algorithm: a novel approach for global engineering optimization. Engineering computations 2012; 29(5):464–483.
- 31. Rublee E, Rabaud V, Konolige K, Bradski G. ORB: An efficient alternative to SIFT or SURF. 2011 International Conference on Computer Vision; 2011:2564–2571.
- 32. Calonder M, Lepetit V, Strecha C, Fua P. BRIEF: Binary robust independent elementary features. 11th European Conference on Computer Vision, Heraklion, Crete, Greece, Proceedings, Part IV; 2010:778–792.
- 33. Kerl C, Sturm J, Cremers D. Dense visual SLAM for RGB-D cameras. IEEE/RSJ International Conference on Intelligent Robots and Systems; 2013:2100–2106.
- 34. Akram T, Naqvi S, Haider S, Kamran M, Qamar A. A novel framework for approximation of magneto-resistance curves of a superconducting film using GMDH-type neural networks. Superlattices and Microstructures 2020; 145:106635.