Figures
Abstract
Multimodal Medical Image Fusion is a key evolution in medical imaging. It contributes to improving diagnosis, providing better treatment, and reducing risk. Multimodal medical image fusion is a multi-objective problem because factors such as the weights of the fusion rules and the speed of the fusion process must be balanced. While multi-objective particle swarm optimization has already been applied to solve this problem, it suffers from premature convergence. It has been shown that Darwinian Particle Swarm Optimization performs better than classical Particle Swarm Optimization by escaping local optima. Therefore, this paper proposes a new approach based on the combination of a variable-order fractional-order operator with multi-objective Darwinian Particle Swarm Optimization. The variable-order fractional-order operator improves the convergence rate of multi-objective Darwinian Particle Swarm Optimization by adjusting particle velocity and position dynamically. Moreover, the new approach uses the gradient compass in the spatial domain to generate detail images, further enhancing fusion quality. The proposed method is used to optimize both the fusion-process weights and the processing time. Experiments fusing computed tomography with magnetic resonance imaging show that the proposed technique outperforms existing techniques. Both the Inverted Generational Distance (IGD) and the Hyper-Volume (HV) metrics of the proposed multi-objective solution surpass the state of the art, showing the optimality of the provided solution. Additionally, the fused images demonstrated high visual quality, efficient edge preservation, and an absence of noisy artefacts. Furthermore, the proposed fusion approach proved suitable for real-time application, with a processing time not exceeding 0.085 seconds, outperforming other methods.
Citation: Ogbuanya CE, Obayi A, Larabi-Marie-Sainte S, Saad AO, Berriche L (2025) A hybrid optimization approach for accelerated multimodal medical image fusion. PLoS One 20(7): e0324973. https://doi.org/10.1371/journal.pone.0324973
Editor: Yuanchao Liu, Northeastern University, CHINA
Received: December 22, 2024; Accepted: May 5, 2025; Published: July 10, 2025
Copyright: © 2025 Ogbuanya et al. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Data Availability: All source image files are available from the Harvard Dataverse at https://doi.org/10.7910/DVN/EAKJEU and https://doi.org/10.7910/DVN/QREN2T.
Funding: This paper was supported by the Prince Sultan University in the form of APC funding. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.
Competing interests: The authors have declared that no competing interests exist.
1. Introduction
Medical image fusion consists of combining various imaging modalities to generate a single thorough view of a patient’s disease. This process improves diagnostic correctness, enhances treatment planning, and offers better visualization of complicated structures. It also helps in detecting abnormalities, resulting in better clinical decisions. Medical image modalities are classified into two types: anatomical and functional. Anatomical imaging modalities, such as X-ray, computed tomography (CT), and magnetic resonance imaging (MRI), focus on the structures of the body [1,2].
An efficient multimodal medical image fusion (MMIF) technique can integrate complementary details from various medical images obtained from one or more imaging modalities to improve visual quality by retaining precise features from tissues or organs for analysis by a physician or for machine recognition [3,4]. For example, MRI provides details of superior soft-tissue variations, while CT images give insights about dense structures such as bones and implants. Hence, combining MRI and CT images generates comprehensive details for clinical diagnosis and treatment planning. Additionally, the fusion of MRI and CT images, when combined with surgical navigation, assists surgeons in accurately preparing a preoperative plan, potentially improving surgery outcomes [5,6]. Therefore, multimodal medical image fusion provides relevant technology for integrating several types of medical imaging information in different clinical applications. CT and MRI images have already been integrated to aid in surgeries for skull base tumors [7]. Additionally, the fusion of CT and MRI images was carried out in [7] and [8] to validate the effectiveness of novel pulse-coupled neural network (PCNN)-based image integration methods. However, obtaining high-quality fused images in a short amount of time, especially for processes such as real-time image-guided surgery, is still an unsolved problem [9,10]. This highlights the need to maximize image quality while minimizing fusion time. MMIF can therefore be considered a multi-objective problem that requires a multi-objective algorithm to optimize the fusion process.
Recently, many multi-objective optimization algorithms were proposed, including the Pareto archived evolution strategy (PAES) [11,12], the strength Pareto evolutionary algorithm 2 (SPEA2) [13], the non-dominated sorting genetic algorithm II (NSGA-II) [14], non-dominated sorting particle swarm optimization (PSO) [15], and multi-objective particle swarm optimization (MOPSO) [16,17]. It has been shown that MOPSO achieved the highest optimization capacity and rate of convergence [18,19]. However, it is prone to premature convergence in the MMIF process [20]. The authors in [20] proposed a new method, called Fractional-Order Darwinian Particle Swarm Optimization (FODPSO), to escape from local optima, though its results could be improved.
To overcome the above-mentioned drawback, this paper proposes a variable-order fractional-order multi-objective Darwinian Particle Swarm Optimization combined with a gradient compass-based multimodal medical image fusion method (called VF-MODPSO-GC). The proposed approach fuses two types of multimodal medical images, namely MRI and CT. This fusion ensures improved visual quality of the fused images and increases the speed of the fusion process. The proposed approach is carried out as follows: First, edge details are generated from the source medical images in eight varying directions. These edges provide relevant information for designing an edge map of the input medical images. The edge maps are then used to obtain two detail medical images. The statistical features of the detail medical images are then processed alongside the proposed multi-objective Darwinian Particle Swarm Optimization to generate optimal weight matrices. Finally, pixel fusion is carried out between the two source multimodal medical images. The proposed method is evaluated using a dataset generated from the Harvard Medical Image Database (https://dataverse.harvard.edu/). The principal contributions of this paper are as follows:
- A novel optimization algorithm is introduced, combining the multi-objective Darwinian Particle Swarm Optimization with a variable-order fractional calculus operator (VF-MODPSO). The Variable-order Fractional-order (VF) contributes to enhancing the convergence rate of MODPSO.
- The novel optimization algorithm is designed to address the issue of premature convergence commonly observed in multimodal MIF.
- The proposed algorithm contributes to enhancing the feature extraction phase. This algorithm optimizes the weight matrices derived from the two detailed medical images produced during the gradient compass process of Multimodal MIF.
- The proposed algorithm is applied using the gradient compass image generator. This composite model successfully achieves two primary objectives: maintaining high image quality during the fusion process and preserving intricate details from the source images in the final fused image.
The evaluation results show high performance in transferring sufficient details from source medical images to the fused medical images, while increasing the processing speed.
The rest of this paper is organized as follows: Section 2 presents the related works. Section 3 presents the background and the proposed technique. Section 4 provides the performance metrics. Section 5 presents the experiment results and the comparison study of VF-MODPSO with other optimization algorithms through a series of test instances and shows the application of the proposed algorithm to medical image fusion. Section 6 discusses the dynamics of the results. Finally, conclusions and future perspectives are drawn in Section 7.
2. Related works
This section reviews recent works in multimodal medical image fusion, particularly focusing on optimization algorithms, fractional calculus, and machine learning techniques. The goal is to identify the limitations in existing approaches and highlight the novelty of the proposed method.
Recently, the application of AI-based tools to improve diagnostic efficacy as well as healthcare assistance has become a very active area of research. Over the years, different medical image fusion methods have been proposed [21–23]. Moreover, the application of deep learning to image processing has gained considerable attention. Deep learning was introduced to medical image fusion while attempting to address the design challenge of activity level measurement and the fusion rule found in traditional methods [24].
In this section, we review the recent state-of-the-art studies that applied AI algorithms to enhance the fusion process, along with existing works that utilized deep learning and neural network-based medical imaging techniques. In [23], the authors developed a novel deep medical image fusion approach based on a deep convolutional neural network (DCNN) to directly learn image features from the original images. Specifically, they used a pre-trained CNN model to extract deep features from the principal components of the decomposed source images. The method was evaluated using the Edge Strength Measure (QAB/F) and entropy on six pairs of CT and MRI images related to distinct brain diseases. The results showed a QAB/F of 0.704 with 6.23 entropy for normal brain images and 0.775 with 4.75 entropy for Alzheimer's disease. However, the deep learning-based components of the method may be prone to overfitting and carry a high training cost.
Medical image translation using a fully conditioned bounded deep network was conducted in [24]. Bayesian deep learning was applied [25,26] for the registration of noisy medical images affected by nonlinear geometric irregularities. Medical image denoising was conducted in [27] via a multilayer deep residue network combined with sectionalized dictionaries. In [28], deep cascade restructuring of low-grade reduced-resolution computed tomography images of the chest was performed to enhance and resolve CT chest images, making them more effective for monitoring lung health in COVID-19 patients. However, the speed of all these AI-based diagnostic processes was not taken into consideration or improved.
Various optimization algorithms have been applied in multimodal image fusion: the whale optimization algorithm to optimize a PCNN [7], multi-source information encoding optimization [8], wavelet transform with the XGBoost optimization algorithm [9], metaheuristic optimization [10], a boosted grey wolf optimizer [11], improved sand cat optimization [12], and multimetric routing protocol optimization [13].
Furthermore, optimization techniques have played an important role in improving the quality and speed of MMIF. Optimization algorithms have been utilized to enhance the fusion process by targeting specific aspects. Duan et al. [29] presented an MMIF fusion method that applies a genetic algorithm in the optimization of the extracted features from the log-Gabor filter and sum-modified Laplacian. The results of their work revealed good local and spatial information retention abilities that are not commonly found in pixel-based fusion methods. However, image brightness needs to be improved. Das et al. [30] presented an MMIF framework that applies gray wolf optimization in the optimization of decomposed cartoon and texture components from an optimized low-rank texture prior model. Their results showed good structural information preservation; however, image contrast still needs improvement.
Also, fractional calculus was incorporated into MMIF to improve convergence rates and feature extraction. Bhardwaj and Nayak [31] applied fractional bird swarm optimization to optimize the Bayesian approach used for the fusion of decomposed sub-bands. The results showed fairly good visual quality of the fused images; however, the edge-preservation ability was suboptimal. Mergin and Premi [32] used convolutional neural networks and a quantum-behaved PSO algorithm to develop a new approach for merging multimodal medical images. The results showed excellent information retention ability; however, the speed of the fusion process was low. Kaur and Singh [33] applied multi-objective differential evolution to optimize the extracted features of decomposed sub-bands from source images for MMIF. The results showed excellent edge-preservation ability; however, the image brightness was lacking. Prashantha and Prakash [34] presented a feature fusion method using a genetic algorithm and deep learning techniques. The results showed good visual quality in the fused images; however, the edge-preservation ability was poor. Tang et al. [22] presented an MMIF method that applies a multi-swarm fruit fly optimization algorithm to optimize the hyperparameters of a PCNN, resulting in fused images of good visual quality. However, the speed of the fusion process remained low. Das et al. [10] and [35] used differential evolution for the optimization of decomposed sub-bands obtained through deep neural networks and pulse-coupled neural networks, respectively, for MMIF. The results showed that the fused images preserved structural and textural details; however, image brightness needs improvement.
The reviewed studies indicate that several challenges still persist in MMIF, including slow fusion speeds, premature convergence, and difficulties in optimizing specific image features such as brightness and edge preservation. Our approach aims to address these issues by combining variable-order fractional calculus with multi-objective optimization.
3. Background and proposed method
In this section, we will start by presenting the different algorithms involved in our technique. Then, we will present our proposed method.
3.1. Background
First, we will present the gradient compass-based method. Then, we will give an overview about multi-objective optimization and Darwinian particle swarm optimization. Finally, we will discuss the variable-order fractional calculus operator.
The gradient compass-based method.
One approach to merging medical images from different modalities at the element (pixel) level is the gradient compass-based method proposed in [36]. The comparative analysis in [36] revealed the superior performance of gradient-compass-based image fusion across a diverse set of evaluation metrics: when compared to established techniques, including wavelet transform, shearlet transform, guided filtering, the Laplacian pyramid, the contourlet transform, and a representative CNN-based fusion architecture, the gradient-compass method consistently achieved higher scores on metrics such as Peak Signal-to-Noise Ratio (PSNR), Structural Similarity Index (SSIM), and Mutual Information (MI). This indicates that the fused images generated by the gradient-compass approach exhibit improved visual quality, greater structural preservation, and a higher degree of information transfer from the source images. The method’s ability to effectively capture and integrate edge information from multiple input images appears to be a key factor in this performance, leading to more accurate and visually pleasing fusion results. The gradient compass-based method consists of four distinct steps: edge detection, detail image generation, weight generation, and pixel merging.
Step 1: Edge detection
Medical images encompass several significant features. However, these features by themselves do not accurately outline the shape, structure, and boundaries of a specific organ, nor do they efficiently distinguish one organ from others, particularly when organs overlap or edges are unclear [26]. Thus, the details of the medical image that indicate the edges are highly significant. Many techniques have been introduced to highlight boundaries in medical imaging, each with its own set of prerequisites [26]. One use of the gradient compass technique is the detection of edge information in input medical images using the Sobel gradient compass. It utilizes eight masks, shown in Fig 1, each yielding boundary strength in one of the eight possible compass directions. The Sobel gradient compass is renowned for preserving important information from the source images and preventing the addition of undesirable artifacts to the fused images [26]. The Sobel filter is among the most fundamental filters used for edge identification. It employs two 3x3 kernels, one for horizontal edges and another for vertical edges, which are convolved with the image. The kernels are designed to estimate the first-order derivatives of the image along the x and y axes, which quantify the gradient, or the rate of change in pixel intensity. The Sobel filter then combines the horizontal and vertical gradients to derive the edge magnitude and direction. The Sobel filter offers low complexity, straightforward implementation, rapid computation, and resilience to noise. Its drawbacks include sensitivity to diagonal edges, the generation of thick edges, and an inability to account for edge continuity or smoothness; nonetheless, its advantages make it well suited for multimodal medical image fusion [36].
(a): GC 0˚, (b): GC 45˚, (c): GC 90˚, (d): GC 135˚, (e): GC 180˚, (f): GC 225˚, (g): GC 270˚, (h): GC 315˚.
The edge intensity at coordinates (x, y) is computed using Eqn (1). Only the outputs of four of the gradient masks are retained, since the remaining masks are their reversals. Eqn (1) is then applied to the source images I1 and I2 as follows:
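To make Step 1 concrete, the sketch below builds eight 3x3 compass masks by successive 45° rotations of a base Sobel kernel and takes the maximum absolute response at each pixel as the edge strength. This is a hedged illustration: the exact coefficients of the masks in Fig 1 may differ from this common Sobel variant.

```python
import numpy as np

# Base Sobel kernel (0 degrees); the other compass masks come from
# successive 45-degree rotations of its eight outer coefficients.
SOBEL_0 = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=float)

# Outer positions of a 3x3 kernel, listed clockwise from the top-left.
RING = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]

def rotate_45(k):
    """Rotate a compass kernel by 45 degrees: shift each outer
    coefficient one step around the ring; the centre stays fixed."""
    out = k.copy()
    for (r0, c0), (r1, c1) in zip(RING, RING[1:] + RING[:1]):
        out[r1, c1] = k[r0, c0]
    return out

def compass_masks():
    """The eight directional masks (0, 45, ..., 315 degrees)."""
    masks, k = [], SOBEL_0
    for _ in range(8):
        masks.append(k)
        k = rotate_45(k)
    return masks

def edge_strength(img):
    """Maximum absolute mask response at each valid pixel."""
    h, w = img.shape
    out = np.zeros((h - 2, w - 2))
    for m in compass_masks():
        resp = abs(sum(m[i, j] * img[i:h - 2 + i, j:w - 2 + j]
                       for i in range(3) for j in range(3)))
        out = np.maximum(out, resp)
    return out
```

A vertical intensity step, for instance, produces its strongest response under the 0° and 180° masks, which is exactly the directional behaviour the compass construction is designed to capture.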
Step 2: Detail image generation
With the application of specific directional edge maps, two detail images are obtained. The generated maximum edge strength maps are subtracted from the source medical images I1 and I2 to obtain the detail images DI1 and DI2. Eqn (4) is applied to detect the peculiar details inherent in one source image through the detail image DI1, and the peculiar details of the other source image are detected through the detail image DI2 via Eqn (5).
Step 3: Weight generation
A statistical method is applied to obtain weights in the form of two weight matrices, W1 and W2, using the detail images DI1 and DI2. First, a sliding window is generated and passed over each element of the detail image. The region within it is specified as a matrix M, composed of W observations and variables, representing the rows and the columns; M is considered the vicinity of the corresponding element DI1(x, y). Next, the covariance matrix of M is computed using Eqn (6). The sum of the quantities obtained from Eqns (7) and (8) is then taken as the weight for one pixel using Eqn (9).
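A hedged sketch of the windowed, covariance-based weight computation follows. Since Eqns (6)–(9) are not reproduced here, the final combination (the sum of the covariance matrix's diagonal variances) is only a plausible assumption, and the window size parameter `w` is illustrative.

```python
import numpy as np

def window_weight(di, x, y, w=3):
    """Weight for pixel (x, y) of a detail image: the window around the
    pixel is taken as matrix M (rows = observations, in the text's
    terms), its covariance matrix is computed (in the spirit of Eqn (6)),
    and the sum of the diagonal variances serves as the pixel weight.
    The exact combination of Eqns (7)-(9) may differ; this is an
    assumption for illustration."""
    r = w // 2
    patch = di[max(0, x - r):x + r + 1, max(0, y - r):y + r + 1]
    C = np.atleast_2d(np.cov(patch, rowvar=True))
    return float(np.trace(C))

def weight_matrix(di, w=3):
    """Slide the window over every element to build a full weight matrix."""
    out = np.zeros_like(di, dtype=float)
    for x in range(di.shape[0]):
        for y in range(di.shape[1]):
            out[x, y] = window_weight(di, x, y, w)
    return out
```

Note that a perfectly flat region yields a zero weight (no local variation), while textured regions, which carry the detail worth transferring to the fused image, receive larger weights.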
Step 4: Pixel merging
The merging of pixels marks the completion of the gradient compass method’s procedure. Adaptive image element fusion was applied in [26] during the pixel merging step.
Multi-objective optimization.
The subsequent definitions succinctly delineate the terms related to optimization of multiple objectives [36]:
Multi-objective optimization: A decision-making procedure that involves the consideration of multiple competing objectives. The aim is to identify a set of solutions that strike a balance between these objectives.
where x = (x1, x2, …, xD) is a D-dimensional decision variable, m is the number of objective functions in the multi-objective optimization problem, f_m represents the mth objective function, g_i the ith inequality constraint, p the number of inequality constraints, h_j the jth equality constraint, and l the number of equality constraints.
Pareto dominance: A solution A dominates a solution B if it is at least as good as B in all objectives and strictly better in at least one objective. For two solutions x_A and x_B in the feasible region, x_A dominates x_B, denoted x_A ≺ x_B, if and only if (assuming minimization) f_i(x_A) ≤ f_i(x_B) for every objective i and f_j(x_A) < f_j(x_B) for at least one objective j.
Non-dominated solution: A solution is non-dominated if there is no other solution in the population that dominates it.
Pareto optimal solution: A solution that is not dominated by any other feasible solution.
Pareto optimal set: The collection of all Pareto optimal solutions. It is defined as:
Pareto Front: The visual representation of the Pareto optimal set in the objective space. It is defined as:
Typically, it is difficult to determine the exact shape of the Pareto front analytically [36]. Instead, we aim to achieve three objectives: maximize the number of non-dominated alternatives, guarantee an even distribution of solutions across the Pareto front, and minimize the gap between the solutions produced by our method and the true Pareto front, if its location is known.
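The dominance relation and the non-dominated filtering described above can be sketched as follows (minimization assumed):

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization):
    a is no worse than b in every objective and strictly better in
    at least one."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def non_dominated(points):
    """Return the non-dominated subset of a population of objective
    vectors -- the approximation of the Pareto optimal set."""
    return [p for p in points
            if not any(dominates(q, p) for q in points)]
```

For example, among the bi-objective points (1, 2), (2, 1), (2, 2), and (3, 3), only (1, 2) and (2, 1) are non-dominated: each trades one objective against the other, while the last two are dominated outright.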
Darwinian particle swarm optimization.
The Darwinian Particle Swarm Optimization (DPSO) algorithm is an enhanced version of the Particle Swarm Optimization (PSO) that includes elements of Darwinian principles, such as survival of the fittest, to improve the optimization process [37].
Darwinian Particle Swarm Optimization (DPSO) is highly effective for image fusion due to its ability to maintain diversity and avoid premature convergence through natural selection principles. By utilizing multiple swarms and retaining only the best-performing particles, DPSO ensures robust optimization and reliable convergence to high-quality solutions. This adaptability allows DPSO to handle the complex optimization landscapes of image fusion tasks, such as selecting the most relevant features or coefficients from source images. Additionally, its focus on survival-of-the-fittest mechanisms ensures that the fused image preserves maximum information, achieving superior fusion results compared to traditional methods [38].
The improvements to the particle swarm algorithm are generally given by Eqn (14) and Eqn (15) below, where n denotes the moving particle and t the time step: x_n(t) is the particle’s location, v_n(t) represents the particle’s velocity, p_n stands for the particle’s local best position, and n_n is the best position found within the neighborhood of the nth particle. The coefficients ρ1, ρ2, and ρ3 prioritize the top performers on a global, regional, and neighborhood scale, respectively, when the new velocity is determined, and r1, r2, and r3 are random numbers uniformly distributed between 0 and 1. These random numbers introduce variability and randomness into the velocity update of each particle in the Darwinian Particle Swarm Optimization (DPSO) algorithm, allowing the particles to “explore” the search space in different ways.
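The velocity and position updates described above can be sketched as below. This is an illustrative stand-in for Eqns (14)–(15): the exact coefficient structure in the paper may differ, and the default ρ values are assumptions.

```python
import random

def update_particle(x, v, p_local, p_neigh, g_best,
                    rho1=0.8, rho2=0.8, rho3=0.8):
    """One velocity/position step in the spirit of Eqns (14)-(15):
    the new velocity blends the current velocity with pulls towards the
    global, local, and neighbourhood best positions, each scaled by a
    weight (rho) and a fresh uniform random factor in [0, 1]."""
    new_v, new_x = [], []
    for d in range(len(x)):
        r1, r2, r3 = (random.random() for _ in range(3))
        vd = (v[d]
              + rho1 * r1 * (g_best[d] - x[d])    # global best pull
              + rho2 * r2 * (p_local[d] - x[d])   # particle's own best
              + rho3 * r3 * (p_neigh[d] - x[d]))  # neighbourhood best
        new_v.append(vd)
        new_x.append(x[d] + vd)
    return new_x, new_v
```

When a particle already sits at all three best positions, the random pulls vanish and the particle simply coasts on its current velocity, which is the expected degenerate behaviour of this family of updates.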
The features that distinguish DPSO from PSO include the removal of outdated particles, the destruction of ineffective particles, the renewal of the swarm, and the creation of new particles. In DPSO, when a particle is removed, the number of removed particles increases gradually and approaches a threshold value; as particles are never removed in PSO, this number is always zero there. The probability of a particle being selected for reproduction or survival in DPSO is called the selection coefficient, as described in Eq (16), in which the bounding parameter denotes the maximum value that the selection coefficient can attain.
Variable-order Fractional calculus operator (VF).
Fractional calculus is a mathematical instrument that is indispensable for the applied sciences [39]. Fractional calculus has enhanced the performance of numerous procedures applied in simulation, curve estimation, sorting, pattern identification, boundary detection, authentication, equilibrium, autonomy, observability, and durability [40]. Some works, like [41] and [40], employed fractional calculus to boost the convergence rate of PSO. This was achieved by accelerating the convergence of both particle velocities and positions, in both standard PSO and DPSO. Fractional-order Darwinian particle swarm optimization (FODPSO) was also utilized by [20] for multilevel thresholding of medical images. These applications demonstrate that incorporating fractional calculus improves the exploration capabilities of the algorithm, which leads to better optimization performance. VFs were introduced to address the variable nature of most real-world systems [42]. The definition of VF in the Grünwald-Letnikov approach [43] is expressed as shown in the equation below,
where (α)_k is called the Pochhammer symbol.
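As a concrete illustration of the Grünwald-Letnikov construction, the sketch below (a hedged example, not the paper's implementation) computes a truncated GL fractional derivative, writing the generalized binomial coefficients through the Gamma function, as in the Pochhammer-style terms of the definition. The truncation length `terms` is an assumption.

```python
from math import gamma

def gl_coeff(alpha, k):
    """k-th Gruenwald-Letnikov coefficient (-1)^k * binom(alpha, k),
    expressed via the Gamma function. Valid for non-integer alpha (for
    integer alpha, Gamma hits a pole once k exceeds alpha)."""
    return ((-1) ** k) * gamma(alpha + 1) / (gamma(k + 1) * gamma(alpha - k + 1))

def gl_derivative(samples, alpha, h=1.0, terms=4):
    """Truncated GL fractional derivative of order alpha at the newest
    sample; `samples` is ordered newest-first with spacing h. In a
    variable-order (VF) scheme, alpha itself changes between calls."""
    n = min(terms, len(samples))
    return sum(gl_coeff(alpha, k) * samples[k] for k in range(n)) / h ** alpha
```

For alpha = 1 the coefficients reduce to 1 and -1, recovering the ordinary first difference; fractional alpha instead spreads weight over several past samples, which is precisely the "memory" effect exploited to smooth particle trajectories.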
3.2. Materials and methods
The multimodal medical image fusion technique proposed in this study merges two input images of the same organ from different modalities. This is performed via a combination of three main techniques: the proposed multi-objective optimization technique (MODPSO), the VF operator, and the gradient compass method in the spatial domain. VF is combined with MODPSO to remedy the premature convergence observed in MODPSO. In step 1, gradient compass edge detection is applied to the two input images I1 and I2. In step 2, specific chosen directional edge maps are applied to generate two detail images: the boundary strength at location (x, y) is computed using Eqn (1), the outputs of the chosen gradient compass masks are retained, and the maximum edge strength maps are obtained via Eqn (2) and Eqn (3) and subtracted from the source images I1 and I2 to obtain the detail images DI1 and DI2 according to Eqns (4) and (5). In step 3, the weight of a pixel, W1(x, y) (or W2(x, y)), is obtained from Eqns (6)–(9). We then apply our VF-MODPSO algorithm to obtain the optimal weights for all elements of the input images, computed from their respective detail images according to Eqn (9). Finally, in the fourth step, pixel fusion is carried out. Fig 2 represents a schematic diagram of the proposed method.
(GC: Gradient Compass).
This section provides a comprehensive explanation of the proposed VF-MODPSO. The MODPSO algorithm is explained in Section 3.2.1, the VF approach is detailed in Section 3.2.2, and the non-adaptive fusion technique is described in Section 3.2.3.
Multi-objective darwinian particle swarm optimization (MODPSO).
Drawing upon the principles of multi-objective optimization and the core concepts of the Darwinian Particle Swarm Optimization (DPSO) algorithm, both detailed in the previous section, we propose a new strategy for the MODPSO algorithm. The following are definitions of some of the algorithm’s fundamental terms:
- Particle: Within the framework of MODPSO, a particle serves as a potential solution to the multi-objective optimization challenge. Every particle possesses a position vector that aligns with a particular set of decision variables, along with a velocity vector that defines both the trajectory and magnitude of the particle’s movement within the solution space.
- Archive: An archive is a repository of the non-dominated solutions identified throughout the optimization procedure. It stores the best solutions discovered so far, helping to maintain diversity and direct the search towards the Pareto front.
- Selection: Selection in MODPSO typically relies on a combination of fitness and diversity. To this end, Pareto ranking is employed. This technique ranks solutions based on Pareto dominance, assigning higher ranks to non-dominated solutions. By applying Pareto ranking to the entire population, the algorithm prioritizes the selection of high-ranking solutions for reproduction. This strategy ensures that the search effort is directed towards the Pareto front, mitigating the risk of premature convergence to local optima. Pareto ranking offers several advantages: it preserves diversity within the population, facilitates efficient selection, and is well-suited for handling problems with multiple, conflicting objectives.
- Objective Functions: The target functions within a multiple objective optimization challenge represent the conflicting goals that need to be balanced. In our MODPSO algorithm, the objective functions are:
- Minimize time: This aims to reduce the running time of the convergence of particles.
- Maximize fitness: This aims to increase the fitness of the converging particles.
- In this research study, the objective is to enhance the quality of the merged image by optimizing factors including clarity, brightness, and information preservation. The goal is to generate a merged image that is both visually appealing and informative while minimizing artefacts and distortions. By carefully defining the objective functions and tuning the parameters of the MODPSO algorithm, we can effectively address multi-objective optimization problems. The fitness function is evaluated via Eqns (18) and (19) below:
where ENT is entropy, which measures the extent of rich details in the fused image; FMI is a measure of the mutual dependence between the source image and the fused image; QAB/F is the edge information similarity, which indicates the amount of edge information preserved from the source images in the fused image; and Time is the running time of the fusion process. Eq (19) expresses an inverse relationship between fitness and time: as time decreases, fitness increases proportionally, a common scenario in optimization problems where the goal is to minimize time while maximizing performance or quality.
- Termination criteria: This is determined by the maximum number of iterations that will be implemented in the experiments.
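Under the assumption that the quality terms enter Eqn (18) as a plain sum (the paper's exact weighting is not reproduced here), the two fitness relations can be sketched as:

```python
def quality_score(ent, fmi, q_abf):
    """Aggregate image-quality term in the spirit of Eqn (18): entropy
    (ENT), feature mutual information (FMI), and edge-information
    similarity (QAB/F). The paper's exact weighting may differ from
    this plain sum; treat it as an illustrative assumption."""
    return ent + fmi + q_abf

def fitness(ent, fmi, q_abf, time_s):
    """Eqn (19) expresses fitness as inversely proportional to running
    time: quality is divided by the measured fusion time in seconds,
    so a faster run with the same quality scores higher."""
    return quality_score(ent, fmi, q_abf) / time_s
```

Halving the fusion time doubles the fitness at fixed quality, which is exactly the inverse relationship the text describes.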
Variable-order fractional-order based multi-objective darwinian particle swarm optimization (VF-MODPSO).
As discussed above, the goal of this combination is to enhance the convergence speed of MODPSO. To this end, VF is incorporated into the velocity equation. Following the Grünwald-Letnikov definition, the fractional derivative with fractional coefficient ⍺ of a general signal x(t) is expressed by:
where Γ(·) is the gamma function.
Equation (20) is defined in the time domain, but we develop our novel method in the frequency domain, where the fractional differential operator may be estimated using the transfer function obtained from the Oustaloup method (see Eq (21)),
where N and M are integer constants, a_k are the related denominator coefficients, and b_k are the related numerator coefficients.
By applying the variable-order fractional calculus (VF) operator to the MODPSO algorithm, we obtain Eqn (22), in which a_k are the ⍺-related denominator coefficients, collected in a vector A, and b_k are the ⍺-related numerator coefficients, collected in a vector B. N and M are integer constants that can be taken as 2 and 20 [24], respectively, and A^T and B^T denote the transposes of A and B.
For any given order , its frequency domain transfer function can be gotten via the polynomial fitting technique via (
) and (
). Considering the five-order polynomial fitting, the polynomial fitting outcomes for the numerator coefficient and denominator coefficient are expressed in Eq 23 and Eq 24.
NB: Eq (22) is the main equation expressing the originality we added to MODPSO.
We improved the MODPSO algorithm using VF to increase the convergence rate of MODPSO for image fusion processes. This is because we need an image fusion algorithm that produces high-quality fused images with complete information and negligible noise and will be useful to medical experts during real-time image-guided surgery. The overall structure of VF-MODPSO is outlined in Algorithm 2:
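As a rough illustration of the velocity step, the following sketch shows a fractional-order velocity update in the spirit of FODPSO [40], truncated to four memory terms; in the variable-order scheme of Eq (22), the order ⍺ is additionally adapted at every iteration. All names and the truncation depth are our own assumptions, not the authors' Eq (22):

```python
import random

def fractional_velocity(v_hist, x, pbest, gbest, alpha, c1=1.5, c2=1.5):
    # One fractional-order velocity update truncated to four memory terms.
    # v_hist holds the last four velocities, most recent first; a higher
    # alpha weights recent velocities more heavily, speeding convergence.
    v0, v1, v2, v3 = v_hist
    memory = (alpha * v0
              + 0.5 * alpha * (1 - alpha) * v1
              + (1 / 6) * alpha * (1 - alpha) * (2 - alpha) * v2
              + (1 / 24) * alpha * (1 - alpha) * (2 - alpha) * (3 - alpha) * v3)
    r1, r2 = random.random(), random.random()
    return memory + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
```

When ⍺ = 1 all higher-order memory terms vanish and the update collapses to the classical PSO velocity rule.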
Fusion step.
In this work, nonadaptive pixel fusion is introduced and applied. While adaptive pixel fusion applies new adaptive weights and noise-manipulating coefficients, nonadaptive pixel fusion applies optimized weight matrices and zero noise-manipulating coefficients. This makes our method less complex, with no noise artefacts. Here, the merging of the pixels of the input multimodal medical images $I_A$ and $I_B$ takes place via Eq (25):

$F(x, y) = W_A(x, y)\, I_A(x, y) + W_B(x, y)\, I_B(x, y) \qquad (25)$

Pixels of $I_A$ and $I_B$ possessing identical x and y coordinates are merged sequentially to obtain the fused medical image. In Eq (25), $I_A(x, y)$ and $I_B(x, y)$ represent the actual pixels extracted from the input medical images, whereas $W_A(x, y)$ and $W_B(x, y)$ represent the weight values derived from the corresponding weight arrays. $F(x, y)$ stands for the final merged pixel value, which contains proportionate details from both multimodal source pixels.
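Assuming the weighted-sum reading of Eq (25) described above, the pixel merge can be sketched with NumPy (array and weight names are illustrative):

```python
import numpy as np

def fuse_pixels(img_a, img_b, w_a, w_b):
    # Nonadaptive pixel fusion: each fused pixel combines the co-located
    # pixels of the two pre-registered source images using the optimized
    # weight matrices, with zero noise-manipulating coefficients.
    return w_a * img_a + w_b * img_b

a = np.array([[100.0, 50.0]])          # toy CT patch
b = np.array([[60.0, 90.0]])           # toy MR patch
wa = np.full(a.shape, 0.5)             # example weight matrices
wb = np.full(b.shape, 0.5)
print(fuse_pixels(a, b, wa, wb))       # [[80. 70.]]
```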
4. Performance metrics
4.1. Quantitative metrics
In a multi-objective optimization problem [36], the Pareto set (PS) comprises all Pareto-optimal solutions, while the Pareto front (PF) consists of the objective values associated with these Pareto-optimal solutions. The approximation set (AS) comprises the nondominated solutions derived from the search process, whereas its approximation front (AF) consists of the objective values corresponding to these nondominated solutions. When addressing multi-objective optimization problems, two criteria are desirable: first, the AF should approach the PF as closely as possible to guarantee precision; second, the AF should disperse across the full PF as uniformly as possible (diversity) to produce a multitude of meaningful alternatives.
The solution of a multi-objective problem requires both convergence and diversity. In this work, we apply two metrics, Inverted Generational Distance (IGD) and Hyper-Volume (HV), as characteristic factors to evaluate the search performance of our VF-MODPSO algorithm. For more information about these metrics, the reader can refer to [44].
IGD measures how close the solution set is to the PF. It can assess the convergence of multi-objective evolutionary algorithms when solving multi-objective problems.
The HV can concurrently assess convergence and diversity by considering the coverage and distribution of solutions. It measures the volume of the objective-space region enclosed between a reference point and the members of the AF.
Accordingly, a small IGD and a large HV signify that the AF has come close to the PF while being dispersed as uniformly as possible across the entire PF.
Therefore, IGD and HV are the two metrics used in this work to evaluate the effectiveness of VF-MODPSO-based GC and to compare it with other image fusion methods.
Mathematical expressions of the quantitative metrics.
1. IGD

$\mathrm{IGD}(A, P^{*}) = \dfrac{1}{|P^{*}|}\sum_{v \in P^{*}} \min_{a \in A} d(v, a)$

Where:
- A: the approximation set, i.e., the set of solutions obtained by the algorithm.
- $P^{*}$: the true Pareto front, i.e., the reference set of optimal solutions.
- $|P^{*}|$: the number of points in the true Pareto front.
- $d(v, a)$: the distance between point v (from $P^{*}$) and a (from A), often computed using the Euclidean distance:

$d(v, a) = \sqrt{\sum_{k=1}^{m} (v_k - a_k)^2}$

Where m is the number of objectives.
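As a sketch, IGD can be computed directly from its definition (the toy two-objective fronts below are hypothetical):

```python
from math import dist  # Euclidean distance, Python 3.8+

def igd(approx_front, true_front):
    # Inverted Generational Distance: average, over every reference point
    # of the true Pareto front, of its distance to the closest point of
    # the approximation front. Lower values indicate better convergence.
    return sum(min(dist(v, a) for a in approx_front)
               for v in true_front) / len(true_front)

true_pf = [(0.0, 1.0), (0.5, 0.5), (1.0, 0.0)]
approx = [(0.1, 1.0), (0.5, 0.6), (1.0, 0.1)]
print(igd(approx, true_pf))  # approximately 0.1
```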
2. HV

The hypervolume HV of a set A is calculated as follows:

$\mathrm{HV}(A, r) = \int_{\Lambda(A, r)} dz$

Where:
- A: the approximation set, i.e., the set of solutions obtained by the algorithm.
- r: a user-defined reference point.
- $\Lambda(A, r)$: the region in the objective space dominated by A and bounded by r.
- $dz$: the infinitesimal volume element.
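For two objectives, and assuming minimization, the dominated region is a union of rectangles, so HV can be computed with a simple left-to-right sweep. This sketch is our own illustration, not the authors' implementation:

```python
def hypervolume_2d(front, ref):
    # Hypervolume of a 2-objective minimization front: the area dominated
    # by the front and bounded above by the reference point ref. Points
    # are sorted by the first objective and swept left to right.
    pts = sorted(front)
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in pts:
        if f1 >= ref[0] or f2 >= prev_f2:
            continue  # point dominated or outside the reference box
        hv += (ref[0] - f1) * (prev_f2 - f2)
        prev_f2 = f2
    return hv

# One point (1, 1) dominates the unit square below ref = (2, 2)
print(hypervolume_2d([(1.0, 1.0)], (2.0, 2.0)))  # 1.0
```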
4.2. Qualitative metrics
The efficacy of the suggested strategy is assessed using several widely recognized evaluation metrics: entropy (ENT), mutual information (MI), edge information similarity (Qabf), and the structural similarity index (SSIM) (for mathematical details, refer to [45–47]). Entropy (ENT) quantifies the degree of intricate detail in the fused image. Mutual information (MI) quantifies the interdependence between the source image and the fused image. Edge information similarity (Qabf) quantifies how much of the edge information from the original image is preserved in the fused image. The structural similarity index (SSIM) measures the fidelity of structural information from the input images retained in the fused image. For all these indicators, higher values signify superior performance. The duration of the fusion process for each compared approach is also assessed; the shorter the runtime, the better the method's performance.
Mathematical expressions of qualitative metrics.
1. ENT

$\mathrm{ENT} = -\sum_{x} P(x)\,\log_2 P(x)$

Where:
- P(x) is the probability of occurrence of the event x.
- Σ represents the summation over all possible events.
- log2 is the base-2 logarithm.
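A direct sketch of ENT from a gray-level histogram (the histogram values are illustrative):

```python
from math import log2

def entropy(hist):
    # Shannon entropy (ENT) of an image from its gray-level histogram:
    # higher entropy indicates richer detail in the fused image.
    total = sum(hist)
    return -sum((c / total) * log2(c / total) for c in hist if c > 0)

# A uniform 4-bin histogram has maximal entropy log2(4) = 2 bits
print(entropy([10, 10, 10, 10]))  # 2.0
```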
2. MI

The formula for Mutual Information (MI) between two random variables x and y is:

$\mathrm{MI}(x, y) = H(x) - H(x \mid y)$

Where:
- $\mathrm{MI}(x, y)$ represents the Mutual Information between x and y.
- $H(x)$ is the entropy of x.
- $H(x \mid y)$ is the conditional entropy of x given y.
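Using the standard identity MI(x, y) = H(x) - H(x|y) = Σ p(x, y) log2(p(x, y) / (p(x) p(y))), MI can be sketched from a joint gray-level histogram (the 2×2 histograms below are toy examples):

```python
from math import log2

def mutual_information(joint):
    # MI from a joint histogram: sum over the joint distribution of
    # p(x,y) * log2(p(x,y) / (p(x) * p(y))), equivalent to H(x) - H(x|y).
    total = sum(sum(row) for row in joint)
    px = [sum(row) / total for row in joint]
    py = [sum(col) / total for col in zip(*joint)]
    mi = 0.0
    for i, row in enumerate(joint):
        for j, c in enumerate(row):
            if c > 0:
                pxy = c / total
                mi += pxy * log2(pxy / (px[i] * py[j]))
    return mi

# Perfectly dependent variables: MI equals the marginal entropy (1 bit here)
print(mutual_information([[5, 0], [0, 5]]))  # 1.0
```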
3. SSIM

$\mathrm{SSIM}(F, I) = \dfrac{(2\mu_F\mu_I + C_1)(2\sigma_{FI} + C_2)}{(\mu_F^2 + \mu_I^2 + C_1)(\sigma_F^2 + \sigma_I^2 + C_2)}$

Where:
- F is the fused image
- I is the input image
- $\mu_F$ and $\mu_I$ are the mean intensities of images F and I, respectively
- $\sigma_F^2$ and $\sigma_I^2$ stand for the variances of images F and I, respectively
- $\sigma_{FI}$ is the covariance of images F and I
- $C_1$ and $C_2$ are constants
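A global (single-window) sketch of SSIM follows; standard SSIM averages the same expression over local windows, and the constants below follow the common choice (0.01·255)² and (0.03·255)², which may differ from the paper's settings:

```python
import numpy as np

def ssim_global(f, i, c1=6.5025, c2=58.5225):
    # Single-window SSIM between fused image f and input image i,
    # comparing luminance (means), contrast (variances) and structure
    # (covariance) in one global statistic.
    mu_f, mu_i = f.mean(), i.mean()
    var_f, var_i = f.var(), i.var()
    cov = ((f - mu_f) * (i - mu_i)).mean()
    return ((2 * mu_f * mu_i + c1) * (2 * cov + c2)) / \
           ((mu_f ** 2 + mu_i ** 2 + c1) * (var_f + var_i + c2))

a = np.arange(16.0).reshape(4, 4)
print(ssim_global(a, a))  # 1.0 for identical images
```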
5. Results and discussion
5.1. Experimental configuration
To conduct our experiments, fourteen groups of medical images (CT and MR scans) are utilized to assess the precision of the proposed approach. Each medical image group consists of two images. In total, twenty-eight images were deployed as a dataset for our experiments. These can be seen in Fig 3. Images A represent the input CT images, and images B represent the input MR images, which will be fused.
All the images in the dataset share the same pixel resolution and are correctly pre-registered. The source images can be found at http://www.med.harvard.edu/aanlib/. MATLAB is used for this simulation with the following specifications: R2018a (version 9.4.0.813654, 64-bit), running on a 3.20 GHz processor with 4 GB of RAM under Windows 7.
To assess the efficacy of the suggested strategy (VF-MODPSO-based MMIF), the parameters are set according to the recommendations mentioned in the related works [8–10]. Comparative experiments were conducted with eight other image fusion algorithms on five test instances [40,41], using several performance metrics. The test functions are the optimization functions usually deployed when testing multi-objective optimization algorithms; they consist of minimizing several well-known functions, namely Bohachevsky 1, Colville, Drop wave, Easom, and Rastrigin. More details can be found in [40,41]. The eight image fusion algorithms include GA + LGF-SML [7], GWO+OLTP [8], FBSO+HWD [9], QPSO+CNN [10], DE + DNN [11], GA + DL-CNN [12], MFOA+PCNN [13], and DE+PCNN [35]. All the compared algorithms were implemented. Termination criteria of the proposed optimization algorithm are similar to those used in [36]: Maximum Iterations: 1000; Hypervolume Convergence Threshold: 0.01; Population Diversity Threshold: 0.2; Time Limit: 3600 seconds (1 hour). The fractional-order multi-objective Darwinian particle swarm optimization (FOMODPSO) is an advancement of the fractional-order Darwinian particle swarm optimization (FODPSO), which itself is an extension of the standard Darwinian particle swarm optimization (DPSO) [40]. The most efficient results of VF-based DPSO are obtained by varying the values of the fractional coefficient (⍺) [24]. Tuning this parameter is essential to obtaining robust results: the greater the order of the fractional coefficients, the faster the convergence rate. In this work, the order is varied within the range [0.6–10] [43].
5.2. Experimental results
Quantitative evaluation results and analysis.
The basis points for computing the HV in the subsequent experiments are established as follows:
The proposed method was tested multiple times; the experiments yielded stable (unchanged) results when the number of runs reached 31, so this number was set to 31. The IGD and HV (mean and standard deviation) measurements of the merged images for each of the test instances have been calculated and are presented in Table 1, which displays the results of VF-MODPSO + GC and the other image fusion methods. The quality indicators in Table 1 represent the mean, with the standard deviation of each of the IGD and HV values indicated to the right of the mean. Rankings are displayed in square brackets adjacent to the quality measurements, on the right side. To indicate the outcome of the comparison of VF-MODPSO + GC with a referenced method, we utilize the symbols “˫” for better performance, “≬” for worse performance, and “∾” for similar performance. The mean ranking (MR) is derived for each method across all test instances. The counts of “˫”, “≬”, and “∾” for every contrasted method are calculated to assess the overall efficiency of the image fusion methods.
Qualitative evaluation results and analysis.
To perform the qualitative evaluation, a Support Vector Machine (SVM) algorithm was employed. It was chosen for its effectiveness in handling high-dimensional data and achieving robust performance in binary classification tasks. The classification is performed to show whether the fused image exhibits high quality (Class 1) or low quality (Class 0). To create the labeled dataset, fused images were assigned to Class 1 if they achieved a score above 0.5 and exhibited no visible artifacts, as determined by visual inspection by two independent reviewers. Fused images were assigned to Class 0 if they had a score below 0.5 or showed significant blurring or misalignment. Disagreements between reviewers were resolved through discussion and consensus. About 66.67% of the extracted dataset was used for training and 33.33% for validation. The image features used as input for the SVM include edge orientation, edge magnitude, edge density, and texture features. The SVM classifier employed an RBF kernel with hyperparameters (C and gamma) optimized using grid search with 3-fold cross-validation. The optimal hyperparameter values were found to be C = 1.0 and gamma = 0.01.
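The classifier setup described above can be sketched with scikit-learn; the synthetic features and labels below are purely illustrative stand-ins for the extracted edge and texture features:

```python
from sklearn.svm import SVC
from sklearn.model_selection import GridSearchCV, train_test_split
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical 4-dim feature vectors (edge orientation, magnitude,
# density, texture) for 60 fused images, with binary quality labels.
X = rng.normal(size=(60, 4))
y = (X[:, 0] + X[:, 2] > 0).astype(int)  # synthetic labels for illustration

# 66.67% training / 33.33% validation split, as in the text
X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=1/3, random_state=0)

# RBF-kernel SVM with C and gamma tuned by grid search, 3-fold CV
grid = GridSearchCV(SVC(kernel="rbf"),
                    {"C": [0.1, 1.0, 10.0], "gamma": [0.001, 0.01, 0.1]},
                    cv=3)
grid.fit(X_tr, y_tr)
print(grid.best_params_, grid.score(X_val, y_val))
```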
The accuracy, sensitivity, specificity, and F1 score are chosen as the assessment metrics to evaluate the efficiency of the suggested approach against the state-of-the-art methods used in the previous comparative study.
The experimental results for each of the cross-validation stages are shown in Table 2, while the detailed experimental outcomes of all the other stages are made available at:
https://github.com/GoddessChysomme/FOMODPSO/issues/1
https://github.com/GoddessChysomme/FOMODPSO/issues/2
Moreover, the codes used in this work are open-sourced and can be found at:
https://github.com/GoddessChysomme/FOMODPSO/blob/main/fusion
https://github.com/GoddessChysomme/FOMODPSO/blob/main/xfusmaxmin
https://github.com/GoddessChysomme/FOMODPSO/blob/main/xfusmean
The in-depth analysis reveals that our proposed method yields the best fusion results with an average accuracy of 0.895, a sensitivity of 0.684, a specificity of 0.764, and an F1 score of 0.677.
6. Discussion
Table 1 shows that the suggested VF-MODPSO + GC method performs better than the other methods on the suggested metrics. Specifically, VF-MODPSO + GC has 79 instances where its mean metric values are better than those of the other methods and 9 instances where it has the best (top) mean metric values among all methods. This indicates a high-quality performance in achieving superior fused images. Additionally, VF-MODPSO + GC consistently achieved the best mean rank (MR), except for the IGD values, where DE+PCNN performed better. Based on the MR, the image fusion methods are ranked as follows: VF-MODPSO + GC, DE+PCNN, MFOA+PCNN, GA + DL-CNN, DE + DNN, QPSO+CNN, FBSO+HWD, GWO+OLTP, and GA + LGF-SML. Among all the image fusion methods, VF-MODPSO + GC exhibits outstanding efficiency on the test instances based on the measures of diversity (search quality) and convergence.
The image fusion results of various multi-objective evolutionary approaches, including the suggested strategy, on fourteen groups of multimodal medical images are represented in Fig 4. The GA + LGF-SML approach has good image detail preservation ability; however, the brightness of the fused images needs improvement. The GWO+OLTP multimodal image fusion method has excellent image structural detail preservation ability; however, there is minimal contrast in the fusion outcomes. The FBSO+HWD method yields fused images of good visual quality; however, edge preservation is not good. The QPSO+CNN multimodal image fusion method has good information retention ability; however, the image brightness can be improved. The DE + DNN method shows excellent edge preservation; however, the fused image results are not bright enough.
The GA + DL-CNN method yields good visual quality fused images, but edge preservation is not as good. The MFOA+PCNN multimodal image fusion method yields good visual quality fused image results; however, the edge preservation ability needs to be improved.
The DE+PCNN yields fused image results with well-preserved structural and textural details; however, the image brightness needs improvement. Fig 4 shows that the proposed fusion method gives fused image results of very good visual quality, efficient edge preservation ability, and no trace of noisy artefacts.
Furthermore, the speed of the fusion process of each comparative method was also evaluated and contrasted with that of the proposed VF-MODPSO + GC method, as shown in Table 3. Our method has a shorter run time than the other comparative approaches, including the DE+PCNN and GA + DL-CNN methods. The reason is that our method has low time consumption due to the increased optimization rate of the weights of the fusion rules. In fact, increasing the convergence rate of multi-objective Darwinian particle swarm optimization via a variable-order fractional calculus operator, and applying the improved optimization algorithm to the weights of the fusion rules of the gradient compass in the spatial domain, reduces the average time consumption of our method compared with those of the other comparative methods.
Despite its high performance, we recommend analyzing the scalability of the proposed method in terms of the image modalities that are fused, as well as the size and depth of the input images. The current experiments involve a very small dataset of 14 pairs of 2D images with only two modalities, CT and MRI. We recommend extending the numerical experiments to new datasets with more image modalities (such as PET, SPECT, and ultrasound), larger images, and both 2D and 3D images to assess the robustness of the approach. One potential limitation of the proposed VF-MODPSO method is its sensitivity to high-dimensional data, which may lead to increased computational complexity and processing time, potentially affecting real-time applications. Additionally, the performance of the method may degrade when dealing with highly heterogeneous image modalities that exhibit significant differences in contrast, resolution, or noise levels. The fusion process could also be influenced by parameter tuning, where suboptimal choices may result in poor convergence or loss of critical image details. Further investigations on adaptive parameter selection and computational efficiency improvements, such as parallel processing or hardware acceleration, could help mitigate these challenges and ensure broader applicability of the method.
7. Conclusions
The field of medical imaging is considered incomplete without MMIF, as medical experts and researchers usually need multimodal fusion results for clinical diagnoses, treatment planning, and various medical studies. This study proposed a new method for MMIF that relies on the VF-MODPSO-based GC in the spatial domain. We first extracted edge details from the source images in eight different directions using a Sobel gradient compass. The constructed edge maps are then used to obtain two high-resolution medical images. We applied the statistical characteristics of the comprehensive medical data to build weight matrices, which we then used to perform the merging. For the optimization of the weight matrices, we proposed variable-order fractional-order multi-objective Darwinian particle swarm optimization. The improved convergence rate of VF-MODPSO contributed to enhancing the suggested method's search efficiency. To evaluate the proposed approach, fourteen sets of CT/MR scans were used. In addition, qualitative, visual, and quantitative evaluation metrics were employed to show how well the suggested method performs. Furthermore, the comparative study was conducted against eight multi-objective evolution-based MMIF methods.
The results showed the superiority of our method over the other methods in terms of search performance, visual quality, and fusion performance parameters. The proposed VF-MODPSO method could be adapted for video fusion.
Acknowledgments
The authors would like to acknowledge the support of the Artificial Intelligence and Data Analytics Lab (AIDA), PSU, Riyadh, KSA. The authors would also like to thank Prince Sultan University for its support and for paying the Article Processing Charges (APC) of this publication.
References
- 1. Basu S, Singhal S, Singh D. A Systematic Literature Review on Multimodal Medical Image Fusion. Multimed Tools Appl. 2023;83(6):15845–913.
- 2. A. S, E. F. A survey on deep learning techniques for medical image fusion. IJEIT. 2024;12(1):7–16.
- 3. Khan SU, Ullah I, Ullah N, Shah S, Affendi ME, Lee B. A novel CT image de-noising and fusion based deep learning network to screen for disease (COVID-19). Sci Rep. 2023;13(1):6601. pmid:37088788
- 4. Ghandour C, El-Shafai W, El-Rabaie E-SM, Elshazly EA. Applying medical image fusion based on a simple deep learning principal component analysis network. Multimed Tools Appl. 2023;83(2):5971–6003.
- 5. Behrouzi Y, Basiri A, Pourgholi R, Kiaei AA. Fusion of medical images using Nabla operator; Objective evaluations and step-by-step statistical comparisons. PLoS One. 2023;18(8):e0284873. pmid:37585476
- 6. Dong L, Wang J, Zhao L, Zhang Y, Yang J. ICIF: Image fusion via information clustering and image features. PLoS One. 2023;18(8):e0286024. pmid:37531364
- 7. Raha R, Sengupta A, Dhabal S. Medical Image Fusion using PCNN Optimized by Whale Optimization Algorithm. In: 2020 IEEE 1st International Conference for Convergence in Engineering (ICCE). IEEE; 2020. 374–8. https://doi.org/10.1109/icce50343.2020.9290504
- 8. Nie R, Cao J, Zhou D, Qian W. Multi-source information exchange encoding with PCNN for medical image fusion. IEEE Trans Circuits Syst Video Technol. 2021;31(3):986–1000.
- 9. Naseem S, Mahmood T, Khan AR, Farooq U, Nawazish S, Alamri FS, et al. Image Fusion Using Wavelet Transformation and XGboost Algorithm. CMC. 2024;79(1):801–17.
- 10. Das M, Gupta D, Radeva P, Bakde AM. Multi‐scale decomposition‐based CT‐MR neurological image fusion using optimized bio‐inspired spiking neural model with meta‐heuristic optimization. Int J Imaging Syst Tech. 2021;31(4):2170–88.
- 11. Zhang H, Cai Z, Xiao L, Heidari AA, Chen H, Zhao D, et al. Face image segmentation using boosted grey wolf optimizer. Biomimetics (Basel). 2023;8(6):484. pmid:37887615
- 12. Yao L, Yang J, Yuan P, Li G, Lu Y, Zhang T. Multi-strategy improved sand cat swarm optimization: global optimization and feature selection. Biomimetics (Basel). 2023;8(6):492. pmid:37887623
- 13. Ghorai C, Shakhari S, Banerjee I. A SPEA-Based Multimetric Routing Protocol for Intelligent Transportation Systems. IEEE Trans Intell Transport Syst. 2021;22(11):6737–47.
- 14. Zhu J, Wang X, Huang H, Cheng S, Wu M. A NSGA-II Algorithm for Task Scheduling in UAV-Enabled MEC System. IEEE Trans Intell Transport Syst. 2022;23(7):9414–29.
- 15. Li L, Chang L, Gu T, Sheng W, Wang W. On the Norm of Dominant Difference for Many-Objective Particle Swarm Optimization. IEEE Trans Cybern. 2021;51(4):2055–67. pmid:31380777
- 16. Xu L, Muhammad A, Pu Y, Zhou J, Zhang Y. Fractional-order quantum particle swarm optimization. PLoS One. 2019;14(6):e0218285. pmid:31220152
- 17. Amir F, Farajzadeh A, Alzabut J. An improved proximal method with quasi-distance for nonconvex multiobjective optimization problem. J Appl Anal. 2022;28(2):333–40.
- 18. Du S, Fan W, Liu Y. A novel multi-agent simulation based particle swarm optimization algorithm. PLoS One. 2022;17(10):e0275849. pmid:36227927
- 19. Ghafour K. Multi-objective continuous review inventory policy using MOPSO and TOPSIS methods. Computers & Operations Research. 2024;163:106512.
- 20. Ahilan A, Chandra Babu G, Senthil Murugan N, Parthasarathy, Manogaran G, Raja C, et al. Segmentation by fractional order darwinian particle swarm optimization based multilevel thresholding and improved lossless prediction based compression algorithm for medical images. IEEE Access. 2019;7:89570–80.
- 21. Yang Y, Cao S, Huang S, Wan W. Multimodal medical image fusion based on weighted local energy matching measurement and improved spatial frequency. IEEE Trans Instrum Meas. 2021;70:1–16.
- 22. Tang L, Tian C, Xu K. Exploiting quality-guided adaptive optimization for fusing multimodal medical images. IEEE Access. 2019;7:96048–59.
- 23. Kumar M, Ranjan N, Chourasia B. Hybrid Methods of Contourlet Transform and Particle Swarm Optimization for Multimodal Medical Image Fusion. In: 2021 International Conference on Artificial Intelligence and Smart Systems (ICAIS). 2021. 945–51. https://doi.org/10.1109/icais50930.2021.9396021
- 24. Zhang B, Jiang C, Hu Y, Chen Z. Medical Image Fusion Based on Densely Connected Convolutional Networks. In: 2021 IEEE 5th Advanced Information Technology, Electronic and Automation Control Conference (IAEAC). 2021. 2164–70. https://doi.org/10.1109/iaeac50856.2021.9390712
- 25. Challa UK, Yellamraju P, Bhatt JS. A Multi-class Deep All-CNN for Detection of Diabetic Retinopathy Using Retinal Fundus Images. 2019. pp. 191–9.
- 26. Irshad MT, Rehman HU. Gradient compass-based adaptive multimodal medical image fusion. IEEE Access. 2021;9:22662–70.
- 27. Rai S, Bhatt JS, Kumar Patra S. A Strictly Bounded Deep Network for Unpaired Cyclic Translation of Medical Images. In: 2023 IEEE Statistical Signal Processing Workshop (SSP). 2023. 61–5. https://doi.org/10.1109/ssp53291.2023.10207960
- 28. Deshpande VS, Bhatt JS. Bayesian Deep Learning for Deformable Medical Image Registration. Lecture Notes in Computer Science. Springer International Publishing. 2019. p. 41–9. https://doi.org/10.1007/978-3-030-34872-4_5
- 29. Duan J, Mao S, Jin J, Zhou Z, Chen L, Chen CLP. A novel GA-based optimized approach for regional multimodal medical image fusion with superpixel segmentation. IEEE Access. 2021;9:96353–66.
- 30. Das M, Gupta D, Radeva P, Bakde AM. Optimized multimodal neurological image fusion based on low-rank texture prior decomposition and super-pixel segmentation. IEEE Trans Instrum Meas. 2022;71:1–9.
- 31. Bhardwaj J, Nayak A. Medical Image Fusion Using Lifting Wavelet and Fractional Bird Swarm Optimization. Advances in Intelligent Systems and Computing. Springer Singapore. 2021. p. 277–90. https://doi.org/10.1007/978-981-16-2123-9_21
- 32. Mergin AA, Premi MSG. Convolutional neural networks (CNN) with quantum-behaved particle swarm optimization (qpso)-based medical image fusion. Int J Image Grap. 2022;24(05).
- 33. Kaur M, Singh D. Multi-modality medical image fusion technique using multi-objective differential evolution based deep neural networks. J Ambient Intell Humaniz Comput. 2021;12(2):2483–93. pmid:32837596
- 34. SJ P, Prakash HN. A Features fusion approach for neonatal and pediatrics brain tumor image analysis using genetic and deep learning techniques. Int J Onl Eng. 2021;17(11):124–40.
- 35. Das M, Gupta D, Radeva P, Bakde AM. Multimodal image sensor fusion in a cascaded framework using optimized dual channel pulse coupled neural network. J Ambient Intell Human Comput. 2022;14(9):11985–2004.
- 36. Deb K. Multi-objective optimisation using evolutionary algorithms: an introduction. Multi-objective Evolutionary Optimisation for Product Design and Manufacturing. Springer London. 2011. p. 3–34. https://doi.org/10.1007/978-0-85729-652-8_1
- 37. Tillett JC, Rao TM, Sahin F, Rao R. Darwinian particle swarm optimization. Indian International Conference on Artificial Intelligence. 2005. pp. 1474–87.
- 38. Mahima NB, Padmavathi MV, Karki. Feature extraction using DPSO for medical image fusion based on NSCT. In: 2017 2nd IEEE International Conference on Recent Trends in Electronics, Information & Communication Technology (RTEICT). Bangalore, India; 2017. pp. 265–9.
- 39. Sabatier J, Agrawal OP, Machado JAT. Advances in Fractional Calculus. Springer Netherlands. 2007. https://doi.org/10.1007/978-1-4020-6042-7
- 40. Couceiro MS, Rocha RP, Ferreira NMF, Machado JAT. Introducing the fractional-order Darwinian PSO. SIViP. 2012;6(3):343–50.
- 41. Solteiro Pires EJ, Tenreiro Machado JA, de Moura Oliveira PB, Boaventura Cunha J, Mendes L. Particle swarm optimization with fractional-order velocity. Nonlinear Dyn. 2010;61(1–2):295–301.
- 42. Huang J, Shu Q, Zhu X, Shi X, Zhou L, Liu H. A fast frequency domain approximation method for variable order fractional calculus operator based on polynomial fitting. In: 2018 37th Chinese Control Conference (CCC). 2018. 10180–5. https://doi.org/10.23919/chicc.2018.8483651
- 43. Puchalski B. Neural Approximators for Variable-Order Fractional Calculus Operators (VO-FC). IEEE Access. 2022;10:7989–8004.
- 44. Li X, Song S, Zhang H. Evolutionary multiobjective optimization with clustering-based self-adaptive mating restriction strategy. Soft Comput. 2018;23(10):3303–25.
- 45. Li J, Guo X, Lu G, Zhang B, Xu Y, Wu F, et al. DRPL: Deep Regression Pair Learning For Multi-Focus Image Fusion. IEEE Trans Image Process. 2020;10.1109/TIP.2020.2976190. pmid:32142440
- 46. Singh S, Anand RS. Multimodal Medical Image Fusion Using Hybrid Layer Decomposition With CNN-Based Feature Mapping and Structural Clustering. IEEE Trans Instrum Meas. 2020;69(6):3855–65.
- 47. Hermessi H, Mourali O, Zagrouba E. Convolutional neural network-based multimodal image fusion via similarity learning in the shearlet domain. Neural Comput & Applic. 2018;30(7):2029–45.