Abstract
Infrared and visible image fusion can generate a fused image with clear texture and prominent targets under extreme conditions. This capability is important for all-day, all-weather detection and other tasks. However, most existing fusion methods for extracting features from infrared and visible images are based on convolutional neural networks (CNNs). These methods often fail to make full use of the salient objects and texture features in the raw images, leading to problems such as insufficient texture detail and low contrast in the fused images. To this end, we propose an unsupervised end-to-end Fusion Decomposition Network (FDNet) for infrared and visible image fusion. Firstly, we construct a fusion network that extracts gradient and intensity information from the raw images using multi-scale layers, depthwise separable convolution, and an improved convolutional block attention module (I-CBAM). Secondly, as FDNet extracts features based on the gradient and intensity information of the images, gradient and intensity losses are designed accordingly. The intensity loss adopts an improved Frobenius norm to adjust the weighting between the fused image and the two raw images so as to select more effective information. The gradient loss introduces an adaptive weight block that determines the optimization objective based on the richness of texture information at the pixel scale, ultimately guiding the fused image to contain more abundant texture information. Finally, we design a decomposition network of single- and dual-channel convolutional layers, which keeps the decomposed images as consistent as possible with the input raw images, forcing the fused image to contain richer detail information. Compared with various representative image fusion methods, our proposed method not only yields good subjective visual quality, but also achieves advanced fusion performance in objective evaluation.
Citation: Di J, Ren L, Liu J, Guo W, Zhange H, Liu Q, et al. (2023) FDNet: An end-to-end fusion decomposition network for infrared and visible images. PLoS ONE 18(9): e0290231. https://doi.org/10.1371/journal.pone.0290231
Editor: Gulistan Raja, University of Engineering & Technology, Taxila, PAKISTAN
Received: April 5, 2023; Accepted: July 31, 2023; Published: September 18, 2023
Copyright: © 2023 Di et al. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Data Availability: All datasets are available in the TNO public database (https://figshare.com/articles/dataset/TNO_Image_Fusion_Dataset/1008029).
Funding: Jing Di received funding from the Science and Technology Plan Foundation of Gansu Province of China (grant number 22JR5RA360). Jing Lian received funding from the National Natural Science Foundation of China (grant number 62061023) and the Distinguished Young Scholars of Gansu Province of China (grant number 21JR7RA345).
Competing interests: The authors have declared that no competing interests exist.
1 Introduction
Image fusion methods use suitable feature extraction methods and fusion strategies to generate a single image containing key image information. These methods take two or more raw images, which provide complementary and redundant characteristics. In the realm of image fusion, one of the most important topics is infrared and visible image fusion [1], which can effectively extract the complementary and redundant information from the raw images and combine it into a high-quality, stable, and informative image. It has critical image processing applications, such as remote sensing [2, 3], target detection [4, 5], security surveillance [6], medical imaging [7, 8], and military applications [9].
Infrared and visible image fusion methods can be broadly divided into two categories: traditional methods and deep learning-based methods [10–13]. Traditional methods typically accomplish image fusion in the spatial or frequency domain using corresponding mathematical transformations, such as the wavelet transform [14], multiscale transforms [15, 16], and sparse representation [17]. However, in the fusion stage, all these methods require manually designed, complex fusion rules. Deep learning-based methods extract and combine image features based on the strong feature learning capability of neural networks, and can be classified into supervised learning-based methods and unsupervised learning-based methods. Liu et al. [18] adopted a convolutional neural network (CNN) for image fusion and made significant progress compared with traditional algorithms, but the CNN requires supervised training. For infrared and visible image fusion tasks, usable labeled data cannot be generated; in other words, it is impossible to artificially construct reference fusion images for supervised training. To address this problem, Li et al. [19] proposed using a pre-trained VGG network for fusing infrared and visible images. This algorithm enables the extraction and fusion of multi-level deep features from the source images. Later, ResNet-50 [20] was used to extract and fuse deep features from the source images. However, a significant drawback of these models is their reliance on pre-trained CNN models as offline feature extractors, which prevents adaptive extraction and fusion of features from the source images. Subsequently, scholars designed end-to-end network frameworks specifically for image fusion. Prabhakar et al. [21] proposed an unsupervised end-to-end convolutional neural network learning framework, which, unlike other image fusion methods, does not require manually designed complex fusion strategies. This framework offers greater flexibility and versatility than previous approaches, but its performance is not optimal for specific image fusion tasks.
To solve the above problems, we propose a Fusion Decomposition Network called FDNet to achieve infrared and visible image fusion. Fig 1 shows a pair of infrared and visible images and the corresponding fused images generated by deep learning-based methods and our proposed FDNet. The proposed network has two main aspects. On the one hand, considering the characteristics of infrared and visible images derived from different sensors, we use multi-scale layers, depthwise separable convolution, and an improved Convolutional Block Attention Module (I-CBAM) to create a double-branch network framework that extracts the gradient and intensity information of the related images, and we design a new loss function representing the gradient and intensity information of each image. On the other hand, we consider not only the image fusion process, but also the image decomposition process from the fusion result back to the raw images. Accordingly, we design single- and double-channel convolutional layers to maintain consistency between the decomposition results and the corresponding raw images. As a result, the image fusion results contain much more detailed information.
From left to right: the infrared image, the visible image, and the fusion results of the CNN [18], the Deeplearning approach [19], the ResNet50 approach [20], and our proposed FDNet.
The contributions of our work consist of the following four main aspects:
- We propose a novel deep learning-based method called FDNet for fusing infrared and visible images. Compared with traditional image fusion methods, our approach completes the image fusion task without manually setting activity level measurements or fusion rules. The overall method performs both the fusion and decomposition stages simultaneously. The fusion network uses a double-branch architecture for feature extraction, including multi-scale layers, depthwise separable convolution, and I-CBAM. The decomposition network is composed of single- and double-channel convolutional layers, which makes the fusion result contain more scene detail information and improves the fusion performance.
- For the shallow feature extraction step, we design multi-scale convolutional structures to extract image feature information with different receptive field sizes for infrared and visible images. This effectively solves the problem of insufficient feature extraction with a single-scale convolution kernel. It not only enriches the multi-scale representation of the processed image, but also accurately extracts the image features of object regions and improves the shallow feature extraction ability.
- For multi-scale deep feature fusion, we design a depthwise separable convolutional structure, which considers channel information and spatial information of the image regions separately. The depthwise convolution and pointwise convolution operations keep the size of the feature map unchanged while deepening the network, improve the network's expressive ability, and build a lightweight network.
- We propose a novel Frobenius norm loss function, an adaptive gradient loss function, and a structural similarity loss function between the decomposed fused image and the raw image, which together guide the network to generate the desired fusion results.
The remainder of the paper is structured as follows: Section 2 reviews related work. Section 3 presents the overall framework, network architecture, and loss functions. Section 4 conducts experimental analyses, algorithm comparisons, and ablation experiments. Section 5 concludes the paper and suggests future work.
2 Relevant works
2.1 Infrared and visible image fusion
With the emergence of various methods, image fusion techniques have made significant advancements. Currently, the most popular image fusion methods are deep learning-based, which can be further classified into supervised learning-based and unsupervised learning-based methods. The lack of ground-truth fused images is the main challenge for supervised learning-based fusion methods. DeepFuse was the first image fusion method based on unsupervised learning, consisting of an encoding step, an image fusion step, and a decoding step. As a general image fusion framework, DeepFuse's fusion performance on specific problems is not good enough. Subsequently, Li et al. [22] proposed DenseFuse, which incorporates an encoder-decoder structure with DenseBlocks and better preserves the original image information. Li et al. [23] proposed NestFuse, a method derived from DenseFuse, to retain more detailed features and provide more infrared target information. However, finding an effective fusion strategy remains difficult. To address the issue of hand-designed fusion strategies, Ma et al. [24] proposed the FusionGAN framework based on a generative adversarial network. This framework utilizes a generator to extract and combine meaningful information from the raw images. The purpose of the discriminator in FusionGAN is to force the fused image to contain more detailed information from the visible image. However, the discriminator network cannot fully preserve image detail information. Zhang et al. [25] proposed a novel generative adversarial network called GAN-FM, specifically designed to retain more detailed image information. In this network, a full-scale skip-connected generator is applied to extract shallow features, and two Markovian discriminators fully retain the valid information in the infrared and visible images by playing adversarial games with the generator. In addition, a novel intensity masking generative adversarial network (IMGAN) [26] and an unsupervised continual-learning generative adversarial network (UIFGAN) [27] were designed to complement multimodal image information; however, they fail to integrate the extracted features efficiently. Xu et al. [12] introduced attention mechanisms into the fusion network for feature extraction, while Liu et al. [28] proposed an Attention-guided and Wavelet-constrained Generative Adversarial Network for infrared and visible image fusion (AWFGAN) based on Generative Adversarial Nets (GAN), which better preserves important information of the raw images.
2.2 Attention mechanism
The attention mechanism plays an important role in the human visual system and has achieved significant success in the image processing field. Attention methods focus selectively on regions of interest by reassigning the weights of input sequences [29]. The attention mechanism has many image processing applications, including target detection [30], image enhancement [31], and emotion recognition [32], and it can be divided into local attention, soft attention, and hard attention [33] according to how it is realized. The soft-attention mechanism can be further subdivided into channel attention, spatial attention, and their combined module. The Convolutional Block Attention Module (CBAM) is a typical joint module: its spatial attention model focuses only on important information regions to reduce resource consumption, and its channel attention model allocates channel resources effectively by considering the relationship between feature map channels. Fig 2 shows the block diagram of CBAM.
The related calculating equations about CBAM are written as follows:
F′ = MC(F) ⊗ F (1)

F″ = MS(F′) ⊗ F′ (2)

MC(F) = σ(W1(W0(AvgPool(F))) + W1(W0(MaxPool(F)))) (3)

MS(F′) = σ(f7×7([AvgPool(F′); MaxPool(F′)])) (4)

where ⊗ denotes the multiplication operation of corresponding elements, F denotes the input feature map, F′ denotes the output of the channel attention mechanism, and F″ denotes the output of the spatial attention mechanism. MC(F) denotes the output weights of F based on channel attention, and MS(F′) denotes the output weights of F′ based on spatial attention. σ denotes the sigmoid function, and W0 and W1 denote the weight values of the MLP. AvgPool(·) and MaxPool(·) denote the average pooling feature and the maximum pooling feature, respectively, and f7×7 denotes a convolution operation with a filter size of 7 × 7.
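For illustration, the following is a minimal TensorFlow sketch of the CBAM computation described by Eqs (1)–(4); the MLP reduction ratio and the functional construction of layers are illustrative assumptions rather than the original implementation:

```python
import tensorflow as tf

def channel_attention(F, ratio=8):
    """M_C(F), Eq (3): shared MLP (W0, W1) applied to the average- and
    max-pooled channel descriptors, followed by a sigmoid."""
    c = F.shape[-1]
    w0 = tf.keras.layers.Dense(c // ratio, activation='relu')  # W0 (shared)
    w1 = tf.keras.layers.Dense(c)                              # W1 (shared)
    avg = tf.reduce_mean(F, axis=[1, 2], keepdims=True)        # average pooling
    mx = tf.reduce_max(F, axis=[1, 2], keepdims=True)          # max pooling
    return tf.nn.sigmoid(w1(w0(avg)) + w1(w0(mx)))

def spatial_attention(F_prime):
    """M_S(F'), Eq (4): 7x7 convolution over the concatenated channel-wise
    average- and max-pooled maps, followed by a sigmoid."""
    avg = tf.reduce_mean(F_prime, axis=-1, keepdims=True)
    mx = tf.reduce_max(F_prime, axis=-1, keepdims=True)
    conv = tf.keras.layers.Conv2D(1, 7, padding='same')        # f^{7x7}
    return tf.nn.sigmoid(conv(tf.concat([avg, mx], axis=-1)))

def cbam(F):
    """Cascaded CBAM, Eqs (1)-(2): F' = M_C(F) * F, F'' = M_S(F') * F'."""
    F_prime = channel_attention(F) * F
    return spatial_attention(F_prime) * F_prime
```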
2.3 Depthwise separable convolution
Depthwise separable convolution (DSC) was developed by Laurent Sifre at Google Brain in 2013 and applied to AlexNet, moderately improving recognition accuracy and reducing the model size. The first layers of Inception V1 and Inception V2 also used depthwise separable convolution [34, 35]. At Google, Andrew Howard [36] introduced an efficient family of mobile models called MobileNets based on depthwise separable convolution. Depthwise separable convolution is a factorized convolution with two main steps, depthwise convolution and pointwise convolution, which are used to filter and to combine feature information, respectively. This factorization not only reduces computational complexity compared with standard convolution, but also yields better-trained models, and it is widely used in image classification and image segmentation. Fig 3 illustrates the process of depthwise separable convolution.
3 Method
3.1 Overall framework
Infrared images have strong anti-interference capability and are not limited by weather conditions. Visible images can provide significant texture and detail information and have high spatial resolution. In order to enhance the feature extraction capability and image fusion performance for visible and infrared images, we propose FDNet; its overall framework is shown in Fig 4.
In Fig 4, the purple box represents the raw image used for multi-scale feature extraction, the orange box the concatenation of the multi-scale feature maps, the blue box the depthwise separable convolution operation, the green box the I-CBAM attention mechanism, the yellow box the concat operation, the red box a 1 × 1 convolution with tanh activation, the lavender box a 1 × 1 convolution with LReLU activation, the light green box a 3 × 3 convolution with LReLU activation, and the light orange box a 3 × 3 convolution with tanh activation.
The proposed FDNet consists of a fusion network and a decomposition network. The fusion network takes into account the different properties of raw images from different sensors, so we design a double-branch network to process the infrared and visible images, which differ greatly in spatial resolution. The purpose of the decomposition network is to make the fused images contain more abundant scene information, so we design single- and double-channel convolutional layers to obtain finer decomposed images. The two branches of the fusion network have the same structure and shared parameters, and receive the infrared and visible images as inputs; each branch consists of multi-scale layers, depthwise separable convolution, and I-CBAM. In the training stage, two modal images of size 120 × 120 are first fed into the double-branch network. The multi-scale convolution layers not only extract multi-scale features from the raw images, but also reduce the loss of image feature information. The depthwise separable convolution performs the spatial convolution of the multi-scale input features independently through the depthwise convolution operation and combines channel information through the pointwise convolution operation. As a result, the number of network parameters is reduced and a lightweight network is constructed for deep feature extraction. Subsequently, I-CBAM focuses on the salient information of the infrared and visible images from both the channel and spatial aspects, and suppresses useless channel information so that all salient features can be utilized during the fusion step. The features extracted by the double-branch network are fused using concat and convolution strategies, shown as the big yellow box in Fig 4. Finally, the decomposition network extracts feature information from the fused image with common convolutional layers and decomposes it into two branches to generate decomposition results consistent with the raw images. In the testing process, the fused images are generated using only the trained model.
3.2 Fusion network
3.2.1 Shallow feature extraction.
In deep learning-based methods, feature information is typically extracted using convolutional layers. However, when a single-scale convolutional kernel is used, the feature information from other receptive fields may not be fully captured. Therefore, in this work, convolution kernels of four different scales (7 × 7, 5 × 5, 3 × 3, and 1 × 1) are used to extract infrared and visible image features over different receptive fields. The multi-scale convolutional layers do not change the size of the raw images, provide much more image feature information, and extend the range of shallow feature extraction. The structure of the multi-scale feature extraction is shown in Fig 5.
The multi-scale feature extraction equations are calculated as follows:
Fj = fj * Fin, j = 1, 3, 5, 7 (5)

Fout = Concat(F1, F3, F5, F7) (6)

where Fin and Fout are the input feature map and the output feature map, respectively, Fj is the feature map extracted at scale j, * represents the convolution operation, fj represents the convolution kernel of size j × j (j = 1, 3, 5, 7), and Concat(·) denotes channel-wise concatenation.
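As a concrete illustration, the following minimal TensorFlow sketch applies the four kernel sizes to the same input and concatenates the results; the number of filters per scale and the omission of activations are illustrative assumptions:

```python
import tensorflow as tf

def multi_scale_extraction(F_in, filters_per_scale=16):
    """Shallow feature extraction sketch: 1x1, 3x3, 5x5 and 7x7 convolutions
    are applied to the same input and concatenated (Eqs 5-6). 'same' padding
    keeps the spatial size of the raw image unchanged."""
    branches = [
        tf.keras.layers.Conv2D(filters_per_scale, k, padding='same')(F_in)
        for k in (1, 3, 5, 7)
    ]
    return tf.concat(branches, axis=-1)  # F_out

# Example: a 120x120 single-channel training patch, as used in this work
x = tf.zeros([1, 120, 120, 1])
print(multi_scale_extraction(x).shape)  # (1, 120, 120, 64)
```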
3.2.2 Deep feature fusion.
The depthwise separable convolution module plays a crucial role in deep feature fusion. Unlike a standard convolution, which considers the spatial and channel information of image regions jointly, depthwise separable convolution treats channel information and spatial information separately and learns richer representation features with fewer parameters. In this work, we employ the depthwise separable convolution module from the second to the fourth layer for deep feature extraction and select Leaky ReLU as the activation function. First, the depthwise stage adopts 3 × 3 convolution kernels to conduct the spatial convolution of each channel and decrease the number of parameters. Second, the network is deepened by 1 × 1 pointwise convolution kernels without changing the size of the feature map, which easily realizes cross-channel information interaction and integration, learns deep target information, and improves the network's expressive capability. The related parameters of the depthwise separable processes are presented in Table 1.
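For a 3 × 3 kernel mapping Cin input channels to Cout output channels, a standard convolution needs 9·Cin·Cout weights, whereas the depthwise-plus-pointwise pair needs only 9·Cin + Cin·Cout. A minimal TensorFlow sketch of one such block is given below; the activation placement and channel widths are illustrative assumptions rather than the exact settings of Table 1:

```python
import tensorflow as tf

def dsc_block(x, out_channels):
    """Depthwise separable convolution for deep feature fusion: a 3x3
    depthwise convolution filters each channel spatially, then a 1x1
    pointwise convolution mixes channels; Leaky ReLU activations and
    'same' padding keep the feature-map size unchanged."""
    x = tf.keras.layers.DepthwiseConv2D(3, padding='same')(x)
    x = tf.keras.layers.LeakyReLU()(x)
    x = tf.keras.layers.Conv2D(out_channels, 1, padding='same')(x)
    return tf.keras.layers.LeakyReLU()(x)
```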
3.2.3 Improved CBAM.
To enhance the image fusion performance, CBAM is used as the attention module in this study. The receptive field size in CBAM determines the spatial attention performance. In order to aggregate more extensive spatial context features, a 7 × 7 convolutional kernel is often used instead of a 3 × 3 kernel, but the 7 × 7 kernel noticeably increases the number of module parameters. Therefore, to enlarge the receptive field without this cost, we design a spatial attention module that uses dilated convolution for feature aggregation, which reduces the number of module parameters. The specific spatial attention calculation is given as follows:
MS(F) = σ(fd3×3([AvgPool(F); MaxPool(F)])) (7)

where F denotes the input feature map, MS(F) denotes the output weights of F based on spatial attention, σ denotes the sigmoid function, and fd3×3 denotes the dilated convolution with a convolution kernel size of 3. The experiments use a dilation rate of 2.
The CBAM attention mechanism generally adopts a "cascade" connection, but this means that the earlier feature mapping strongly influences the weights and features learned by the later attention module. The interference caused by the cascade connection can degrade the attention modules in image fusion tasks. Therefore, we change the original cascade connection to a parallel connection, in which both attention modules learn directly from the initial input feature map without considering the order of spatial attention and channel attention. The related mathematical equation is given as:
F″ = MC(F) ⊗ MS(F) ⊗ F (8)

where F″ denotes the final output feature map and MC(F) denotes the output weights of F based on channel attention.
In the I-CBAM, the spatial attention module and the channel attention module are learned simultaneously. In the channel attention module, the input feature map F (H × W × C) undergoes maximum pooling and average pooling to obtain two feature maps of size 1 × 1 × C, which are then passed to a Multi-Layer Perceptron (MLP). The channel feature map MC is generated by an element-wise operation and sigmoid activation. In the spatial attention module, the input feature map F first undergoes maximum pooling and average pooling to obtain two feature maps of size H × W × 1. Second, we perform a channel-based concat operation and use the dilated convolution with kernel size 3 to reduce the number of dimensions. Third, the sigmoid activation function produces the final spatial feature map MS. Finally, the feature maps obtained by channel attention and spatial attention directly weight the original input feature map F to obtain the final output feature map. The overall block diagram of I-CBAM is shown in Fig 6.
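The following minimal TensorFlow sketch illustrates the I-CBAM described above; the channel branch is repeated from the CBAM sketch in Section 2.2 for self-containment, and combining the two weights with the input by element-wise multiplication is an assumption:

```python
import tensorflow as tf

def channel_attention(F, ratio=8):
    """Channel branch M_C(F): shared MLP over pooled channel descriptors."""
    c = F.shape[-1]
    w0 = tf.keras.layers.Dense(c // ratio, activation='relu')
    w1 = tf.keras.layers.Dense(c)
    avg = tf.reduce_mean(F, axis=[1, 2], keepdims=True)
    mx = tf.reduce_max(F, axis=[1, 2], keepdims=True)
    return tf.nn.sigmoid(w1(w0(avg)) + w1(w0(mx)))

def dilated_spatial_attention(F):
    """Spatial branch M_S(F), Eq (7): 3x3 convolution with dilation rate 2
    over the concatenated average- and max-pooled maps, enlarging the
    receptive field with far fewer parameters than a 7x7 kernel."""
    avg = tf.reduce_mean(F, axis=-1, keepdims=True)
    mx = tf.reduce_max(F, axis=-1, keepdims=True)
    conv = tf.keras.layers.Conv2D(1, 3, dilation_rate=2, padding='same')
    return tf.nn.sigmoid(conv(tf.concat([avg, mx], axis=-1)))

def i_cbam(F):
    """Parallel I-CBAM, Eq (8): both weights are computed from the original
    input F and applied to it directly (element-wise combination assumed)."""
    return channel_attention(F) * dilated_spatial_attention(F) * F
```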
3.3 Decomposition networks
The purpose of the decomposition network is to decompose the fused image so that the fusion result stays closer to the raw images. The framework of the decomposition network is illustrated in Fig 7.
In Fig 7, we extract image features from the fused image using three single-channel convolutional layers and then generate the decomposition results with two dual-channel convolutional layers. The first convolutional layer utilizes a 1 × 1 kernel, and the remaining convolutional layers employ 3 × 3 kernels. The Leaky ReLU activation function is chosen for the common convolutional layers, and the Tanh activation function is adopted for the last double-channel convolutional layer.
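A minimal TensorFlow sketch of such a decomposition head is given below; it interprets "single-channel" as a shared single stream and "dual-channel" as two output branches, and the channel widths are illustrative assumptions based on the description above:

```python
import tensorflow as tf

def decomposition_network(fused):
    """Decomposition sketch: three shared convolutional layers (the first 1x1,
    the rest 3x3, Leaky ReLU) extract features from the fused image, then two
    branches produce the decomposed estimates of the infrared and visible
    inputs, each ending with a tanh-activated 3x3 convolution."""
    x = tf.keras.layers.Conv2D(16, 1, padding='same')(fused)
    x = tf.keras.layers.LeakyReLU()(x)
    for _ in range(2):
        x = tf.keras.layers.Conv2D(16, 3, padding='same')(x)
        x = tf.keras.layers.LeakyReLU()(x)

    def branch(h):
        h = tf.keras.layers.Conv2D(8, 3, padding='same')(h)
        h = tf.keras.layers.LeakyReLU()(h)
        return tf.keras.layers.Conv2D(1, 3, padding='same', activation='tanh')(h)

    return branch(x), branch(x)  # decomposed I1_de, I2_de
```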
3.4 Loss functions
The FDNet architecture is divided into fusion and decomposition components. The fusion network combines multiple images into a single fused image through feature extraction, while the decomposition network makes the fused result contain deeper scene information. The corresponding loss function consists of the fusion loss Lsf and the decomposition loss Ldc. The mathematical expression is written as:
L = Lsf + Ldc (9)

where L represents the total loss function, Lsf represents the fusion loss, and Ldc represents the decomposition loss.
3.4.1 Fusion loss.
The most basic components of infrared and visible images are pixels, whose intensities represent the overall luminance distribution. The differences between pixels form gradient information, which represents the texture details of a raw image. Therefore, the infrared and visible image fusion scheme in this work is constructed to extract and reconstruct the gradient and intensity information of the raw images. Correspondingly, the fusion loss function is composed of an intensity loss and a gradient loss. The corresponding equation is expressed as:
Lsf = Lint + βLgrad (10)

where β is the key parameter balancing the intensity term and the gradient term, Lgrad represents the adaptive gradient loss function, and Lint represents the intensity loss function.
An adaptive gradient loss function Lgrad is designed to add abundant texture features to the fused images. We also introduce an adaptive weight block, in which a Gaussian low-pass filter reduces the influence of noise. This adaptive weight block determines the optimization objective of each pixel in the raw images based on the richness of the gradients. The complete process of the adaptive weight block is depicted in Fig 8.
The equations of gradient loss function are expressed as follows:
(11)
(12)
(13)
where I1 and I2 are the raw images, Ifused is the fused image, and H and W denote the height and width of the processed images, respectively. i and j represent the pixel coordinates at position (i, j). ▽(⋅) is the Laplacian operator. L(⋅) and |⋅| represent the Gaussian low-pass filter function and the absolute value function, respectively. min(·) and sign(·) denote the minimum function and the sign function, respectively.
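Since this loss is built from the Laplacian operator, a Gaussian low-pass filter, and a per-pixel choice based on gradient richness, a hedged TensorFlow sketch of one plausible instantiation is shown below; the exact weighting in Eqs (11)–(13) may differ, and the 3 × 3 Gaussian kernel is an illustrative assumption:

```python
import numpy as np
import tensorflow as tf

def _conv(x, kernel):
    """Apply a fixed 3x3 kernel to a single-channel float32 image batch."""
    k = tf.constant(kernel, dtype=tf.float32)[:, :, None, None]
    return tf.nn.conv2d(x, k, strides=1, padding='SAME')

def adaptive_gradient_loss(I1, I2, I_fused):
    """Hedged sketch: Laplacian gradients of the two raw images are smoothed
    by a Gaussian low-pass filter (noise suppression); per pixel, the fused
    gradient is pulled toward the raw image whose smoothed gradient magnitude
    is larger."""
    lap = [[0., 1., 0.], [1., -4., 1.], [0., 1., 0.]]               # Laplacian
    gauss = np.array([[1., 2., 1.], [2., 4., 2.], [1., 2., 1.]]) / 16.
    g1, g2, gf = _conv(I1, lap), _conv(I2, lap), _conv(I_fused, lap)
    s1 = _conv(tf.abs(g1), gauss)                                    # weight block
    s2 = _conv(tf.abs(g2), gauss)
    w1 = tf.cast(s1 >= s2, tf.float32)                               # richer texture?
    target = w1 * g1 + (1.0 - w1) * g2                               # per-pixel target
    return tf.reduce_mean(tf.abs(gf - target))
```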
The intensity loss, which adopts the improved Frobenius norm, affects the brightness and contrast of the image and brings a natural and realistic effect to the fused image. The norm is defined as the square root of the sum of the squares of the matrix elements at each position. Its main role is to measure the matrix distance between the raw and fused image pixels and to adjust their weighting values effectively, so that the function can select more effective information during network training. The related formula is expressed as follows:
(14)

where α is used to adjust the infrared and visible image intensity information.
3.4.2 Decomposition loss.
The decomposition loss Ldc requires that the decomposition results obtained after the image fusion step remain close to the corresponding raw images. We choose the structural similarity (SSIM) as the loss function and calculate the SSIM value between each fusion decomposition result and the corresponding raw image in terms of structural distortion, contrast distortion, and luminance distortion. The corresponding formulae are written as follows:
(15)
(16)
(17)
where I1de and I2de are the decomposition results and I1 and I2 are the raw images. μ and σ are the mean value and the standard deviation, respectively. The parameters C1, C2, and C3 are three important constants that prevent the SSIM terms from becoming unstable when their denominators approach zero during the training process.
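As a hedged sketch of this decomposition loss, one can penalize one minus the SSIM between each decomposed result and its raw image; tf.image.ssim combines the luminance, contrast, and structure terms, and summing the two branches is an assumption about how Eqs (15)–(17) are aggregated:

```python
import tensorflow as tf

def decomposition_loss(I1_de, I2_de, I1, I2):
    """Hedged sketch: one minus the structural similarity between each
    decomposed result and its raw image, summed over the two branches.
    Inputs are assumed to be single-channel images scaled to [0, 1]."""
    ssim1 = tf.image.ssim(I1_de, I1, max_val=1.0)
    ssim2 = tf.image.ssim(I2_de, I2, max_val=1.0)
    return tf.reduce_mean((1.0 - ssim1) + (1.0 - ssim2))
```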
4 Experimental results and analysis
4.1 Datasets and setup details
This paper utilizes the publicly available TNO database for the infrared and visible image fusion tasks. We enlarge the training set by cropping and decomposing the raw images. Training images with a maximum size of 576 × 768 pixels and a minimum size of 360 × 270 pixels are selected and cropped to generate 42,484 experimental images of size 120 × 120. In contrast to the training data, ten pairs of testing images at their original sizes are selected as the testing data.
The experiments are conducted on a Windows 10 operating system with an Intel Core i5-1035G1 CPU. TensorFlow and imageio are used to train and test the network in the PyCharm environment. The related parameters of the experiments are set to epoch = 15, batch size = 32, and learning rate = 1e-4. The Adam algorithm, a robust and well-converging adaptive optimizer, is adopted for optimization. In Eq 14, the parameter α is 0.5, allowing the network to obtain the main intensity information from the infrared images to maintain high contrast. In addition, after repeated experiments, the weights of the gradient loss, intensity loss, and decomposition loss in the total loss are set to 80, 1, and 1, respectively.
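A minimal sketch of this configuration, assuming the three loss terms are computed elsewhere and simply weighted as stated:

```python
import tensorflow as tf

# Optimizer and loss weighting as reported in the text: Adam with a learning
# rate of 1e-4; gradient, intensity and decomposition losses weighted 80:1:1.
optimizer = tf.keras.optimizers.Adam(learning_rate=1e-4)
EPOCHS, BATCH_SIZE = 15, 32

def total_loss(l_grad, l_int, l_dc):
    """Combine the three scalar loss terms with the reported ratios."""
    return 80.0 * l_grad + 1.0 * l_int + 1.0 * l_dc
```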
4.2 Ablation experiments
4.2.1 Module performance test.
To validate the effectiveness of our proposed methods, related ablation experiments are conducted as follows:
- Image fusion experiments with multi-scale depthwise separable convolution (M-DSC).
- Image fusion experiments with multi-scale depthwise convolution and attention mechanism (M-DSC+CBAM).
- Image fusion experiments with multi-scale depthwise convolution and improved attention mechanism (M-DSC+I-CBAM).
An image fusion result of "soldier-behind-smoke-1" was randomly selected from the testing dataset for subjective evaluation, and ten groups of image fusion results were chosen for objective evaluation. The evaluation results of the ablation experiment are shown in Fig 9. The overall image contrast is slightly insufficient when only M-DSC is used, and some image details are not captured. The fusion results of M-DSC and CBAM significantly alleviate these problems, making the person in the forest clearer and improving the overall image contrast to reveal more image details. Furthermore, the fusion results obtained by combining our proposed M-DSC with I-CBAM effectively retain essential image information and show significant improvements compared with the previous two configurations.
In the ablation experiment, we select the average gradient (AG) and multi-scale structural similarity (MSSSIM) as objective evaluation metrics. MSSSIM evaluates the image fusion quality at different resolutions, and AG reflects the fused image sharpness and texture detail information. Table 2 presents the objective evaluation results of the adopted metrics for the ablation experiment.
In Table 2, increasing the number of modules brings an obvious improvement in image fusion performance, and the multi-scale structural similarity metric reaches 88.2%, which demonstrates that the proposed M-DSC and I-CBAM modules can effectively extract image features.
4.2.2 Decomposition network.
A decomposition network is used to approximately decompose an image fusion result back into the corresponding raw images. The decomposition result is determined by the image fusion result, which prompts the fused image to acquire more scene details. To demonstrate the effectiveness of the decomposition network, we conduct the related ablation experiments shown in Fig 10.
In Fig 10, using the decomposition network improves the clarity of the trees and soldiers in the image fusion result, and the overall visual effect is better. Additionally, two objective evaluation metrics, spatial frequency (SF) and average gradient (AG), are chosen to reflect the clarity of the processed images. The experimental data are shown in Table 3.
From Table 3, the fused image clarity with the decomposition network is higher than without it, confirming that the image fusion performance is improved.
4.2.3 Intensity loss analysis.
Intensity loss plays a key role in retaining important information, such as contrast, in the fused image. It also helps to maintain a natural scene style in the fused image. For this reason, we perform ablation experiments to prove the effectiveness of the proposed method, as shown in Fig 11.
As shown in Fig 11, the lack of intensity loss generates several problems, including low brightness, information loss, and an unrealistic style in the fused images. This indicates that intensity loss is critical for the image fusion results. Because the experimental results without intensity loss deviate significantly from the expected outcome, we did not conduct quantitative experiments in this case.
4.2.4 Gradient loss analysis.
Gradient loss forces more texture details into the fused image, as demonstrated by our ablation experiments. The results of the gradient loss ablation experiment are shown in Fig 12. The fusion results without gradient loss show texture loss and reduced sharpness, while the use of gradient loss retains the original sharpness and acquires more texture details. Additionally, the objective evaluation results of the gradient loss ablation experiment are given in Table 4.
From Table 4, it can be seen that the inclusion of gradient loss leads to further improvement in the image fusion results. This strongly demonstrates the significance of gradient loss in enhancing the fusion performance.
4.3 Fusion image analysis
Different image evaluation methods often give different evaluation results. In this study, we adopt both subjective and objective evaluation methods to assess the fusion quality of randomly selected images.
4.3.1 Subjective evaluation.
The proposed fusion method is compared with twelve prevalent methods, including Deeplearning [19], ResNet50 [20], CDL [37], CCFL [38], SMVIF [39], BF [40], MLGCF [41], SDNet [42], SwinFusion [43], PIAFusion [44], FLFusion [45] and U2Fusion [46].
The image fusion result of "Nato camp" is shown in Fig 13. In the BF method, the grass, person, and plants around the building are all blurred, and the overall fusion effect is poor. The ResNet50, Deeplearning, SMVIF, U2Fusion, and FLFusion methods provide more detailed information about the persons in the fused image, but the plants are still blurred. In the MLGCF, SDNet, SwinFusion, and PIAFusion methods, the person and grass are not blurred, but the overall contrast is low, limiting the visibility of additional feature information. The CDL and CCFL methods have better overall contrast, but the target edges are not clear enough and some detailed information is unclear. Compared with these popular methods, our proposed method yields clearer texture details, richer scene information, and more remarkable target objects.
The "helicopter" fusion result is shown in Fig 14. The FLFusion method fails to fully integrate the visible image information, resulting in a fusion result dominated by infrared information. Although the BF method shows some improvement over FLFusion, it still falls short in fully extracting the original visible information, resulting in an image with only contour information. The ResNet50, Deeplearning, SMVIF, MLGCF, and U2Fusion methods maintain the related information of the raw images, but the whole image is blurry and the specific texture information is unclear. The SwinFusion and PIAFusion methods have prominent infrared object regions, but the overall contrast is low, which prevents more detailed information from being represented. The CDL and CCFL methods have better overall brightness, but the clarity is not high and there are artifacts around the target. The SDNet method shows significant targets and clear textures, but some edge information is missing. In comparison, our proposed method retains the feature information while obtaining the best brightness and the largest edge gradient, with clear background texture and a good visual effect.
The image fusion result of "Marne-04" is shown in Fig 15. The SwinFusion and PIAFusion methods exhibit a loss of cloud information from the infrared source image in the sky region. The CDL and CCFL methods contain some noise, leading to partial distortion of the fusion result in the sky region. The SMVIF, BF, ResNet50, and Deeplearning methods do not distort the background texture information, but the texture details on the road are lost and the overall contrast is low. The FLFusion and U2Fusion methods do not lose texture details on the road, but the window features appear blurry. The MLGCF and SDNet methods have good overall contrast and meet subjective visual perception requirements, and the significant feature information (texture on the road and car windows) from the raw images is well represented in the fusion images, but finer regions cannot be reflected. Our proposed method acquires good fusion results with clearer background texture information and better brightness than the previous methods.
The image fusion result of "Movie-01" is shown in Fig 16. The CDL and CCFL methods produce blurred fusion images in which the trees and houses are not clear enough. The overall clarity of the SwinFusion and PIAFusion methods is improved, but the river in front is blurred. The visual effect of the BF method is improved, but the windows of the house are blurry with low clarity and contrast. The SMVIF, ResNet50, Deeplearning, SDNet, and FLFusion methods produce clear object regions from the infrared image, but with low contrast and a large amount of missing detail information. The MLGCF and U2Fusion methods have high contrast, but the object region has a ghosting artifact with low clarity. Our method preserves the image feature information while achieving the best brightness, clarity, and detail information.
The "Movie-18" fusion result is shown in Fig 17. The fusion image given by the BF method is relatively dark with low contrast, and the person and objects on the road are not clear. The ResNet50, Deeplearning, FLFusion, and U2Fusion methods contain much noise, and the overall fusion images are more blurred. The result of the SMVIF method lacks the detailed texture information of the infrared and visible images and only contains some contour information. The CDL, CCFL, MLGCF, SwinFusion, and PIAFusion methods exhibit more prominent object regions, but detail information around the person is lacking. The SDNet method demonstrates good overall contrast with prominent target features, but the street light appears blurred. In comparison, our proposed method achieves good contrast, high definition, and rich detail information, offering superior performance.
The "bench" fusion results are shown in Fig 18. The BF method has low overall contrast with a blurred flame and much noise from the infrared image. The FLFusion method loses the texture detail information of the visible image, resulting in a serious lack of information in the fusion image. The ResNet50, SMVIF, Deeplearning, and U2Fusion methods improve the overall fusion effect with rich scene information, but the flame in the object region is still not obvious and the clarity is low. The CDL, CCFL, SwinFusion, and PIAFusion methods show an obvious flame from the infrared image, but the clarity is not high and much detail information is lost. The SDNet and MLGCF methods achieve better overall effects with prominent targets, but there is a slight loss of scene information. In comparison, our proposed method acquires more remarkable object regions, clearer background information, richer scene information, and better visual effects.
4.3.2 Objective evaluation.
In this paper, we select eight evaluation metrics to objectively evaluate the fused images, including average gradient (AG), entropy (EN), standard deviation (SD), spatial frequency (SF), correlation coefficient (CC), visual information fidelity for fusion (VIFF), signal to noise ratio (SNR), and mutual information (MI).
AG reflects the detail and texture representation of a processed image by calculating the average rate of change of gray levels; EN measures the richness of the image by calculating the average information content of the image fusion result; SD reflects the separation of gray-scale values of a processed image by calculating the difference between intensity values and the mean intensity value, which helps to characterize image contrast; SF reflects the fused image sharpness by calculating the gray-level activity in the spatial domain; MI, based on information theory, calculates how much information the fused image shares with its corresponding raw images to measure the similarity between them; VIFF provides objective evaluation values aligned with the human visual system; SNR reflects the quality of fused images by calculating the ratio of signal to noise; CC reflects the correlation between the fused image and the raw images. The evaluation results of the above metrics are shown in Tables 5–12.
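For reference, the following minimal NumPy sketch shows common definitions of two of these metrics, AG and EN, assuming an 8-bit gray-level image; the paper may use slightly different normalizations:

```python
import numpy as np

def average_gradient(img):
    """AG: mean magnitude of the horizontal/vertical gray-level gradients,
    reflecting texture and detail richness."""
    gx = np.diff(img.astype(np.float64), axis=1)[:-1, :]
    gy = np.diff(img.astype(np.float64), axis=0)[:, :-1]
    return np.mean(np.sqrt((gx ** 2 + gy ** 2) / 2.0))

def entropy(img, levels=256):
    """EN: Shannon entropy of the gray-level histogram, measuring the
    average information content of the fused image."""
    hist, _ = np.histogram(img, bins=levels, range=(0, levels))
    p = hist / hist.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))
```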
Analyzing the objective evaluation metrics in Tables 5 to 12, it becomes evident that our method exhibits high EN values, demonstrating that the image fusion results contain an abundance of information; a high SF value corresponds to high clarity in the image fusion results; a high AG value suggests that the fused image has more detailed feature and texture information; a high SD value indicates that the processed image contains abundant detailed information with high pixel intensity; a high VIFF value suggests that the subjective perception of the processed image is concordant with that of the human visual system; a high SNR value suggests that the useful information in the image fusion result is retained and rarely affected by noise; and a high CC value suggests that the raw images transmit many important image features, resulting in a high correlation between the fusion result and those features. However, our MI values are slightly lower than those of some comparison algorithms. This can be attributed to the fact that we employ concat and convolution fusion strategies to preserve luminance information from the infrared images and texture information from the visible images. The MI metric focuses mainly on luminance information based on the mean method; if a fused image ultimately contains much noise, this also increases the measured luminance information. The CDL, CCFL, PIAFusion, and BF methods focus on infrared information fusion while ignoring visible information, so they perform best on the MI metric.
5 Conclusion and future work
We propose a fusion decomposition network called FDNet to achieve the goal of image fusion. In the image fusion stage, considering the large differences between raw images, a double-branch fusion network framework consisting of multi-scale layers, depthwise separable convolution, and I-CBAM is proposed. Additionally, an improved Frobenius norm and an adaptive gradient loss term are designed for unsupervised learning. The network framework can effectively extract image feature information while reducing computational complexity. In the image decomposition stage, the fusion results are decomposed to regenerate the raw images, and an SSIM structural loss is used as the decomposition loss. The related experimental results demonstrate that our method has high subjective visibility, good overall clarity, and clear background texture information. However, it should be noted that our image fusion framework is applicable only to aligned images, which is a limitation for real-time, non-aligned images. In future work, we will not only explore how to efficiently fuse unaligned images for real-time tasks, but also integrate more advanced image processing techniques and design a unified fusion framework to handle other complex image fusion tasks.
References
- 1. Fu Y, Wu XJ, Durrani T. Image fusion based on generative adversarial network consistent with perception. Information Fusion. 2021;72:110–125.
- 2. Sun H, Liu Q, Wang J, Ren J, Wu Y, Zhao H, et al. Fusion of infrared and visible images for remote detection of low-altitude slow-speed small targets. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing. 2021;14:2971–2983.
- 3. Wu M, Chang L, Yang X, Jiang L, Zhou M, Gao S, et al. Infrared small target detection by modified density peaks searching and local gray difference. Photonics. MDPI. 2022;9(5):311.
- 4. Shuai X, Jing Z, Tuo H. SAFuseNet: integration of fusion and detection for infrared and visible images. Aerospace Systems. 2022;5(4):655–661.
- 5. Gao P, Tian T, Zhao T, Li L, Zhang N, Tian J. GF-Detection: fusion with GAN of infrared and visible images for vehicle detection at nighttime. Remote Sensing. 2022;14(12):2771.
- 6. Li H, Wang Q, Wang H, Yang WK. Infrared small target detection using tensor based least mean square. Computers & Electrical Engineering. 2021;91:106994.
- 7. Lian J, Yang Z, Liu J, Sun W, Zheng L, Du X, et al. An overview of image segmentation based on pulse-coupled neural network. Archives of Computational Methods in Engineering. 2021;28:387–403.
- 8. Lian J, Yang Z, Sun W, Zheng L, Qi Y, Shi B, et al. A fire-controlled MSPCNN and its applications for image processing. Neurocomputing. 2021; 422:150–164.
- 9. Hu K, Sun W, Nie Z, Cheng R, Chen S, Kang Y. Real-time infrared small target detection network and accelerator design. Integration. 2022;87:241–252.
- 10. Wang Z, Wang F, Wu D, Gao G. Infrared and visible image fusion method using salience detection and convolutional neural network. Sensors. 2022;22(14):5430. pmid:35891107
- 11. Zhang J, Tang BJ, Hu S. Infrared and visible image fusion based on particle swarm optimization and dense block. Frontiers in Energy Research. 2022;1357.
- 12. Xu D, Zhang N, Zhang Y, Li Z, Zhao Z, W Y. Multi-scale unsupervised network for infrared and visible image fusion based on joint attention mechanism. Infrared Physics and Technology. 2022;125:104242.
- 13. Yi S, Jiang G, Liu X, Li J, Chen L. TCPMFNet: an infrared and visible image fusion network with composite auto encoder and transformer–convolutional parallel mixed fusion strategy. Infrared Physics and Technology. 2022;127:104405.
- 14. Chen Y, Wang Y, Huang F, Mao S. Infrared and visible images fusion base on wavelet transform. Sixth Symposium on Novel Optoelectronic Detection Technology and Applications. SPIE. 2020;11455:875–882.
- 15. Yang C, He Y, Sun C, Jiang S, Li Y, Zhao P. Infrared and visible image fusion based on QNSCT and Guided Filter. Optik. 2022;253:168592.
- 16. Panigrahy C, Seal A, Mahato NK. Parameter adaptive unit-linking dual-channel PCNN based infrared and visible image fusion. Neurocomputing. 2022;514:21–38.
- 17. Wang C, Wu Y, Yu Y, Zhao JQ. Joint patch clustering-based adaptive dictionary and sparse representation for multi-modality image fusion. Machine Vision and Applications. 2022;33(5):69.
- 18. Liu Y, Chen X, Cheng J, Peng H, Wang Z. Infrared and visible image fusion with convolutional neural networks. International Journal of Wavelets, Multiresolution and Information Processing. 2018;16(03):1850018.
- 19. Li H, Wu X, Kittler J. Infrared and visible image fusion using a deep learning framework. 2018 24th International Conference on Pattern Recognition (ICPR), Beijing, China. 2018;2705–2710.
- 20. Li H, Wu X, Durrani TS. Infrared and visible image fusion with ResNet and zero-phase component analysis. Infrared Physics & Technology. 2019;102:103039.
- 21. Prabhakar KR, Srikar VS, Babu RV. DeepFuse: a deep unsupervised approach for exposure fusion with extreme exposure image pairs. Proceedings of the IEEE international conference on computer vision (ICCV). 2017;4714–4722.
- 22. Li H, Wu XJ. DenseFuse: a fusion approach to infrared and visible images. IEEE Transactions on Image Processing. 2018;28(5):2614–2623. pmid:30575534
- 23. Li H, Wu XJ, Durrani T. NestFuse: an infrared and visible image fusion architecture based on nest connection and spatial/channel attention models. IEEE Transactions on Instrumentation and Measurement. 2020;69(12):9645–9656.
- 24. Ma J, Yu W, Liang P, Li C, Jiang J. FusionGAN: a generative adversarial network for infrared and visible image fusion. Information Fusion. 2019;48:11–26.
- 25. Zhang H, Yuan J, Tian X, Ma J. GAN-FM: infrared and visible image fusion using GAN with full-scale skip connection and dual Markovian discriminators. IEEE Transactions on Computational Imaging. 2021;7:1134–1147.
- 26. Sun X, Hu S, Ma X, Hu Q, Xu S. IMGAN: infrared and visible image fusion using a novel intensity masking generative adversarial network. Infrared Physics & Technology. 2022;125:104221.
- 27. Le Z, Huang J, Xu H, Fan F, Ma Y, Mei X, et al. UIFGAN: an unsupervised continual-learning generative adversarial network for unified image fusion. Information Fusion. 2022;88:305–318.
- 28. Liu X, Wang R, Huo H, Yang X, Li J. An attention-guided and wavelet constrained generative adversarial network for infrared and visible image fusion. Infrared Physics & Technology. 2023;104570.
- 29. Niu Z, Zhong G, Yu H. A review on the attention mechanism of deep learning. Neurocomputing. 2021;452:48–62.
- 30. Wang L, Yao W, Chen C, Yang H. Driving behavior recognition algorithm combining attention mechanism and lightweight network. Entropy. 2022; 24(7): 984. pmid:35885207
- 31. Hui Y, Wang J, Shi Y, Li B. Low light image enhancement algorithm based on detail prediction and attention mechanism. Entropy. 2022; 24(6):815. pmid:35741536
- 32. Tao H, Geng L, Shan S, Mai J, Fu Hong. Multi-stream convolution-recurrent neural networks based on attention mechanism fusion for speech emotion recognition. Entropy. 2022;24(8):1025. pmid:35893005
- 33. Chaudhari S, Mithal V, Polatkan G, Ramanath R. An attentive survey of attention models. ACM Transactions on Intelligent Systems and Technology (TIST). 2021;12(5):1–32.
- 34. Ioffe S, Szegedy C. Batch normalization: accelerating deep network training by reducing internal covariate shift. International conference on machine learning. pmlr. 2015;448–456.
- 35. Szegedy C, Liu W, Jia Y, Sermanet P, Reed S, Anguelov D, et al. Going deeper with convolutions. Proceedings of the IEEE conference on computer vision and pattern recognition (CVPR). 2015;1–9.
- 36. Howard AG, Zhu M, Chen B, Kalenichenko D, Wang W, Weyand T, et al. MobileNets: efficient convolutional neural networks for mobile vision applications. arXiv preprint arXiv:1704.04861. 2017.
- 37. Veshki FG, Ouzir N, Vorobyov SA, Ollila E. Multimodal image fusion via coupled feature learning. Signal Processing. 2022;200:108637.
- 38. Veshki FG, Vorobyov SA. Coupled feature learning via structured convolutional sparse coding for multimodal image fusion. ICASSP 2022-2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). 2022;2500–2504.
- 39. Chen J, Wu K, Cheng Z, Luo L. A saliency-based multiscale approach for infrared and visible image fusion. Signal Processing. 2021;182:107936.
- 40. Zhao Z, Xu S, Zhang C, Liu J, Zhang J. Bayesian fusion for infrared and visible images. Signal Processing. 2020;177:107734.
- 41. Tan W, Zhou H, Song J, Li H, Yu Y, Du J. Infrared and visible image perceptive fusion through multi-level Gaussian curvature filtering image decomposition. Applied optics. 2019;58(12):3064–3073. pmid:31044779
- 42. Zhang H, Ma J. SDNet: a versatile squeeze-and-decomposition network for real-time image fusion. International Journal of Computer Vision. 2021;129:2761–2785.
- 43. Ma J, Tang L, Fan F, Huang J, Mei X, Ma Y. SwinFusion: cross-domain long-range learning for general image fusion via swin transformer. IEEE/CAA Journal of Automatica Sinica. 2022;9(7):1200–1217.
- 44. Tang L, Yuan J, Zhang H, Jiang X, Ma J. PIAFusion: a progressive infrared and visible image fusion network based on illumination aware. Information Fusion. 2022;83:79–92.
- 45. Xue W, Wang A, Zhao L. FLFuse-Net: a fast and lightweight infrared and visible image fusion network via feature flow and edge compensation for salient information. Infrared Physics & Technology. 2022;127:104383.
- 46. Xu H, Ma J, Jiang J, Guo X, Ling H. U2Fusion: a unified unsupervised image fusion network. IEEE Transactions on Pattern Analysis and Machine Intelligence. 2022;44(1):502–518. pmid:32750838