Fusion algorithm for visible and infrared images based on anisotropic diffusion and image enhancement (per journal style, only the first word of the title or subtitle and any proper nouns are capitalized)

Existing visible and infrared image fusion algorithms tend to focus on highlighting infrared targets while neglecting image detail, and cannot account for the complementary characteristics of the two modalities. To address this, this paper proposes an image-enhancement fusion algorithm combining the Karhunen-Loeve transform with Laplacian pyramid fusion. Detail layers of the source images are extracted by anisotropic diffusion to retain richer texture information. Infrared images are enhanced with an adaptive histogram partition and brightness-correction algorithm to highlight thermal radiation targets. For visible images, a novel power-function enhancement algorithm that simulates illumination is proposed to improve contrast and facilitate human observation. To improve fusion quality, each source image and its enhanced version are combined by the Karhunen-Loeve transform to form new visible and infrared images. Laplacian pyramid fusion is then performed on the new visible and infrared images, and the result is superimposed with the detail-layer images to obtain the final fused image. Experiments on public data sets show that the proposed method is superior to several representative image fusion algorithms in subjective visual quality. In terms of objective evaluation, the fusion result performs well on the eight evaluation indicators, and its own quality is high.

We rank the 8 algorithms from high to low according to the performance indicators. From Table 8 to Table 11 and Table 13, it can be seen that, in terms of the fused image's own quality, the proposed algorithm performs well: it has good edge-contour characteristics, and its fusion loss is the smallest. However, its connection with the source images is slightly weaker. The reason is that after Laplacian pyramid fusion, features and details from different images may be merged, changing the feature structure.

Modified description:
The article has added experiments on the 'Gun' dataset as required. The details are as follows: the fusion results on Gun are shown in Fig. 13. In (a), the infrared target is obvious, but facial and background information is lost and many artifacts are produced. In (b), the fusion produces a "black block" on the face of the right-hand figure, and the figure's hands and feet cannot be recognized. In (c) to (e), the infrared targets are relatively blurry, the shadows in the background lose their gradual-change characteristics, and artifacts of varying degrees are produced. In (f), the infrared target is obvious, but there are "black" blocks of varying degrees on the left face and behind the other two people. In (g), only the infrared target of the gun is obvious, and the overall brightness is low. In (h), the infrared target is clear and the fused texture is natural; for example, the shadow in the background better reflects the real situation of the original scene, which is conducive to human observation.

Modified description:
In order to better preserve both homogeneous and non-homogeneous structures, the article has revised this part of the structure as required. The details are as follows: (1) The detail layers I_d^ir and I_d^vis of the infrared and visible images are obtained by the AD (anisotropic diffusion) algorithm, so as to better preserve clear outlines and detail information.
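As an illustration of how such AD-based detail layers can be computed, here is a minimal Perona-Malik diffusion sketch in Python (NumPy only). The function names, the parameters kappa and lam, and the 4-neighbour scheme are illustrative assumptions, not the paper's exact implementation:

```python
import numpy as np

def anisotropic_diffusion(img, n_iter=10, kappa=30.0, lam=0.2):
    """Perona-Malik anisotropic diffusion, 4-neighbour explicit scheme."""
    u = img.astype(np.float64)
    for _ in range(n_iter):
        # differences toward the four neighbours
        # (np.roll wraps at the border; a replicate border is more common,
        #  but wrapping keeps the sketch short)
        dn = np.roll(u, 1, axis=0) - u
        ds = np.roll(u, -1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u, 1, axis=1) - u
        # edge-stopping conduction coefficients: diffusion is suppressed
        # where the local gradient is large, so edges stay sharp
        cn = np.exp(-(dn / kappa) ** 2)
        cs = np.exp(-(ds / kappa) ** 2)
        ce = np.exp(-(de / kappa) ** 2)
        cw = np.exp(-(dw / kappa) ** 2)
        u = u + lam * (cn * dn + cs * ds + ce * de + cw * dw)
    return u

def detail_layer(img, **kw):
    """Detail layer = source image minus its diffused (base) version."""
    base = anisotropic_diffusion(img, **kw)
    return img.astype(np.float64) - base
```

By construction the diffused base plus the detail layer reconstructs the source exactly, which is what allows the detail layer to be superimposed back after fusion.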

Modified description:
The article has added more detailed algorithmic logic as required. The details are as follows: in the third section, we provide the algorithm framework and the mathematical logic behind it. The flow of the algorithm in this paper is as follows:
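The overall flow described in the abstract can be sketched end-to-end. This is a minimal, runnable illustration under loud simplifying assumptions: a 3x3 box blur stands in for pyramid smoothing, pyramid levels are kept full-size, the detail layer is a blur residual rather than the AD result, and max-absolute coefficient selection is an assumed fusion rule. None of these function names come from the paper:

```python
import numpy as np

def blur(img):
    # 3x3 box blur with edge padding (stand-in for Gaussian smoothing)
    h, w = img.shape
    p = np.pad(img, 1, mode="edge")
    return sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3)) / 9.0

def kl_mix(a, b):
    # Karhunen-Loeve (PCA) weighting: combine a source image and its
    # enhanced version using the principal eigenvector of their 2x2 covariance
    x = np.stack([a.ravel(), b.ravel()])
    w = np.linalg.eigh(np.cov(x))[1][:, -1]   # principal eigenvector
    w = np.abs(w) / np.abs(w).sum()
    return w[0] * a + w[1] * b

def laplacian_fuse(a, b, levels=3):
    # build same-size Laplacian "pyramids" and fuse each level
    # by max-absolute selection; average the residual bases
    fused_levels = []
    for _ in range(levels):
        la, lb = a - blur(a), b - blur(b)
        fused_levels.append(np.where(np.abs(la) >= np.abs(lb), la, lb))
        a, b = blur(a), blur(b)
    fused = (a + b) / 2.0
    for lvl in reversed(fused_levels):
        fused = fused + lvl
    return fused

def fuse(ir, vis, ir_enh, vis_enh):
    # 1) detail layers (blur residual as a stand-in for the AD detail layer)
    d_ir, d_vis = ir - blur(ir), vis - blur(vis)
    # 2) KL transform of each source with its enhanced version
    new_ir, new_vis = kl_mix(ir, ir_enh), kl_mix(vis, vis_enh)
    # 3) Laplacian fusion, then superimpose the detail layers
    return laplacian_fuse(new_ir, new_vis) + d_ir + d_vis
```

Note that when an image is combined with itself, the KL weights collapse to 0.5/0.5 and the image is returned unchanged, which is a quick sanity check on the weighting step.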

Comment 11:
11. There are many grammatical errors scattered throughout the paper, which create a sloppy impression. For example, the section 3.2 heading ("3.2 Infrared image fusion base on ddaptive histogram partition and 178 brightness correction").

Modified description:
The English of the article has been revised as required.

Comment 12:
12. Considering the organization of the manuscript, the original proposal is not clearly evidenced. For example, the paper uses Adaptive Histogram Partition (AHP) and Brightness Correction (BC) algorithms to enhance infrared images to highlight the target object. As we know, infrared images usually capture thermal radiation characteristics. Please explain the necessity of enhancing infrared images.

Modified description:
A description of the necessity of infrared image enhancement has been added. The details are as follows: the main purpose of infrared image enhancement is to sharpen image edges and improve the blurred appearance of the infrared image. Stretching the gray levels within the Adaptive Histogram Partition is more flexible and realistic, while Brightness Correction exploits the moderate gray levels of the infrared image, yielding richer gray levels and stronger expressiveness. The results of enhancing the infrared source images are as follows: figures (a) and (c) show that the algorithm does not blindly raise the overall brightness of the image, which would cause over-enhancement. In figure (b), the brightness of the image is significantly improved and the infrared target is obvious. In figure (d), the blurred appearance of the infrared image is clearly improved, and its gray levels are more convenient for observation.
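For illustration only, here is a hedged sketch of what a histogram-partition-plus-brightness-correction enhancement might look like. Equal-population quantile partitions and a mean-shift brightness correction are assumptions made for this sketch; the paper's actual AHP and BC steps differ:

```python
import numpy as np

def enhance_ir(img, n_parts=4, target_mean=128.0):
    """Illustrative stand-in for AHP + brightness correction
    (not the paper's exact method)."""
    img = img.astype(np.float64)
    # adaptive partition: split gray levels at equal-population quantiles
    edges = np.quantile(img, np.linspace(0, 1, n_parts + 1))
    # each partition is stretched onto an equal output span of [0, 255]
    spans = np.linspace(0, 255, n_parts + 1)
    out = np.zeros_like(img)
    for k in range(n_parts):
        lo, hi = edges[k], edges[k + 1]
        if k == n_parts - 1:
            m = (img >= lo) & (img <= hi)   # include the maximum
        else:
            m = (img >= lo) & (img < hi)
        width = hi - lo if hi > lo else 1.0
        out[m] = spans[k] + (img[m] - lo) / width * (spans[k + 1] - spans[k])
    # brightness correction: shift toward a moderate mean gray level
    out = np.clip(out + (target_mean - out.mean()), 0, 255)
    return out
```

The quantile split mirrors the idea of adapting the partition to the actual histogram rather than using fixed thresholds, and the final shift keeps the result at a moderate, observable gray level instead of blindly raising brightness.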

Comment 13:
13. The edge gradient operator (Q^AB/F), mutual information (MI), and structural similarity (SSIM) are also important metrics; please add them.

Modified description:
The article has added the Q^AB/F, mutual information, and SSIM indicators in Table 12, Table 15, and Table 16 as required. The details are as follows: Q^AB/F evaluates the success of gradient-information transfer from the input images to the fused image. MI represents the amount of information the fused image obtains from the input images. SSIM reflects the structural similarity between the input images and the fused image.
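For reference, MI and a single-window SSIM can be computed as follows. This is a minimal sketch: the standard SSIM averages over local windows rather than using one global window, and Q^AB/F is omitted here since it requires Sobel edge-strength and orientation maps:

```python
import numpy as np

def mutual_info(a, b, bins=32):
    """MI between two images, estimated from their joint gray-level histogram."""
    h, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    p = h / h.sum()                       # joint distribution
    px = p.sum(axis=1, keepdims=True)     # marginal of a
    py = p.sum(axis=0, keepdims=True)     # marginal of b
    nz = p > 0
    return float((p[nz] * np.log(p[nz] / (px @ py)[nz])).sum())

def ssim_global(a, b, L=255.0):
    """Single-window (global) SSIM; the usual metric averages local windows."""
    c1, c2 = (0.01 * L) ** 2, (0.03 * L) ** 2
    mu_a, mu_b = a.mean(), b.mean()
    va, vb = a.var(), b.var()
    cov = ((a - mu_a) * (b - mu_b)).mean()
    return ((2 * mu_a * mu_b + c1) * (2 * cov + c2)) / \
           ((mu_a ** 2 + mu_b ** 2 + c1) * (va + vb + c2))
```

SSIM of an image with itself is exactly 1, and MI between an image and itself equals its gray-level entropy, which is why higher values of both indicate that more source information survives fusion.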