Abstract
This paper proposes a new method for low-light image enhancement that balances image brightness while preserving image details; it can improve the brightness and contrast of low-light images while maintaining their details. Traditional histogram equalization methods often lead to excessive enhancement and loss of details, resulting in an unclear and unnatural appearance. In the proposed method, the image is processed along two paths. On the one hand, the image is processed by double histogram equalization with a double automatic platform based on an improved cuckoo search (CS) algorithm: the image histogram is first segmented, and the platform limits are selected according to the histogram statistics and the improved CS technique. The sub-histograms are then clipped by the two pairs of platform limits and equalized separately. Finally, an image with balanced brightness and good contrast is obtained. On the other hand, the main structure of the image is extracted with a total variation model, and an image mask containing all the texture details is produced by removing this main structure from the image. Eventually, the final enhanced image is obtained by adding the texture-detail mask to the image with balanced brightness and good contrast. Compared with existing methods, the proposed algorithm significantly enhances the visual quality of low-light images according to both human subjective evaluation and objective evaluation indices. Experimental results show that the proposed method outperforms existing methods.
Citation: Li C, Zhu J, Bi L, Zhang W, Liu Y (2022) A low-light image enhancement method with brightness balance and detail preservation. PLoS ONE 17(5): e0262478. https://doi.org/10.1371/journal.pone.0262478
Editor: Gulistan Raja, University of Engineering & Technology, Taxila, PAKISTAN
Received: June 6, 2021; Accepted: December 24, 2021; Published: May 31, 2022
Copyright: © 2022 Li et al. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Data Availability: All relevant data are within the manuscript and its Supporting files.
Funding: This work was supported in part by the National Nature Science Foundation of China under Grant (U1404623) to CLL, in part by the Science and Technology Planning Project of Henan Province under Grant (212102210097) to CLL.
Competing interests: The authors have declared that no competing interests exist.
1 Introduction
People often obtain poor-quality images during image acquisition because of the complexity of the geographical environment and acquisition time. For example, low-light images often suffer from low contrast, uneven brightness and poor visibility due to the lack of a light source, which is extremely unfavorable for subsequent image processing such as feature extraction and image segmentation. However, low-light images are frequently produced in real life. The general causes of low light can be roughly divided into the following situations: (1) at night, images are very dark due to an insufficient or absent light source; (2) uneven illumination caused by the shading of buildings leads to dark regions in the image; (3) in special places, such as underground mines, interiors and other dark environments, it is also difficult to capture satisfactory images. This problem affects people's daily life, including photography, forensics, traffic monitoring and even engineering safety monitoring [1]. Therefore, enhancing low-light images has become a challenging and very important task [2]. The purpose of low-light image enhancement is to improve the brightness and clarity of the image so that people can obtain more useful and accurate information from it. In recent years, this issue has become a hot research topic, and many scholars have studied and discussed it.
In the past few years, a variety of image enhancement methods have been proposed. For low-light image enhancement, many classical algorithms exist, such as histogram equalization (HE) [3], gamma correction [4] and retinex theory [5]. HE algorithms mainly focus on enhancing image contrast. The HE method has received considerable attention due to its simple and direct implementation: it remaps the image gray levels so that the histogram of the image follows a uniform distribution [6]. However, traditional HE methods apply the same histogram transformation to the whole image and do not take into account the characteristics of particular images. For example, the enhancement effect of this method is not significant when there are areas of uneven brightness in the image. For a low-illumination image, due to its low dynamic range (low contrast), the image quality is degraded by histogram equalization with a single gray-level mapping. To solve this problem, various improved methods have appeared. Considering differences in local contrast, some scholars proposed adaptive histogram equalization (AHE) [7]. AHE performs histogram enhancement for each pixel by computing a transformation function over the pixel's neighborhood. It takes into account the local information of the image and effectively improves local contrast. However, its disadvantage is that it amplifies noise in homogeneous regions of the image while improving contrast. Therefore, the contrast limited adaptive histogram equalization (CLAHE) algorithm [8] was proposed. Based on AHE, CLAHE limits the histogram of each sub-block, which controls the noise introduced by AHE and makes the image contrast more natural. In addition, in order to overcome brightness problems, researchers have proposed HE-based methods that maintain the average brightness.
For example, the double histogram equalization method based on image mean segmentation [9] maintains the average brightness of the input image when the contrast of the enhanced image is low. However, the enhancement effect of this method for low-illumination images in extremely dark environments is not ideal, and a strong oversaturation phenomenon appears. Similarly, recursive mean segmentation histogram equalization (RMSHE) [10] and brightness-preserving dynamic histogram equalization (BPDHE) [11] can keep the average brightness of the input image in the output, but they may still not generate images with an ideal natural appearance. In 2012, Khan et al. proposed a brightness-preserving algorithm, weighted average multi-segment histogram equalization (WAMSHE) [12]. Unfortunately, this method often over-enhances some images because the pixel ratio of the image segmentation regions is too large. Using a platform-based clipping method to modify the histogram can avoid intensity saturation and reduce excessive enhancement. Ooi et al. [13] proposed the plateau-limited bi-histogram equalization (BHEPL) method in 2009. First, the input histogram is segmented according to the average value of the original histogram; then each sub-histogram is clipped using a plateau limit value, and traditional equalization is performed. In addition, there are quadrant dynamic histogram equalization (QDHE) [14] and dynamic quadrant histogram equalization with plateau limit (DQHEPL) [15]. DQHEPL is an improvement of BHEPL that divides the image into four sub-images for equalization, so it cannot keep the gray levels stable for some images. Histogram-based methods have achieved good results in enhancing image contrast, but they often overlook image details.
Gamma correction also improves brightness through a grayscale transformation of the image, and its effect on detail restoration is not outstanding [16]. Retinex theory advocates maintaining the color constancy of the image [17], which can retain the original details, but it is relatively weak at balancing image brightness, and halos and artifacts often appear [18]. Classical algorithms based on retinex theory include the single-scale retinex (SSR) algorithm [19], the multi-scale retinex (MSR) algorithm [20], multi-scale retinex with color restoration (MSRCR) [21], and so on. The low-light image enhancement via illumination map estimation (LIME) method proposed by X. Guo et al. [22] is also based on retinex theory, but it refines the initial illumination map by imposing a structure prior on it to obtain the final illumination map, so it achieves a better enhancement effect.
For the enhancement of low-light images, maintaining image details and improving image contrast are theoretically opposed, and it is difficult to balance the two in the same algorithm. In order to improve image contrast and retain image details at the same time, some new methods have emerged. To achieve a trade-off among detail enhancement, local contrast improvement and maintaining the natural feel of the image, reference [23] used a weighted fusion strategy to combine the advantages of different techniques. In reference [24], the image is divided into a structure layer and a texture layer for processing, so as to preserve image details. Reference [25] uses the discrete wavelet transform (DWT) to decompose the image into detail coefficients containing edge information (image details) and approximate coefficients containing illumination information, and the approximate coefficients are enhanced separately to protect image details. In addition, there are new methods based on intelligent optimization algorithms, such as the genetic algorithm (GA) [26], the particle swarm optimization (PSO) algorithm [27], the artificial bee colony (ABC) algorithm [28] and the cuckoo search (CS) algorithm [29]. Reference [30] introduces methods that use GA and PSO to enhance contrast in the spatial and frequency domains; owing to the introduction of GA and PSO, good image results are obtained. Such optimization algorithms can help solve many complex problems in image processing. Reference [31] treats the enhancement of gray-image contrast as an optimization problem, solved by combining PSO and the DWT. By adjusting a new expansion parameter, a target fitness criterion is maximized to enhance the contrast and local details of the image.
Reference [32] proposed an adaptive image enhancement method based on the non-subsampled contourlet transform (NSCT), fuzzy sets and ABC optimization, which helps to improve the low contrast and low clarity of images obtained in practical applications; this method optimizes the fuzzy parameters through the ABC algorithm, which improves its adaptability. In reference [33], a channel-splitting image enhancement method based on the discrete shearlet transform and the PSO algorithm is proposed. By using PSO, image artifacts and unnatural effects caused by the high-directional coefficients are eliminated, and the brightness component of the image is enhanced. Reference [34] first applied a sigmoid function to enhance remote sensing images, and then adopted a multi-objective particle swarm optimization algorithm to maximize the amount of information in the image while maintaining image intensity. Experiments show that this method can retain the significant details of the original image. Swarm intelligence optimization algorithms have made remarkable achievements in improving the adaptability of image enhancement algorithms and simplifying complex problems in image enhancement.
In summary, for low-light image enhancement, a fusion strategy is in theory effective for improving image contrast and preserving image details at the same time. Existing methods can be roughly divided into two ideas. One is to separate the original image details through a space transformation to achieve detail preservation. In this approach, few works consider that there will be noise in the detail components, and noise amplification may also occur during the inverse transformation, leading to unsatisfactory results. The other is weighted fusion of images with different advantages produced by different methods, but the adaptability of this model is relatively weak and it cannot be applied to all natural images; the results for images with different features may vary greatly. The method proposed in this paper is similar to the image fusion strategy, but it does not adopt weighted fusion. The purpose of this paper is to improve image contrast and balance image brightness while preserving the details of the original image. For improving image contrast and brightness, the HE method is outstanding. In this paper, the double histogram equalization with a double automatic platform (DHEDAPL) method based on an improved CS algorithm is used to process the image, yielding an image A with balanced brightness and good contrast. In reference [35], the authors proposed the bi-histogram equalization using two plateau limits (BHE2PL) method, but its plateau limits are calculated only from histogram statistics, and such a single platform-calculation method cannot maximize the advantages of the approach. In this paper, we derive the range of each platform limit from histogram statistics and use the improved CS algorithm to select the optimal value within this range.
As for retaining image details, this paper uses the total-variation-based method of reference [36] to extract the main structure of the image, and then obtains a texture detail mask B that contains all the texture details of the image. This method obtains the image details by extracting the image texture and has good noise resistance; at the same time, no inverse transformation is required, so the problem of noise amplification is avoided. Finally, we add the texture detail mask B directly to image A to obtain the final enhanced image C.
The structure of the rest of this paper is as follows. The second part introduces the related work. The third part describes the algorithm framework and implementation steps in detail. The fourth part presents experiments on the proposed method: it first verifies the anti-noise performance of the proposed model, then compares the method with other mainstream algorithms and analyzes the results from two aspects, human subjective evaluation and objective image quality indicators. The last part summarizes the work of this paper.
2 Related theories
As the introduction in the previous chapter shows, in recent years more and more scholars have tried to transform image problems into optimization problems. The application of intelligent optimization algorithms in image processing has therefore become increasingly mature; the PSO algorithm and the CS algorithm are frequently used, and both have their own advantages as well as their own limitations. The HE method is simple, has clear advantages in enhancing image contrast and brightness, and is favored by many scholars. However, the over-enhancement and loss of image details of traditional histogram methods have long concerned scholars and urgently need to be solved. This paper designs a low-light image enhancement method that balances image brightness and preserves image details, which aims to solve the above problems. In this chapter, we introduce three classical techniques, the particle swarm optimization algorithm, the cuckoo search algorithm and the histogram equalization algorithm, which pave the way for the next chapter, where the design of our method is presented.
2.1 PSO algorithm
The particle swarm optimization (PSO) algorithm was originally an evolutionary algorithm proposed by James Kennedy and Russell Eberhart in 1995 [27]. Its development is based on the study of the social behavior of bird predation. The basic idea of the standard PSO algorithm is as follows: assuming that the search space is M-dimensional and there are N particles in the space, the spatial position of the i-th particle can be expressed as $X_i = (x_{i1}, x_{i2}, \ldots, x_{iM})$ and its velocity as $V_i = (v_{i1}, v_{i2}, \ldots, v_{iM})$. The historical best position of the particle (personal best) is pbest, denoted $P_i = (p_{i1}, p_{i2}, \ldots, p_{iM})$, and the global best found by the population is gbest, denoted $G = (g_1, g_2, \ldots, g_M)$. The first step of the PSO algorithm is to initialize the population with random particles (random solutions), and then to search for the optimal solution iteratively. In each iteration, the particles update their velocity and position through the two extreme values pbest and gbest. For particles in the m-th (1 ≤ m ≤ M) dimension, the iterative update formulas are as follows:
$v_{im}^{t+1} = \omega v_{im}^{t} + c_1 r_1 (p_{im} - x_{im}^{t}) + c_2 r_2 (g_{m} - x_{im}^{t})$ (1)
$x_{im}^{t+1} = x_{im}^{t} + v_{im}^{t+1}$ (2)
where the parameter ω in Formula (1) is the inertia weight, c1 and c2 are learning factors, and r1 and r2 are random numbers in the range [0, 1]. The update of a particle's state depends on three aspects. The first is the inheritance of the particle's previous velocity, based on the particle's trust in its own previous state; this is the "inherited part". The second is learning from itself, an extension of the particle's own state; this is the "cognitive part". The third is learning from the group, which reflects the sharing of information and cooperation among particles; this is the "social part". The pseudo code of the PSO algorithm is shown in Fig 1.
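As a concrete illustration of Formulas (1) and (2), the following is a minimal sketch of a PSO maximizer; the fitness function, bounds and all hyper-parameter values here are illustrative assumptions, not settings from this paper.

```python
import numpy as np

def pso(fitness, bounds, n_particles=20, n_iter=50, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal PSO that maximizes `fitness` over a box-bounded search space."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    dim = len(lo)
    x = rng.uniform(lo, hi, size=(n_particles, dim))   # particle positions
    v = np.zeros((n_particles, dim))                   # particle velocities
    pbest = x.copy()                                   # personal best positions
    pbest_val = np.array([fitness(p) for p in x])
    gbest = pbest[pbest_val.argmax()].copy()           # global best position
    for _ in range(n_iter):
        r1 = rng.random((n_particles, dim))
        r2 = rng.random((n_particles, dim))
        # Formula (1): inertia ("inherited"), cognitive and social terms
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        # Formula (2): position update, kept inside the search box
        x = np.clip(x + v, lo, hi)
        vals = np.array([fitness(p) for p in x])
        improved = vals > pbest_val
        pbest[improved] = x[improved]
        pbest_val[improved] = vals[improved]
        gbest = pbest[pbest_val.argmax()].copy()
    return gbest, pbest_val.max()
```

For example, maximizing f(x) = −(x − 3)² over [0, 10] drives the swarm toward x ≈ 3.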
The PSO algorithm is widely used in image enhancement because of its simple principle and easy implementation, but it easily falls into local optima and its search accuracy is limited.
2.2 CS algorithm
In 2009, Yang and Deb developed an algorithm based on the parasitic reproduction behavior of cuckoos [29]. The cuckoo search (CS) algorithm mainly simulates the brood parasitism of the cuckoo and the flight mechanism of certain birds [37] to solve optimization problems effectively. The cuckoo relies on the nests of other hosts to lay its eggs, and the host bird treats the eggs as its own. If the host bird recognizes a foreign egg, it will either throw the egg away or abandon the nest and build a new one at a new location. Assuming that there is one egg in each nest, each egg represents a solution, and a new solution is represented by a new egg. The basic goal of the CS algorithm is to find the best nest in which to incubate the egg by random walk. The three idealized rules that form the theory of the cuckoo's nest search are:
- Each cuckoo lays one egg at a time and then places it in a randomly selected nest.
- In a randomly selected set of nests, those nests with high-quality eggs will be carried over to the next generation.
- The number of available host nests is fixed, and the probability that the eggs laid by the cuckoo will be found by the host bird is Pa∈[0, 1].
Based on the above three idealized rules, the cuckoo optimization search uses Formula (3) to update the position of the next generation nest:
$X_i^{t+1} = X_i^{t} + \alpha \oplus Levy(\lambda)$ (3)
where $X_i^t$ represents the position of the i-th nest in the t-th generation, ⊕ represents point-to-point multiplication, and α is the step-size factor used to control the step length, usually set to 1. Levy(λ) is a random search path generated by a Lévy flight with parameter λ, and its step length obeys the Lévy stable distribution:
$Levy(\lambda) \sim \mu = t^{-\lambda}, \quad 1 < \lambda \leq 3$ (4)
where μ obeys the normal distribution and λ is the power coefficient, with λ = 1.5. It can be seen from Formula (4) that the optimization path of the CS algorithm is composed of two parts, frequent short jumps and occasional long jumps. This strategy makes it easier for the algorithm to jump out of local optima. The pseudo code of the CS algorithm is shown in Fig 2.
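The Lévy-flight step and the nest update of Formula (3) can be sketched as follows; the step generator uses Mantegna's algorithm, a standard way to draw Lévy-stable steps, and the scaling toward the current best nest is a common CS implementation choice rather than something specified in this paper.

```python
import math
import numpy as np

def levy_step(dim, beta=1.5, rng=None):
    """One Levy-distributed step via Mantegna's algorithm (beta plays the role of lambda)."""
    rng = rng or np.random.default_rng()
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
             / (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0.0, sigma, dim)   # mu: normally distributed, as in Formula (4)
    v = rng.normal(0.0, 1.0, dim)
    return u / np.abs(v) ** (1 / beta)

def cuckoo_update(nests, best, alpha=1.0, rng=None):
    """Formula (3): X_i^{t+1} = X_i^t + alpha (+) Levy(lambda), with the step
    scaled by the distance to the current best nest (a common CS variant)."""
    rng = rng or np.random.default_rng()
    new = nests.copy()
    for i in range(len(nests)):
        step = levy_step(nests.shape[1], rng=rng)
        new[i] = nests[i] + alpha * 0.01 * step * (nests[i] - best)
    return new
```

The heavy-tailed ratio u/|v|^(1/β) produces mostly short jumps with occasional long ones, matching the search behavior described above.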
2.3 Histogram equalization
The histogram equalization (HE) algorithm [6] automatically adjusts image contrast using a grayscale transformation; it computes the gray-scale transformation function from the probability density function of the gray levels. It is a histogram correction method based on the cumulative distribution function. For an image X of size M×N, let (i,j) be the spatial coordinates of a pixel and X(i,j) its intensity. The pixel values range over k = 0, 1, …, L−1, where L is the number of gray levels in the image. The histogram H associated with the image, describing the frequency of each intensity value, is defined as:
$H(k) = n_k, \quad k = 0, 1, \ldots, L-1$ (5)
where $n_k$ represents the number of occurrences of intensity k in the image. The probability density function p(k) associated with H is defined as:
$p(k) = \frac{n_k}{M \times N}$ (6)
where p(k) represents the probability of intensity k. The cumulative density function c(k) is then:
$c(k) = \sum_{j=X_0}^{k} p(j)$ (7)
where X0 is the lowest intensity within the range used to calculate the cumulative density function. The transformation function f(k) associated with standard histogram equalization uses c(k) to map the input image to the dynamic range [X0, XL−1]. For the special case of HE with X0 = 0 and XL−1 = L−1, the mapping function is as follows:
$f(k) = X_0 + (X_{L-1} - X_0)\, c(k) = (L-1)\, c(k)$ (8)
Then the result image Y = {Y(i,j)} produced by the histogram equalization can be expressed as:
$Y(i,j) = f(X(i,j))$ (9)
The HE algorithm expands the dynamic range of the histogram, but at the same time the number of gray levels in the transformed image is reduced, which causes the loss of some details. For images with histogram peaks, HE processing can cause unnatural effects of excessive enhancement.
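Formulas (5) through (9) translate almost line for line into code; the following sketch assumes an 8-bit grayscale numpy array.

```python
import numpy as np

def histogram_equalize(img, L=256):
    """Standard HE on an 8-bit grayscale image, following Formulas (5)-(9)."""
    h = np.bincount(img.ravel(), minlength=L)     # Formula (5): H(k) = n_k
    p = h / img.size                              # Formula (6): p(k) = n_k / (M*N)
    c = np.cumsum(p)                              # Formula (7): cumulative density c(k)
    f = np.round((L - 1) * c).astype(np.uint8)    # Formula (8): f(k) = (L-1) c(k)
    return f[img]                                 # Formula (9): Y(i,j) = f(X(i,j))
```

Note how a single concentrated gray level is stretched to the top of the range, which is exactly the over-enhancement behavior criticized above.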
3 The proposed method
3.1 Framework description
Starting from the characteristics of low-light images, the proposed method aims to improve the contrast of low-light images and balance their brightness while retaining their detailed features. To obtain an image with good contrast and balanced brightness, this paper uses the DHEDAPL method based on the improved CS algorithm. First, the image histogram is segmented; then the histogram statistics and the improved CS algorithm are used to determine the platform limit values of each sub-histogram, each sub-histogram is clipped with its corresponding platform limits, and after obtaining the corrected histogram, traditional histogram equalization is performed on each sub-histogram. Here the histogram of the original image is divided in two, so there are four platform limits, and each limit has an upper and a lower bound for its selection range (eight bound values in total). With the improved CS optimization method, each platform value can be selected within its range, and we select the optimal value through an evaluation function. This is more adaptive than methods that calculate the platform limits in a fixed way. The image texture is a global feature, different from the color feature: it is not defined per pixel but must be computed statistically over a region containing multiple pixels. An image can be expressed in the form of "structure + texture", but the regularity of the texture is usually emphasized when extracting the image structure. We use the method based on the total variation model to extract the main structure of the image; the model does not require the texture to be regular or symmetric, and is general and random. By removing the main structure of the image, we obtain a texture layer that contains all the details of the image. Unlike the image detail layers extracted by other transformations, the image details obtained by texture extraction have good noise resistance. Finally, we add the texture to the image processed by DHEDAPL based on the improved CS algorithm to obtain the final enhanced image. Fig 3 shows the main schematic of the proposed low-light image enhancement. Next, we introduce the specific steps of each stage in detail.
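The structure-extraction model of reference [36] is more involved than can be reproduced here; as an illustrative stand-in, a basic ROF-style total-variation smoother shows the "structure + texture" split. The function names, parameters and the simple gradient-descent scheme are our assumptions, not the paper's implementation.

```python
import numpy as np

def tv_smooth(img, weight=1.0, n_iter=100, tau=0.1):
    """Gradient-descent ROF total-variation smoothing: a simple stand-in for
    the structure-extraction model; larger `weight` allows stronger smoothing."""
    f = img.astype(float)
    u = f.copy()
    for _ in range(n_iter):
        # forward differences (Neumann boundary via append)
        ux = np.diff(u, axis=1, append=u[:, -1:])
        uy = np.diff(u, axis=0, append=u[-1:, :])
        norm = np.sqrt(ux ** 2 + uy ** 2) + 1e-8
        px, py = ux / norm, uy / norm
        # divergence of the normalized gradient field
        div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
        # descend on E(u) = TV(u) + ||u - f||^2 / (2 * weight)
        u = u - tau * ((u - f) / weight - div)
    return u

def texture_mask(img, **kw):
    """Texture detail mask B = image minus its main structure."""
    return img.astype(float) - tv_smooth(img, **kw)
```

A structure-free (constant) image yields an all-zero mask, and adding the mask back to any brightness-corrected version of the structure restores the fine detail.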
3.2 Improved CS algorithm
Subsections 2.1 and 2.2 introduced the standard PSO and CS algorithms. These two swarm intelligence optimization algorithms are widely used in various fields and have been validated for image enhancement; both are effective optimization methods. However, each also shows certain shortcomings. The most obvious weakness of the PSO algorithm is that it easily falls into local optima. The CS algorithm uses frequent short jumps and occasional long jumps during optimization, which makes it easier to escape local optima; at the same time, the CS algorithm suffers from slow convergence and a lack of vitality in the later search stages [38], because it uses random selection when updating the population, so local updates cannot quickly find the truly optimal nest position. Inspired by the PSO algorithm, it is easier to find the best nest by traversing the population positions. Under the framework of the CS algorithm, the improved CS algorithm uses the traversal idea of the PSO algorithm to design a new search method, which escapes local optima easily and converges quickly. Its flow is shown in Fig 4. We then conducted a comparative experiment among the improved CS, PSO and CS algorithms; the results are shown in Fig 5. It can be seen that the convergence speed of the improved CS algorithm is obviously better than that of the PSO and CS algorithms, the particles can still maintain exploration vitality in the middle and later stages, and the optimization result is also better than that of the PSO and CS algorithms.
3.3 Double histogram equalization with double automatic platform
As described in subsection 3.1, we first perform DHEDAPL processing on the image. In the new equalization method proposed here, the original histogram is first divided into two sub-histograms. Four platform limits are used, with each sub-histogram using two limit values. Unlike other methods, the selection of the platform limit values in this paper is no longer based only on histogram statistics: for each platform limit, upper and lower bounds are set according to the histogram statistics, and the improved CS algorithm then searches and evaluates within this range, so the platform limit value is selected adaptively.
First, we calculate the average intensity M of the global histogram and segment the histogram according to M (as shown in Fig 6). The calculation formula for M is as follows:
$M = \sum_{k=0}^{L-1} k \, p(k)$ (10)
where p(k) is the probability of intensity k, calculated as in Formula (6).
After calculating M, the original histogram is divided into two sub-histograms: the lower histogram HD and the upper histogram HU. HD contains all intensity values in [Imin, M], and HU contains all intensity values in [M+1, Imax], where Imin is the minimum intensity value that appears at least once in the image and Imax is the maximum intensity value that appears in the image.
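The mean-based split of Formula (10) can be sketched directly; the function name and return convention are ours.

```python
import numpy as np

def split_histogram(img, L=256):
    """Split the global histogram at its mean intensity M (Formula (10))."""
    h = np.bincount(img.ravel(), minlength=L)
    p = h / img.size
    M = int(np.sum(np.arange(L) * p))   # mean intensity of the image
    HD = h[:M + 1]                      # lower sub-histogram: [Imin, M]
    HU = h[M + 1:]                      # upper sub-histogram: [M+1, Imax]
    return M, HD, HU
```

For an image split evenly between intensities 0 and 100, M is 50 and each sub-histogram receives half of the pixels.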
After segmenting the global histogram, the next task is to calculate the platform limit L of each sub-histogram. The classic platform value calculation is as follows:
$L = C \times P_k$ (11)
In the formula, C is a coefficient between 0 and 1, and $P_k$ is the peak value of the histogram, that is:
$P_k = \max_{k} H(k)$ (12)
This paper uses the local information obtained from the input histogram to predetermine the range of the platform limit L. Here, the gray-scale ratio GC of each sub-histogram is calculated and used instead of C as the coefficient in the formula to compute the base value of each platform limit:
$L_{D1} = GC_{D1} \times P_{kD}$ (13)
$L_{D2} = GC_{D2} \times P_{kD}$ (14)
$L_{U1} = GC_{U1} \times P_{kU}$ (15)
$L_{U2} = GC_{U2} \times P_{kU}$ (16)
where $P_{kD}$ and $P_{kU}$ are the maximum intensity peaks of the lower and upper sub-histograms respectively; $L_{D1}$ and $L_{D2}$ are the base values of the lower and upper platform limits for the lower sub-histogram, and $L_{U1}$ and $L_{U2}$ are the base values of the lower and upper platform limits for the upper sub-histogram. The gray-scale ratios $GC_{D1}$ and $GC_{D2}$ of the lower sub-histogram and $GC_{U1}$ and $GC_{U2}$ of the upper sub-histogram are defined as:
(17)
(18)
(19)
(20)
where $M_D$ and $M_U$ are the average intensities of the lower and upper sub-histograms respectively, and $\lambda_D$ and $\lambda_U$ are the gray-scale ratio differences of the lower and upper sub-histograms respectively, calculated as follows:
(21)
(22)
(23)
(24)
where $N_D$ and $N_U$ are the total numbers of pixels in the lower and upper sub-histograms respectively.
At this point, the base values $L_{D1}$, $L_{D2}$, $L_{U1}$ and $L_{U2}$ of each platform have been obtained, and finally we set the range of each platform value as follows:
(25)
(26)
(27)
(28)
Within these ranges, the improved CS algorithm is used to select the final platform limit values $L'_{D1}$, $L'_{D2}$, $L'_{U1}$ and $L'_{U2}$, and the histogram is then clipped. The application of the improved CS algorithm in this method is described in detail in the next section. For bins of the lower sub-histogram ($I_{min} \leq k \leq M$) whose count is less than or equal to $L'_{D1}$, the count is replaced by $L'_{D1}$; bins whose count is greater than $L'_{D2}$ are replaced by $L'_{D2}$. Similarly, for bins of the upper sub-histogram ($M+1 \leq k \leq I_{max}$) whose count is less than or equal to $L'_{U1}$, the count is replaced by $L'_{U1}$; bins whose count is greater than $L'_{U2}$ are replaced by $L'_{U2}$. The specific rules are as follows:
$H_c(k) = \begin{cases} L'_{D1}, & H(k) \leq L'_{D1} \\ H(k), & L'_{D1} < H(k) \leq L'_{D2} \\ L'_{D2}, & H(k) > L'_{D2} \end{cases}, \quad I_{min} \leq k \leq M$ (29)
$H_c(k) = \begin{cases} L'_{U1}, & H(k) \leq L'_{U1} \\ H(k), & L'_{U1} < H(k) \leq L'_{U2} \\ L'_{U2}, & H(k) > L'_{U2} \end{cases}, \quad M+1 \leq k \leq I_{max}$ (30)
The process of modifying the histogram is shown in Fig 7. After the histogram is modified, each sub-histogram can be independently equalized according to Formula (9) to obtain the enhanced image. This process can be expressed by the following formula:
$I_E = \Gamma(I)$ (31)
where I represents the original input image, Γ represents the double-histogram double-adaptive-platform processing, and $I_E$ represents the enhanced output image.
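The clipping rules and per-sub-histogram equalization can be sketched as follows; the plateau limits are passed in directly (in practice they come from the improved CS search), and mapping each sub-histogram onto its own half of the intensity range is a simplifying assumption of ours.

```python
import numpy as np

def clip_hist(h, lo_lim, hi_lim):
    """Raise occupied bins below the lower plateau and cap bins above the upper."""
    return np.clip(np.where(h > 0, np.maximum(h, lo_lim), 0), None, hi_lim)

def dhe_with_plateaus(img, limits, L=256):
    """limits = (LD1, LD2, LU1, LU2): lower/upper plateau pairs for the two
    sub-histograms, then independent equalization of each sub-histogram."""
    h = np.bincount(img.ravel(), minlength=L).astype(float)
    p = h / img.size
    M = int(np.sum(np.arange(L) * p))        # mean-intensity split point
    LD1, LD2, LU1, LU2 = limits
    hd = clip_hist(h[:M + 1], LD1, LD2)      # clipped lower sub-histogram
    hu = clip_hist(h[M + 1:], LU1, LU2)      # clipped upper sub-histogram
    # equalize each clipped sub-histogram over its own dynamic range
    out = np.empty(L)
    cd = np.cumsum(hd) / max(hd.sum(), 1)
    cu = np.cumsum(hu) / max(hu.sum(), 1)
    out[:M + 1] = np.round(M * cd)
    out[M + 1:] = np.round((M + 1) + (L - 2 - M) * cu)
    return out.astype(np.uint8)[img]
```

Because each sub-histogram is remapped within its own range, dark pixels stay below the split point and bright pixels above it, which is how the method avoids the brightness shift of global HE.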
3.4 Optimization of platform value with improved CS
As mentioned in subsection 3.3, the method to obtain the basic range of each platform value has been determined. After obtaining the ranges, we use the improved CS algorithm to explore the search space and judge the quality of the platform values through an evaluation function. The platform value optimization process can be divided into the following steps:
Step 1: Initialize the number of nests N and the dimension D, initialize the nest positions X within the basic range of the platform values, the discovery probability Pa, the maximum number of iterations T, and the fitness value set fitness;
Step 2: Traverse the positions X(i,:) in the population, use the parameters of each position as the platform values, perform DHEDAPL enhancement on the image, evaluate the corresponding enhanced image with the evaluation function, store the corresponding fitness value fitnessi, and compare to obtain the largest fitness value fitnessmax, saving the corresponding nest position as X(max,:);
Step 3: Update the population through Lévy flights to obtain a new population X′, traverse the population positions X′(i,:), and repeat the operation of Step 2 to obtain the corresponding image enhancement results and fitness values fitness′i, as well as the maximum fitness value fitness′max and its position X′(max,:);
Step 4: Determine whether to perform population replacement: if fitness′max > fitnessmax, replace the original population with the population updated by the Lévy flights, and update the maximum fitness value fitnessmax and position X(max,:) at the same time;
Step 5: Determine whether each cuckoo egg is discovered: if it is, update the population by Formulas (1) and (2); if not, keep the nest position.
Step 6: Judge whether the iteration stop condition is reached: if not, jump to Step 2; if it is met, exit the loop. The parameters carried by the last retained position with the maximum fitness value are taken as the final DHEDAPL platform values.
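As a rough illustration of the loop above, the sketch below runs a plain cuckoo search over a two-dimensional platform-value space. Everything here is a stand-in: the toy fitness function, the bounds and the parameter values are illustrative only (the real fitness is Formula (32) applied to the DHEDAPL-enhanced image), and the PSO-style modifications of our improved CS are omitted.

```python
import math
import numpy as np

rng = np.random.default_rng(0)

def levy_step(beta, size):
    # Mantegna's algorithm for Levy-flight step lengths
    num = math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
    den = math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2)
    sigma = (num / den) ** (1 / beta)
    u = rng.normal(0.0, sigma, size)
    v = rng.normal(0.0, 1.0, size)
    return u / np.abs(v) ** (1 / beta)

def cuckoo_search(fitness, lo, hi, n_nests=15, dim=2, pa=0.25, iters=60):
    X = rng.uniform(lo, hi, (n_nests, dim))            # Step 1: initialize nests
    fit = np.array([fitness(x) for x in X])            # Step 2: evaluate fitness
    best = X[fit.argmax()].copy()
    best_fit = fit.max()
    for _ in range(iters):
        # Step 3: Levy-flight update around the current best nest
        Xn = np.clip(X + 0.1 * levy_step(1.5, (n_nests, dim)) * (X - best), lo, hi)
        fn = np.array([fitness(x) for x in Xn])
        improved = fn > fit                            # Step 4: keep improvements
        X[improved], fit[improved] = Xn[improved], fn[improved]
        found = rng.random(n_nests) < pa               # Step 5: abandon discovered nests
        X[found] = rng.uniform(lo, hi, (int(found.sum()), dim))
        fit[found] = np.array([fitness(x) for x in X[found]])
        if fit.max() > best_fit:                       # track the global best
            best_fit, best = fit.max(), X[fit.argmax()].copy()
    return best, best_fit                              # Step 6: best platform values
```

In the real pipeline, `fitness` would enhance the image with the candidate platform values and score the result, which is far more expensive than the toy objective used here.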
The fitness function, as the criterion for judging the quality of a position, is the key to the effectiveness of the final algorithm. The design of the fitness function in this paper takes into account both the global and local information of the image. Its specific form is as follows:
(32) fitness(Ii) = log(log(sum(Iei))) × (n_edge(Iei)/(Mi×Ni)) × H(Ii)/MSE(Ii)
Where Ii represents the image enhanced with the parameters of the corresponding position; Iei is the Sobel edge image of Ii; sum(Iei) is the sum of the intensity values of all pixels of Iei; n_edge(Iei) is the number of edge pixels whose intensity value exceeds a fixed threshold; Mi×Ni is the size of Ii, that is, the number of pixels of Ii; H(Ii) is the entropy of image Ii (the larger the value, the more information the image contains; its calculation method is given in Formula (48)); MSE(Ii) is the image mean square error (the smaller the value, the smaller the degree of image distortion), calculated as:
(33) MSE = (1/(M×N)) Σi Σj (g(i,j) − h(i,j))²
Where g(i,j) is the gray value of the pixel in row i and column j of the input image, and h(i,j) is the gray value of the pixel in row i and column j of the enhanced image.
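The quantities entering the fitness function can be computed as below. This is an illustrative sketch: the Sobel edge image, entropy and MSE follow their standard definitions, while the way they are combined in `fitness` (and the edge threshold `t`, plus the small guards against log(0) and division by zero) is our reading of Formula (32), not code from the paper.

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def sobel_mag(img):
    # Sobel gradient magnitude: the edge image Ie
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    w = sliding_window_view(img.astype(float), (3, 3))
    gx = (w * kx).sum(axis=(-2, -1))
    gy = (w * kx.T).sum(axis=(-2, -1))
    return np.hypot(gx, gy)

def entropy(img):
    # Shannon entropy H(I) over the 8-bit gray-level histogram
    p = np.bincount(img.ravel(), minlength=256) / img.size
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def mse(a, b):
    # Mean square error between two images, Formula (33)
    return float(np.mean((a.astype(float) - b.astype(float)) ** 2))

def fitness(enhanced, original, t=20.0):
    ie = sobel_mag(enhanced)
    n_edge = int((ie > t).sum())       # edge pixels above the threshold t
    m, n = enhanced.shape
    # log(log(edge energy)) * edge density * entropy / MSE;
    # the +e and +1 terms guard the degenerate cases (our addition)
    return (np.log(np.log(ie.sum() + np.e)) * n_edge / (m * n)
            * entropy(enhanced) / (mse(enhanced, original) + 1.0))
```

Larger fitness rewards images that are edge-rich and information-rich while penalizing large deviation from the input.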
3.5 Image texture extraction
As shown in the framework flow of subsection 3.1, the other direction is to extract the texture of the image. An image can usually be decomposed into a "structure + texture" form, as shown in Formula (34); references [39, 40] extract the main structure of the image by means of total variation regularization, namely Formula (35):
(34) I = IS + IT
(35) IS = arg min Σx ((ISx − Ix)² + β·V(x))
Where I is the input, which can be the brightness channel; x is the image pixel index; IS is the extracted structure image; the data term (ISx−Ix)² makes the extracted structure similar to the structure of the input image; β is a weight; V(x) is the total variation regularizer, written as:
(36) V(x) = |∂hISx| + |∂vISx|
In the two-dimensional anisotropic expression, ∂h and ∂v are the partial derivatives in the two directions. Through experiments, it is found that this kind of total variation regularizer has limited ability to distinguish strong structural edges from textures. For different natural images, the texture characteristics are usually not fixed and follow no particular rules. In order to further separate texture and structural elements, a general pixel-level windowed total variation measure D(x) and a new windowed inherent variation measure L(x) are introduced here to form a new regularizer, as in Formula (37):
(37) Φ(x) = Dh(x)/(Lh(x)+ε) + Dv(x)/(Lv(x)+ε)
(38) Dh(x) = Σy∈R(x) gx,y·|∂hISy|
(39) Dv(x) = Σy∈R(x) gx,y·|∂vISy|
(40) Lh(x) = |Σy∈R(x) gx,y·∂hISy|
(41) Lv(x) = |Σy∈R(x) gx,y·∂vISy|
Where y∈R(x), and R(x) is a rectangular area centered on pixel x; Dh(x) and Dv(x) are the windowed total variations of pixel x in the h and v directions; Lh(x) and Lv(x) are a new kind of windowed inherent variation. They differ from Dh(x) and Dv(x) in that the modulus is taken not of each gradient but of the windowed sum: the absolute value moves from the individual gradients to the total variation. Because gradient values can be positive and negative, this windowed inherent variation is almost zero in undulating but uniform background areas and is large only at relatively strong edges, so it pays more attention to the overall spatial change of the image; ε is a small positive number used to avoid a zero denominator; gx,y is a weighting function defined according to spatial affinity, expressed as:
(42) gx,y ∝ exp(−((xi−yi)² + (xj−yj)²)/(2σ²))
gx,y is a Gaussian kernel function, where σ is the spatial scale controlling the window, which also affects the smoothness of the image to a certain extent. In summary, the new structure extraction method is as follows:
(43) IS = arg min Σx ((ISx − Ix)² + β·Φ(x))
The difference between Formula (43) and Formula (35) lies in the regularizer. The regularizer of Formula (43) is Formula (37), which introduces the new windowed inherent variations (40) and (41). The function of ε is the same as before; β is a weight, also used to control the smoothness of the image. By adjusting the values of β and σ, the amount of texture separated from the image can be controlled to a certain extent. Generally, the value range of β is (0, 0.05), and the value range of σ is (0, 6). Through experiments, we find that as β slowly increases, the amount of texture separated from the image also increases; the texture separation is particularly obvious when β is set to 0.02. Similarly, as σ increases, the amount of separated texture also increases; the separation of texture from structure is relatively clean when σ is set to 3. Considering the two parameters together, and as shown in Table 1, we finally set β = 0.015 and σ = 3. The effect is shown in Fig 8.
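For intuition, the windowed measures D and L can be sketched as below. This is a simplification, not the paper's solver: a separable Gaussian window stands in for the weighted sum over the rectangular window R(x), and forward differences stand in for the partial derivatives. The key behaviour survives: on an oscillating texture the signed gradients cancel, so L stays near zero while D stays large; on a step edge, where all gradients share one sign, D and L coincide.

```python
import numpy as np

def gauss_smooth(A, sigma):
    # Normalized separable Gaussian window sum; stands in for the
    # g_{x,y}-weighted sum over the window R(x)
    r = int(2 * sigma)
    k = np.exp(-np.arange(-r, r + 1) ** 2 / (2.0 * sigma ** 2))
    k /= k.sum()
    A = np.apply_along_axis(lambda m: np.convolve(m, k, mode="same"), 1, A)
    return np.apply_along_axis(lambda m: np.convolve(m, k, mode="same"), 0, A)

def windowed_variations(S, sigma=3.0):
    dh = np.diff(S, axis=1, append=S[:, -1:])   # gradient, h direction
    dv = np.diff(S, axis=0, append=S[-1:, :])   # gradient, v direction
    Dh = gauss_smooth(np.abs(dh), sigma)        # windowed sum of |gradient|, cf. (38)
    Dv = gauss_smooth(np.abs(dv), sigma)        # cf. (39)
    Lh = np.abs(gauss_smooth(dh, sigma))        # |windowed sum of gradient|, cf. (40)
    Lv = np.abs(gauss_smooth(dv, sigma))        # cf. (41)
    return Dh, Dv, Lh, Lv

# Oscillating texture: signed gradients cancel inside the window
texture = np.tile([0.0, 1.0], (16, 8))
Dh, _, Lh, _ = windowed_variations(texture)
# Step edge: all gradients share one sign
edge = np.zeros((16, 16)); edge[:, 8:] = 1.0
Dh_e, _, Lh_e, _ = windowed_variations(edge)
```

Because D/(L+ε) is therefore large on texture and close to 1/(1+ε) on strong edges, minimizing Formula (43) smooths texture away while keeping structural edges.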
Finally, when the structure IS is extracted, we can get the texture IT of the image, namely:
(44) IT = I − IS
3.6 Image texture addition
At the end of the enhancement framework described in subsection 3.1, we enhance the image with DHEDAPL optimized by the improved CS to obtain an image IE with suitable brightness and good contrast; its disadvantage is that the image details are not obvious enough. Meanwhile, we strip out the detail information IT of the original image through the new total variation regularization method described in subsection 3.5. At this point, we can add IT to IE, which enriches the details of the image while having little effect on the brightness and contrast characteristics of IE. Fig 9 compares performing DHEDAPL processing alone with adding the texture details of the original image after DHEDAPL processing. It can be seen that after the original image is processed by DHEDAPL, the brightness and contrast are improved significantly, but the image details are smoothed. When the texture details of the original image are added on this basis, the overall visual effect of the image is good and the brightness and contrast suit the human eye. Viewing the enlarged details, the image with texture details IT contains more details and clearer texture. Therefore, the final output image of the method proposed in this paper is defined as follows:
(45) IO = IE + IT
Where IO is the final output image we obtained, IE is the image obtained after the original image is processed by DHEDAPL, and IT is the texture details of the original image.
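In code, Formula (45) is a per-pixel addition. Clipping the sum back to the 8-bit range, as in this sketch, is our own safeguard and is not stated in the paper:

```python
import numpy as np

def fuse(I_E, I_T):
    # Formula (45): add the texture layer back onto the brightness-balanced image
    out = I_E.astype(np.float64) + I_T
    # Clamp to the valid 8-bit range (our assumption, not from the paper)
    return np.clip(out, 0, 255).astype(np.uint8)
```

A positive texture residue simply brightens the textured pixels; near-saturated pixels are held at 255 by the clip.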
(a) Original image (b) Image processed by DHEDAPL (c) The image processed by DHEDAPL and adding detailed texture.
4 Experiments
In this section, we show the experimental results of the proposed method on low-light images. All the code for this method is run in MATLAB R2019a, and all the experiments are performed on a PC with the Windows 10 operating system, 4 GB of memory and a 3.20 GHz i5 CPU. The test images selected in this article are all low-light images and can be divided into three groups: (1) 20 random images from the MIT-Adobe 5K data set [41]; (2) the first 100 images from the MIT-Adobe 5K data set; (3) 64 images from the DICM data set [42]. Due to the limited space of the article, representative images are selected for display. The first two groups of images are paired images, and the third group are unpaired images. These images all have the characteristics of low brightness, unclear image details and low contrast. In this chapter, we first verify the anti-noise performance of the new total variation model in making the image texture mask, and compare it with unsharp masking, the classical method for making a mask [43]. In addition, we compare the proposed method with other mainstream algorithms. The comparison algorithms include the adaptive gamma algorithm AGCWD [44], the new image illumination enhancement method proposed by Al-Ameen [45], dual-platform-limit-based double histogram equalization (BHE2PL) [33], the biologically inspired multi-exposure fusion framework method (BIMEF) [46], brightness-preserving dynamic histogram equalization (BPDHE) [11], a simple yet effective low-light image enhancement (LIME) method [22], the multi-scale retinex with color restoration algorithm (MSRCR) [21], the naturalness preserved enhancement algorithm (NPEA) [47] and the simultaneous reflectance and illumination estimation method (SRIE) [48].
Therefore, this chapter has three sections: (1) anti-noise performance analysis of the texture mask made by the new total variation model; (2) subjective evaluation of the experimental results; (3) objective evaluation of the experimental results of the proposed method.
4.1 Anti-noise performance analysis
In chapter 3, we introduced the method in detail. In order to enrich the texture details of the image processed by DHEDAPL, we use the new total variation model to extract the image texture information while avoiding noise as much as possible. In this section, we verify this through experiments and analyze the image information contained in the images before and after adding the texture information. At the same time, we also compare it with unsharp masking, the classic method [43]. Fig 10 shows the changes of two images before and after adding texture information. Although the images processed by DHEDAPL are satisfactory to the human eye in terms of contrast and brightness, the image details are not clear enough and are few; the contrast of the image with the unsharp mask is improved, but not much image detail is added; after the texture details extracted by the new total variation model are added to the DHEDAPL-processed image, the richness of details is obvious, for example the bees and stamens shown in the figure.
(a) Image processed by DHEDAPL (b) Image processed by DHEDAPL adding unsharp mask (c) Image processed by DHEDAPL adding texture mask.
In addition, we conducted comparative experiments on 20 randomly selected images from the MIT-5K data set. Table 1 shows the average values over the 20 images of four metrics before and after adding texture information: peak signal-to-noise ratio (PSNR), structural similarity (SSIM), average gradient (AG) and the natural image quality evaluator (NIQE). PSNR and SSIM are full-reference metrics. PSNR is the ratio of the peak signal energy of the image to the average energy of the noise and can be used to evaluate the image noise content; the larger the value, the less noise the image contains and the better the image quality. SSIM is an index measuring the similarity between two images; its value range is (0,1), and the closer the value is to 1, the more complete the image structure and the better the image quality. AG and NIQE are no-reference metrics: the value of AG reflects the gradient detail contained in the image, and the larger the value, the more detail information the image contains; the blind image quality index NIQE is a relatively comprehensive image evaluation index, and the smaller its value, the better the image's visual quality. As can be seen from the data in Table 2, the AG values of both the image with the unsharp mask and the image with the texture mask are improved compared with the previous images, especially the image with the texture mask. However, adding the unsharp mask is very likely to increase noise: from the average PSNR of the 20 images, the PSNR of the image with the unsharp mask decreases, while the PSNR of the image with the texture information extracted by the new total variation model does not decrease and is even improved. The SSIM index leads to the same conclusion. Moreover, on the NIQE index, the image with the texture information extracted by the new total variation model also receives the best evaluation.
This shows that our model not only increases the image texture information but also does not introduce excessive noise that would destroy the image quality and structure, achieving the purpose of enriching the image details.
4.2 Subjective evaluation
The visual perception of an image is one aspect of evaluating image quality, and people's perception of an image includes various aspects such as color, brightness and clarity. Different algorithms improve the various aspects of low-light images to different degrees. Figs 11–30 show the effect of each test image processed by the various methods. From these experimental results, our method as a whole brings the greatest improvement in image visual effect compared with the other algorithms.
(a) low-light image (b) AGCWD (c) Al-Ameen (d) BHE2PL (e) BIMEF (f) BPDHE (g) LIME (h) MSRCR (i) NPEA (j) SRIE (k) OURS (l) Normal light image.
(a) low-light image (b) AGCWD (c) Al-Ameen (d) BHE2PL (e) BIMEF (f) BPDHE (g) LIME (h) MSRCR (i) NPEA (j) SRIE (k) OURS (l) Normal light image.
(a) low-light image (b) AGCWD (c) Al-Ameen (d) BHE2PL (e) BIMEF (f) BPDHE (g) LIME (h) MSRCR (i) NPEA (j) SRIE (k) OURS (l) Normal light image.
(a) low-light image (b) AGCWD (c) Al-Ameen (d) BHE2PL (e) BIMEF (f) BPDHE (g) LIME (h) MSRCR (i) NPEA (j) SRIE (k) OURS (l) Normal light image.
(a) low-light image (b) AGCWD (c) Al-Ameen (d) BHE2PL (e) BIMEF (f) BPDHE (g) LIME (h) MSRCR (i) NPEA (j) SRIE (k) OURS (l) Normal light image.
(a) low-light image (b) AGCWD (c) Al-Ameen (d) BHE2PL (e) BIMEF (f) BPDHE (g) LIME (h) MSRCR (i) NPEA (j) SRIE (k) OURS (l) Normal light image.
(a) low-light image (b) AGCWD (c) Al-Ameen (d) BHE2PL (e) BIMEF (f) BPDHE (g) LIME (h) MSRCR (i) NPEA (j) SRIE (k) OURS (l) Normal light image.
(a) low-light image (b) AGCWD (c) Al-Ameen (d) BHE2PL (e) BIMEF (f) BPDHE (g) LIME (h) MSRCR (i) NPEA (j) SRIE (k) OURS (l) Normal light image.
(a) low-light image (b) AGCWD (c) Al-Ameen (d) BHE2PL (e) BIMEF (f) BPDHE (g) LIME (h) MSRCR (i) NPEA (j) SRIE (k) OURS (l) Normal light image.
(a) low-light image (b) AGCWD (c) Al-Ameen (d) BHE2PL (e) BIMEF (f) BPDHE (g) LIME (h) MSRCR (i) NPEA (j) SRIE (k) OURS (l) Normal light image.
(a) low-light image (b) AGCWD (c) Al-Ameen (d) BHE2PL (e) BIMEF (f) BPDHE (g) LIME (h) MSRCR (i) NPEA (j) SRIE (k) OURS (l) Normal light image.
(a) low-light image (b) AGCWD (c) Al-Ameen (d) BHE2PL (e) BIMEF (f) BPDHE (g) LIME (h) MSRCR (i) NPEA (j) SRIE (k) OURS (l) Normal light image.
(a) low-light image (b) AGCWD (c) Al-Ameen (d) BHE2PL (e) BIMEF (f) BPDHE (g) LIME (h) MSRCR (i) NPEA (j) SRIE (k) OURS (l) Normal light image.
(a) low-light image (b) AGCWD (c) Al-Ameen (d) BHE2PL (e) BIMEF (f) BPDHE (g) LIME (h) MSRCR (i) NPEA (j) SRIE (k) OURS (l) Normal light image.
(a) low-light image (b) AGCWD (c) Al-Ameen (d) BHE2PL (e) BIMEF (f) BPDHE (g) LIME (h) MSRCR (i) NPEA (j) SRIE (k) OURS (l) Normal light image.
(a) low-light image (b) AGCWD (c) Al-Ameen (d) BHE2PL (e) BIMEF (f) BPDHE (g) LIME (h) MSRCR (i) NPEA (j) SRIE (k) OURS (l) Normal light image.
(a) low-light image (b) AGCWD (c) Al-Ameen (d) BHE2PL (e) BIMEF (f) BPDHE (g) LIME (h) MSRCR (i) NPEA (j) SRIE (k) OURS (l) Normal light image.
(a) low-light image (b) AGCWD (c) Al-Ameen (d) BHE2PL (e) BIMEF (f) BPDHE (g) LIME (h) MSRCR (i) NPEA (j) SRIE (k) OURS (l) Normal light image.
(a) low-light image (b) AGCWD (c) Al-Ameen (d) BHE2PL (e) BIMEF (f) BPDHE (g) LIME (h) MSRCR (i) NPEA (j) SRIE (k) OURS (l) Normal light image.
(a) low-light image (b) AGCWD (c) Al-Ameen (d) BHE2PL (e) BIMEF (f) BPDHE (g) LIME (h) MSRCR (i) NPEA (j) SRIE (k) OURS (l) Normal light image.
AGCWD is one of the derivative methods of gamma correction based on adjusting the brightness of the image. It is outstanding when adjusting the brightness of ordinary images. However, in this article, AGCWD shows an unstable effect when processing low-light images: for some images it obtains good results, while for others the results are still unsatisfactory. In addition, the brightness of the overall output image is dark, and the images obviously contain dark areas, such as the trees and river in Fig 11(B), the ground in Figs 13(B) and 14(B), the lamp in Fig 20(B) and the cars in Fig 26(B).
Al-Ameen’s method is also aimed at improving the brightness of the image, but as can be clearly seen in the experimental part of this article, Al-Ameen’s method over-enhances low-light images, as in Figs 11(C), 15(C), 16(C), 18(C), 20(C) and 23(C). On the whole, the output images are whitened and the image details are overly smoothed.
The effect of BHE2PL is similar to that of Al-Ameen’s method, with over-enhancement and even serious over-exposure, as in Figs 11(D)–14(D), 17(D), 18(D), 21(D), 24(D) and 30(D), especially for the images with sky areas, Figs 14(D), 17(D), 18(D) and 24(D).
BIMEF is a fusion-based approach. The structure of the image enhanced by BIMEF is well preserved, the image brightness is improved to a certain extent and the image information content increases, but the contrast is not obvious. From the experimental results, the images processed by BIMEF are always dim in color and even have a hazy feeling.
BPDHE is relatively stable in terms of color output, but it improves the brightness of the image unevenly, and many dark areas in the image are not well corrected, such as the lifebuoy in Fig 15(F), the lawn in Fig 17(F), the car in Fig 18(F), the light bulb in Fig 20(F), the car in Fig 26(F) and the trees in Fig 27(F), so that these subjects are not clear. Secondly, the color of the sky area in some images is gray, as in Figs 17(F), 18(F) and 24(F), and the clouds in the sky in Figs 23(F) and 28(F) look strange.
The LIME algorithm is based on retinex theory; it mainly estimates the illumination map of the low-illumination image to realize enhancement. This algorithm has a stable enhancement effect and is often used as a baseline in image enhancement experiments. In our experimental results, it also shows a good processing effect. The overall visual feeling of the images is comfortable, but the texture details of individual images are not rich enough, such as the wall and wooden chair in Fig 13(G), the building in Fig 17(G), the text in Fig 19(G), the wooden board in Fig 22(G) and the fire hydrant in Fig 25(G). In addition, the colors of Figs 13(G), 17(G), 19(G), 23(G), 29(G) and 30(G) are slightly dim.
MSRCR is a classical multi-scale retinex algorithm. From the experimental results, its processing of low-illumination images produces serious artifacts, just as for other images.
NPEA achieves good results in naturalness preservation. This method is relatively good in color stability and maintaining the overall structure, but the image contrast is not obvious after processing. In addition, its restoration of the light and shadow of the image is very unnatural and does not conform to normal natural phenomena; as shown in Figs 20(I) and 27(I), there is an obvious bubble-like appearance around the light and shadow.
Most of the images processed by SRIE are close to those processed by NPEA, and the color of the processed images is relatively stable, but the overall brightness of the output images is darker than that of NPEA, as in Figs 12(J), 13(J), 14(J), 17(J), 19(J), 22(J) and 29(J). Because of the brightness problem, these images have a hazy feeling as a whole and the visual effect is not very good; compared with the normal-light images, there is still a certain gap.
The method proposed in this paper performs comprehensively in this experiment. Observing the effect of each method on each image, our method shows the effect most suitable for human observation, and compared with the other methods it is also closest to the image taken under normal light. In Fig 11, the image content includes petals, stamens and bees. It is difficult for the other methods to enhance image contrast, maintain image color and restore image details at the same time; our method accomplishes these tasks well. The small details in the image are well preserved: in the stamen part, the details are restored while the sense of color hierarchy is maintained, and the fine hairs of the bees can be seen clearly. For the other images, the effect is also excellent: the overall image contrast and brightness are suitable for human observation, with natural color and rich details. From these experimental results, compared with the other algorithms, our method receives the best evaluation of image visual effect.
4.3 Objective index evaluation
In order to evaluate the quality of experimental images more objectively, this paper uses several metrics such as image peak signal-to-noise ratio (PSNR), image structural similarity (SSIM), image information entropy (Entropy), image average gradient (AG), image contrast ratio (CR), image brightness difference (BD), image standard deviation (STD) and natural image quality evaluator (NIQE) to evaluate the experimental images from different angles. Next, we describe these indicators in detail.
4.3.1 Image peak signal-to-noise ratio (PSNR).
PSNR is the most widely used objective evaluation method for measuring the effect of image denoising. The larger the PSNR value, the less noise the image contains and the better the image quality. The specific calculation of PSNR is shown in Formula (46):
(46) PSNR = 10·log10(MaxValue²/MSE)
Where MaxValue = 2^bits − 1 is the maximum pixel value of the image; in this process, the gray values of the three RGB channels are processed, and bits is generally taken as 8. MSE represents the direct deviation between the enhanced image and the reference image; its calculation method was introduced in Formula (33).
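As a sanity check, Formula (46) can be implemented directly for a single 8-bit grayscale channel (our sketch):

```python
import numpy as np

def psnr(ref, img, bits=8):
    # Formula (46): 10*log10(MaxValue^2 / MSE), with MaxValue = 2^bits - 1
    mse = np.mean((ref.astype(np.float64) - img.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")            # identical images: no noise at all
    max_value = 2.0 ** bits - 1
    return float(10.0 * np.log10(max_value ** 2 / mse))
```

The worst case for 8-bit images (every pixel off by 255) yields PSNR = 0 dB, and identical images yield an infinite PSNR.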
From the PSNR values in Table 3, our method performs best in the experiment on the 20 random images of MIT-5K, followed by AGCWD and LIME. The PSNR value of each image processed by our method ranks in the top three, and most rank first. As shown in Table 4, in the experiment on the first 100 images of MIT-5K, the performance of our method is not weak either, ranking second only to AGCWD. This shows that our method is effective at improving image quality.
4.3.2 Image structural similarity (SSIM).
In addition to analyzing the differences between input and output images from a mathematical point of view, some researchers found that natural images show special structural features, such as strong correlations between pixels, which carry most of the important structural information of the image. Therefore, Wang et al. [49] proposed an image quality evaluation method based on structural similarity (SSIM). SSIM evaluates the quality of the processed image relative to the reference image by comparing the brightness l(x,y), contrast c(x,y) and structure s(x,y) of the two images; the three values are combined to obtain the overall similarity measure SSIM(x,y).
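For intuition, the combined measure can be written in a single-window (global) form, as in the sketch below. The standard SSIM of [49] averages this same expression over local Gaussian-weighted windows, which this simplification omits; the constants follow the usual (0.01L)² and (0.03L)² convention.

```python
import numpy as np

def ssim_global(x, y, L=255.0):
    # Single-window SSIM: luminance, contrast and structure terms combined
    c1, c2 = (0.01 * L) ** 2, (0.03 * L) ** 2   # stabilizing constants
    x = x.astype(np.float64)
    y = y.astype(np.float64)
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))
```

An image compared with itself scores exactly 1, and any structural disagreement pulls the score below 1.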
From the SSIM values in Table 5, the average SSIM obtained by our method is the highest on the 20 random images of MIT-5K, followed by LIME and AGCWD, with NPEA ranking fourth. On the whole, our method has a clear advantage: only one of the 20 random images ranks sixth in SSIM value, while the SSIM values of the other images are all in the top three. In addition, as shown in Table 6, the average SSIM of our method on the first 100 images of MIT-5K ranks second; AGCWD ranks first here and SRIE third. This shows that our method maintains the image structure well.
4.3.3 Image information entropy (entropy).
The information entropy of an image represents the average amount of information conveyed per gray-level pixel of the image and is used to measure the richness of information in the image. The larger the value, the richer the image details and the better the image quality. The calculation method of image information entropy is Formula (48):
(48) H = −Σk pk·log2 pk
Where pk is the probability of gray level k in the image. Tables 7 and 8 give the information entropy values of the images processed by each method. From the data in the tables, our method achieves outstanding results on the 20 random images of MIT-5K, where the entropy value of every image is excellent, and the average value of our method is among the highest in the three groups of images. In the group of 20 random images of MIT-5K, our method ranks first, AGCWD second and BPDHE third; in the group of the first 100 images of MIT-5K, AGCWD ranks first, our method second and LIME third; in the group of the DICM data set, our method ranks first, BPDHE second and NPEA third. This shows that the proposed method has considerable advantages in increasing the amount of image information.
4.3.4 Image average gradient (AG).
The average gradient of an image is the average value over all pixels of the image gradient map; it reflects the detailed texture variation of the image and its clarity. The larger the average gradient value, the richer the image levels and the clearer the image. The calculation formula of the average gradient AG is as follows:
(49) AG = (1/(M×N)) Σx Σy sqrt((∂hg(x,y)² + ∂vg(x,y)²)/2)
Where M×N represents the image size, and ∂hg(x,y) and ∂vg(x,y) represent the gradients in the horizontal and vertical directions respectively.
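Formula (49) can be implemented directly; in this sketch (ours), forward differences stand in for the gradients, trimmed to a common shape:

```python
import numpy as np

def average_gradient(img):
    g = img.astype(np.float64)
    dh = np.diff(g, axis=1)[:-1, :]   # horizontal differences
    dv = np.diff(g, axis=0)[:, :-1]   # vertical differences
    # root-mean of the two squared gradients, averaged over all pixels
    return float(np.mean(np.sqrt((dh ** 2 + dv ** 2) / 2.0)))
```

On a horizontal ramp g(x,y) = y, the horizontal difference is 1 and the vertical difference 0, so AG = 1/√2.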
Table 9 gives the AG values of the 20 random images of MIT-5K processed by each method. From the data in Table 9, most of the AG values obtained by our method rank in the top three; only one image ties for third with BHE2PL and one image ranks fourth. In addition, as shown in Table 10, in the group of 20 random images of MIT-5K, the average AG value of our method ranks first, LIME second and AGCWD third; in the group of the first 100 images of MIT-5K, LIME ranks first, our method second and AGCWD third; in the group of the DICM data set, LIME ranks first, our method second and BHE2PL third. Our method performs well on all three groups of experimental images, indicating outstanding performance in improving image clarity.
4.3.5 Image contrast ratio (CR).
Contrast ratio measures the different brightness levels between the brightest white and the darkest black in an image: the larger the range of differences, the greater the contrast ratio, and the smaller the range, the smaller the contrast ratio. Generally speaking, the higher the contrast ratio, the clearer and more striking the image and the brighter the colors, while with low contrast the whole picture appears gray. The calculation formula of image contrast is as follows:
(50) CR = Σδ δ(i,j)²·Pδ(i,j)
Where δ(i,j) = |i−j| is the gray-level difference between adjacent pixels, and Pδ(i,j) is the probability distribution of adjacent pixels whose gray-level difference is δ. Table 11 gives the contrast values of the 20 random images of MIT-5K processed by each method. Among all the test images, nine images processed by the proposed method rank first in contrast ratio, seven rank second, three rank third and only one ranks fourth. Al-Ameen, BHE2PL, LIME and MSRCR occasionally obtain better contrast than our method, but from the overall averages in Table 12, only LIME surpasses our method, and only in the last two groups of image experiments. Our method is thus relatively advantageous in contrast enhancement: although not the best performer on every image, its overall ranking is high. At the same time, the visual results of the previous section suggest a reason: the occasional lower contrast of our method comes from retaining a more natural appearance and image details in order to maintain a better visual effect.
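Since Σδ δ²·Pδ is simply the expected squared gray-level difference over adjacent pixel pairs, Formula (50) can be computed directly. This sketch (ours) uses 4-connected horizontal and vertical neighbours:

```python
import numpy as np

def contrast_ratio(img):
    g = img.astype(np.float64)
    diffs = np.concatenate([np.abs(np.diff(g, axis=0)).ravel(),   # vertical neighbours
                            np.abs(np.diff(g, axis=1)).ravel()])  # horizontal neighbours
    # sum over delta of delta^2 * P(delta) == mean of squared differences
    return float(np.mean(diffs ** 2))
```

A 0/1 checkerboard, where every adjacent pair differs by exactly 1, gives CR = 1.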
4.3.6 Image brightness difference (BD).
The average value of an image reflects its brightness: the larger the average value, the brighter the image, and vice versa. It can be calculated by the formula:
(51) μ = (1/(M×N)) Σx Σy g(x,y)
Where M×N is the image size and g(x,y) is the gray value of the pixel in row x and column y of the image. Under normal circumstances, brightness is an important indicator for evaluating image quality. An image needs sufficient brightness to give people a good visual experience, but it is not the case that the larger the brightness value, the better the visual effect. Reference [50] pointed out that when the image gray value is around 128, the visual effect is good. In Formula (52), BD represents the difference between the average brightness of the test image and 128; the smaller the value of BD, the better the visual effect of the enhancement result. It is defined as follows:
(52) BD = |μ − 128|
Table 13 gives the brightness difference values of the 20 random images of MIT-5K processed by each method. Among all the test images, the brightness of most of the images processed by the proposed method is close to 128, and from the averages of the three groups of images in Table 14, the proposed method is relatively stable at maintaining visually comfortable image brightness, performing consistently on low-illuminance images from different environments. NPEA is inferior to the proposed method, and the performance of AGCWD, BHE2PL and LIME is not stable: although they show good results on individual images, these three methods are not adaptable and do not show consistently good results on other low-light images.
4.3.7 Image standard deviation (STD).
The image gray standard deviation reflects the degree of dispersion of the image pixel values around the image mean. The larger the standard deviation, the better the image quality generally is. The calculation formula of the image gray standard deviation is as follows:
(53) STD = sqrt((1/(M×N)) Σx Σy (g(x,y) − μ)²)
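Formulas (51)–(53) reduce to a mean and a standard deviation over the gray levels, as in this sketch (ours):

```python
import numpy as np

def brightness_difference(img):
    # Formulas (51)-(52): |mean gray level - 128|
    return float(abs(img.astype(np.float64).mean() - 128.0))

def gray_std(img):
    # Formula (53): standard deviation of the gray levels about the mean
    return float(img.astype(np.float64).std())
```

A uniformly mid-gray image scores BD = 0 and STD = 0; a high-contrast image with the same mean keeps BD small while STD grows.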
Table 15 gives the standard deviation values of the 20 random images of MIT-5K processed by each method. Among all the test images, the method proposed in this article ranks first on nine test images, second on six, third on two and fourth on two; Fig 31 shows more intuitively that although the STD values of the images processed by the proposed method are not always optimal, its overall performance is better. In addition, as shown in Table 16, the proposed method, Al-Ameen's method and BHE2PL all perform well on the STD data. Over the three groups of test images, Al-Ameen's method ranks second, first and third respectively, BHE2PL ranks third, third and first respectively, while our method ranks first, second and second respectively. Compared with the other methods, ours is relatively stable on low-light images from different environments, which shows that the proposed method also has a certain advantage in improving image quality.
4.3.8 Natural image quality evaluator (NIQE).
After evaluating the experimental images with the above indices, we also evaluate them with NIQE. NIQE is based on a set of “quality aware” features fitted to a multivariate Gaussian (MVG) model; the features are derived from a simple but highly regularized natural scene statistics (NSS) model. The NIQE index of a given test image is then the distance between the MVG model of the NSS features extracted from the test image and the MVG model of the quality-aware features extracted from a corpus of natural images [51]. Its evaluation results agree well with human visual perception of the image: the smaller the NIQE value, the better the visual effect of the image.
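For illustration, the final distance step of NIQE, comparing the two multivariate Gaussian models, can be sketched as below. The NSS feature extraction and MVG fitting described in [51] are omitted, so this is only the last stage of the index, shown under the assumption that both models are given as a mean vector and a covariance matrix:

```python
import numpy as np

def mvg_distance(mu1, sigma1, mu2, sigma2) -> float:
    """NIQE-style distance between two multivariate Gaussian (MVG) models:
    a Mahalanobis-like distance using the average of the two covariances.
    Smaller values mean the test image's statistics are closer to the
    natural-image model."""
    diff = np.asarray(mu1, dtype=np.float64) - np.asarray(mu2, dtype=np.float64)
    pooled = (np.asarray(sigma1, dtype=np.float64)
              + np.asarray(sigma2, dtype=np.float64)) / 2.0
    # Pseudo-inverse guards against a singular pooled covariance.
    return float(np.sqrt(diff @ np.linalg.pinv(pooled) @ diff))
```

Identical models give a distance of 0, and the distance grows as the test image's statistics drift away from those of natural images, matching the "smaller is better" reading of NIQE.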
Table 17 lists the NIQE values of the 20 random MIT-5K images processed by each method. Most of the images processed by the proposed method obtain relatively good NIQE values, and overall, as shown in Table 18, the NIQE values of the proposed method are also very good on the three groups of test images, with the smallest average over the three groups. This shows that the proposed method achieves good visual effects in image processing.
5 Conclusion
In this paper, a low-light image enhancement algorithm with brightness equalization and detail preservation is proposed. The image is processed along two paths. On the one hand, the proposed double histogram equalization method with double automatic platforms, based on the improved CS algorithm, is used to improve image brightness and contrast. On the other hand, an image detail mask is produced by a method based on the total variation model. Finally, the results of the two branches are merged to obtain the final enhanced image. The main contributions of this paper are: (1) the method can preserve image detail while equalizing image brightness and improving image contrast; (2) a new search optimization strategy based on CS and PSO is proposed, which is less prone to falling into local optima and maintains its exploration ability in later iterations, making it better suited to finding the optimal value. Experiments show, from both human subjective evaluation and objective indices, that the method performs well on low-light images and is suitable for low-light images produced in various environments.
References
- 1. Wang W, Chen Z, Yuan X, et al. Adaptive Image Enhancement Method for Correcting Low-Illumination Images [J]. Information Sciences, 2019, 496.
- 2. Lan X, Zhang S, Yuen P C, et al. Learning Common and Feature-Specific Patterns: A Novel Multiple-Sparse-Representation-Based Tracker [J]. IEEE Transactions on Image Processing, 2018, 27(4):2022–2037. pmid:29989985
- 3. Gonzalez R C, Woods R E. Digital Image Processing, 2nd ed. Upper Saddle River, NJ, USA: Prentice-Hall, 2002.
- 4. Rahman S, Rahman M M, Abdullah-Al-Wadud M, et al. An adaptive gamma correction for image enhancement [J]. Eurasip Journal on Image and Video Processing, 2016, 2016(1):35.
- 5. Wu Z Z. Research on Image Enhancement Algorithm Based on Retinex Theory [J]. Modern Computer, 2016.
- 6. Li L P, Sun S F, Xia C, et al. Survey of Histogram Equalization Technology [J]. Computer Systems and Applications, 2014.
- 7. Pizer S M, Amburn E P, Austin J D, et al. Adaptive histogram equalization and its variations[J]. Computer vision, graphics, and image processing, 1987, 39(3): 355–368.
- 8. Zuiderveld K. Contrast limited adaptive histogram equalization[C]. Graphics gems IV, 1994: 474–485.
- 9. Kim Y T. Contrast enhancement using brightness preserving bi-histogram equalization [J]. IEEE Transactions on Consumer Electronics, 1997, 43(1):1–8.
- 10. Chen S D, Ramli A R. Contrast enhancement using recursive mean-separate histogram equalization for scalable brightness preservation [J]. IEEE Transactions on consumer Electronics, 2003, 49(4): 1301–1309.
- 11. Ibrahim H, Kong N. Brightness Preserving Dynamic Histogram Equalization for Image Contrast Enhancement [J]. IEEE Transactions on Consumer Electronics, 2007, 53(4):1752–1758.
- 12. Khan M F, Khan E, Abbasi Z A. Multi Segment Histogram Equalization for Brightness Preserving Contrast Enhancement [C]// IEEE International Conference on Signal Processing. IEEE, 2012.
- 13. Chen H O, Kong N, Ibrahim H. Bi-histogram equalization with a plateau limit for digital image processing [J]. IEEE Transactions on Consumer Electronics, 2009, 55(4):2072–2080.
- 14. Chen H O, Isa N. Quadrants dynamic histogram equalization for contrast enhancement [J]. IEEE Transactions on Consumer Electronics, 2010, 56(4):2552–2559.
- 15. Chen H O, Isa N. Adaptive contrast enhancement methods with brightness preserving [J]. IEEE Transactions on Consumer Electronics, 2010, 56(4):2543–2551.
- 16. Cao G, Huang L, Tian H, et al. Contrast enhancement of brightness-distorted images by improved adaptive gamma correction [J]. Computers & Electrical Engineering, 2018, 66:569–582.
- 17. Land E H. The retinex [J]. American Scientist, 1964, 52(2):247–264.
- 18. Xia H, Liu M. Non-uniform Illumination Image Enhancement Based on Retinex and Gamma Correction [J]. Journal of Physics: Conference Series, 2019.
- 19. Rahman Z. Properties of a center/surround Retinex: Part 1: Signal processing design [J]. NASA Contractor Report, 1995, 198194:13.
- 20. Rahman Z, Jobson D J, Woodell G A. Multi-scale retinex for color image enhancement [C]// Proceedings of 3rd IEEE International Conference on Image Processing. IEEE, 1996, 3:1003–1006.
- 21. Parthasarathy S, Sankaran P. An automated multi scale retinex with color restoration for image enhancement [C]// 2012 National Conference on Communications (NCC). IEEE, 2012:1–5.
- 22. Guo X, Li Y, Ling H. LIME: Low-light Image Enhancement via Illumination Map Estimation [J]. IEEE Transactions on Image Processing, 2017, 26(2):982–993.
- 23. Fu X, Zeng D, Huang Y, et al. A fusion-based enhancing method for weakly illuminated images [J]. Signal Processing, 2016, 129:82–96.
- 24. Jiang X, Yao H, Liu D. Nighttime image enhancement based on image decomposition [J]. Signal, Image and Video Processing, 2019, 13(1):189–197.
- 25. Mahmood A, Khan S A, Hussain S, Almaghayreh E M. An Adaptive Image Contrast Enhancement Technique for Low-Contrast Images [J]. IEEE Access, 2019, 7:161584–161593.
- 26. Can B, Beham A, Heavey C. A comparative study of genetic algorithm components in simulation-based optimisation [C]// Winter Simulation Conference. IEEE, 2008.
- 27. Eberhart R, Kennedy J. Particle swarm optimization [C]// Proceedings of the IEEE International Conference on Neural Networks. 1995, 4:1942–1948.
- 28. Karaboga D, Akay B. A comparative study of Artificial Bee Colony algorithm[J]. Applied Mathematics and Computation, 2009, 214(1):108–132.
- 29. Yang X-S, Deb S. Cuckoo search via Lévy flights[C]. 2009 World Congress on Nature & Biologically Inspired Computing (NaBIC), 2009:210–214.
- 30. Singh R P, Dixit M, Silakari S. Image Contrast Enhancement Using GA and PSO: A Survey [C]// 2014 International Conference on Computational Intelligence and Communication Networks. Bhopal, 2014:186–189. https://doi.org/10.1109/CICN.2014.51
- 31. Pareyani A, Mishra A. Low Contrast Gray Image Enhancement using Particle Swarm Optimization (PSO) with DWT [J]. International Journal of Computer Applications, 2015, 130(8):8–13.
- 32. Wu Y Q, Yin J, Dai Y M. Image enhancement in NSCT domain based on fuzzy sets and artificial bee colony optimization [J]. Journal of South China University of Technology, 2015, 43(1):59–65.
- 33. Premkumar S, Parthasarathi K. An Efficient Method for Image Enhancement by Channel Division Method Using Discrete Shearlet Transform and PSO Algorithm [J]. Social Science Electronic Publishing, 2016.
- 34. Malik R, Dhir R, Mittal S K. Remote sensing and landsat image enhancement using multi objective PSO based local detail enhancement [J]. Journal of Ambient Intelligence and humanized computing, 2019, 10(9):3563–3571.
- 35. Aquino-Morínigo P B, Lugo-Solís F R, Pinto-Roa D P, et al. Bi-histogram equalization using two plateau limits [J]. Signal, Image and Video Processing, 2017.
- 36. Xu L, Yan Q, Xia Y, et al. Structure extraction from texture via relative total variation[J]. ACM Transactions on Graphics, 2012, 31(6).
- 37. Viswanathan G, Afanasyev V, Buldyrev S V, et al. Lévy flights in random searches [J]. Physica A: Statistical Mechanics and its Applications, 2000, 282(1–2):1–12.
- 38. Fister I Jr, Yang X S, Fister D, et al. Cuckoo Search: A Brief Literature Review [J]. 2014:49–62.
- 39. Zhang Y, Pu Y F, Hu J R, et al. A class of fractional-order variational image inpainting models [J]. Applied Mathematics & Information Sciences, 2012, 6(2):299–306.
- 40. Wu H, Wu Y, Wen Z. Texture Smoothing Based on Adaptive Total Variation [J]. Advances in Intelligent Systems & Computing, 2014, 277:43–54.
- 41. Bychkovsky V, Paris S, Chan E, et al. Learning photographic global tonal adjustment with a database of input/output image pairs [C]// 2011 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2011.
- 42. Lee C, Lee C, Kim C S. Contrast enhancement based on layered difference representation [C]// 2012 19th IEEE International Conference on Image Processing. IEEE, 2012:965–968.
- 43. Deng G. A Generalized Unsharp Masking Algorithm [J]. IEEE Transactions on Image Processing, 2011, 20(5):1249–1261. pmid:21078571
- 44. Huang S, Cheng F, Chiu Y. Efficient Contrast Enhancement Using Adaptive Gamma Correction With Weighting Distribution [J]. IEEE Transactions on Image Processing, 2013, 22(3):1032–1041. pmid:23144035
- 45. Al-Ameen Z. Nighttime image enhancement using a new illumination boost algorithm [J]. IET Image Processing, 2019, 13(8):1314–1320.
- 46. Ying Z, Li G, Gao W. A bio-inspired multi-exposure fusion framework for low-light image enhancement [J]. arXiv preprint arXiv:1711.00591, 2017.
- 47. Wang S, Zheng J, Hu H M, et al. Naturalness Preserved Enhancement Algorithm for Non-Uniform Illumination Images [J]. IEEE Transactions on Image Processing, 2013, 22(9).
- 48. Fu X, Zeng D, Huang Y, et al. A Weighted Variational Model for Simultaneous Reflectance and Illumination Estimation [C]// 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2016.
- 49. Wang Z, Bovik A C, Sheikh H R, et al. Image quality assessment: from error visibility to structural similarity [J]. IEEE Transactions on Image Processing, 2004, 13:600–612. pmid:15376593
- 50. Bing Y, Mei Y. Enhancement of remote sensing image based on contourlet transform [J]. Computer Engineering and Design, 2008.
- 51. Mittal A, Soundararajan R, Bovik A C. Making a “Completely Blind” Image Quality Analyzer [J]. IEEE Signal Processing Letters, 2013, 20(3):209–212.