
A new Gaussian curvature of the image surface based variational model for haze or fog removal

  • Muhammad Arif ,

    Contributed equally to this work with: Muhammad Arif, Noor Badshah, Tufail Ahmad Khan, Asmat Ullah

    Roles Conceptualization, Investigation

    Affiliation Department of Basic Sciences, University of Engineering and Technology, Peshawar, Pakistan

  • Noor Badshah ,

    Contributed equally to this work with: Muhammad Arif, Noor Badshah, Tufail Ahmad Khan, Asmat Ullah

    Roles Formal analysis, Investigation, Validation

    Affiliation Department of Basic Sciences, University of Engineering and Technology, Peshawar, Pakistan

  • Tufail Ahmad Khan ,

    Contributed equally to this work with: Muhammad Arif, Noor Badshah, Tufail Ahmad Khan, Asmat Ullah

    Roles Formal analysis, Supervision

    Affiliation Department of Basic Sciences, University of Engineering and Technology, Peshawar, Pakistan

  • Asmat Ullah ,

    Contributed equally to this work with: Muhammad Arif, Noor Badshah, Tufail Ahmad Khan, Asmat Ullah

    Roles Conceptualization, Formal analysis, Investigation, Software, Writing – review & editing

    asmatullah75@gmail.com

    Affiliation Department of Basic Sciences, University of Engineering and Technology, Peshawar, Pakistan

  • Hena Rabbani ,

    Roles Conceptualization, Formal analysis, Funding acquisition

    ‡ HR, HA and NB also contributed equally to this work.

    Affiliation Department of Basic Sciences, University of Engineering and Technology, Peshawar, Pakistan

  • Hadia Atta ,

    Roles Data curation, Formal analysis, Funding acquisition, Methodology

    ‡ HR, HA and NB also contributed equally to this work.

    Affiliation Department of Mathematics, Islamia College Peshawar, Peshawar, Pakistan

  • Nasra Begum

    Roles Funding acquisition, Investigation, Methodology, Project administration

    ‡ HR, HA and NB also contributed equally to this work.

    Affiliation Department of Mathematics, Shaheed Benazir Bhutto Women University, Peshawar, Pakistan

Abstract

Outdoor images are usually affected by haze, which limits visibility and reduces contrast. Removing haze from real-world images is a challenging task. Recently, many mathematical models have been proposed for effective haze removal; however, these models may produce staircase effects, lower the image contrast, or smooth object edges. In this paper, we propose a model based on the Gaussian curvature of the image surface for image de-hazing. The atmospheric veil is estimated using the dark channel prior (DCP), which significantly reduces artifacts at image edges and increases accuracy. The transmission map is then refined into a high-quality map to remove haze or fog from gray and color images. To our knowledge, this is the first time the DCP has been combined with Gaussian curvature for image de-hazing/de-fogging. The augmented Lagrangian method is used to minimize the proposed functional, which leads to a system of partial differential equations. For fast convergence, the fast Fourier transform (FFT) is used to solve this system of PDEs. The performance of the proposed model is compared with other state-of-the-art models both qualitatively and quantitatively. The proposed model is tested on various real and synthetic images and shows better performance in staircase-effect reduction, haze/fog removal, contrast enhancement, and preservation of corners and sharp edges.

1 Introduction

Atmospheric light is scattered in different directions by atmospheric particles (e.g. fog, haze, smog, smoke, and mist) in the air. The incoming light blends with a layer of ambient light (air-light) reflected by atmospheric particles in the line of sight, depending on the turbidity and the distance from the scene to the observer (visual range). This causes low contrast, reduced visibility, colour distortion, and deterioration of essential elements of the photographed scene, and it hampers the recognition and detection of targets in video surveillance systems by blocking the direct scene transmission. It is therefore imperative to improve visibility and restore the essential features of the scene with a simple and effective image restoration algorithm.

The removal of haze or fog is considered an important procedure, since haze- or fog-free images are visually pleasing and can improve the performance of computer vision tasks such as object detection, classification, and visual navigation. According to Koschmieder [1], a degraded scene, formed as displayed in Fig 1, can be formulated mathematically as follows
I(x) = J(x)T(x) + I∞(1 − T(x)),   (1)
where I(x) is the observed hazy or foggy image intensity, J(x) is the scene intensity, I∞ is the atmospheric light intensity, and T(x) = e^(−βd(x)) is the transmission map (the fraction of scene-reflected light captured by the sensor, depending on the amount of haze), taking values between 0 (no visibility) and 1 (clear visibility), with degradation coefficient β and distance d(x) from the scene point to the observer. The first term T(x)J(x) on the R.H.S. of Eq (1) is the direct attenuation, describing the radiance of the scene, while the second term I∞(1 − T(x)) is the air-light (the result of atmospheric light scattered by atmospheric particles), which distorts the scene radiance. The key task in recovering J from an observed hazy frame I is estimating the transmission T and the atmospheric light I∞. Many researchers have made significant progress in estimating T from a single image [2–4], taking the atmospheric light I∞ as an average of, or as the brightest pixel in, I.
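As a concrete illustration of Eq (1), the model can be used to synthesize a hazy image from a clear scene and a depth map. This is a sketch under our own naming, not part of the paper's method:

```python
import numpy as np

def apply_haze_model(J, d, A_inf, beta=1.0):
    """Koschmieder model: I(x) = J(x) T(x) + A_inf (1 - T(x)),
    with transmission T(x) = exp(-beta * d(x)).
    J: clear scene (H x W or H x W x 3), d: depth map (H x W),
    A_inf: scalar atmospheric light, beta: degradation coefficient."""
    T = np.exp(-beta * d)
    if J.ndim == 3:
        T = T[..., None]          # broadcast T over the color channels
    return J * T + A_inf * (1.0 - T)
```

With d(x) = 0 the transmission is 1 and I equals J; as d(x) grows, I tends to the atmospheric light A_inf, which is exactly the visibility loss described above.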

It should be noted that an inaccurate assumption or estimation of the atmospheric light I∞ leads to imprecise results in the recovery of the haze-free image J. Several methods assume, without a rational basis, that pixels with dense fog are pure white and take the transmission to be in a direct relation with hazy colors [5–7]. Because this yields an unsatisfactory transmission map, they proceed to a Laplacian matting algorithm or a guided filter algorithm for refinement. The inappropriate assumption of linearity between the transmission map and haze colors, applied by these two algorithms, reflects every variation of hazy colors on the transmission map without distinction. Taking advantage of the physical properties of the air-light map (atmospheric veil), Cho et al. [9] used a variational approach to estimate the air-light map in order to restore a fog-free image. This approach can adequately remove haze and satisfies the edge-preserving property, but the restored images have very low contrast; a variety of histogram equalization methods were therefore applied to improve the contrast of the restored images. During image de-hazing, it should be taken into account that an accurate estimate of the depth or medium transmission leads to the best de-hazing results. Most de-hazing procedures are based on pre-estimated depth or medium transmission information. Fang et al. [10] estimated the depth map and de-hazed the image using a total variation (TV) regularizer, while [11, 12] used the TV regularizer to refine the medium transmission. The TV regularizer has the advantage of preserving image edges, but produces blocky effects in homogeneous regions due to its piecewise constant solutions. A second-order total generalised variation (TGV) regularizer was introduced to overcome these blocking artifacts; it reduces staircasing effects while preserving edges [13, 14].

De-hazing methods are mainly classified into enhancement-based and physics-based methods. Enhancement-based methods do not take the mechanism of image degradation into account; they simply aim to improve the contrast of the degraded image and ignore colour restoration. The simplest, fastest, and most widely used enhancement-based methods are classical retinex [15] and histogram-based methods [16]. These methods may fail to remove the visual defects of images with inhomogeneous fog: the image can be over-restored and contain halo artifacts.

Physics-based methods design a physical model of a hazy image's degradation phenomenon and then use that model to restore the haze-free image. In contrast to enhancement-based methods, physics-based methods yield natural and accurate de-hazing results, as demonstrated by many scientific theories and experimental results. Therefore, the main subject of this paper is de-hazing based on the physical model. Among physics-based methods, the dark channel prior (DCP) [17] is the most widely used; it assumes that, in the non-sky patches of a haze-free scene, some dark pixels have very low intensity, approximately zero, in at least one of the RGB color channels. Since the DCP is based on non-sky regions, it gives very poor results for images with sky regions. He et al. [17] used a soft matting algorithm to remove halo artifacts generated by the kernel operation and to refine the transmission map. This improved the de-hazing results considerably, but the soft matting algorithm is costly and time consuming. He et al. [18] later employed guided filtering in place of soft matting for transmission map optimization; guided filtering gives similar results but is much faster. The de-hazing performance of the DCP was further improved by Xu et al. [19] and Li et al. [20]. Color lines in RGB space can also better approximate pixels, as shown by [21–23]. Tarel et al. [24] assumed the atmospheric veil to vary slowly in a local neighborhood and applied a median filter to improve the de-hazing results. The median filter cannot preserve object edges well, so it may fail where the scene depth is discontinuous. Meng et al. [25] used a boundary constraint and L1-norm-based regularization to estimate and refine the transmission map. They succeeded in reducing halo artifacts, but the de-hazed results show some color distortion. Nishino et al. [26] and Wang et al. [27] used a Bayesian probabilistic method and Bayesian theory based on a Markov random field to estimate the scene albedo and scene depth and remove haze. Bayesian-based algorithms reduce halo artifacts by avoiding the kernel operation, but the problem of color distortion remains, as in Meng et al. [25]. Ancuti and Ancuti [28] employed Laplacian and Gaussian pyramids and proposed a multi-scale fusion-based haze removal algorithm. The algorithm has a fast processing speed, but the resulting images have high saturation and color distortion because the distribution of haze relative to the scene depth is not taken into account. Lee et al. [31] stated that only Gaussian curvature can maintain important structures (such as corners (edges) and wrinkles). This is supported by Elsey and Esedoglu [29], who proved that the Gaussian curvature of the image surface preserves important image features better than the mean curvature.

Many haze removal techniques have been proposed so far. However, the problems of halo artifacts, staircase-like artifacts, and color distortion that arise during image de-hazing/de-fogging remain unsolved. In this regard, our contributions to tackling these issues can be outlined as follows:

  • A new variational model based on the Gaussian curvature (GC) of the image surface of a given scene is proposed to estimate an air-light map; it preserves important image features such as edges and wrinkles while simultaneously handling the problems of halo artifacts, staircase effects, and color distortion quite well.
  • We also take some prior assumptions called the dark channel prior (DCP) to approximate the transmission map. The transmission map is then changed into a depth map.
  • Dark channel prior combined with Gaussian curvature is employed for the first time for image de-hazing/de-fogging.
  • We implement augmented Lagrangian method (ALM) for the Gaussian curvature (GC) regularization based de-hazing model and design a special minimization procedure to minimize the augmented Lagrangian functional.
  • For fast convergence, the fast Fourier transform is used to solve the system of linear partial differential equations arising from the minimization of the augmented Lagrangian functional.

The rest of this paper is organized as follows. In Section 2, a review of some well-known image de-hazing and de-fogging methods is given. In Section 3, our novel model for addressing the problem is proposed. The augmented Lagrangian method is used in Section 4 to solve the resulting system of Euler–Lagrange PDEs. Section 5 provides image restoration and analysis results for outdoor and indoor scenes to demonstrate the performance of the proposed model. Lastly, we make some concluding remarks and discuss applications and future directions in Section 6.

2 Preliminaries

In this section, two well-known recent image de-hazing/de-fogging methods are reviewed. Their restoration results are compared with those of the proposed model in subsection 5.4.

2.1 Edge preserving regularization based single image de-fogging model (M1)

In order to recover a fog-free scene, Cho et al. [9] (M1) proposed, based on the physical properties of the air-light A, a model for estimating the air-light map A. The energy functional of the model is (2) where the first term is the data fidelity term and the second is a regularization term. W = min(I) is the minimal-component image of I, H(⋅) is the Heaviside function, Ω is a bounded open subset of R², ||∇A|| is the modulus of the gradient of A, and λ is the regularization parameter that balances the two terms of Eq (2). The function ψ, the edge-preserving regularization function, is assumed to be an even function of class C²(R).

The following slightly regularized versions of the function H and of δ (the one-dimensional Dirac measure), denoted here by Hϵ and δϵ, are considered.

As ϵ → 0, they converge to H and δ, respectively. The method improves the quality of the defogged image significantly and restores a fogged image with better colour; the algorithm performs well even under heavy fog. However, without histogram-based post-processing of the restored image, the image is not as bright as the atmospheric light, since it does not normally have the same brightness (taken as the top 1% of brightness pixel values) [9].

2.2 Dark channel prior based single image haze removal model (M2)

The Dark Channel Prior (DCP) [4] (M2) is based on the assumption that, in the non-sky patches of a haze-free image, some dark pixels have very low intensity, close to zero, in at least one color channel. The dark channel Jdark of an image J can be formulated as
Jdark(x) = min_{y∈Ω*(x)} ( min_{c∈{R,G,B}} Jc(y) ),   (3)
where Jc is a single color channel of J(x), c ∈ {R, G, B} is a colour channel, and Ω*(x) is a local patch centered at pixel x; the DCP states that Jdark is approximately zero at every pixel of a haze-free image.

Estimating the transmission map.

Assume the atmospheric light I∞ to be known. According to the DCP approximation, the patch's transmission can be represented as
T̃(x) = 1 − min_{y∈Ω*(x)} ( min_c ( Ic(y) / I∞c ) ).   (4)

Since the colour of the sky nearly approaches I∞ in hazy scenes, removing the fog or haze thoroughly may produce unnatural scenes [4, 17], and depth information may also be lost. A constant ω is therefore introduced:
T̃(x) = 1 − ω min_{y∈Ω*(x)} ( min_c ( Ic(y) / I∞c ) ).   (5)

The constant ω ∈ (0, 1] preserves some depth information of the natural scene and is generally fixed at 0.95. Considering the transmission to be constant in Ω*(x), and performing the minimum operation over the local patch and the three color channels of the hazy scene I, the estimate T̃(x) of T(x) can be computed as
T̃(x) = 1 − ω min_{y∈Ω*(x)} ( min_c ( Ic(y) / I∞c ) ).   (6)

This process estimates the transmission map reasonably well. However, since the transmission is taken to be constant in each local patch, while in reality it is not, some blocky artifacts are generated. The transmission map is therefore refined through a soft matting process.
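The dark channel, the atmospheric-light selection described later in this subsection, and the transmission estimate of Eq (6) can be sketched as follows. This is our own minimal sketch: the function names and the use of `scipy.ndimage.minimum_filter` for the patch minimum are implementation choices, not the authors' code:

```python
import numpy as np
from scipy.ndimage import minimum_filter

def dark_channel(I, patch=15):
    """J^dark: min over color channels, then min over a local patch."""
    return minimum_filter(I.min(axis=2), size=patch)

def atmospheric_light(I, patch=15, top=0.001):
    """Pick the brightest pixel of I among the top 0.1% most
    haze-opaque (brightest) dark-channel pixels."""
    dark = dark_channel(I, patch)
    n = max(1, int(dark.size * top))
    idx = np.argsort(dark.ravel())[-n:]
    cand = I.reshape(-1, 3)[idx]
    return cand[np.argmax(cand.sum(axis=1))]      # shape (3,)

def transmission(I, A_inf, patch=15, omega=0.95):
    """Eq (6): T~(x) = 1 - omega * min_patch min_c I^c(y) / A_inf^c."""
    return 1.0 - omega * dark_channel(I / A_inf, patch)
```

For a uniform image whose colour equals the atmospheric light, the normalized dark channel is 1 everywhere and the estimated transmission is the residual 1 − ω, consistent with the depth-preservation role of ω.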

Soft matting.

He et al. [17] employed a soft matting algorithm [30] to refine the transmission map T(x) by minimizing the following cost function:
E(T) = T^t L T + λ (T − T̃)^t (T − T̃),   (7)
where T and T̃ are the vector forms of T(x) and T̃(x), λ is a regularization parameter, and L is the matting Laplacian matrix whose (i, j) element is computed as
L(i, j) = Σ_{k | (i,j)∈wk} ( δij − (1/|wk|) ( 1 + (Ii − μk)^t (Σk + (ε/|wk|) U3)^(−1) (Ij − μk) ) ),   (8)
where δij is the Kronecker delta, μk and Σk are the mean and covariance matrix of the colours in the window wk, Ii and Ij are the colours of the scene I at pixels i and j, |wk| is the number of pixels in wk, ε is a regularization parameter and U3 is a 3 × 3 identity matrix.

The optimal T is the solution of the following sparse linear system:
(L + λU) T = λ T̃,   (9)
where U is an identity matrix of the same size as L.

Recovering the scene radiance.

Once the transmission map T(x) and the atmospheric light I∞ are known, the scene radiance J(x) is computed from Eq (1) by
J(x) = (I(x) − I∞) / max(T(x), T0) + I∞,   (10)
where T0 is a lower bound on T(x), with a typical value of about 0.1. Since the scene radiance is usually less bright than the atmospheric light, the restored image looks dim, so the exposure of J(x) is increased for display.
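Recovering the radiance via Eq (10) is then a pointwise operation; a sketch under our own naming:

```python
import numpy as np

def recover_radiance(I, T, A_inf, T0=0.1):
    """Eq (10): J(x) = (I(x) - A_inf) / max(T(x), T0) + A_inf."""
    Tc = np.maximum(T, T0)
    if I.ndim == 3:
        Tc = Tc[..., None]        # broadcast over color channels
    return (I - A_inf) / Tc + A_inf
```

The lower bound T0 prevents the division from amplifying noise where the estimated transmission is close to zero; wherever T exceeds T0, this exactly inverts the haze model of Eq (1).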

Approximating the atmospheric light.

To estimate the atmospheric light using the DCP (much faster than the "brightest pixel" method), the pixel with the highest intensity among the top 0.1% brightest (most haze-opaque) pixels of the dark channel of the hazy scene I is selected as the atmospheric light I∞. Although single-image haze removal with the DCP applied to the degradation model can easily enhance the appearance of images, it may fail for scene objects that are inherently similar to the atmospheric light and have no shadow cast on them. The DCP underrates the transmission for such objects, and some halo artifacts are also found in the resulting images. To overcome the above-mentioned drawbacks of the M1 and M2 methods, a new variational model for haze or fog removal is proposed as follows:

3 The proposed de-hazing model (M3)

In this section, we propose a new model based on the dark channel prior (a statistic of haze/fog-free outdoor scenes) that uses Gaussian curvature (GC) as a regularizer. The dark channel prior rests on a key observation: most local patches in haze/fog-free (outdoor) images contain a few pixels with extremely low intensity in at least one color channel. Applying this prior to the haze/fog imaging model, we can directly estimate the thickness of the haze and recover a high-quality haze/fog-free image. The model combines the advantages of both the DCP and GC, which leads to good restoration results: the dark channel prior performs well for estimating the thickness of the haze/fog and recovering a good-quality scene, while Gaussian curvature performs well for eliminating the staircase effect and preserving textures and sharp edges. The resulting model better preserves image regions containing textures and fine details, leading to a more natural scene, while reducing the staircase effect in smooth regions.

Motivated by the advantages of Gaussian curvature over mean curvature and total variation in 2D image de-noising, pointed out by Elsey and Esedoglu [29] and Lee and Seo [31] in geometry processing, and by the dark channel prior, we design a single-image de-hazing model based on the DCP and a Gaussian-curvature regularization of the scene surface. Employing this model yields a reasonable improvement in image quality. The Gaussian curvature of a 3D surface S can be expressed through its zero-level-set function Φ as
kGC = (∇Φ^t H*(Φ) ∇Φ) / ‖∇Φ‖^4,   (11)
where ∇Φ = (Φx, Φy, Φz)^t is the gradient vector, ‖∇Φ‖ is its norm, and H(Φ) and H*(Φ) are the Hessian matrix and its adjoint, respectively.

Let the surface S be the graph of a hazy image I, and let the unknown atmospheric layer between the scene and the sensor be described by the air-light map (x, y) ↦ (x, y, A(x, y)). Then the relation Φ = −z + A(x, y) can be used to derive a formula for kGC. In this coordinate system, ∇Φ = (Ax, Ay, −1)^t.

The GC of a 2D image surface may therefore be written as
kGC(A) = (Axx Ayy − Axy²) / (1 + Ax² + Ay²)².   (12)

It is now possible to formulate a new image de-hazing model using the following regularizer (13)
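For the graph z = A(x, y), the Gaussian curvature kGC = (AxxAyy − Axy²)/(1 + Ax² + Ay²)² can be approximated with finite differences. A sketch; the central-difference discretization via `np.gradient` is our own choice:

```python
import numpy as np

def gaussian_curvature(A):
    """Gaussian curvature of the image surface z = A(x, y),
    using central differences via np.gradient."""
    Ax, Ay = np.gradient(A)           # first derivatives
    Axx, Axy = np.gradient(Ax)        # second derivatives of Ax
    Ayx, Ayy = np.gradient(Ay)
    return (Axx * Ayy - Axy * Ayx) / (1.0 + Ax**2 + Ay**2) ** 2
```

A plane has zero Gaussian curvature everywhere, so flat (haze-dominated) regions contribute nothing to the regularizer, while corners and wrinkles do, which is precisely the feature-preserving behaviour the model exploits.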

Therefore, our newly designed Gaussian-curvature-based image de-hazing model is (14) where λ is a positive parameter and RGC(A) is the Gaussian curvature regularization term. To minimize the Gaussian curvature term, let us take

Here, we introduce some new notation (S = sign(AxxAyy − AxyAyx), where sign(∘) is the sign function, and ν = (ν1, ν2) is the unit normal vector) to reduce the complexity of the equations, and we use the divergence theorem where required.

Under suitable boundary conditions, we drop the boundary terms.

Finally, we define (15) (16), and the Euler–Lagrange equation with the above boundary conditions for the GC-based image de-hazing model can be written as (17)

To solve the non-linear PDE (17), an augmented Lagrangian method is employed as discussed in the next section.

4 Numerical implementation

The augmented Lagrangian method (ALM) plays an important role in solving constrained minimization problems and has been used in several image restoration problems [32–38]. ALM converts a constrained minimization problem into an unconstrained one by incorporating the constraints into the energy functional, which introduces additional terms known as Lagrange multiplier terms. In ALM, the problem is split into sub-problems via an alternating minimization procedure, each of which can be solved easily.

Notation. We represent a gray-scale image by an N × N matrix and denote the corresponding Euclidean space by V. The discrete gradient operator is a mapping ∇ : V → Q = V². For A ∈ V, ∇A is computed by (∇A)i,j = ((D+x A)i,j, (D+y A)i,j), where D+x and D+y are forward difference operators with periodic boundary conditions (A is periodically extended) and i, j = 1, …, N. We write the usual inner products on V and Q as 〈⋅, ⋅〉V and 〈⋅, ⋅〉Q, and the Euclidean norms of V and Q as ‖⋅‖V and ‖⋅‖Q, for p = (p1, p2) ∈ Q and q = (q1, q2) ∈ Q. Moreover, at each pixel (i, j), |⋅| is the usual Euclidean norm in R². For convenience, we drop the subscripts V and Q and simply use 〈⋅, ⋅〉 and ‖⋅‖ for the inner product and L² norm. To solve the GC de-hazing model (14) with ALM, we introduce a new dual variable q with q = ∇A and obtain the following refined constrained optimization problem: (18)
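The forward-difference gradient with periodic extension, together with the matching discrete divergence (its negative adjoint), can be sketched with `np.roll`; the implementation is our own choice:

```python
import numpy as np

def forward_grad(A):
    """(D+x A, D+y A): forward differences with periodic boundaries."""
    return np.roll(A, -1, axis=0) - A, np.roll(A, -1, axis=1) - A

def divergence(qx, qy):
    """Backward-difference divergence; satisfies <grad A, q> = -<A, div q>."""
    return (qx - np.roll(qx, 1, axis=0)) + (qy - np.roll(qy, 1, axis=1))
```

The adjoint identity 〈∇A, q〉 = −〈A, div q〉 holds exactly on the discrete periodic grid, which is what the integration-by-parts steps in the Euler–Lagrange derivation rely on.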

The constrained minimization problem is then reformulated as the augmented Lagrangian functional (19) where μ is the Lagrange multiplier and r is a positive constant. The alternating minimization procedure splits the functional into sub-problems in order to find the optimal values of A, q and μ.

4.1 Sub-problem for q

For given A and μ, the objective functional is defined by (20)

The Euler–Lagrange equations for the functional in Eq (20) are (21) (22) where

Eqs (21) and (22) can be solved for q1 and q2 directly, without any iterative procedure.

4.2 Sub-problem for A

For given q and μ, we have (23)

The optimality condition for Eq (23) is the linear PDE (24) with periodic boundary conditions. Following [36, 39–41], we apply the Fourier transform (implemented with the FFT) to solve this linear equation. The Fourier transform of Eq (24) is (25) whose solution is (26) where F and F⁻¹ denote the fast Fourier transform and its inverse, μ = (μ1, μ2) and q = (q1, q2); the Fourier transforms of the difference operators are treated as the transforms of their corresponding convolution kernels. The iterative procedure in Algorithm 1 is used to solve Eq (19).
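To illustrate the FFT step behind Eqs (25)–(26), consider a generic screened-Laplacian equation (αI − rΔ)A = f with periodic boundaries (a hedged sketch: this is not the exact operator of Eq (24), only the same diagonalization idea). The 2-D DFT diagonalizes the 5-point discrete Laplacian, so the PDE is solved in one shot:

```python
import numpy as np

def solve_screened_laplacian(f, alpha, r):
    """Solve (alpha I - r Laplacian) A = f with periodic boundaries.
    The eigenvalues of the 5-point Laplacian under the 2-D DFT are
    2(cos(2 pi k/N) - 1) + 2(cos(2 pi l/M) - 1)."""
    N, M = f.shape
    k = np.arange(N)[:, None]
    l = np.arange(M)[None, :]
    lam = 2 * (np.cos(2 * np.pi * k / N) - 1) + 2 * (np.cos(2 * np.pi * l / M) - 1)
    return np.real(np.fft.ifft2(np.fft.fft2(f) / (alpha - r * lam)))
```

Because the solve is a single forward FFT, a pointwise division, and an inverse FFT, its cost is O(N² log N), which is where the fast convergence claimed for the A-subproblem comes from.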

Algorithm 1. ALM for the Gaussian Curvature Based Image Restoration Model

1. Initialize μ1 = μ2 = 0, r ∈ (0, 0.999) and ϵ = β ∈ (0, 1).

2. For K = 0, 1, …, IMAX

 (a) Step 1: Solve Eqs (21) and (22) for q1(k+1) and q2(k+1) with A = A(k)

 (b) Step 2: Solve Eq (26) for A(k+1) with q = q(k+1)

 (c) Step 3: Update Lagrange multipliers automatically at every iteration.

μ(k+1) = μ(k) + r(q(k+1) − ∇A(k+1))

3. End Procedure
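Algorithm 1 has the same alternating structure as the well-known split-Bregman/ALM scheme for TV denoising. As an analogy only (the actual q- and A-subproblems come from Eqs (21), (22) and (26); here the simpler TV problem min_u λ‖∇u‖₁ + ½‖u − f‖² is used so the sketch stays self-contained and runnable), the loop looks like:

```python
import numpy as np

def tv_denoise_alm(f, lam=0.1, r=2.0, n_iter=100):
    """ALM/split-Bregman loop: q-type update by shrinkage, A-type update
    by an FFT solve, multiplier update as in Step 3 of Algorithm 1."""
    N, M = f.shape
    k = np.arange(N)[:, None]
    l = np.arange(M)[None, :]
    lap = 2 * (np.cos(2 * np.pi * k / N) - 1) + 2 * (np.cos(2 * np.pi * l / M) - 1)
    u = f.copy()
    dx = np.zeros_like(f); dy = np.zeros_like(f)   # dual variable q = (dx, dy)
    bx = np.zeros_like(f); by = np.zeros_like(f)   # scaled multipliers
    for _ in range(n_iter):
        # A-type subproblem: (I - r Lap) u = f + r div(b - d), solved by FFT
        rhs = f + r * ((bx - dx) - np.roll(bx - dx, 1, 0)
                       + (by - dy) - np.roll(by - dy, 1, 1))
        u = np.real(np.fft.ifft2(np.fft.fft2(rhs) / (1.0 - r * lap)))
        # q-type subproblem: isotropic shrinkage of grad u + b
        gx = np.roll(u, -1, 0) - u
        gy = np.roll(u, -1, 1) - u
        tx, ty = gx + bx, gy + by
        mag = np.maximum(np.hypot(tx, ty), 1e-12)
        s = np.maximum(mag - lam / r, 0.0) / mag
        dx, dy = s * tx, s * ty
        # multiplier update (Step 3): b <- b + grad u - d
        bx, by = bx + gx - dx, by + gy - dy
    return u
```

Each pass mirrors Steps 1–3 of Algorithm 1: a closed-form dual update, a single FFT solve, and an automatic multiplier update, so no inner iterations are needed.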

5 Experimental results and analysis

Some images are presented in this section to show the performance and efficiency of the proposed model, with the corresponding visible edge maps, transmission maps, air-light maps, and final de-hazing results. We describe and analyze the simulation results on several scenes (outdoor and indoor) contaminated with haze or fog. The de-hazing results are then compared with current state-of-the-art methods, together with the corresponding visible edge maps. For quantitative analysis, several quality descriptors/indicators (measures) are used to assess the number of newly visible edges, the contrast, and the mean visibility enhancement of the restoration. The entropy of the de-hazed images is also computed and compared with other recent methods. All experiments were implemented in MATLAB R2013a and run on a Haier Win8.1 PC with an Intel Core i3 CPU @ 1.70 GHz and 4.00 GB RAM. For better results, the Lagrange multipliers μ1 and μ2 are tuned automatically at every iteration until optimal restoration results are obtained, and r ∈ (0, 0.999) and ϵ = β ∈ (0, 1) are chosen adaptively according to the image size and type.

In this paper, four measures (indicators or descriptors) are used to evaluate quantitatively the de-hazing outputs of the proposed model (M3) and to compare it with other recent methods in terms of contrast and visibility recovery. These measures are formulated as follows.

e: Let n0 and nr be the numbers of visible edges in the original image I0 and the restored image Ir. Then the rate e of newly visible edges in Ir can be computed by
e = (nr − n0) / n0.   (27)

The value of e measures the amount of edges that are visible in Ir but were concealed in I0 by haze or fog.

r̄: Let r̄ be the geometric mean of the ratios ri of the gradients in the restored and original images, taken over the visible edges of the restored image:
r̄ = exp[ (1/nr) Σ_i log ri ].   (28)

The value of r̄ reflects the ability of the model to restore image contrast and predicts the average visibility enhancement of the restoration algorithm.

σ: The measure σ is obtained by normalizing the number ns of pixels that are entirely black or white after contrast restoration by the size of the image:
σ = ns / (dimx × dimy),   (29)
where dimx and dimy are the width and height of the image.

Entropy: Entropy measures the information content of an image from the distribution of intensity levels over its pixels; it quantifies the average uncertainty of the image data. It is used for quantitative analysis, evaluation, and comparison of image details. A higher entropy value indicates more detailed image information.

In other words, a high entropy value means that the average amount of information in the image is high, indicating that there is a lot of information about features and corners; hence, the higher the entropy of the image, the finer the contrast. The 2-D information entropy of an image reflects its average information content, and for 8-bit images its value ranges over [0, 8]. The larger the value, the more uniformly distributed the gray values of the image are.
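The four measures can be sketched in code. This is a hedged sketch: the edge detector that produces n0 and nr is method-dependent and omitted here, and the entropy is the standard Shannon entropy of an 8-bit histogram:

```python
import numpy as np

def edge_rate(n0, nr):
    """Eq (27): e = (nr - n0) / n0, the rate of newly visible edges."""
    return (nr - n0) / n0

def saturation_ratio(ns, dimx, dimy):
    """Eq (29): sigma = ns / (dimx * dimy), fraction of pixels driven
    to pure black or white by contrast restoration."""
    return ns / (dimx * dimy)

def entropy(img_u8):
    """Shannon entropy in bits of an 8-bit gray image; range [0, 8]."""
    hist = np.bincount(img_u8.ravel(), minlength=256).astype(float)
    p = hist[hist > 0] / hist.sum()
    return float(-(p * np.log2(p)).sum())
```

A constant image has entropy 0, while an image whose 256 gray levels occur equally often attains the maximum of 8 bits, matching the [0, 8] range stated above.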

5.1 Outdoor and indoor scenes

This section examines the image de-hazing performance of the proposed model (M3) on five natural scenes (outdoor and indoor) corrupted with haze or fog, shown in Fig 2 ((a) forest image, (f) home outdoor image, (k) dense forest image, (p) indoor room image and (u) indoor building image). The first column shows the given hazy or foggy images, and the remaining columns, from left to right, show the corresponding visible edge maps, transmission maps, air-light maps and final de-hazing results of the proposed model (M3). The restoration results shown in Fig 2(e), 2(j), 2(o), 2(t) and 2(y) look natural, while the details and features of the images are preserved quite well. This is further confirmed by visual inspection of the de-hazing outputs displayed in the last column of Fig 2, the corresponding visible edge maps in the 2nd column of Fig 2, and the quantitative results of the four measures reported in Table 1. Our approach efficiently removes haze or fog and restores high-quality haze-free images while generating no spurious edges, halos, or artifacts. Note that the transmission map T(x) = e^(−βd(x)), reflected by the density of haze or fog as shown in the 3rd column of Fig 2, depends on the distance d(x) from the scene to the sensor and on the atmospheric degradation coefficient β caused by the scattering of light due to haze or fog; it can be regarded as a graded variant of the depth map for both homogeneous and inhomogeneous haze or fog.

Fig 2. De-hazing outputs of the proposed model (M3).

From left (L) to right (R), the original images (a,f,k,p,u), the corresponding visible edge maps (b,g,l,q,v), transmission maps (c,h,m,r,w), air-light maps (d,i,n,s,x) and final de-hazing results (e,j,o,t,y).

https://doi.org/10.1371/journal.pone.0282568.g002

Table 1. Quantitative results of the model M3 under the e, r̄, σ and entropy measures, and CPU time, for the de-hazed images of outdoor and indoor scenes.

https://doi.org/10.1371/journal.pone.0282568.t001

5.2 Synthetic road scene images under heterogeneous fog

This section discusses the image restoration performance of the proposed model (M3), shown in Table 2 and Fig 3(a), 3(f), 3(k) and 3(p), on several synthetic road scene images under heterogeneous fog. In Fig 3, the first column displays the given foggy road scene images, and the remaining columns, from left to right, show the associated visible edge maps ((b),(g),(l),(q)), transmission maps ((c),(h),(m),(r)), air-light maps ((d),(i),(n),(s)) and restoration results ((e),(j),(o),(t)) of the proposed model (M3). As can be seen from the results in Fig 3, the proposed de-hazing method (M3) restores highly improved images while maintaining image details very well and producing no artifacts. Furthermore, the restoration results in Fig 3 and the quantitative results of the four measures in Table 2 show that the de-hazing can be applied efficiently to such road scene images.

Fig 3. De-hazing results of the proposed model (M3).

From L to R, the original images (a,f,k,p), the corresponding visible edge maps (b,g,l,q), transmission maps (c,h,m,r), air-light maps (d,i,n,s) and final de-hazing results (e,j,o,t).

https://doi.org/10.1371/journal.pone.0282568.g003

Table 2. Quantitative results of the model M3 under the e, r̄, σ and entropy measures, and CPU time, for the de-hazed synthetic road scene images under heterogeneous fog.

https://doi.org/10.1371/journal.pone.0282568.t002

5.3 Realistic CCTV camera captured images under heterogeneous haze

In this section, the image visibility restoration performance of the proposed method (M3) is tested on images from CCTV cameras recorded under hazy weather conditions. In Fig 4, the first column shows the given CCTV-based hazy scenes, and the remaining columns, from left to right, show the associated visible edge maps ((b),(g)), transmission maps ((c),(h)), air-light maps ((d),(i)) and restoration results ((e),(j)) of the proposed model (M3). The haze-removal outputs of (M3), checked by visual inspection of Fig 4 ((e),(j)), appear brighter and clearer while keeping more visible edges and other details. Moreover, the qualitative de-hazing results of (M3) in Fig 4 and the quantitative outputs in Table 3 show that the restoration results can be applied efficiently to defense and surveillance image processing.

Fig 4. De-hazing results of the proposed model (M3).

From L to R, the original images (a,f), the corresponding visible edge maps (b,g), transmission maps (c,h), air-light maps (d,i) and final de-hazing results (e,j).

https://doi.org/10.1371/journal.pone.0282568.g004

Table 3. Quantitative results of the model M3 under the e, , σ and entropy measures, with CPU times, for the de-hazed realistic CCTV camera images under heterogeneous haze.

https://doi.org/10.1371/journal.pone.0282568.t003

5.4 Comparison with other image restoration methods

In this part, we compare the de-hazing results of our model (M3) with the algorithms of Cho et al. [9] (M1) and He et al. [4] (M2). All methods were applied to a large data-set of natural and synthetic hazy, foggy, and contrast-degraded images. Figs 5 to 16 show the comparative restoration results. From these de-hazing results, one can easily see that our proposed model outperforms the other methods while maintaining edges and other image features quite well. We can also observe that, although the recovered outputs of the Cho et al. [9] and He et al. [4] approaches increase the details and visibility of the image, the color and brightness they produce remain unsatisfactory. The de-hazing results in Fig 5(e) and 5(m) are comparable between the Cho et al. [9] approach and the proposed approach, but all three approaches generate stair-case effects in Fig 5(e). Although the Cho et al. [9] approach is free of noise and stair-case-like artifacts in all the figures except Fig 5(e), it gives very poor de-hazing results compared with He et al. [4] and our method (M3).

Fig 5. De-hazing performance of the proposed model (M3) in comparison with M1 and M2.

From L to R columns represent the given degraded images, M1, M2 and M3 results, respectively.

https://doi.org/10.1371/journal.pone.0282568.g005

Fig 7. De-hazing performance of the proposed model (M3) in comparison with M1 and M2 models.

From L to R columns represent the given degraded images, M1, M2 and M3 results, respectively.

https://doi.org/10.1371/journal.pone.0282568.g007

Fig 9. Restoration performance of the proposed model (M3) in comparison with M1 and M2.

From L to R columns represent the given images, Cho et al. [9] (M1) results, He et al. [4] (M2) results and proposed model results, respectively.

https://doi.org/10.1371/journal.pone.0282568.g009

Fig 11. De-hazing performance of the proposed model in comparison with M1 and M2.

From L to R columns represent the given images, M1, M2 and proposed model results, respectively.

https://doi.org/10.1371/journal.pone.0282568.g011

Fig 13. De-hazing performance of the proposed model in comparison with existing models.

From L to R, columns represent the given images, Cho et al. [9] results, He et al. [4] results and proposed model results, respectively.

https://doi.org/10.1371/journal.pone.0282568.g013

Fig 15. De-hazing performance of the M3 de-hazing model in comparison with the existing models and clear images.

From L to R, columns represent the given images, Cho et al. [9] results, He et al. [4] results, M3-model results and clear images, respectively.

https://doi.org/10.1371/journal.pone.0282568.g015

In the He et al. [4] de-hazing results, Figs 5(s), 7(c), 7(k), 9(s), 11(k), 13(c) and 13(o) show halo artifacts around objects in the images; Figs 5(g), 5(k), 9(c), 9(k), 13(g), 13(o) and 13(s) are less clearly visible and fade the colors of the images; Fig 5(k) contains a large amount of noise; Fig 7(c) exhibits stair-case-like or blocky artifacts; and Fig 7(s) contains some blocks in the area behind the object. Owing to these drawbacks, the He et al. [4] approach generates artificial and spurious edges, as shown in the corresponding visible edge maps of these images. Moreover, since their approach is based on the dark channel prior, it works well for images without sky regions, such as Figs 5(c), 5(o), 11(o) and especially Fig 7(g), but it may fail for images containing sky regions or water (which adapts the shape of the sky), as shown in Fig 11(c), 11(g), 11(k) and 11(o). Visual inspection shows that our method performs more effectively on images taken at night, as shown in Fig 15, and is free of noise, stair-case-like artifacts and spurious edges as well. The discussion above shows that our proposed model effectively reduces unwanted halos, preserves more essential image features than the other methods, and can outstrip other state-of-the-art approaches in the field of image de-hazing.
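The sky-region failure mode of the dark channel prior discussed above can be illustrated with a minimal numerical sketch. This is not the implementation of any of the compared methods; the function name and the toy patches are illustrative assumptions, and the 15-pixel window is a common choice following He et al. [4].

```python
import numpy as np

def dark_channel(image, patch_size=15):
    """Dark channel of an H x W x 3 image in [0, 1]: per-pixel minimum
    over the RGB channels followed by a local minimum over a patch.
    (Illustrative sketch; patch_size=15 follows He et al. [4].)"""
    min_rgb = image.min(axis=2)                 # minimum over color channels
    pad = patch_size // 2
    padded = np.pad(min_rgb, pad, mode="edge")  # replicate image borders
    out = np.empty_like(min_rgb)
    for i in range(min_rgb.shape[0]):
        for j in range(min_rgb.shape[1]):
            out[i, j] = padded[i:i + patch_size, j:j + patch_size].min()
    return out

# The prior assumes the dark channel of a haze-free outdoor scene is
# near zero; a bright sky patch violates this, so the transmission
# estimated from it is too small and halos or color shifts appear.
ground_patch = np.full((20, 20, 3), 0.1)  # dark region: prior holds
sky_patch = np.full((20, 20, 3), 0.9)     # bright sky-like region: prior fails
print(dark_channel(ground_patch).max())   # near zero, as the prior expects
print(dark_channel(sky_patch).max())      # stays high, violating the prior
```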

5.5 Multiple real-world foggy image data-set (MRFID)

We compare the de-hazing results of our model (M3) with the Cho et al. [9] model (M1), the He et al. [4] model (M2) and the clear images. Various experiments are performed on images from one of the latest data-sets, the multiple real-world foggy image data-set (MRFID). Some of the test results are displayed in Fig 17, whose columns from left to right represent the given images, the M1 de-hazing results, the M2 de-hazing results, the proposed de-hazing results and the clear images. The given images and clear images are taken from MRFID, which were captured under different weather conditions; MRFID is available at http://www.vistalab.ac.cn/MRFID-for-defogging/. Visual inspection of the experimental results on MRFID images shows that the de-hazing results of the M3-model are much better than those of the M1 and M2 models and differ only slightly from the clear images.

Fig 17. De-hazing performance of the proposed de-hazing model in comparison with the existing models.

From L to R, columns represent the given images, Cho et al. [9] results, He et al. [4] results and proposed model results, respectively.

https://doi.org/10.1371/journal.pone.0282568.g017

5.6 Quantitative assessment

In this part, we quantify, analyze and compare the de-hazing results of the proposed model (M3) with the Cho et al. [9] (M1) and He et al. [4] (M2) methods in Table 4, which reports the mean and standard deviation of the indicator values (e, , σ) and the entropy values of the de-hazed scenes tested in Figs 5 to 16. The indicators (e, , σ) and the entropy measure, respectively, the rate of newly visible edges, the contrast and visibility enhancement, the proportion of completely black or white pixels produced by the algorithms, and the spread of intensity levels adopted by the individual pixels of the restored images. By analyzing the values of the indicators (e, , σ) and the entropy of the de-hazed images, we can conclude that the M3-method performs better and recovers the concealed edges of the images, owing to its controlled contrast and visibility enhancement, by removing haze and fog satisfactorily compared with the Cho et al. [9] and He et al. [4] methods. In some of the results, the He et al. [4] method yields higher values of e than the proposed model, as in Figs 5(i), 7(i) and 7(q), but inspection of the corresponding visible edge maps (Figs 6(i), 8(i) and 8(q)) makes it clear that it produces spurious edges. The Cho et al. [9] and He et al. [4] methods cannot remove the haze and fog completely in some of the images, so the visible edges of those images remain concealed.

Table 4. Quantitative outputs under the indicators (e, , σ) and the entropy measure for the de-hazed images of Figs 5 to 16.

https://doi.org/10.1371/journal.pone.0282568.t004

Here s.d. denotes the standard deviation. The proposed model has σ values close to zero, indicating that no entirely black or white pixels exist, in contrast to the state-of-the-art methods.
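For readers who wish to reproduce such tables, the scalar indicators can be sketched as follows. This is a hedged reconstruction from the descriptions in the text (e counts newly visible edges, σ counts saturated pixels, entropy measures the intensity-level spread), not the evaluation code used in the paper; the function names are illustrative.

```python
import numpy as np

def e_indicator(edges_before, edges_after):
    """Rate of newly visible edges, (n_r - n_0) / n_0, where n_0 and n_r
    are the visible-edge counts before and after restoration."""
    return (edges_after - edges_before) / edges_before

def sigma_indicator(restored):
    """Percentage of pixels that are completely black or white after
    restoration; `restored` is a uint8 grayscale array."""
    saturated = np.count_nonzero((restored == 0) | (restored == 255))
    return 100.0 * saturated / restored.size

def entropy(image):
    """Shannon entropy (bits) of the 8-bit intensity histogram."""
    hist = np.bincount(image.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    p = p[p > 0]                      # drop empty bins before the log
    return float(-(p * np.log2(p)).sum())

img = np.array([[0, 255], [128, 128]], dtype=np.uint8)
print(e_indicator(100, 150))   # 0.5: 50% more edges became visible
print(sigma_indicator(img))    # 50.0: two of the four pixels are saturated
print(entropy(img))            # 1.5 bits
```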

6 Conclusion, applications and future work

This work addresses the restoration of hazy or foggy images through a new variational model based on the Gaussian curvature of the image surface, combined with the dark channel prior (DCP). The proposed method (M3) first estimates the atmospheric veil using the DCP. The transmission map is then converted into a high-quality depth map, with which the proposed model can be framed to obtain the haze- or fog-free image for both realistic and synthetic (indoor and outdoor) data-set images. The proposed method can be employed for both color and gray images. The resulting functional was efficiently minimized using the augmented Lagrangian method, together with a specialized minimization procedure. We also employ the FFT to solve numerically the system of PDEs arising from the minimization of the energy functional. We have tested several (indoor and outdoor) hazy and foggy images to validate the effectiveness of the proposed method, and we compared our results with other state-of-the-art de-hazing methods. The test results demonstrate that the M3-method is more effective than the competing methods while preserving sharp edges, contrast and other image details quite well during the haze or fog removal process.
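The final recovery step of the pipeline summarized above can be sketched under the standard atmospheric scattering model I = J t + A(1 - t) of Koschmieder [1]. This is a generic sketch, not the paper's solver: the lower bound t0 = 0.1 follows He et al. [4] rather than this paper, and all names are illustrative.

```python
import numpy as np

def recover_scene(I, t, A, t0=0.1):
    """Invert I = J*t + A*(1 - t) for the scene radiance J.

    I  : H x W x 3 hazy image in [0, 1]
    t  : H x W transmission map in (0, 1]
    A  : length-3 atmospheric light (air-light) vector
    t0 : lower bound on t that avoids division blow-up in dense haze
         (0.1 follows He et al. [4]; the paper's own bound may differ).
    """
    t = np.maximum(t, t0)[..., None]          # broadcast over color channels
    return np.clip((I - A) / t + A, 0.0, 1.0)

# Round trip: synthesize haze from a known scene, then recover it.
J = np.random.default_rng(0).uniform(0.0, 1.0, (4, 4, 3))
A = np.array([0.9, 0.9, 0.9])
t = np.full((4, 4), 0.6)
I = J * t[..., None] + A * (1.0 - t[..., None])
print(np.allclose(recover_scene(I, t, A), J))  # True: recovery inverts the model
```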

Applications:

The applications of the proposed method may be extended to cover image segmentation, image inpainting, inland river image processing, road scene image processing under homogeneous and heterogeneous haze or fog, defense and surveillance images, underwater image processing, and hazy or foggy video processing.

Future work:

Future work involves developing structure detectors that indicate structures at distinct directions and scales, thereby improving the performance of the proposed method. We will also continue to analyze the proposed method in depth and demonstrate the convergence of the scheme. Fast numerical schemes for the PDE system resulting from the minimization of the augmented Lagrangian functional could also be considered.

References

  1. Koschmieder H. Theorie der horizontalen Sichtweite. Beitrage zur Physik der freien Atmosphare, Keim and Nemnich, Frankfurt, Germany, (1924).
  2. Robby TT. Visibility in bad weather from a single image. In Proceedings of IEEE CVPR, (2008).
  3. Raanan F. Single image dehazing. ACM Trans. Graph., 27(3), (2008), 1–72.
  4. Kaiming H, Jian S, Xiaoou T. Single image haze removal using dark channel prior. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, (2009), 1956–1963.
  5. Qingsong Z, Jiaming M, Ling S. A fast single image haze removal algorithm using color attenuation prior. IEEE Transactions on Image Processing, 24(11), (2015), 3522–3533.
  6. Dubok P, David KH, Changwon J, Hanseok K. Fast single image dehazing using characteristics of RGB channel of foggy image. IEICE Transactions on Information and Systems, 96(8), (2013), 1793–1799.
  7. Zhengguo L, Jinghong Z. Edge-preserving decomposition-based single image haze removal. IEEE Transactions on Image Processing, 24(12), (2015), 5432–5441.
  8. Yang J, Zhang B, Shi Y. Scattering removal for finger-vein image restoration. Sensors, 12(3), (2012), 3627–3640. pmid:22737028
  9. Wan HC, In SN, Seong CS, Sang KK, Soon YP. Single image de-fogging method using variational approach for edge-preserving regularization. World Academy of Science, Engineering and Technology, International Journal of Computer, Electrical, Automation, Control and Information Engineering, 7(6), (2013), 829–833.
  10. Faming F, Fang L, Tieyong Z. Single image dehazing and denoising: A fast variational approach. SIAM J. Imaging Sci., 7(2), (2014), 969–996.
  11. Liangkai L, Wei F, Jiawan Z. Contrast enhancement based single image dehazing via TV-L1 minimization. In Proc. IEEE ICME, Chengdu, China, July (2014), 1–6. https://doi.org/10.1109/ICME.2014.6890277
  12. Xuan L, Fanxiang Z, Zhitong H, Yuefeng J. Single color image dehazing based on digital total variation filter with color transfer. In Proc. IEEE ICIP, Melbourne, VIC, Australia, September (2013), 909–913. https://doi.org/10.1109/ICIP.2013.6738188
  13. Kristian B, Karl K, Thomas P. Total generalized variation. SIAM J. Imaging Sci., 3(3), (2010), 492–526.
  14. Ryan WL, Lin S, Simon CHY, Defeng W. Box-constrained second-order total generalized variation minimization with a combined L1,2 data-fidelity term for image reconstruction. J. Electron. Imaging, 24(3), June (2015), 033026.
  15. Shu Z, Ting W, Junyu D, Yu H. Underwater image enhancement via extended multi-scale Retinex. Neurocomputing, 245(C), July (2017), 1–9. https://doi.org/10.1016/j.neucom.2017.03.029
  16. Jin HK, Won DJ, Jae YS, Chang SK. Optimized contrast enhancement for real-time image and video dehazing. J. Vis. Commun. Image Represent., 24(3), April (2013), 410–425.
  17. Kaiming H, Jian S, Xiaoou T. Single image haze removal using dark channel prior. IEEE Transactions on Pattern Analysis and Machine Intelligence, 33(12), December (2011), 2341–2353.
  18. Kaiming H, Jian S, Xiaoou T. Guided image filtering. In Proceedings of the 2010 European Conference on Computer Vision, (2010), 1–14.
  19. Yong X, Jie W, Lunke F, Zheng Z. Review of video and image defogging algorithms and related studies on image restoration and enhancement. IEEE Access, 4, (2016), 165–188.
  20. Yu L, Shaodi Y, Michael SB, Robby TT. Haze visibility enhancement: A survey and quantitative benchmarking. Comput. Vis. Image Underst., 165, (2017), 1–16.
  21. Ido O, Michael W. Color lines: Image specific color representation. In Proc. IEEE CVPR, Washington, DC, USA, July (2004), 946–953. https://doi.org/10.1109/CVPR.2004.62
  22. Raanan F. Dehazing using color-lines. ACM Trans. Graph., 34(1), (2014), 1–14.
  23. Dana B, Tali T, Shai A. Non-local image dehazing. In Proc. IEEE CVPR, Las Vegas, NV, USA, June (2016), 1674–1682.
  24. Jean-Philippe T, Nicolas H. Fast visibility restoration from a single color or gray level image. In Proceedings of the 12th IEEE International Conference on Computer Vision, (2009), 2201–2208. https://doi.org/10.1109/ICCV.2009.5459251
  25. Gaofeng M, Ying W, Jiangyong D, Shiming X, Chunhong P. Efficient image dehazing with boundary constraint and contextual regularization. In Proc. IEEE Int. Conf. Comput. Vis., December (2013), 617–624. https://doi.org/10.1109/ICCV.2013.82
  26. Ko N, Louis K, Stephen L. Bayesian defogging. Int. J. Comput. Vis., 98(3), (2012), 263–278.
  27. Yuan-Kai W, Ching-Tang F. Single image defogging by multiscale depth fusion. IEEE Trans. Image Processing, 23(11), (2014), 4826–4837.
  28. Codruta OA, Cosmin A. Single image dehazing by multi-scale fusion. IEEE Trans. Image Processing, 22(8), August (2013), 3271–3282.
  29. Suk-Ho L, Jin KS. Noise removal with Gauss curvature-driven diffusion. IEEE Trans. Image Processing, 14(7), (2005), 904–909.
  30. Matthew E, Selim E. Analogue of the total variation denoising model in the context of geometry processing. SIAM Multiscale Model. Simul., 7(4), (2009), 1549–1573.
  31. Anat L, Dani L, Yair W. A closed form solution to natural image matting. In IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), 1, (2006), 61–68. https://doi.org/10.1109/TPAMI.2007.1177
  32. Shu QL, Wu CS, Zhong QX, Liu RW. Total generalized variation-regularized variational model for single image dehazing. In Proceedings Volume 10615, Ninth International Conference on Graphic and Image Processing (ICGIP 2017), 106152M, (2018). https://doi.org/10.1117/12.2302936
  33. Wei Z, Chan T. Image denoising using mean curvature of image surface. SIAM J. Imaging Sci., 5(1), (2012), 1–32.
  34. Wei Z, Xuecheng T, Tony C. A fast algorithm for a mean curvature based image denoising model using augmented Lagrangian method. In Efficient Algorithms for Global Optimization Methods in Computer Vision, Springer Berlin Heidelberg, (2014), 104–118. https://doi.org/10.1007/978-3-642-54774-45
  35. Stanley HC, Ramsin K, Kristofor BG, Philip EG, Truong QN. An augmented Lagrangian method for total variation video restoration. IEEE Transactions on Image Processing, 20(11), (2011), 3097–3111.
  36. Chunlin W, Xue-Cheng T. Augmented Lagrangian method, dual methods, and split Bregman iteration for ROF, vectorial TV, and high order models. SIAM Journal on Imaging Sciences, 3(3), (2010), 300–339.
  37. Chunlin W, Juyong Z, Xue-Cheng T. Augmented Lagrangian method for total variation restoration with non-quadratic fidelity. Inverse Problems and Imaging, 5(1), (2011), 237–261.
  38. Wei Z, Xuecheng T, Tony C. Augmented Lagrangian method for a mean curvature based image denoising model. Inverse Problems and Imaging, 7(4), (2013), 1409–1432.
  39. Yilun W, Junfeng Y, Wotao Y, Yin Z. A new alternating minimization algorithm for total variation image reconstruction. SIAM J. Imaging Sci., 1, (2008), 248–272.
  40. Junfeng Y, Yin W, Yin Z, Yilun W. A fast algorithm for edge-preserving variational multichannel image restoration. SIAM J. Imaging Sci., 2, (2009), 569–592.
  41. Junfeng Y, Yin Z, Wotao Y. An efficient TVL1 algorithm for deblurring multichannel images corrupted by impulsive noise. SIAM J. Sci. Comput., 31, (2009), 2842–2865. https://hdl.handle.net/1911/102092