Dictionary learning based noisy image super-resolution via distance penalty weight model

  • Yulan Han ,

    Roles Conceptualization, Data curation, Formal analysis, Methodology, Software, Writing – original draft, Writing – review & editing

    hanyulanbox@126.com (YH); zhaoyp2590@126.com (YZ)

    Affiliation Department of Automatic Test and Control, Harbin Institute of Technology, Harbin, Heilongjiang, China

  • Yongping Zhao ,

    Roles Methodology, Project administration, Supervision, Writing – review & editing

    hanyulanbox@126.com (YH); zhaoyp2590@126.com (YZ)

    Affiliation Department of Automatic Test and Control, Harbin Institute of Technology, Harbin, Heilongjiang, China

  • Qisong Wang

    Roles Methodology, Software, Writing – review & editing

    Affiliation Department of Automatic Test and Control, Harbin Institute of Technology, Harbin, Heilongjiang, China


Abstract

In this study, we address the problem of noisy image super-resolution. In practice, the observed low resolution (LR) image is often noisy, whereas most existing algorithms assume that the LR image is noise-free. For this situation, we present an algorithm for noisy image super-resolution that performs super-resolution and denoising simultaneously. In the training stage of our method, the LR example images are noise-free, and the dictionary pair does not need to be retrained when the noise variance of the input LR image changes. For each input LR image patch, the corresponding high resolution (HR) image patch is reconstructed as a weighted average of similar HR example patches. To reduce the computational cost, the atoms of a learned sparse dictionary are used as the examples instead of the original example patches. We propose a distance penalty model for calculating the weights, which also performs a second selection among the similar atoms. Moreover, mean-removed LR example patches, rather than only their gradient features, are used to learn the dictionary. On this basis, we reconstruct an initial estimate of the HR image together with a denoised LR image. Combined with iterative back projection, the two reconstructed images are used to obtain the final estimated HR image. We validate our algorithm on natural images and compare it with previously reported algorithms. Experimental results show that our proposed method is more robust to noise.

1 Introduction

Single image super-resolution (SR) is a classical problem in computer vision. In general, it uses signal processing techniques to recover a high resolution (HR) image from only one low resolution (LR) image. SR methods can be broadly classified into three categories: interpolation-based methods, reconstruction-based methods, and example-based methods.

Interpolation-based SR methods such as [1, 2] have been proposed for various applications and have the advantage of computational simplicity and speed. However, they usually fail to generate fine details in discontinuous regions and often blur edges and other high-frequency features in practice [3].

Reconstruction-based methods usually integrate one or more sophisticated priors, such as the gradient profile prior [4], edge prior [5], and total variation [6], into the SR framework to estimate the missing details. Recently, sparse regularization [7–10] has also been shown to be particularly effective for the ill-posed SR problem. These methods usually achieve impressive results in preserving sharp edges and suppressing aliasing artifacts. However, the performance depends heavily on a rational prior imposed on the up-sampled image [11].

Over the years, many example-based SR methods [12–14] have been proposed with promising results and have become the mainstream approach in the SR domain. These methods assume that the missing high-frequency details can be estimated by learning the mapping between the LR-HR patch pairs of an external database and the input LR patches. Two kinds of relationship models exist for these methods. The first models the relationship between LR patches and the corresponding HR patches in the database. After Freeman et al. [15] used a Markov network to model this relationship, regression functions [16] were employed to exploit the relationship between HR and LR patch pairs. In addition, supervised or semi-supervised learning models were introduced into some of the algorithms [17–19]. Recently, a mapping between LR-HR image pairs was learned using a deep convolutional neural network [20] and has shown favorable results. D. Dai et al. [21] jointly learned a collection of regressors from LR to HR patches which collectively yielded the smallest error over all training data. The second models the relationship between LR example patches and input LR patches. Most of these methods [22, 23] are based on Nearest Neighbor Embedding (NNE): a fixed number of nearest neighbors is extracted from the database for each input LR patch, and the corresponding HR patches are used to estimate the output HR patch by a linear combination determined by the LR patch and its neighbors. Although these algorithms have demonstrated successful results, they depend strongly on the number of neighbors, which is difficult to determine. To address this problem, [24] operates a dynamic k-nearest neighbor algorithm, where k is small for test points with highly relevant neighbors and large otherwise. Some researchers calculate the distance between the input patch and each of its neighbors and abandon the neighbors whose distance is smaller than the mean value. Yang [25] exploited sparse coding to perform image SR. The algorithm assumes that LR-HR patch pairs share the same sparse coefficients with respect to their respective dictionaries, which are jointly learned from a set of external training images. It can be considered as neighbor embedding in the sparse domain without choosing the number of neighbors. Since then, sparse coding has been applied to the SR problem [21–23] and achieves impressive results. Zeyde [26] used dimensionality reduction and orthogonal matching pursuit for sparse representation to improve efficiency. S. Wang [27] proposed a semi-coupled dictionary learning model, under which a pair of dictionaries and a mapping function describing the relationship between the sparse coefficients of LR-HR patch pairs are learned simultaneously. In [28], kernel ridge regression is employed to connect the sparse coefficients of LR-HR patch pairs. Kaibing Zhang [29] determines the relationship between LR and HR image patches by assuming that they share the same sparse coefficients. R. Timofte et al. [30] proposed a fast image SR method called anchored neighborhood regression (ANR), which learns sparse dictionaries and regressors anchored to dictionary atoms; this algorithm is faster while making no compromise on quality. R. Timofte et al. [31] then produced an improved variant of ANR that enhances the features and anchored regressors of ANR. Instead of learning the regressors on the dictionary, their method uses the full training material; it obtained improved quality and became indisputably the fastest method.
S. Gu [32] proposed a convolutional sparse coding based SR method to address the consistency issue. In addition, research shows that image structures tend to repeat themselves within and across scales. The methods in [33–35] exploit this self-similarity of structures in natural images and extract the database directly from the LR input image instead of an external database. However, good reconstruction quality requires considerable additional memory and running time to build counterparts across different scales in a recursive scheme, so the application of these methods is limited.

Although these algorithms can achieve good performance, most SR algorithms, including other learning-based methods, assume that the input LR image is noise-free. This assumption does not hold in real applications, and the algorithms are therefore less robust for noisy image SR. Another challenge is thus super-resolution for noisy images. Compared with SR on clean LR inputs, less attention has been paid to developing effective SR algorithms for noisy ones. J. Xie [36] first employs an adaptively regularized shock filter to tackle the jagged noise and then performs SR for depth images. The disadvantage of such a scheme is that artifacts created in the denoising step can be magnified in the super-resolution step. Therefore, researchers have turned to simultaneous denoising and super-resolution. In [37], the LR training images are magnified by a TV regularization model with a constraint before the dictionary training stage. However, the noise level that this method can handle is low, and it focuses on magnification only. Based on the current research status, we design an algorithm that performs SR and denoising in a single framework for noisy input images.

Sparse representation concentrates the signal energy in only a few atoms. Because of this property, some sparse coding based SR algorithms such as [25] show a certain robustness to noisy images. In addition, sparse representation has been successfully employed in image denoising [38, 39], image restoration [40, 41], and other processing tasks [42, 43]. The dictionary plays an important role in the sparse representation process. A predefined analytical dictionary (e.g., a wavelet or Gabor dictionary) makes the coding fast and explicit, but it is less effective at modeling the complex local structures of natural images. A synthesis dictionary (e.g., a K-SVD dictionary) can be learned from example natural images; it is computationally more expensive but can better model complex local image structures [44]. In recent years, many dictionary learning methods have been proposed and have achieved notable performance. Feng et al. [45] propose to jointly learn the projection matrix for dimensionality reduction and the discriminative dictionary for face representation. Zhang et al. [46] propose a semisupervised label consistent dictionary learning framework for machine fault classification. Inspired by these, we introduce sparse representation theory into our research. The synthesis procedure is illustrated in Fig 1. The input LR image and the example images are first cropped into patches; the example images are noise-free. Then the features of the example patch pairs are extracted and used to learn the dictionary pair. For each input LR patch, according to its features, similar dictionary atom pairs are found and the distances b_i between the input LR patch and its similar atoms are computed simultaneously. Next, combined with the input LR patch feature, the LR dictionary atoms and the distances b_i are used to compute the weight ω_i. Once the weight is computed, we obtain the estimated HR image patch and the denoised LR image patch. All the estimated HR patches are merged into an estimated HR image by averaging in overlapping regions, and in the same way we obtain the denoised LR image from all the denoised LR patches. Finally, combined with iterative back projection (IBP), the estimated HR image and the denoised LR image are used to obtain the final output HR image.

The contributions can be summarized as follows.

(1) Different from conventional methods, the proposed algorithm can process noisy images and performs image super-resolution and denoising simultaneously. Furthermore, in the training stage of our method, the LR example images are noise-free; for different input LR images, even if the noise variance varies, the dictionary pair does not need to be retrained.

(2) The core idea of our proposed method is that the estimated HR patch is a weighted average of similar HR example patches. To reduce the computational cost of finding similar patches among millions of examples, the example patches are replaced by the atoms of a learned sparse dictionary, which concentrates the signal energy in a few atoms.

(3) A penalty function is applied to an l2-norm regularized least squares regression to model the weights, which makes the objective function treat each similar atom unequally. The penalty is determined by the similarity between the input LR patch and each similar atom of the LR dictionary: when the similarity is strong, the penalty is made small, which forces a large weight; conversely, when the similarity is weak, the penalty is made large, which forces a small or zero weight.

(4) Mean-removed LR example patches, rather than only their gradient features as in [25], are used for dictionary training. In the training stage, for each LR example patch, we first subtract its mean pixel value and then concatenate it with its corresponding HR example patch into a single vector. All the new vectors are used as new HR examples to learn the HR dictionary. Thus, the HR dictionary represents not only the textures of the HR example patches but also those of the LR example patches, which are noise-free. Therefore, in the reconstruction stage, the HR dictionary can also be used to recover denoised input LR patches, which differs from conventional learning methods. Combined with iterative back projection (IBP), the denoised LR patches are applied to enhance robustness to noise.

The remainder of this paper is organized as follows. The proposed algorithm is presented in detail in Section 2. Experimental results and comparisons are presented in Section 3. Section 4 concludes this paper.

2 The proposed method

Firstly, let us recall the image degradation model shown in Eq (1). Given an observed LR image $Y \in \mathbb{R}^M$ that is a degraded version of an HR image $X \in \mathbb{R}^N$ of the same scene,

$$Y = G_s H X + v, \tag{1}$$

where $G_s$ is the down-sampling operator with scaling factor $s$, $H$ is the blurring operator, and $v$ is the noise. The task of SR reconstruction is to recover $X$ from $Y$ as accurately as possible. Conventional SR methods consider the image to be noise-free.
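To make the degradation model concrete, here is a minimal sketch that produces an LR observation from an HR image. The Gaussian blur, plain decimation, and the helper name `degrade` are illustrative assumptions; the experiments in Section 3 actually use MATLAB's imresize for blurring and down-sampling.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def degrade(hr, s=3, blur_sigma=1.0, noise_sigma=10.0, rng=None):
    """Toy version of Eq (1): Y = G_s H X + v (assumed operators)."""
    rng = np.random.default_rng() if rng is None else rng
    blurred = gaussian_filter(hr.astype(np.float64), blur_sigma)  # blurring operator H
    lr = blurred[::s, ::s]                                        # down-sampling G_s
    return lr + rng.normal(0.0, noise_sigma, lr.shape)            # additive noise v
```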

2.1 Example database

From the HR example images, LR example images are first obtained and are considered noise-free. For each HR example image $X_i^e$, its corresponding LR image $Y_i^e$ is generated by

$$Y_i^e = G_s H X_i^e. \tag{2}$$

A set of vectorized HR patches $p_h = \{p_h^i\}_{i=1}^{n}$ is taken from the example HR images and a set of vectorized LR patches $p_l = \{p_l^i\}_{i=1}^{n}$ is taken from the example LR images. Consequently, we obtain a database of HR-LR patch pairs

$$(p_h, p_l) = \left\{ (p_h^i, p_l^i) \right\}_{i=1}^{n}. \tag{3}$$
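A hedged sketch of how such a database of co-located HR-LR patch pairs could be collected; the helper name `extract_patch_pairs`, the LR patch size, and the sampling step are assumptions made for illustration.

```python
import numpy as np

def extract_patch_pairs(hr, lr, s=3, lr_size=3, step=1):
    """Collect vectorized HR-LR patch pairs as in Eq (3); the HR patch is s times larger."""
    hr_size = s * lr_size
    p_h, p_l = [], []
    for i in range(0, lr.shape[0] - lr_size + 1, step):
        for j in range(0, lr.shape[1] - lr_size + 1, step):
            p_l.append(lr[i:i + lr_size, j:j + lr_size].ravel())
            p_h.append(hr[i * s:i * s + hr_size, j * s:j * s + hr_size].ravel())
    return np.array(p_h), np.array(p_l)  # rows are vectorized patches
```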

2.2 Distance penalty weight model

For super-resolution, given an LR image $Y_L$ that is generated from an HR image $X_H$ by Eq (1), the task is to recover the unknown $X_H$ from $Y_L$ with the help of the example patch pairs. The algorithm operates patch by patch. Similar to [25], $Y_L$ is first divided into overlapping patches

$$Y_L = \{y_i\}_{i=1}^{N_y}, \tag{4}$$

where $y_i$ is a vectorized LR image patch and $N_y$ is the number of patches of $Y_L$.

The estimated vectorized HR image $X_H$ can be represented as

$$X_H = \{x_i\}_{i=1}^{N_y}, \tag{5}$$

where $x_i$ is the estimated HR image patch corresponding to $y_i$.

According to Eq (1), the relationship between each patch pair can be described by

$$y_i = G_s H x_i + v_i, \tag{6}$$

where $v_i$ is the noise. We assume that it is Gaussian noise with zero mean and variance $\sigma^2$.

Thus, the purpose of super-resolution becomes estimating the HR image patch $x_i$ from the input LR image patch $y_i$.

Each $x_i$ can be approximated by a weighted average of HR example patches that have similar structures. Based on this core idea, the problem in this method is to find the patches similar to $x_i$ in the database and to calculate the weights.

Due to the repetition of local structures in images, there exists a subset of patch pairs $(P_h^i, P_l^i)$ in $(p_h, p_l)$ whose HR patches have structures similar to $x_i$. That is,

$$x_i \approx P_h^i \omega_i = \sum_{j=1}^{k} \omega_{ij}\, p_h^{ij}, \tag{7}$$

where the weight vector is $\omega_i = [\omega_{i1}, \omega_{i2}, \ldots, \omega_{ij}, \ldots, \omega_{ik}]^T$ and $k$ is the number of patch pairs in this subset.

There are many methods to determine the weights, such as setting them inversely proportional to the distance between patches. These methods rely heavily on the number of similar patches and cannot suppress noise. We now discuss a new weight model in detail. Applying the degradation model Eq (1) to Eq (7), we have

$$G_s H x_i \approx G_s H P_h^i \omega_i = P_l^i \omega_i. \tag{8}$$

From Eq (8), we obtain

$$y_i \approx P_l^i \omega_i + v_i, \tag{9}$$

where $v_i$ is assumed to be Gaussian noise with zero mean and variance $\sigma^2$.

Thus,

$$y_i - P_l^i \omega_i \approx v_i, \tag{10}$$

$$\left\| y_i - P_l^i \omega_i \right\|_2^2 \le \varepsilon_i, \tag{11}$$

where $\varepsilon_i$ is related to $\sigma^2$. We can see that the LR patch $y_i$ can be represented by the same weight vector $\omega_i$ over $P_l^i$, with an error $\varepsilon_i$. That is to say, we can obtain the weights from the input LR image patch and the similar LR example patches with a controlled error.

Based on the above discussion, we formulate the weight solution as a least squares regression regularized by the $l_2$-norm:

$$\hat{\omega}_i = \arg\min_{\omega_i} \left\| y_i - P_l^i \omega_i \right\|_2^2 + \lambda \left\| \omega_i \right\|_2^2. \tag{12}$$

In Eq (12), the objective function treats all similar patches equally, which is not flexible enough to obtain accurate weights for the input patch. Motivated by this, we introduce a distance penalty into the least squares problem:

$$\hat{\omega}_i = \arg\min_{\omega_i} \left\| y_i - P_l^i \omega_i \right\|_2^2 + \lambda \left\| b_i \cdot \omega_i \right\|_2^2, \tag{13}$$

where $\cdot$ denotes a point-wise vector product and $b_i = [b_{i1}, b_{i2}, \ldots, b_{ij}, \ldots, b_{ik}]^T$ collects the distances between $y_i$ and each similar example patch in $P_l^i$. When the similarity between $y_i$ and $p_l^{ij}$ is strong, we make $b_{ij}$ small, which forces a large $\omega_{ij}$; conversely, when the similarity is weak, we make $b_{ij}$ large, which forces a small or zero $\omega_{ij}$. The distance is simply the squared Euclidean distance.

Eq (13) can be written in matrix form as

$$\hat{\omega}_i = \arg\min_{\omega_i} \left\| y_i - P_l^i \omega_i \right\|_2^2 + \lambda\, \omega_i^T B_i^T B_i\, \omega_i, \tag{14}$$

where $B_i = \mathrm{diag}(b_{i1}, \ldots, b_{ik})$ and $\lambda$ is a regularization parameter.

According to Eq (10), the representation error grows with the noise variance, so the regularization parameter should scale with it:

$$\lambda = \gamma \sigma^2, \tag{15}$$

where $\gamma$ is a positive constant. We therefore set $\lambda = \gamma\sigma^2$ when $\sigma \ne 0$.

Thus, the main task in the reconstruction stage is to find, for each $y_i$, the similar patches in $p_l$ and to compute the weights. The squared Euclidean distance can be adopted to quantify the similarity, and the corresponding HR patches in $p_h$ are assumed to have structures similar to $x_i$. However, it is difficult to find similar patches for every input patch among millions of example patch pairs; the repetitive computation takes a large amount of time. A sparse dictionary concentrates the signal energy in only a few atoms, and sparse coding based SR algorithms such as [25] show a certain robustness to noisy images, so we use a learned sparse dictionary instead of the raw examples. We find the similar patch pairs among the dictionary atom pairs, i.e., $(P_h^i, P_l^i)$ is selected from the atom pairs of $(D_h, D_l)$.

Two dictionaries $D_h$ and $D_l$ are trained so that each HR-LR patch pair has the same sparse code. Similar to Yang [25] and Chang [22], we subtract the mean pixel value from each HR example patch, so that the dictionary $D_h$ represents image textures rather than absolute intensities; in the reconstruction stage, the mean value of each estimated patch is then predicted from its LR version. We also employ first- and second-order derivatives as the features of the LR example patches for training, so that $D_l$ represents the gradient features of images rather than absolute intensities. The four filters used here are

$$f_1 = [-1,\; 0,\; 1], \quad f_2 = f_1^T, \quad f_3 = [1,\; 0,\; -2,\; 0,\; 1], \quad f_4 = f_3^T. \tag{16}$$
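The sketch below applies first- and second-order derivative filters of the form in Eq (16) and stacks the responses, mirroring the feature operator F(·) used later; the specific filter taps follow the common choice of Yang et al. [25], and the function name `gradient_features` is assumed.

```python
import numpy as np
from scipy.ndimage import correlate

# First- and second-order derivative filters as in Eq (16) (assumed taps).
F1 = np.array([[-1, 0, 1]])         # horizontal first derivative
F2 = F1.T                           # vertical first derivative
F3 = np.array([[1, 0, -2, 0, 1]])   # horizontal second derivative
F4 = F3.T                           # vertical second derivative

def gradient_features(patch):
    """F(.): filter a 2-D LR patch with the four kernels and concatenate the responses."""
    patch = patch.astype(np.float64)
    responses = [correlate(patch, f, mode='nearest') for f in (F1, F2, F3, F4)]
    return np.concatenate([r.ravel() for r in responses])
```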

In addition, to enhance robustness to noise, we also subtract the mean pixel value from each LR example patch and concatenate the LR example patch with its corresponding HR example patch into a single vector, which is also used to learn $D_h$. Thus, the dictionary $D_h$ represents not only the textures of the HR example patches but also those of the LR example patches, which are noise-free. In the reconstruction stage, $D_h$ can therefore also be used to recover denoised input LR patches. This is different from conventional learning methods.

From the above, the training set is obtained by

$$P_H = \left\{ \begin{bmatrix} p_h^i - m_h^i \mathbf{1} \\ p_l^i - m_l^i \mathbf{1} \end{bmatrix} \right\}_{i=1}^{n}, \qquad P_L = \left\{ F(p_l^i) \right\}_{i=1}^{n}, \tag{17}$$

where $(p_h, p_l)$ are the original HR-LR patch pairs in Eq (3), $m_h^i$ is the mean value of $p_h^i$, $m_l^i$ is the mean value of $p_l^i$, and $F(\cdot)$ is the operator that computes the four gradient maps of Eq (16) and concatenates them into a single vector.

The set $(P_H, P_L)$ is used to jointly train the dictionaries as

$$\min_{D_h, D_l, A} \; \frac{1}{N}\left\| P_H - D_h A \right\|_2^2 + \frac{1}{M}\left\| P_L - D_l A \right\|_2^2 + \lambda\left( \frac{1}{N} + \frac{1}{M} \right)\left\| A \right\|_1, \tag{18}$$

where $N$ and $M$ are the vector dimensions of $P_H$ and $P_L$, respectively, and $A$ denotes the shared sparse codes.

To solve the problem more easily, Eq (18) can be rewritten as

$$\min_{D_c, A} \left\| P_C - D_c A \right\|_2^2 + \hat{\lambda}\left\| A \right\|_1, \tag{19}$$

where $P_C = \begin{bmatrix} \tfrac{1}{\sqrt{N}} P_H \\ \tfrac{1}{\sqrt{M}} P_L \end{bmatrix}$ and $D_c = \begin{bmatrix} \tfrac{1}{\sqrt{N}} D_h \\ \tfrac{1}{\sqrt{M}} D_l \end{bmatrix}$.

The minimization of Eq (19) is a typical patch-based sparse coding problem, and many methods can be used to solve it. Yang [25] proposed this framework and acquired good results; however, it takes a large amount of time to solve the sparse model. Zeyde [26] improved the execution speed by reducing the dimensionality of the patches through PCA and by using Orthogonal Matching Pursuit for the sparse coding. For learning the sparse dictionaries, we use the approach of Zeyde [26].
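As a rough illustration of this training stage, the sketch below uses scikit-learn's PCA and an OMP-based dictionary learner as stand-ins for the procedure of Zeyde [26]; the online dictionary update differs from the original K-SVD-style training, and the names, sizes, and the least-squares fit of the HR dictionary are assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA, MiniBatchDictionaryLearning

def train_dictionaries(PL, PH, dict_size=1024, n_nonzero=3):
    """Sketch of Zeyde-style training: PCA-reduce the LR features, learn D_l with
    OMP sparse coding, then fit D_h so the shared codes reconstruct the HR vectors."""
    pca = PCA(n_components=0.999, svd_solver='full').fit(PL)
    PL_red = pca.transform(PL)

    learner = MiniBatchDictionaryLearning(n_components=dict_size,
                                          transform_algorithm='omp',
                                          transform_n_nonzero_coefs=n_nonzero)
    codes = learner.fit_transform(PL_red)       # shared sparse codes A
    Dl = learner.components_.T                  # LR dictionary, atoms as columns (reduced space)

    # HR dictionary chosen so that PH ~= codes @ Dh_rows (pseudo-inverse fit, cf. Eq (19))
    Dh_rows, *_ = np.linalg.lstsq(codes, PH, rcond=None)
    return Dl, Dh_rows.T, pca                   # atoms as columns in both dictionaries
```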

Gradient features (see Eq (16)) of the LR example patches are used to learn the LR dictionary, so $D_l$ represents image gradient features and $P_l^i$ is taken from $D_l$. Therefore, the weight model is rewritten as

$$\hat{\omega}_i = \arg\min_{\omega_i} \left\| F(y_i) - D_l^i \omega_i \right\|_2^2 + \lambda \left\| b_i \cdot \omega_i \right\|_2^2, \tag{20}$$

where $D_l^i$ contains the $k$ atoms of $D_l$ similar to $y_i$ and $\hat{\omega}_i$ is the weight.

Problem (20) is an $l_2$-norm regularized least squares problem. We solve it for $\omega_i$ by setting the derivative of the objective with respect to $\omega_i$ to zero. The closed-form solution is

$$\hat{\omega}_i = \left( D_l^{i\,T} D_l^i + \lambda B_i^T B_i \right)^{-1} D_l^{i\,T} F(y_i), \tag{21}$$

where $B_i$ is a $k \times k$ diagonal matrix with

$$B_i(j, j) = b_{ij}, \quad j = 1, \ldots, k. \tag{22}$$

The final optimal weight is obtained by rescaling $\hat{\omega}_i$ so that its entries sum to one.
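A small sketch of the closed-form weight computation in Eqs (20)-(22); `Dl_sim` holds the k similar LR atoms as columns, `b` is the distance-penalty vector, and the final sum-to-one rescaling is read from the text as an assumption.

```python
import numpy as np

def penalty_weights(feat_y, Dl_sim, b, gamma=0.08, sigma=10.0):
    """Solve Eq (21): w = (D^T D + lambda * B^T B)^-1 D^T F(y), then rescale."""
    lam = gamma * sigma ** 2 if sigma != 0 else 0.0   # Eq (15)
    B = np.diag(b)                                    # k x k diagonal penalty, Eq (22)
    A = Dl_sim.T @ Dl_sim + lam * B.T @ B
    w = np.linalg.solve(A, Dl_sim.T @ feat_y)
    return w / w.sum()                                # assumed sum-to-one rescaling
```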

2.3 Reconstruction

Based on the above discussion, for each input patch $y_i$ we start by extracting its gradient features $F(y_i)$ and finding $k$ similar atom pairs $(D_h^i, D_l^i)$. Because the dictionary atoms are learned basis vectors, we find the similar atoms based on the correlation between the LR dictionary atoms and the input LR patch rather than on the Euclidean distance. We now describe how to compute this correlation.

$F(y_i)$ can be represented over the dictionary $D_l = [d_1, d_2, \ldots, d_{n_d}]$, where $d_j$ is the $j$th LR dictionary atom and $n_d$ is the dictionary size:

$$F(y_i) = D_l \beta = \sum_{j=1}^{n_d} \beta_j d_j, \tag{23}$$

where $\beta = [\beta_1, \beta_2, \ldots, \beta_j, \ldots, \beta_{n_d}]^T$ and $\beta_j$ is the correlation between $F(y_i)$ and $d_j$.

Eq (23) shows that every dictionary atom makes its own contribution to representing the input patch. The contribution of the $j$th atom can be evaluated by $\beta_j$; in other words, $\beta_j$ is a measure of the similarity between the input patch and the $j$th dictionary atom $d_j$. The larger $\beta_j$ is, the greater the similarity between the input patch and $d_j$, and a small $\beta_j$ means there is little similarity. We can solve for $\beta$ by least squares:

$$\hat{\beta} = \arg\min_{\beta} \left\| F(y_i) - D_l \beta \right\|_2^2. \tag{24}$$

Thus, $\hat{\beta}$ returns the correlations. In Eq (20), we use the distance $b_i$ as the penalty: when the similarity between $y_i$ and an atom is strong, we make $b_{ij}$ small, which forces a large $\omega_{ij}$; conversely, when the similarity is weak, we make $b_{ij}$ large, which forces a small or zero $\omega_{ij}$. Therefore, we use the reciprocal of $|\beta_j|$ to compute the penalty. The atom pairs corresponding to the $k$ largest correlation coefficients constitute $(D_h^i, D_l^i)$, and $b_i$ in Eq (20) is determined by

$$b_i = 1 \,\big/\, \mathrm{Sort}\!\left(\mathrm{abs}(\hat{\beta}),\, k\right), \tag{25}$$

where $\mathrm{Sort}(a, num)$ is a function returning the $num$ largest values of the vector $a$, $\mathrm{abs}(\cdot)$ is the element-wise absolute value, and the division is element-wise. This scheme finds the similar atoms and computes the distances at the same time. If $\sigma = 0$, after finding the similar atoms we set $b_i = \mathbf{1}$.
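The following sketch selects the k most correlated atoms and builds the reciprocal penalty of Eq (25). Approximating the correlation β by a least-squares fit of the feature over the full dictionary (Eqs (23)-(24)) is an assumption, as is the helper name `similar_atoms`.

```python
import numpy as np

def similar_atoms(feat_y, Dl, k=8, sigma=10.0):
    """Return the indices of the k most correlated atoms and the penalty vector b_i."""
    beta, *_ = np.linalg.lstsq(Dl, feat_y, rcond=None)  # feat_y ~= Dl @ beta  (Eq (24), assumed)
    idx = np.argsort(np.abs(beta))[::-1][:k]            # k largest |beta_j|
    if sigma == 0:
        return idx, np.ones(k)                          # no penalty in the noise-free case
    return idx, 1.0 / (np.abs(beta[idx]) + 1e-12)       # reciprocal correlations, Eq (25)
```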

After this, we can easily obtain the weight $\hat{\omega}_i$ from Eqs (20) and (21). According to Section 2.2, the reconstructed vector $D_h^i \hat{\omega}_i$ represents the estimated HR patch and the denoised LR patch corresponding to $y_i$, both with their mean pixel values removed. Based on this, we have

$$\begin{bmatrix} \hat{x}_i \\ \hat{y}_i \end{bmatrix} = D_h^i \hat{\omega}_i + \begin{bmatrix} E(x_i)\, \mathbf{1}_{w_1} \\ E(y_i)\, \mathbf{1}_{w_2} \end{bmatrix}, \tag{26}$$

where $\hat{x}_i$ is the estimate of $x_i$, $\hat{y}_i$ is the denoised patch of $y_i$, $\mathbf{1}_{w_1}$ and $\mathbf{1}_{w_2}$ are all-one column vectors, $w_1$ is the size of $x_i$, $w_2$ is the size of $y_i$, and $E(\cdot)$ is the mean evaluation operator.

The noise is assumed to be zero-mean, so

$$E(y_i) = E(G_s H x_i) + E(v_i) \approx E(G_s H x_i) \approx E(x_i). \tag{27}$$

We can see that the noise has little effect on the image mean, so the means of $x_i$ and $y_i$ can both be estimated by the mean of $y_i$. Eq (26) can then be written as

$$\begin{bmatrix} \hat{x}_i \\ \hat{y}_i \end{bmatrix} = D_h^i \hat{\omega}_i + E(y_i) \begin{bmatrix} \mathbf{1}_{w_1} \\ \mathbf{1}_{w_2} \end{bmatrix}. \tag{28}$$

All estimated patches $\hat{x}_i$ are merged into an HR image $\hat{X}$ by averaging in overlapping regions, and in the same way we obtain a denoised LR image $\hat{Y}$ from the patches $\hat{y}_i$. In order to strengthen the reconstruction constraint of Eq (1), we compute the final estimated HR image $X^*$ by

$$X^* = \arg\min_{X} \left\| G_s H X - \hat{Y} \right\|_2^2. \tag{29}$$
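The merging step described above (averaging the reconstructed patches over their overlapping regions) might look like the following sketch; patch positions and sizes are illustrative.

```python
import numpy as np

def assemble_image(patches, positions, patch_size, image_shape):
    """Place square patches at the given top-left positions and average overlaps."""
    acc = np.zeros(image_shape)
    cnt = np.zeros(image_shape)
    for patch, (r, c) in zip(patches, positions):
        acc[r:r + patch_size, c:c + patch_size] += np.reshape(patch, (patch_size, patch_size))
        cnt[r:r + patch_size, c:c + patch_size] += 1.0
    return acc / np.maximum(cnt, 1.0)
```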

The iterative back-projection (IBP) method [32] is used to solve this optimization problem, with the estimated HR image $\hat{X}$ taken as the initial value $X^0$:

$$X^{t+1} = X^t + \left( \left( \hat{Y} - G_s H X^t \right) \uparrow s \right) * p, \tag{30}$$

where $X^t$ is the estimate of the HR image at the $t$th iteration, $\uparrow s$ denotes up-scaling by factor $s$, $*$ denotes convolution, and $p$ is a symmetric Gaussian filter.
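A minimal IBP sketch for Eq (30), assuming a Gaussian blur stands in for both H and the back-projection filter p, integer decimation for G_s, and bilinear interpolation for the up-scaling ↑s; it refines the initial HR estimate using the denoised LR image.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, zoom

def back_projection(x0, y_denoised, s=3, blur_sigma=1.0, n_iter=20):
    """Eq (30): iteratively add the up-scaled, filtered residual of the LR constraint.
    Assumes x0.shape == (s * y_denoised.shape[0], s * y_denoised.shape[1])."""
    x = x0.astype(np.float64).copy()
    for _ in range(n_iter):
        simulated_lr = gaussian_filter(x, blur_sigma)[::s, ::s]  # G_s H X^t
        residual = y_denoised - simulated_lr
        up = zoom(residual, s, order=1)                          # residual up-scaled by s
        x = x + gaussian_filter(up, blur_sigma)                  # convolve with p
    return x
```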

The entire SR process is summarized as Algorithm 1.

Algorithm 1: The Proposed SR Algorithm

Input: the sparse dictionaries Dh and Dl; input LR image Y; number of similar atoms k; a positive constant γ;

Output: HR image X*;

1: for each patch y_i of Y do

2:  Extract the gradient features F(y_i) for y_i by Eq (16).

3:  Find k similar atom pairs (D_h^i, D_l^i) and compute b_i by Eq (25).

4:  Solve Eq (21) for the weight ω_i.

5:  Generate estimated HR patch and denoised patch by Eq (28).

6: end for

7: Put the estimated HR patches and the denoised LR patches into an estimated HR image and a denoised LR image, respectively, by averaging in overlapping regions.

8: Perform IBP (Eq (30)) to obtain the HR image X*.
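Putting the pieces together, a hedged end-to-end driver following Algorithm 1 could be assembled from the helper sketches given earlier (`gradient_features`, `similar_atoms`, `penalty_weights`, `assemble_image`, `back_projection`). The dictionary orientation (atoms as columns), the assumption that D_l lives in the same feature space as `gradient_features`, and the layout of the joint HR/LR atoms in D_h are all illustrative choices.

```python
import numpy as np

def super_resolve(Y, Dl, Dh, s=3, k=8, gamma=0.08, sigma=10.0, patch=3, step=1):
    """Driver mirroring Algorithm 1; sizes, step, and dictionary layout are assumptions."""
    hr_dim = (s * patch) ** 2                 # length of the HR part of each D_h atom
    hr_patches, lr_patches, positions = [], [], []
    for r in range(0, Y.shape[0] - patch + 1, step):
        for c in range(0, Y.shape[1] - patch + 1, step):
            y = Y[r:r + patch, c:c + patch]
            feat = gradient_features(y)                              # step 2
            idx, b = similar_atoms(feat, Dl, k, sigma)               # step 3
            w = penalty_weights(feat, Dl[:, idx], b, gamma, sigma)   # step 4
            rec = Dh[:, idx] @ w                                     # mean-removed [x_hat; y_hat]
            hr_patches.append(rec[:hr_dim] + y.mean())               # step 5, Eq (28)
            lr_patches.append(rec[hr_dim:] + y.mean())
            positions.append((r, c))
    X0 = assemble_image(hr_patches, [(r * s, c * s) for r, c in positions],
                        s * patch, (Y.shape[0] * s, Y.shape[1] * s))  # step 7
    Y_dn = assemble_image(lr_patches, positions, patch, Y.shape)
    return back_projection(X0, Y_dn, s=s, n_iter=20)                  # step 8, Eq (30)
```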

3 Experiments

In this section, we show the robustness of the proposed algorithm to noise and compare it with the state-of-the-art methods [20, 22, 25, 26, 31, 32]. In the training stage, we used 77 standard natural images as the training set. For testing, we used Set5 [20, 31], Set14 [20, 31], and B100 [20, 31] to evaluate the performance for upscaling factors ×2, ×3, and ×4. Set5 and Set14 contain 5 and 14 images, respectively, for super-resolution evaluation. B100 contains 100 test images from the Berkeley Segmentation Dataset (BSDS300).

All LR images (training or test images) are generated from the original HR images. First, the original HR images are blurred and down-sampled; the MATLAB function "imresize", which applies a smoothing filter before down-sampling, is used to complete this process. Similar to [7], the noise is generated by the MATLAB function "randn", and σ times the generated noise is added to the blurred and down-sampled test images. Note that the LR example images used for dictionary training are noise-free. For the color images used in the experiments, the SR algorithms are applied only to the luminance channel, because human vision is more sensitive to luminance changes. Therefore, we first convert the images to the YCbCr color space and then apply our method to the Y channel. The color layers (Cb, Cr) are upscaled using bicubic interpolation.
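For reference, a rough Python equivalent of this test-image pipeline (with skimage resizing and NumPy noise standing in for MATLAB's imresize and randn, so the anti-aliasing filter differs slightly) might look like:

```python
import numpy as np
from skimage.color import rgb2ycbcr
from skimage.transform import resize

def make_noisy_lr(hr_rgb, s=3, sigma=10.0, rng=None):
    """Smooth + down-sample the luminance channel, then add sigma * randn noise.
    Only the Y channel is super-resolved; Cb/Cr are later bicubic-interpolated."""
    rng = np.random.default_rng() if rng is None else rng
    y = rgb2ycbcr(hr_rgb)[..., 0]                          # luminance channel
    lr_y = resize(y, (y.shape[0] // s, y.shape[1] // s),
                  anti_aliasing=True)                      # blur + down-sample
    return lr_y + sigma * rng.standard_normal(lr_y.shape)  # additive Gaussian noise
```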

3.1 Parameters

In this section, we analyze the main parameters of our algorithm. The standard settings are the Set5 [20, 31] database, dictionary size 1024, γ = 0.08, and k = 24 for upscaling factor ×2 or k = 8 for upscaling factors ×3 and ×4. Peak signal-to-noise ratio (PSNR) and reconstruction time were used as the objective criteria.

3.1.1 Regularization parameter.

γ is a key regularization parameter of our method. Here, we validate the effect of different γ values and choose an appropriate one. The results on Set5 are shown in Fig 2; the experimental setting is dictionary size 1024 with k = 24 for upscaling factor ×2 and k = 8 for upscaling factors ×3 and ×4. We can see that the curves are not monotonic and that the PSNR peaks at γ = 0.08. For different datasets, the optimal γ differs slightly (0.06 for Set14 and B100 compared with 0.08 for Set5). The results for Set14 and B100 are shown in S1–S6 Figs. Therefore, we suggest setting γ to around 0.08 in practice. In all of the following experiments we set γ = 0.08 for convenience.

Fig 2. γ versus average PSNR on Set5.

(A) upscaling factor ×2; (B) upscaling factor ×3; (C) upscaling factor ×4.

https://doi.org/10.1371/journal.pone.0182165.g002

3.1.2 Dictionary size.

In this experiment, the dictionary size is varied from 32 up to 2048, while the training samples are extracted from the same training images mentioned previously. In Fig 3, we present results showing the relation between our method's performance and the dictionary size when γ = 0.08 with k = 24 for upscaling factor ×2 and k = 8 for upscaling factors ×3 and ×4. Since noise has little effect on reconstruction time, we only show the reconstruction time for σ = 10. We can see that the larger the learned dictionary, the better the reconstruction quality, although this comes with a higher computational cost; this result agrees with [25, 47]. The other datasets, Set14 and B100, show similar behavior; their results are given in S7–S12 Figs. In practice, we suggest choosing the dictionary size as a tradeoff between reconstruction quality and computation. The dictionary size is set to 1024 in the following experiments.

Fig 3. Dictionary size influence on performance on average on Set5.

(A) upscaling factor ×2; (B) upscaling factor ×3; (C) upscaling factor ×4.

https://doi.org/10.1371/journal.pone.0182165.g003

3.1.3 Number of similar atoms.

The proposed method finds similar atom pairs for each input patch, so its performance depends on the number of similar atoms k. The effect of k is shown in Fig 4 for dictionary size 1024 and γ = 0.08; again, we only show the reconstruction time for σ = 10. We can see that k = 24 is best for reconstruction quality when the upscaling factor is ×2, while the PSNR peaks at k = 8 for upscaling factors ×3 and ×4. Moreover, the average reconstruction time increases distinctly as k increases, because a larger k increases the cost of the matrix inversion in Eq (21). The other datasets, Set14 and B100, show similar behavior; their results are given in S13–S18 Figs. Therefore, in resource-limited systems, a reasonable selection of k depends on the tradeoff between reconstruction quality and computation time. We use k = 24 for upscaling factor ×2 and k = 8 for upscaling factors ×3 and ×4 in our further experiments.

Fig 4. Number of similar atoms influence on performance on average on Set5.

(A) upscaling factor ×2; (B) upscaling factor ×3; (C) upscaling factor ×4.

https://doi.org/10.1371/journal.pone.0182165.g004

3.1.4 Patch size and overlap.

Intuitively, a patch size that is too large or too small tends to produce over-smoothed results or unwanted artifacts, as also noticed in [25, 29], and a larger overlap leads to better SR results [25]. Therefore, the patch size is set to 6×6, 6×6, and 8×8 for upscaling factors ×2, ×3, and ×4, respectively, and the overlap is set to 4, 3, and 4 pixels, respectively.

3.2 Performance evaluation

In this section we analyze the performance of our algorithm in quantitative and qualitative comparison with the state-of-the-art methods NE [22], SCSR [25], Zeyde [26], A+ [31], SRCNN [20], and CSC [32]. We also report the reconstruction times of the algorithms. The code of the compared methods was downloaded from the authors' homepages. Peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) were used as the objective criteria. The parameters are those analyzed in the previous section; besides the patch size and overlap (see Section 3.1.4), the other parameters are unified (γ = 0.08, dictionary size = 1024, k = 24 for upscaling factor ×2, and k = 8 for upscaling factors ×3 and ×4).

3.2.1 Quality.

Tables 1–3 list the PSNR and SSIM comparisons. When σ = 0, the approach CSC [32] achieves the best performance, but the noise-free case does not accord with real applications. When σ ≠ 0, the results repeatedly demonstrate the superiority of the proposed algorithm over the other approaches on Set5, Set14, and B100. The average PSNR of the recent method CSC [32] is between 0.24 dB (Set14, upscaling factor ×4, σ = 5) and 7.4 dB (Set5, upscaling factor ×2, σ = 20) behind our method. Compared with CSC on the B100 dataset, the average PSNR improvement ranges from 0.52 dB (upscaling factor ×4, σ = 5) to 6.18 dB (upscaling factor ×2, σ = 20). In addition, our method improves on average by 3.62 dB (Set5, upscaling factor ×2, σ = 20) over the next most robust method, SCSR [25]. Figs 5–8 provide a visual assessment. Our method attains quality similar to the top compared methods when σ = 0 and shows the strongest robustness to noise.

Table 3. The results of average PSNR (dB) and SSIM on the Set14 and B100 dataset.

https://doi.org/10.1371/journal.pone.0182165.t003

Fig 5. Comparisons with various image super-resolution methods on “coastguard” from Set14 with upscaling factor ×2 (σ = 0, PSNR in dB).

(A) Ground truth HR; (B) NE [22]; (C) SCSR [25]; (D) Zedye [26]; (E) A+ [31]; (F) SRCNN [20]; (G) CSC [32]; (H) ours.

https://doi.org/10.1371/journal.pone.0182165.g005

Fig 6. Comparisons with various image super-resolution methods on “16077” from B100 with upscaling factor ×2 (σ = 10, PSNR in dB).

(A) Ground truth HR; (B) NE [22]; (C) SCSR [25]; (D) Zedye [26]; (E) A+ [31]; (F) SRCNN [20]; (G) CSC [32]; (H) ours.

https://doi.org/10.1371/journal.pone.0182165.g006

Fig 7. Comparisons with various image super-resolution methods on “241004” from B100 with upscaling factor ×3 (σ = 10, PSNR in dB).

(A) Ground truth HR; (B) NE [22]; (C) SCSR [25]; (D) Zedye [26]; (E) A+ [31]; (F) SRCNN [20]; (G) CSC [32]; (H) ours.

https://doi.org/10.1371/journal.pone.0182165.g007

Fig 8. Comparisons with various image super-resolution methods on “208001” from B100 with upscaling factor ×4 (σ = 10, PSNR in dB).

(A) Ground truth HR; (B) NE [22]; (C) Zedye [26]; (D) A+ [31]; (E) SRCNN [20]; (F) CSC [32]; (G) ours.

https://doi.org/10.1371/journal.pone.0182165.g008

3.2.2 Reconstruction time.

The average reconstruction time over the test images in Set5 was compared at σ = 10; noise has little effect on the timing results. The experiments were conducted on the same computer, and the results are summarized in Table 4. The reconstruction time varies considerably for different upscaling factors. Our algorithm costs less than 10 s, which is comparable to SCSR, CSC, and SRCNN; SCSR is the slowest method.

3.3 Effect of IBP

Combined with iterative back projection (IBP), the denoised LR patches are used to improve SR performance in our algorithm. According to [47], IBP plays an important role in improving SR performance. However, if the input is a noisy image, IBP propagates the noise into the HR image. Experimental results show that applying IBP directly to the input LR image degrades performance. The results are listed in Table 5; the number of IBP iterations is 20. From this comparison, the superiority of our method is evident. The other datasets, Set14 and B100, show similar behavior; their results are given in S1 Table.

3.4 Effect of distance penalty

The distance penalty is applied to model the weights. To check its effect on SR performance, we run our method with and without the penalty on the Set5 database for different values of γ. The results are shown in Fig 9. Our method with the distance penalty clearly obtains better performance. The other datasets, Set14 and B100, show similar behavior; their results are given in S19–S24 Figs.

Fig 9. Effect of distance penalty on average PSNR (dB)(Set 5).

(A) upscaling factor ×2; (B) upscaling factor ×3; (C) upscaling factor ×4.

https://doi.org/10.1371/journal.pone.0182165.g009

4 Conclusion

In this research, we proposed an algorithm for noisy image super-resolution based on sparse representation. Most existing methods become less effective on noisy inputs because they assume that the input LR image is noise-free, whereas the proposed algorithm achieves image super-resolution and denoising simultaneously. For different input LR images, even if the noise variance varies, the dictionary pair does not need to be retrained. The core idea of the proposed algorithm is that each HR image patch is reconstructed as a weighted average of similar HR example patches. In particular, the atoms of a learned sparse dictionary, rather than the example patches themselves, are used to compute the weights and reconstruct the HR patch; this strategy reduces computation time and suppresses noise. In addition, mean-removed LR example patches, rather than only their gradient features, are also used to learn the dictionary, which helps IBP to further improve SR performance. The experimental results show that our method is more robust to noise.

Supporting information

S1 Fig. γ versus average PSNR on Set14. (upscaling factor ×2).

https://doi.org/10.1371/journal.pone.0182165.s001

(TIF)

S2 Fig. γ versus average PSNR on Set14. (upscaling factor ×3).

https://doi.org/10.1371/journal.pone.0182165.s002

(TIF)

S3 Fig. γ versus average PSNR on Set14. (upscaling factor ×4).

https://doi.org/10.1371/journal.pone.0182165.s003

(TIF)

S4 Fig. γ versus average PSNR on B100. (upscaling factor ×2).

https://doi.org/10.1371/journal.pone.0182165.s004

(TIF)

S5 Fig. γ versus average PSNR on B100. (upscaling factor ×3).

https://doi.org/10.1371/journal.pone.0182165.s005

(TIF)

S6 Fig. γ versus average PSNR on B100. (upscaling factor ×4).

https://doi.org/10.1371/journal.pone.0182165.s006

(TIF)

S7 Fig. Dictionary size influence on performance on average on Set14. (upscaling factor ×2).

https://doi.org/10.1371/journal.pone.0182165.s007

(TIF)

S8 Fig. Dictionary size influence on performance on average on Set14. (upscaling factor ×3).

https://doi.org/10.1371/journal.pone.0182165.s008

(TIF)

S9 Fig. Dictionary size influence on performance on average on Set14. (upscaling factor ×4).

https://doi.org/10.1371/journal.pone.0182165.s009

(TIF)

S10 Fig. Dictionary size influence on performance on average on B100. (upscaling factor ×2).

https://doi.org/10.1371/journal.pone.0182165.s010

(TIF)

S11 Fig. Dictionary size influence on performance on average on B100. (upscaling factor ×3).

https://doi.org/10.1371/journal.pone.0182165.s011

(TIF)

S12 Fig. Dictionary size influence on performance on average on B100. (upscaling factor ×4).

https://doi.org/10.1371/journal.pone.0182165.s012

(TIF)

S13 Fig. Number of similar atoms influence on performance on average on Set14. (upscaling factor ×2).

https://doi.org/10.1371/journal.pone.0182165.s013

(TIF)

S14 Fig. Number of similar atoms influence on performance on average on Set14. (upscaling factor ×3).

https://doi.org/10.1371/journal.pone.0182165.s014

(TIF)

S15 Fig. Number of similar atoms influence on performance on average on Set14. (upscaling factor ×4).

https://doi.org/10.1371/journal.pone.0182165.s015

(TIF)

S16 Fig. Number of similar atoms influence on performance on average on B100. (upscaling factor ×2).

https://doi.org/10.1371/journal.pone.0182165.s016

(TIF)

S17 Fig. Number of similar atoms influence on performance on average on B100. (upscaling factor ×3).

https://doi.org/10.1371/journal.pone.0182165.s017

(TIF)

S18 Fig. Number of similar atoms influence on performance on average on B100. (upscaling factor ×4).

https://doi.org/10.1371/journal.pone.0182165.s018

(TIF)

S19 Fig. Effect of Distance Penalty on Average PSNR (dB) on average on Set14. (upscaling factor ×2).

https://doi.org/10.1371/journal.pone.0182165.s019

(TIF)

S20 Fig. Effect of Distance Penalty on Average PSNR (dB) on average on Set14. (upscaling factor ×3).

https://doi.org/10.1371/journal.pone.0182165.s020

(TIF)

S21 Fig. Effect of Distance Penalty on Average PSNR (dB) on average on Set14. (upscaling factor ×4).

https://doi.org/10.1371/journal.pone.0182165.s021

(TIF)

S22 Fig. Effect of Distance Penalty on Average PSNR (dB) on average on B100. (upscaling factor ×2).

https://doi.org/10.1371/journal.pone.0182165.s022

(TIF)

S23 Fig. Effect of Distance Penalty on Average PSNR (dB) on average on B100. (upscaling factor ×3).

https://doi.org/10.1371/journal.pone.0182165.s023

(TIF)

S24 Fig. Effect of Distance Penalty on Average PSNR (dB) on average on B100. (upscaling factor ×4).

https://doi.org/10.1371/journal.pone.0182165.s024

(TIF)

S1 Table. Effect of IBP on Average PSNR (dB) and SSIM (Set14 and B100).

https://doi.org/10.1371/journal.pone.0182165.s025

(PDF)

References

  1. Li X, Orchard MT. New edge-directed interpolation. IEEE Trans. Image Process. 10 (2001) 1521–1527. pmid:18255495
  2. Zhang X, Wu X. Image interpolation by adaptive 2-D autoregressive modeling and soft-decision estimation. IEEE Trans. Image Process. 17 (2008) 887–896. pmid:18482884
  3. Li Xiaoyan, He Hongjie, Yin Zhongke, Chen Fan, Cheng Jun. KPLS-based image super-resolution using clustering and weighted boosting. Neurocomputing. 149 (2015) 940–948.
  4. Sun J, Xu Z, Shum H-Y. Image super-resolution using gradient profile prior. IEEE Conf. Computer Vision and Pattern Recognition. (2008) 1–8.
  5. Tai YW, Liu S, Brown MS, Lin S. Super resolution using edge prior and single image detail synthesis. IEEE Conf. Comput. Vis. Pattern Recognit. (2010) 2400–2407.
  6. Marquina A, Osher SJ. Image super-resolution by TV-regularization and Bregman iteration. J. Sci. Comput. 37(3) (2008) 367–382.
  7. Dong W, Zhang L, Shi G, et al. Nonlocally centralized sparse representation for image restoration. IEEE Trans. Image Process. 22(4) (2013) 1620–1630.
  8. Dong W, Zhang L. Sparse representation based image interpolation with nonlocal autoregressive modeling. IEEE Trans. Image Process. 22(4) (2013) 1382–1394.
  9. Zhang J, Zhao D, Gao W. Group-based sparse representation for image restoration. IEEE Trans. Image Process. 23(8) (2014) 3336–3351.
  10. Zhang J, Zhao D, Xiong R, et al. Image restoration using joint statistical modeling in a space-transform domain. IEEE Trans. Circuits Syst. Video Technol. 24(6) (2014) 915–928.
  11. Lin Z, Shum H. Fundamental limits of reconstruction-based super-resolution algorithms under local translation. IEEE Trans. Pattern Anal. Mach. Intell. 26(1) (2004) 83–97. pmid:15382688
  12. Lee Hui Jung, Choi Dong-Yoon, Song Byung Cheol. Learning-based super-resolution algorithm using quantized pattern and bimodal postprocessing for text images. Journal of Electronic Imaging. 24(6) (2015) 063011.
  13. Zhang Kaibing, Gao Xinbo, Tao Dacheng, et al. Single image super-resolution with multiscale similarity learning. IEEE Trans. Neural Networks and Learning Systems. 24(10) (2013) 1648–1659.
  14. Trinh Dinh-Hoan, Luong Marie, et al. Novel example-based method for super-resolution and denoising of medical images. IEEE Trans. Image Process. 23(4) (2014) 1882–1895.
  15. Freeman WT, Jones TR, Pasztor EC. Example-based super-resolution. IEEE Comput. Graph. 22(2) (2002) 56–65.
  16. Kim KI, Kwon Y. Example-based learning for single-image super-resolution. Proc. DAGM. (2008) 456–465.
  17. Ni Karl S, Nguyen TQ. Image superresolution using support vector regression. IEEE Trans. Image Process. 16(6) (2007) 1596–1610. pmid:17547137
  18. Wei Zhao, Tao Feng, Jun Wang. Kalman filter based method for image superresolution using a sequence of low-resolution images. Journal of Electronic Imaging. 23(1) (2014).
  19. Tang Songze, Xiao Liang, Liu Pengfei, Zhang Jun, Huang Lili. Edge and color preserving single image superresolution. Journal of Electronic Imaging. 23(3) (2014).
  20. Dong Chao, Loy Chen Change, He Kaiming, Tang Xiaoou. Learning a deep convolutional network for image super-resolution. European Conference on Computer Vision. (2014) 184–199.
  21. Dai D, Timofte R, Van Gool L. Jointly optimized regressors for image super-resolution. Image and Video Processing. 34(2) (2015) 95–104.
  22. Chang H, Yeung D-Y, Xiong Y. Super-resolution through neighbor embedding. IEEE Conf. Comput. Vis. Pattern Recog. (2004) 275–282.
  23. Gao X, Zhang K, Li X, Tao D. Image super-resolution with sparse neighbor embedding. IEEE Trans. Image Process. 21(7) (2014) 3194–3205.
  24. Ni Karl S, Nguyen Truong Q. An adaptable k-nearest neighbors algorithm for MMSE image interpolation. IEEE Trans. Image Process. 18(9) (2009) 1976–1987. pmid:19473939
  25. Yang J, et al. Image super-resolution via sparse representation. IEEE Trans. Image Process. 19(11) (2010) 2861–2873.
  26. Zeyde R, Elad M, Protter M. On single image scale-up using sparse-representations. Proc. 7th Int. Conf. Curves Surf. (2010) 711–730.
  27. Wang S, Zhang L, Liang Y, Pan Q. Semi-coupled dictionary learning with applications to image super-resolution and photo-sketch image synthesis. CVPR. (2012).
  28. Zhou Fei, Yuan Tingrong, Yang Wenming, et al. Single-image super-resolution based on compact KPCA coding and kernel regression. IEEE Signal Processing Letters. 22(3) (2015) 336–340.
  29. Zhang Kaibing, Tao Dacheng, Gao Xinbo. Learning multiple linear mappings for efficient single image super-resolution. IEEE Trans. Image Process. 24(3) (2015) 846–861.
  30. Timofte Radu, De Smet Vincent, Van Gool Luc. Anchored neighborhood regression for fast example-based super-resolution. IEEE International Conference on Computer Vision. (2013) 1920–1927.
  31. Timofte Radu, De Smet Vincent, Van Gool Luc. A+: Adjusted anchored neighborhood regression for fast super-resolution. ACCV. (2014) 111–126.
  32. Gu S, Zuo W, Xie Q, Meng D, Feng X, Zhang L. Convolutional sparse coding for image super-resolution. ICCV. (2015).
  33. Yang Min-Chun, Wang Yu-Chiang. A self-learning approach to single image super-resolution. IEEE Trans. Multimedia. 15(3) (2013) 498–508.
  34. Singh A, Ahuja N. Super-resolution using sub-band self-similarity. ACCV. (2014) 1–8.
  35. Huang Jia-Bin, Singh Abhishek, Ahuja Narendra. Single image super-resolution from transformed self-exemplars. CVPR. (2015) 5197–5206.
  36. Xie Jun, Feris RS, Yu Shiaw-Shian, Sun Ming-Ting. Joint super resolution and denoising from a single depth image. IEEE Trans. Multimedia. 17(9) (2015) 1525–1537.
  37. Xu Jian, Chang Zhiguo, Fan Jiulun, Zhao Xiaoqiang, Wu Xiaomin, Wang Yanzi. Noisy image magnification with total variation regularization and order-changed dictionary learning. Journal on Advances in Signal Processing. 41 (2015).
  38. Elad M, Aharon M. Image denoising via sparse and redundant representations over learned dictionaries. IEEE Trans. Image Process. 15(12) (2006) 3736–3745. pmid:17153947
  39. Chen Chun Lung Philip, Liu Licheng, Chen Long, Tang Yuan Yan, Zhou Yicong. Weighted couple sparse representation with classified regularization for impulse noise removal. IEEE Trans. Image Process. 24(11) (2015) 4014–4026. pmid:26186781
  40. Liu Yu, Wang Zengfu. Simultaneous image fusion and denoising with adaptive sparse representation. IET Image Processing. 9(9) (2015) 347–357.
  41. Dong Weisheng, Zhang Lei. Nonlocally centralized sparse representation for image restoration. IEEE Trans. Image Process. 22(4) (2013) 1620–1630.
  42. Zhang Zhao, Jiang Weiming, Li Fanzhang, Zhao Mingbo, Li Bing, Zhang Li. Structured latent label consistent dictionary learning for salient machine faults representation-based robust classification. IEEE Trans. Industrial Informatics. 13(2) (2017) 644–656.
  43. Zhang Zhao, Li Fanzhang, Chow Tommy WS, Zhang Li, Yan Shuicheng. Sparse codes auto-extractor for classification: a joint embedding and dictionary learning framework for representation. IEEE Trans. Signal Processing. 64(14) (2016) 3790–3805.
  44. Gu Shuhang, Zhang Lei, Zuo Wangmeng, Feng Xiangchu. Projective dictionary pair learning for pattern classification. Advances in Neural Information Processing Systems. (2014) 793–801.
  45. Feng Zhizhao, Yang Meng, Zhang Lei, Liu Yan, Zhang David. Joint discriminative dimensionality reduction and dictionary learning for face recognition. Pattern Recognition. 46(8) (2013) 2134–2143.
  46. Jiang Weiming, Zhang Zhao, Li Fanzhang, Zhang Li, Zhao Mingbo, Jin Xiaohang. Joint label consistent dictionary learning and adaptive label prediction for semisupervised machine fault classification. IEEE Trans. Industrial Informatics. 12(1) (2016) 248–256.
  47. Timofte Radu, Rothe Rasmus, Van Gool Luc. Seven ways to improve example-based single image super resolution. CVPR. (2016).