
Fig 1.

Graphical demonstration of attenuation rates corresponding to different wavelengths when light propagates in water.

Blue light travels farthest because its short wavelength is attenuated least in water; this is one of the main reasons underwater images often appear blue [1].
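As a rough numerical illustration of the caption, the Beer-Lambert law I/I0 = exp(-c·z) with illustrative (not measured) per-channel attenuation coefficients shows how little red light survives a 10 m path compared with blue:

```python
import numpy as np

# Illustrative attenuation coefficients (1/m) for clear ocean water;
# these values are assumptions for demonstration, not measurements.
attenuation = {"red": 0.35, "green": 0.07, "blue": 0.03}

def transmitted_fraction(coeff, depth_m):
    # Beer-Lambert law: I / I0 = exp(-c * z)
    return np.exp(-coeff * depth_m)

for color, c in attenuation.items():
    print(color, round(transmitted_fraction(c, 10.0), 3))
```

With these coefficients, only about 3% of red light survives 10 m while roughly 74% of blue light does, which is the color imbalance the figure depicts.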


Fig 2.

Comparison of RGB histograms for an underwater image and its restored counterpart.

In each pair, the bottom image is the input underwater image and the top image is the restored result.


Fig 3.

The overall structure of our model.

First, an underwater image from the degradation domain is fed to the generator, which reconstructs it as G(Ix); the encoder Genc then extracts features from both images. The Que-Attn module selects the important features used to build the contrastive loss. The generated image is passed to the discriminator for real/fake identification, and the parameters of the whole network are updated.
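The data flow in this caption can be sketched with toy stand-in networks. The shapes, the residual generator, and the mean-squared placeholder for the contrastive term are assumptions for illustration only, not the paper's actual architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for the networks of Fig 3 (assumed sizes).
W_gen = rng.standard_normal((16, 16)) * 0.05
W_enc = rng.standard_normal((16, 8)) * 0.05
w_disc = rng.standard_normal(16) * 0.05

def G(x):        # generator: degraded image -> reconstructed image G(Ix)
    return x + x @ W_gen

def G_enc(x):    # shared encoder: image -> feature vector
    return x @ W_enc

def D(x):        # discriminator: realness score in (0, 1)
    return 1.0 / (1.0 + np.exp(-(x @ w_disc)))

def generator_objective(ix, lam=1.0):
    """One forward pass of the Fig 3 pipeline (losses only, no update)."""
    fake = G(ix)                         # 1. translate the degraded image
    fx, ffake = G_enc(ix), G_enc(fake)   # 2. encode both images
    # 3. contrastive term: pull matched features together (simplified
    #    placeholder for the Que-Attn contrastive loss detailed in Fig 4).
    l_con = np.mean((fx - ffake) ** 2)
    # 4. adversarial term: the fake should fool the discriminator.
    l_gan = -np.mean(np.log(D(fake) + 1e-8))
    return l_gan + lam * l_con           # 5. total generator objective

ix = rng.standard_normal((4, 16))        # degraded-domain batch (flattened)
loss = generator_objective(ix)
print(loss)
```

The weight `lam` on the contrastive term is an assumed hyperparameter; in practice the discriminator is trained with its own loss in alternation.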


Fig 4.

Detailed diagram of attention block.

We first use the encoder to extract the features Fx and Fy of the input image and the generated image. The attention matrix Ag is then obtained by a matrix operation on Fx. We reorder the feature rows by importance and select n of them to form the required attention matrix AQA, which is used to choose patch features between the source and target domains. The resulting positive, negative, and anchor features construct the contrastive loss Lcon. Positive and negative samples come from the real-domain image Ix, while the anchor comes from the translated image G(Ix).
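One plausible reading of this block, sketched in NumPy: rank patch features by the attention mass they receive in Ag, keep the top n rows as the selection, then build an InfoNCE (PatchNCE-style) loss from the selected patches. The ranking rule, temperature, and all shapes are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def que_attn_select(feat, n):
    """Rank the N patch features (rows of feat) by global attention, keep top n.

    The global attention matrix Ag = softmax(feat @ feat.T) scores how much
    each patch attends to every other; here "importance" is taken as the
    attention mass a patch receives (one reading of the caption).
    """
    logits = feat @ feat.T                       # (N, N)
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    a_g = np.exp(logits)
    a_g /= a_g.sum(axis=1, keepdims=True)        # row-wise softmax -> Ag
    importance = a_g.sum(axis=0)                 # attention mass received
    return np.argsort(-importance)[:n]           # indices of the n chosen patches

def patch_nce(anchor, positive, negatives, tau=0.07):
    """InfoNCE loss for one anchor patch (PatchNCE-style)."""
    pos = anchor @ positive / tau
    neg = anchor @ negatives.T / tau             # similarities to negatives
    logits = np.concatenate([[pos], neg])
    return -pos + np.log(np.sum(np.exp(logits)))

fx = rng.standard_normal((64, 32))   # Fx: features of the input image Ix
fy = rng.standard_normal((64, 32))   # Fy: features of the translation G(Ix)
idx = que_attn_select(fx, n=8)

# Anchor from G(Ix); positive = same location in Ix; negatives = the other
# selected locations in Ix, matching the caption's sample assignment.
i = idx[0]
loss = patch_nce(fy[i], fx[i], fx[idx[1:]])
print(round(loss, 4))
```

Because the log-sum-exp includes the positive logit, this loss is always non-negative; it is minimized when the anchor matches its positive and is dissimilar to all negatives.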


Table 1.

Quantitative results on synthetic underwater images in terms of SSIM, PSNR, and UIQM on the EUVP dataset.
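For reference, PSNR (one of the metrics reported in the table) is computed from the mean squared error between a restored image and its ground truth; a minimal sketch follows (SSIM and UIQM are more involved and omitted here):

```python
import numpy as np

def psnr(ref, test, max_val=255.0):
    # Peak signal-to-noise ratio in dB between a reference image and a
    # restored image; higher is better, identical images give infinity.
    mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(max_val ** 2 / mse)

ref = np.full((8, 8), 100.0)
test = ref + 10.0                   # uniform error of 10 gray levels -> MSE = 100
print(round(psnr(ref, test), 2))    # → 28.13
```

In practice library implementations such as scikit-image's `peak_signal_noise_ratio` are used rather than hand-rolled code.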


Table 2.

Quantitative results on synthetic underwater images in terms of SSIM, PSNR, and UIQM on the UIEB dataset.


Fig 5.

Qualitative results on the HICRD test set; all examples are randomly selected from it.

We compare our model with other underwater image restoration baselines. Traditional restoration methods cannot remove the green and blue tones of underwater images. Our model produces satisfactory visual results without content or structure loss.


Table 3.

Quantitative results on synthetic underwater images in terms of SSIM, PSNR, and UIQM on the HICRD dataset.


Table 4.

Quantitative results on synthetic underwater images in terms of SSIM, PSNR, and UIQM.

A denotes the plain GAN network; B adds the contrastive loss; C adds both query attention and the contrastive loss.
