
Fig 1.

Colorization process of our algorithm.

In the dotted box, the corrected grayscale image is fed into the network model to predict the a* and b* color channels.

Fig 2.

Our model.

Semantic segmentation information helps the prediction network understand the content of the image and color it accurately.

Table 1.

Parameter settings of semantic segmentation network.

Table 2.

Parameter settings of color prediction network.

Fig 3.

Comparison of coloring effects under different epochs.

GT denotes the ground truth, i.e., the source image; Eps denotes epochs. For convenience, subsequent figure subtitles use GT in place of the ground truth.

Fig 4.

Quantitative evaluation.

The data in this figure is from Fig 3, and the images are in the same order.

Fig 5.

The image and its histogram before and after gray-histogram equalization.
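Gray-histogram equalization as illustrated here can be sketched with NumPy alone (a generic textbook implementation, not the authors' exact code): each gray level is remapped through the normalized cumulative histogram, spreading the intensities of under- or overexposed images across the full range.

```python
import numpy as np

def equalize_hist(img):
    """Classic histogram equalization for an 8-bit grayscale image.
    Builds a lookup table from the cumulative histogram so that the
    output gray levels are stretched toward a uniform distribution."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum().astype(np.float64)
    cdf_min = cdf[cdf > 0][0]                   # first nonzero bin
    lut = np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255)
    lut = np.clip(lut, 0, 255).astype(np.uint8)
    return lut[img]

# Toy usage: a dark (underexposed) gradient confined to [0, 63]
# is stretched to cover the full [0, 255] range.
dark = np.tile(np.arange(0, 64, dtype=np.uint8), (8, 1))
eq = equalize_hist(dark)
```

Note the guard via `cdf_min`: a constant image (zero dynamic range) would make the denominator zero and needs separate handling in production code.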

Fig 6.

The effect of HE on image colorization.

From top to bottom: two normal-exposure images, two overexposed images, and two underexposed images. From left to right: the ground truth; the coloring result of Iizuka et al. [16] (the original method); the result of Iizuka et al. [16] with histogram equalization (the improved method); our result without histogram equalization; and our result with histogram equalization (HE = histogram equalization). (a) GT, (b) [16], (c) [16] + HE, (d) Ours − HE, (e) Ours.

Table 3.

The influence of HE on the colorfulness of Iizuka et al. [16].

Table 4.

The influence of HE on the colorfulness of ours.

Fig 7.

Recolor of natural images.

The first four are simple natural scenes (a lawn, a single object, the sky, and simple architecture), while the last four are complex natural scenes (water, many objects, brilliant lights, and complex color levels). The method of [48] supports both automatic and manual coloring; its automatic coloring results are shown here. (a) GT, (b) [16], (c) [17], (d) [18], (e) [47], (f) [48], (g) Ours.

Table 5.

User ratings.

The data in the table is from Fig 7, and the images are in the same order.

Table 6.

PSNR.

The data in Tables 6–8 are all from Fig 7, and the order of images is the same.
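The PSNR values reported here follow the standard definition for 8-bit images (a generic implementation; not necessarily the authors' exact evaluation script):

```python
import numpy as np

def psnr(reference, test, peak=255.0):
    """Peak signal-to-noise ratio between two same-shape images, in dB.
    Higher is better; identical images give infinity."""
    diff = reference.astype(np.float64) - test.astype(np.float64)
    mse = np.mean(diff ** 2)                    # mean squared error
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(peak ** 2 / mse)

# Toy usage: a uniform error of one gray level gives MSE = 1,
# so PSNR = 10 * log10(255^2) ≈ 48.13 dB.
ref = np.zeros((8, 8), dtype=np.uint8)
noisy = ref + 1
value = psnr(ref, noisy)
```

SSIM and QSSIM (Tables 7 and 8) are structural rather than pixel-wise metrics and are typically computed with library implementations such as scikit-image's `structural_similarity`.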

Table 7.

SSIM.

Table 8.

QSSIM.

Fig 8.

Color restoration of black-and-white images.

(a) GT, (b) [16], (c) [17], (d) [18], (e) [47], (f) [48], (g) Ours.

Fig 9.

Ranking distribution of coloring effects of six algorithms.

Each group of images contains the coloring results of the six algorithms, ranked in ascending order from 1 to 6; tied ranks are not allowed within a group. A smaller rank indicates a better coloring result.

Table 9.

The ranking analysis of coloring effects of six algorithms.

Each set of images is again ranked in ascending order from 1 to 6. The counts satisfy: Top1 = No.1, Top3 = No.1 + No.2 + No.3, Last1 = No.6. Note that the percentages in the last three columns of Table 9 are truncated to their integer part.

Fig 10.

Transferability test of our algorithm.

These five groups of images compare the coloring results of the six algorithms on different materials, including wool, forging tape, ceramic, glass, a stone Buddha, and oil painting. (a) GT, (b) [16], (c) [17], (d) [18], (e) [47], (f) [48], (g) Ours.

Fig 11.

Limitations of our algorithm.

From top to bottom, the three rows show the ground truth, our results, and a zoomed-in view of the region marked by the red box in the second row.

Table 10.

Comparison of calculation speed.

Table 10 reports the average per-image running time on the CPU/GPU, obtained by dividing the total time spent testing each of the six models on ILSVRC2012 by the total number of images.
