Fig 1.
Colorization process of our algorithm.
In the dotted box, the corrected grayscale image is fed into the network model to predict the a* and b* color channels.
Fig 2.
Semantic segmentation information helps the prediction network understand the content of the image and color it accurately.
Table 1.
Parameter settings of semantic segmentation network.
Table 2.
Parameter settings of color prediction network.
Fig 3.
Comparison of coloring effects under different epochs.
GT denotes the ground truth, i.e., the source image; Eps denotes epochs. For convenience, the following image subtitles use GT instead of "the ground truth."
Fig 4.
The data in this figure is from Fig 3, and the images are in the same order.
Fig 5.
An image and its histogram before and after grayscale histogram equalization.
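For reference, the grayscale histogram equalization illustrated in Fig 5 can be sketched in plain Python. This is a minimal illustrative re-implementation of the standard CDF-remapping technique, not the authors' code:

```python
def equalize_hist(gray):
    """Histogram-equalize a 2-D list of 8-bit gray levels."""
    flat = [p for row in gray for p in row]
    # Intensity histogram over the 256 gray levels.
    hist = [0] * 256
    for p in flat:
        hist[p] += 1
    # Cumulative distribution function (CDF).
    cdf, total = [], 0
    for h in hist:
        total += h
        cdf.append(total)
    cdf_min = next(c for c in cdf if c > 0)     # first occupied level
    denom = max(len(flat) - cdf_min, 1)         # guard: constant image
    # Remap each level through the normalized CDF to stretch contrast.
    lut = [round((c - cdf_min) * 255 / denom) for c in cdf]
    return [[lut[p] for p in row] for row in gray]

# A low-contrast image: levels 100 and 150 stretch to 0 and 255.
img = [[100, 100], [150, 150]]
print(equalize_hist(img))  # → [[0, 0], [255, 255]]
```

Equalizing the L channel in this way flattens the gray histogram, which is why over- and underexposed inputs in Fig 6 recover usable contrast before color prediction.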
Fig 6.
The effect of HE on image colorization.
From top to bottom: two normally exposed images, two overexposed images, and two underexposed images. From left to right: the ground truth; the coloring result of Iizuka et al. [16] (the original method); Iizuka et al. [16] with histogram equalization (the improved method); ours without histogram equalization; and ours with histogram equalization (HE = histogram equalization). (a) GT, (b) [16], (c) [16] + HE, (d) Ours—HE, (e) Ours.
Table 3.
The influence of HE on the colorfulness of Iizuka et al. [16].
Table 4.
The influence of HE on the colorfulness of ours.
Fig 7.
The first four rows are simple natural scenes (a lawn, a single object, the sky, and simple architecture), while the last four are complex natural scenes (water, multiple objects, brilliant lights, and complex color levels). The method of [48] supports both automatic and manual coloring; its automatic coloring results are shown here. (a) GT, (b) [16], (c) [17], (d) [18], (e) [47], (f) [48], (g) Ours.
Table 5.
The data in the table is from Fig 7, and the images are in the same order.
Table 6.
The data in Tables 6–8 are all from Fig 7, and the order of images is the same.
Table 7.
SSIM.
Table 8.
QSSIM.
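SSIM compares two images through their luminance, contrast, and structure statistics (QSSIM extends it to color images). As an illustration of the metric behind Tables 7 and 8, a single-window (global) SSIM can be sketched as follows; this is a deliberate simplification of the sliding-window SSIM used in practice, not the authors' evaluation code:

```python
import statistics

def global_ssim(x, y, data_range=255.0):
    """Single-window SSIM between two flat pixel lists of equal length."""
    c1 = (0.01 * data_range) ** 2   # stabilizer for the luminance term
    c2 = (0.03 * data_range) ** 2   # stabilizer for the contrast term
    mx, my = statistics.fmean(x), statistics.fmean(y)
    vx = statistics.fmean([(a - mx) ** 2 for a in x])
    vy = statistics.fmean([(b - my) ** 2 for b in y])
    cov = statistics.fmean([(a - mx) * (b - my) for a, b in zip(x, y)])
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx * mx + my * my + c1) * (vx + vy + c2))
```

An image compared with itself scores 1.0; scores fall toward 0 as the structure of the colorized result diverges from the ground truth.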
Fig 8.
Color restoration of black-and-white images.
(a) GT, (b) [16], (c) [17], (d) [18], (e) [47], (f) [48], (g) Ours.
Fig 9.
Ranking distribution of coloring effects of six algorithms.
Each group of images contains the coloring results of the six algorithms, ranked from 1 to 6 in ascending order; ties are not allowed within a group. A smaller rank indicates a better coloring result.
Table 9.
The ranking analysis of coloring effects of six algorithms.
Each set of images is still ranked from 1 to 6 in ascending order. The counts satisfy: Top1 = No.1, Top3 = No.1 + No.2 + No.3, Last1 = No.6. Note that the percentages in the last three columns of Table 9 are truncated to their integer part.
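The Top1/Top3/Last1 tallies and their integer-truncated percentages can be reproduced with a short counting helper. The ranks below are hypothetical, and `rank_stats` is an illustrative sketch, not the authors' analysis script:

```python
from collections import Counter

def rank_stats(ranks):
    """Given one algorithm's per-image ranks (1 = best, 6 = worst),
    return (Top1, Top3, Last1) as integer-truncated percentages."""
    n = len(ranks)
    c = Counter(ranks)
    top1 = c[1] * 100 // n                    # % of groups ranked No.1
    top3 = (c[1] + c[2] + c[3]) * 100 // n    # % ranked No.1 through No.3
    last1 = c[6] * 100 // n                   # % ranked No.6
    return top1, top3, last1

# Hypothetical ranks of one algorithm over five image groups:
print(rank_stats([1, 2, 1, 3, 6]))  # → (40, 80, 20)
```

Floor division (`//`) keeps only the integer portion of each percentage, matching the convention stated for the last three columns of Table 9.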
Fig 10.
Transferability test of our algorithm.
These five groups of images compare the coloring results of the six algorithms on different materials, including wool, forging tape, ceramic, glass, a stone Buddha, and oil painting. (a) GT, (b) [16], (c) [17], (d) [18], (e) [47], (f) [48], (g) Ours.
Fig 11.
From top to bottom, the three rows show the ground truth, our result, and a zoomed-in view of the region marked by the red box in the second row.
Table 10.
Comparison of calculation speed.
Table 10 reports the average per-image running time on the CPU/GPU, obtained by dividing the total time each of the six models spends on the ILSVRC2012 test set by the total number of images.