Fig 1.
Block diagram of our algorithm.
See the text for more details.
Fig 2.
Gradient images on the Middlebury dataset “Tsukuba”.
We reconstruct the gradient image based on the gradient measure and zoom in on the nose region marked by the yellow rectangle, as well as on the discontinuous region near the nose marked by the red rectangle. Note that adding the gradient measure enhances the reliability of the correspondences.
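A minimal sketch of the idea behind combining intensity and gradient measures into one matching cost, assuming a simple absolute-difference form and a horizontal disparity shift; the weighting `w` and the function name are illustrative placeholders, not the paper's actual formulation or tuned value.

```python
import numpy as np

def matching_cost(left, right, d, w=0.11):
    """Combined intensity + gradient matching cost for disparity d.

    w weights the gradient measure against the intensity measure
    (the quantity swept in Fig 5(a)); 0.11 is an arbitrary
    placeholder, not the paper's tuned value.
    """
    shifted = np.roll(right, d, axis=1)       # shift right view by d pixels
    c_int = np.abs(left - shifted)            # intensity measure
    gx_l = np.gradient(left, axis=1)          # horizontal image gradients
    gx_r = np.gradient(shifted, axis=1)
    c_grad = np.abs(gx_l - gx_r)              # gradient measure
    return (1.0 - w) * c_int + w * c_grad
```

For a perfectly matched pair (right view being a pure horizontal shift of the left), the cost is zero everywhere; in textureless regions the gradient term adds discriminative information that the intensity term alone lacks.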
Fig 3.
Comparison of disparity maps without and with the gradient measure on “Teddy”.
(a) Disparity maps without and with the gradient measure. (b) Disparity samples from (a), marked by a red rectangle. (c) 3D views based on the disparities in (a). Note that the flat, textureless regions in the red rectangle are recovered much better with the gradient measure than without it.
Fig 4.
Local message passing procedure in a Markov network.
Green nodes are hidden variables, while gray nodes are observable variables. The new message sent from node p to node q is computed with probability υ(·).
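The local message-passing step can be sketched with a standard min-sum BP update over disparity labels; the function name, the truncated-linear smoothness term, and the toy costs below are illustrative assumptions, not the paper's exact υ(·).

```python
import numpy as np

def send_message(data_cost_p, smoothness, incoming, lam=1.0):
    """Min-sum BP message from node p to neighbor q over L disparity labels.

    data_cost_p : (L,) data term D_p(d) at node p          [assumed form]
    smoothness  : (L, L) pairwise cost V(d_p, d_q)         [assumed form]
    incoming    : list of (L,) messages from N(p) \ {q}
    """
    h = data_cost_p + sum(incoming)                     # aggregate at node p
    msg = (h[:, None] + lam * smoothness).min(axis=0)   # minimize over d_p
    return msg - msg.min()                              # normalize for stability

# toy usage: 4 disparity labels, truncated-linear smoothness (truncation 2)
L = 4
V = np.minimum(np.abs(np.arange(L)[:, None] - np.arange(L)[None, :]), 2.0)
D_p = np.array([0.0, 1.0, 3.0, 2.0])
m = send_message(D_p, V, incoming=[np.zeros(L)])        # → [0., 1., 2., 2.]
```

At each iteration every node sends such a message to each neighbor; subtracting the minimum keeps message magnitudes bounded across iterations without changing the arg-min.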
Fig 5.
(a) Performance as the weighting between the intensity and gradient measures varies from 0 to 0.5 on the Middlebury “Tsukuba” dataset. (b) Performance of the parameters a1 and a2 on the Middlebury “Tsukuba” dataset.
Table 1.
Abbreviations of six different measures.
Table 2.
Comparison of the results with an error threshold of 1 on the Middlebury datasets.
Our algorithms (IVBP and WIVBP) are shown.
Fig 6.
Comparison of our algorithm and five other BP algorithms on the Middlebury datasets “Tsukuba” and “Cones”, respectively.
Note that our algorithm achieves much better edge-preserving smoothing than the other BP algorithms.
Table 3.
Comparison of the results with an error threshold of 1 on the Middlebury datasets.
Our algorithms, BP with SDDT (denoted BP-SDDT), IVBP, and WIVBP, are shown.
Fig 7.
Comparison of the errors of four methods (BP, BP-SDDT, IVBP, and WIVBP) in non-occluded regions (non.) on the Middlebury datasets “Tsukuba”, “Teddy”, “Cones”, and “Venus”.
Fig 8.
Performance on the Middlebury datasets using four methods (BP, BP-SDDT, IVBP, and WIVBP).
From left to right: “Teddy”, “Tsukuba”, “Venus” and “Cones”. From top to bottom: left reference images, image segmentation maps, disparity maps of BP, BP-SDDT, IVBP, and WIVBP, and the ground truth disparity maps.
Fig 9.
Comparison of our algorithms using 3D views and contour maps on the Middlebury datasets “Cones” and “Tsukuba”.
(a) BP, (b) BP-SDDT, (c) IVBP, and (d) WIVBP.
Fig 10.
Error statistics for the percentage of ‘bad’ matching pixels at six different thresholds.
(a) “Tsukuba”, (b) “Teddy”, (c) “Venus”, and (d) “Cones”, including all pixels, pixels in non-occluded areas, pixels in textured areas, pixels in textureless areas and pixels in discontinuous areas. (e) Error statistics for the four methods in the occluded areas.
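The “bad”-pixel statistic plotted here is, under the usual Middlebury convention, the percentage of pixels whose disparity error exceeds a threshold; a minimal sketch of that metric, with an assumed function name and optional region mask:

```python
import numpy as np

def bad_pixel_rate(disp, gt, thresh=1.0, mask=None):
    """Percentage of 'bad' pixels: |disp - gt| > thresh (Middlebury-style)."""
    err = np.abs(disp.astype(float) - gt.astype(float)) > thresh
    if mask is not None:              # e.g. non-occluded or textureless region
        err = err[mask]
    return 100.0 * err.mean()

# toy example: one of four pixels is off by 2, so the rate is 25%
gt = np.array([[1.0, 2.0], [3.0, 4.0]])
disp = np.array([[1.0, 4.0], [3.0, 4.0]])
rate = bad_pixel_rate(disp, gt, thresh=1.0)   # → 25.0
```

Passing a boolean mask restricts the statistic to a region of interest (all, non-occluded, textured, textureless, or discontinuous areas), matching the per-region breakdown in the figure.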
Fig 11.
Results on the Middlebury datasets.
(a) “Aloe”, (b) “Art”, (c) “Flowerpot”, (d) “Cloth3”, (e) “Baby1”, and (f) “Wood1”. From top to bottom: disparity maps of RealtimeBP, CSBP, HBP, TSGO, IVBP and the ground truth disparity maps. Edge-preserving smoothing effects are indicated by pink arrows in our algorithm.
Table 4.
Results with the Middlebury datasets “Aloe”, “Art”, “Baby1”, “Wood1”, “Flowerpot”, and “Cloth3”.
The errors by IVBP are greatly reduced compared with other BP algorithms.
Fig 12.
Views synthesized with HBP and IVBP from Fig 11 on the Middlebury datasets.
Poorer edge-preserving smoothing is observed for HBP than for IVBP, as indicated by the pink arrows.
Fig 13.
Performance on the new 2014 Middlebury datasets.
(a) “Livingroom”, (b) “Djembe”, (c) “Australia”, and (d) “Plants”. From top to bottom: left reference images, disparity maps of BP algorithm, disparity maps of CSBP algorithm, disparity maps of IVBP, and the ground truth disparity maps.
Table 5.
Comparison of the performance of BP and IVBP on the “Tsukuba” dataset.
Fig 14.
Iteration images in “Tsukuba”.
(a) Iteration images with IVBP. (b) Iteration images with BP and IVBP.
Fig 15.
Comparison of the mean running times on the Middlebury datasets “Tsukuba”, “Teddy”, “Cones”, and “Venus”.
Table 6.
Comparison of runtime and errors with other methods on “Tsukuba”.