
Table 1.

3D reconstruction statistics.

Fig 1.

Illustration of the relationship between 3D reef reconstructions and images.

3D reconstructions are composed of linked triangular elements forming a surface mesh. The image locations where each triangular element is captured can be calculated from the camera transformation matrix and camera model, as illustrated here for an element on a small S. siderea coral. Mesh-element color represents height above the bottom.
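The projection described in this caption can be sketched with a standard pinhole camera model. This is a minimal illustration, not the paper's calibration: the intrinsic matrix `K`, rotation `R`, and translation `t` below are hypothetical values.

```python
import numpy as np

# Hypothetical intrinsics: 1000 px focal length, principal point (960, 540).
K = np.array([[1000.0,    0.0, 960.0],
              [   0.0, 1000.0, 540.0],
              [   0.0,    0.0,   1.0]])

# Hypothetical camera pose: identity rotation, camera 2 m from the element.
R = np.eye(3)
t = np.array([0.0, 0.0, 2.0])

def project(point_3d, K, R, t):
    """Project a 3D mesh-element centroid into pixel coordinates."""
    cam = R @ point_3d + t   # world frame -> camera frame
    uvw = K @ cam            # camera frame -> homogeneous pixel coords
    return uvw[:2] / uvw[2]  # perspective divide

centroid = np.array([0.0, 0.0, 0.0])  # element at the world origin
uv = project(centroid, K, R, t)       # lands at the principal point
```

Repeating this projection for every camera that observed an element yields the set of image patches fed to the classifier.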

Table 2.

Class descriptions.

Fig 2.

Diagram of the nViewNet neural network used to merge information from multiple views.

A set number of image patches (n) of a mesh element are passed through an abbreviated ResNet152 CNN. The feature vectors summarizing each view are concatenated and passed into a fully connected (FC) collapse layer followed by an output layer, which provides a single prediction for each mesh element.
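The multi-view merge described above can be sketched as a single forward pass in NumPy. This is a minimal stand-in, not the trained network: the feature, hidden, and class dimensions are hypothetical, the weights are random rather than learned, and the truncated ResNet152 backbone is replaced by random per-view feature vectors.

```python
import numpy as np

rng = np.random.default_rng(0)

N_VIEWS, FEAT_DIM, HIDDEN, N_CLASSES = 8, 512, 256, 10  # hypothetical sizes

# Stand-ins for learned weights; in the real network these are trained.
W_collapse = rng.standard_normal((HIDDEN, N_VIEWS * FEAT_DIM)) * 0.01
W_out = rng.standard_normal((N_CLASSES, HIDDEN)) * 0.1

def nviewnet_head(view_features):
    """Merge n per-view feature vectors into one class label.

    view_features: (N_VIEWS, FEAT_DIM) array, one row per image patch,
    standing in for the output of the truncated CNN backbone.
    """
    x = view_features.reshape(-1)        # concatenate the n view vectors
    h = np.maximum(W_collapse @ x, 0.0)  # FC "collapse" layer + ReLU
    logits = W_out @ h                   # output layer
    return int(np.argmax(logits))        # single prediction per mesh element

features = rng.standard_normal((N_VIEWS, FEAT_DIM))
label = nviewnet_head(features)
```

The key design point is that the collapse layer sees all n views at once, so it can weigh agreement and disagreement across views rather than deciding from each patch independently.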

Fig 3.

Confusion matrices showing the performance of the multi-view classifiers: (A) voting, (B) averaging, and (C) nViewNet-8.

Rows represent the true class of the annotated mesh elements and columns indicate the class predicted by the classifier. Each entry is the number of elements of the true class (row) assigned to each predicted class (column).
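The voting and averaging baselines in panels (A) and (B) can be sketched as two aggregation rules applied to per-view class probabilities. The probability values below are invented, chosen to show that the two rules can disagree on the same mesh element.

```python
import numpy as np

# Hypothetical per-view class probabilities for one mesh element:
# 3 views, 3 classes. One view is very confident, the others lean elsewhere.
probs = np.array([[0.95, 0.03, 0.02],
                  [0.40, 0.50, 0.10],
                  [0.40, 0.50, 0.10]])

# Voting: each view casts one vote for its most probable class.
votes = np.bincount(probs.argmax(axis=1), minlength=probs.shape[1])
vote_label = int(votes.argmax())  # two of three views pick class 1

# Averaging: mean the probability vectors, then take the argmax.
avg_label = int(probs.mean(axis=0).argmax())  # the confident view pulls the
                                              # average toward class 0
```

Averaging is sensitive to confidence while voting is not, which is why the two baselines can produce different confusion matrices on the same elements.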

Fig 4.

Effect of number of views input to nViewNet on classification accuracy.

A) hard corals, B) octocorals, C) other classes and overall accuracy. The classification accuracies of the voting (triangles) and averaging (squares) approaches are shown to the right of the dashed vertical lines.

Table 3.

Overall and per-class accuracies of leave-one-out tests conducted to assess model transferability.

Accuracies are not reported for classes with fewer than 20 annotations.
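The leave-one-out protocol can be sketched as a loop over survey sites: train on all sites but one, then test on the held-out site. This is a generic sketch, not the paper's code; the `train` and `evaluate` callables and the site list are hypothetical placeholders.

```python
def evaluate_transferability(sites, train, evaluate):
    """Leave-one-site-out: for each site, train on the rest and test on it.

    sites    : list of site identifiers
    train    : callable taking a list of training sites, returning a model
    evaluate : callable taking (model, held_out_site), returning a score
    """
    scores = {}
    for held_out in sites:
        training_sites = [s for s in sites if s != held_out]
        model = train(training_sites)
        scores[held_out] = evaluate(model, held_out)
    return scores

# Usage with trivial stand-in callables: the "model" is just the training
# list, and the "score" checks that the held-out site was truly excluded.
scores = evaluate_transferability(
    ["site_a", "site_b"],
    train=lambda training: training,
    evaluate=lambda model, held_out: held_out not in model,
)
```

Because each site is scored by a model that never saw it, the per-site accuracies measure how well the classifier transfers to new reef sections.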

Fig 5.

Sample texture-mapped and classified 3D reconstructions.

A) Texture-mapped and B) classified reconstructions of a segment of Little Grecian reef (LG1) viewed from overhead. C) Side view of the classified reconstruction, with shading to highlight its three-dimensional nature. D) Texture-mapped and E) classified reconstructions of a portion of Horseshoe reef (H1). F) Side view of the classified reconstruction. 3D reconstructions were generated and texture-mapped from the original images using commercial software (Agisoft Photoscan) and then classified using the nViewNet-8 neural network. This figure is best viewed on a computer screen.

Fig 6.

Reconstruction and classification (nViewNet-8) of a larger section of Little Grecian reef (LG9; ~40 m × 30 m) demonstrating the potential to expand the method to landscape scale.

This figure is best viewed on a computer screen.
