
Fig 1.

Dataflow graphs of shape analysis pipelines used.

Rounded rectangles represent data objects, and arrows depict the procedures/algorithms operating on or producing them. All three analyses begin with triangular mesh models of bone. In (A), triangular mesh models are manually digitized. The configurations of corresponding/homologous points obtained are subjected to Generalized Procrustes analysis (GPA) to yield Procrustes pseudolandmark coordinate shape variables. (B) represents automated quantification using auto3DGM. Farthest point sampling (FPS) is used to subsample the triangular meshes. The Generalized Dataset Procrustes Framework (GDPF) of auto3DGM assigns correspondences and aligns subsampled shapes to a common coordinate system (rigid alignment). (C) outlines the proposed descriptor-learning approach, which builds on auto3DGM with learned non-rigid/deformable functional correspondences between aligned polygon models. These functional maps are used to estimate latent shape space differences that characterize morphological variation, expressed as area-based and conformal operators. Note that the GDPF step here subsamples shapes at low resolution, with only 128 and 256 pseudolandmarks for the initial and final alignment steps, respectively. While not sufficient for capturing shape variation, low-resolution auto3DGM produces the rotation and translation information needed to rigidly align our polygon models.
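The farthest point sampling step in (B) can be sketched in a few lines. This is a minimal numpy version of the greedy FPS idea, not the auto3DGM implementation; the mesh vertices are random stand-ins:

```python
import numpy as np

def farthest_point_sampling(points, k, seed=0):
    """Greedy FPS: repeatedly pick the point farthest from all
    points selected so far."""
    rng = np.random.default_rng(seed)
    n = points.shape[0]
    selected = [int(rng.integers(n))]
    # Distance from every point to its nearest selected point.
    dists = np.linalg.norm(points - points[selected[0]], axis=1)
    for _ in range(k - 1):
        nxt = int(np.argmax(dists))
        selected.append(nxt)
        dists = np.minimum(dists, np.linalg.norm(points - points[nxt], axis=1))
    return np.array(selected)

# Subsample 128 pseudolandmarks from stand-in mesh vertices, mirroring
# the initial low-resolution alignment step described above.
verts = np.random.default_rng(1).random((5000, 3))
idx = farthest_point_sampling(verts, 128)
```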

Fig 2.

Deep Functional Maps network architecture demonstrating functional and soft P2P map estimation in both directions.

We start with an initial pair of source and target shapes, S1 and S2, respectively. Θ is a Siamese harmonic surface network, and Φ and Ψ are the truncated Laplacian eigenbases for S1 and S2. Learned spatial descriptors are projected onto their corresponding bases to form F and G. C12 and C21 are 70×70 functional maps (FMs) estimated in the forward and backward directions between source and target. On the far right are the recovered P2P maps T12 and T21, respectively. In the P2P maps, correspondence is visualized by rendering (homologous) features in the same color.
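The core estimation in this figure — projecting descriptors into the spectral bases, solving for a functional map, and recovering a P2P map by nearest-neighbor search — can be sketched with random stand-ins. This is a generic least-squares construction under one common sign/direction convention, not the network's actual forward pass; all sizes and arrays below are toy assumptions:

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)
k = 70                       # spectral truncation, as in the 70x70 maps above
n1, n2, d = 300, 320, 128    # toy vertex counts and descriptor dimension

# Stand-ins for the truncated Laplacian eigenbases Phi (S1), Psi (S2)
# and the learned per-vertex descriptors; a real pipeline computes these
# from the mesh Laplacian and the Siamese network.
Phi = rng.standard_normal((n1, k))
Psi = rng.standard_normal((n2, k))
desc1 = rng.standard_normal((n1, d))
desc2 = rng.standard_normal((n2, d))

# Project descriptors into the spectral bases: F and G in the figure.
F = np.linalg.pinv(Phi) @ desc1      # k x d
G = np.linalg.pinv(Psi) @ desc2      # k x d

# Least-squares estimate of the forward functional map: C12 F ≈ G.
C12 = np.linalg.lstsq(F.T, G.T, rcond=None)[0].T   # k x k

# Recover a point-to-point map by nearest-neighbour search in the
# spectral domain: match rows of Psi against rows of Phi @ C12.T.
tree = cKDTree(Phi @ C12.T)
_, T = tree.query(Psi)       # for each vertex of S2, a vertex index on S1
```

The backward map C21 is obtained the same way with the roles of F and G swapped.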

Fig 3.

Improving correspondence with Consistent ZoomOut.

We compare maps generated by the HSN ResUNet descriptor-learning model to their Consistent ZoomOut-refined counterparts for four randomly chosen pairs of hominoid cuboids. Rows 3A.1 and 3A.2 show source and target shape pairs produced by our model, respectively. Rows 3B.1 and 3B.2 show the same source and target shape pairs after Consistent ZoomOut refinement. Between shape pairs, surface regions of the same color (green/purple/pink/yellow) are considered homologous. Black areas on source shapes indicate a lack of bijective coverage with their associated locations on the corresponding targets.

Fig 4.

Estimating and improving hominoid medial cuneiform correspondences.

We compare maps generated by the HSN ResUNet descriptor-learning model to their Consistent ZoomOut-refined counterparts for four randomly chosen pairs of hominoid medial cuneiforms.

Fig 5.

Estimating and improving mouse humeri correspondences.

We compare maps generated by the HSN ResUNet descriptor-learning model to their Consistent ZoomOut-refined counterparts for two randomly chosen pairs of mouse humeri.

Fig 6.

Shape space differences between manually digitized and automated morphological quantification approaches.

The principal component (PC) scores PC1 and PC2 plotted in A and B are from the Procrustes-aligned coordinates of the manually digitized landmarks. C and D are based on Procrustes-aligned pseudolandmarks obtained from auto3DGM analyses with 256 and 512 points, respectively. E and F show PCs obtained from our latent shape space difference (LSSD) characterization: E is a morphospace derived from area-based differences in hominoid cuboid shape, while F is based on conformal shape differences.
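The morphospaces in panels A–D come from ordinary PCA of the aligned coordinates. A minimal sketch of that construction, with random stand-in data in place of real Procrustes-aligned configurations:

```python
import numpy as np

rng = np.random.default_rng(0)
n_specimens, n_landmarks = 30, 256   # e.g. 256 pseudolandmarks per specimen

# Stand-in for Procrustes-aligned (pseudo)landmark coordinates:
# one flattened (x, y, z) configuration per specimen.
X = rng.standard_normal((n_specimens, n_landmarks * 3))

# PC scores via SVD of the mean-centred data matrix.
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = U * S                       # columns are PC1, PC2, ...
pc1, pc2 = scores[:, 0], scores[:, 1]
explained = S**2 / np.sum(S**2)      # proportion of variance per PC
```

Plotting `pc1` against `pc2` per specimen, colored by group, yields a morphospace like those in the figure.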

Table 1.

Accuracies and standard deviations for hominoid group classification task.

Fig 7.

Polygon models of hominoid cuboid group-mean shape representations.

A and B are polygon models based on landmark and semi-landmark patches where the vertices are manually digitized points. C shows pseudolandmark points (the vertices) with Delaunay triangulation. D features a randomly selected cuboid polygon model from each group for reference in the same orientation.

Fig 8.

Weighted distinctive functions highlighting where regions are most variable in area-based distortion between hominoid groups.

Dark red indicates the most variability, white the least. Rows: (I) proximo-medial view; (II) lateral view; (III) medio-distal view; (IV) dorso-distal view.

Fig 9.

Weighted distinctive functions highlighting where regions are most variable in conformal distortion on randomly selected pairs of hominoid cuboids.

Dark red indicates the most variability in surface angulation, white the least. Rows: (I) proximo-medial view; (II) lateral view; (III) medio-distal view; (IV) dorso-distal view.

Table 2.

Prominent between-group shape differences detected in representations from each analysis.

Fig 10.

Hylobates cuboid showing validation landmarks.

Red dots indicate where landmarks are manually placed on 69 sample cuboid meshes.

Fig 11.

Boxplot of Validation Results.

Table 3.

Mean Euclidean distance between estimated points and ground-truth landmarks.
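The validation metric in this table is straightforward to compute. A minimal sketch with hypothetical stand-in arrays in place of the real estimated points and ground-truth landmarks:

```python
import numpy as np

# Hypothetical data: manually placed ground-truth landmarks and the
# corresponding points estimated by the pipeline, one row per landmark.
rng = np.random.default_rng(0)
truth = rng.random((16, 3))
estimated = truth + 0.01 * rng.standard_normal((16, 3))

# Per-landmark Euclidean error, then the mean reported in the table.
errors = np.linalg.norm(estimated - truth, axis=1)
mean_error = float(errors.mean())
```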

Fig 12.

Limit Shape and Consistent ZoomOut illustration.

Given a shape collection, the limit shape S0 is the "mean" shape that accounts for all shape variation within the collection via the latent bases Yi. Cij (or Cji) is the functional map between shapes Si and Sj. The bottom pipeline indicates how Consistent ZoomOut refines the functional maps by iterating three steps: (1) recomputing the limit shape; (2) converting to P2P maps given the new limit shape; (3) converting the P2P maps back to functional maps.
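The spectral-upsampling core of this loop — functional map to P2P map, then back to a functional map one spectral size larger — can be sketched for a single shape pair. This is a toy stand-in for the pairwise ZoomOut step only; the consistent variant additionally recomputes the limit shape across the whole collection at each pass, which is omitted here:

```python
import numpy as np
from scipy.spatial import cKDTree

def zoomout_step(C, Phi, Psi):
    """One ZoomOut iteration. C is a k x k functional map;
    Phi, Psi hold at least k+1 basis columns for S1 and S2."""
    k = C.shape[0]
    # Step 2 in the caption: convert to a P2P map T: S2 -> S1
    # by nearest-neighbour search in the spectral embedding.
    tree = cKDTree(Phi[:, :k] @ C.T)
    _, T = tree.query(Psi[:, :k])
    # Step 3: convert back to a functional map, one size larger.
    return np.linalg.pinv(Psi[:, :k + 1]) @ Phi[T, :k + 1]

rng = np.random.default_rng(0)
n1, n2, kmax = 200, 220, 20
Phi = rng.standard_normal((n1, kmax))   # stand-in eigenbases
Psi = rng.standard_normal((n2, kmax))
C = np.eye(4)                 # small initial map, e.g. from the network
for _ in range(kmax - 4):     # grow from 4x4 up to kmax x kmax
    C = zoomout_step(C, Phi, Psi)
```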
