Figure 1.
(a) Example of registered nodes. (b) Distances between coordinate pairs, excluding symmetric duplicates. Numbers 1 to 48 correspond to landmarks; red: pairwise edges, excluding symmetries; black: Delaunay triangulation. Example of symmetric distances: (25, 24) and (23, 24).
Table 1.
Description of the data set with the number of patients per class.
Figure 2.
Illustration of the procedure to compute the importance of point δ. Contributions of point p1, area of triangle t1, distance d1, and angle a1 (blue) are weighted according to their distance to δ (red). The distances to point p1, centroid c1, midpoint m1, and vertex v1 are used for p1, t1, d1, and a1, respectively.
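The distance-weighted aggregation described in the caption can be sketched as follows. This is a minimal illustration only: the caption states that contributions are weighted according to distance to δ, but the exact weighting kernel (here, inverse distance), the normalization, and the names `importance`, `refs`, and `contribs` are assumptions, not the authors' implementation.

```python
import numpy as np

def importance(delta, refs, contribs, eps=1e-9):
    """Aggregate feature contributions at a point delta.

    refs     -- reference point of each feature: the point itself (p1),
                a triangle centroid (c1), a distance midpoint (m1),
                or an angle vertex (v1)
    contribs -- the contribution (e.g. absolute coefficient) of each feature

    Each contribution is weighted by the inverse of the distance from
    delta to its reference point, so nearby features count more.
    """
    refs = np.asarray(refs, dtype=float)
    contribs = np.asarray(contribs, dtype=float)
    d = np.linalg.norm(refs - np.asarray(delta, dtype=float), axis=1)
    w = 1.0 / (d + eps)  # eps guards against division by zero at delta itself
    return float(np.sum(w * contribs) / np.sum(w))
```

For example, a contribution whose reference point lies one unit from δ outweighs one lying three units away by a factor of three.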
Figure 3.
Average misclassification error for glmnet.
Average misclassification error with 95% confidence intervals across leave-one-out cross-validation for models with different values of the mixing parameter α. (a) All features (red) and only points (blue) were used; (b) all features and their squares (red) and only points and their squares (blue) were used.
Table 2.
Average misclassification error (AME) with 95% confidence interval under leave-one-out cross-validation for glmnet with 20 different values of α (see text), and for PCA, using only points (p), all features (a), only points and their squares (p+p²), and all features and their squares (a+a²).
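The leave-one-out evaluation behind the AME values can be sketched as follows. This is a hedged stand-in, not the authors' pipeline: it uses scikit-learn's elastic-net logistic regression in place of glmnet, random synthetic data in place of the landmark features, and the helper name `loocv_error` and the fixed regularization strength `C` are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneOut

# Synthetic stand-in data: 30 patients, 10 landmark-derived features, 3 classes.
rng = np.random.default_rng(0)
X = rng.normal(size=(30, 10))
y = rng.integers(0, 3, size=30)

def loocv_error(alpha):
    """Leave-one-out misclassification error for an elastic-net
    multinomial model; alpha is the L1/L2 mixing parameter
    (l1_ratio here, alpha in glmnet)."""
    errors = []
    for train, test in LeaveOneOut().split(X):
        clf = LogisticRegression(penalty="elasticnet", solver="saga",
                                 l1_ratio=alpha, C=1.0, max_iter=5000)
        clf.fit(X[train], y[train])
        errors.append(clf.predict(X[test])[0] != y[test][0])
    return float(np.mean(errors))
```

Sweeping `alpha` over a grid of values and picking the minimizer mirrors the model selection summarized in the table; on real data the regularization strength would be tuned jointly, as with glmnet's λ path.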
Figure 4.
Average misclassification error for different values of the tuning parameter λ when α = .11.
Table 3.
Simultaneous average misclassification error (AME) per syndrome.
Table 4.
Confusion matrix for the best glmnet model, α = .11, using all features.
Table 5.
Number of non-zero coefficients for each syndrome for the best glmnet model (α = .11, using all features).
Table 6.
Pairwise average misclassification error rate for the best glmnet model.
Figure 5.
Visualization of simultaneous classification for syndromes. For each syndrome, an importance plot (row I) and a plot visualizing classification features (row F) are provided. The importance plot assigns to each point an importance with respect to classification, as described in the text. The feature plots visualize absolute regression coefficients by the thickness of line segments (distances), the size of points (coordinates), the color of areas (areas; dark red more important than light red), and small triangles (angles; dark red more important than light red).
Figure 6.
Visualizations analogous to Figure 5 for PCA-based classification.