
Automatic extraction of endocranial surfaces from CT images of crania


Abstract

We present a method for extracting polygon data of endocranial surfaces from CT images of human crania. Based on the observation that the endocast is the largest empty space in the cranium, we automate the endocast extraction procedure by integrating several image processing techniques. Given CT images of a human cranium, the proposed method extracts the endocranial surfaces in three steps. The first step is binarization, in which void structures, such as the diploic space and cracks in the skull, are filled using a void detection method based on mathematical morphology. The second step is watershed-based segmentation of the endocranial part from the binarized CT image. Here, we introduce an automatic initial seed assignment method for the endocranial region using the distance field of the binary image. The final step is partial polygonization of the CT images using the segmentation results as mask images. The resulting polygons represent only the endocranial part, and closed manifold surfaces are obtained even though the endocast is not an isolated cavity in the cranium. Since only the isovalue threshold and the size of the void structures are required, the procedure does not depend on the experience of the user. The present paper also demonstrates that the proposed method can extract polygon data of endocasts from CT images of various crania.


Introduction

Understanding the causes and process of brain evolution in the human lineage is a central problem in the field of physical anthropology. However, since soft tissues such as the brain are not fossilized, endocasts must be analyzed in order to infer the brain morphology enclosed in fossil crania. Although manually replicated endocasts have long been used as study materials [1], recent developments in virtual anthropology have made it possible to handle 3D CT images directly [2]. The primary objective of the present research is to extract endocasts as polygonal models.

The use of CT scanning technology is a promising approach for acquiring geometric data of crania. X-ray CT scanners obtain cross-sectional images of the target objects, and 3D images can be constructed by stacking these cross-sections. The surface structure of the cranium can then be computed from the CT images by isosurface extraction methods such as the Marching cubes algorithm [3].

Once the endocranial polygons are extracted, they can be used for various anthropological applications. The primary advantage of CT scanning of fossil crania is that the method provides non-destructive measurement. Thus, researchers can analyze crania in virtual space without the need for physical models. Virtual assembly of crania has been reported by several researchers [4–9]. Moreover, Amano et al. [10] reported the decomposition and reassembly of the Neanderthal Amud 1 cranium in virtual space. Endocranial surfaces may also be used to identify cortical features from sulcus patterns imprinted on the surfaces [1, 11–13]. Endocranial models can also be used for variation analysis of brain morphology (e.g., [9, 14, 15]). Moreover, attempts have recently been made to reconstruct the brain morphology of Neanderthals by warping the brains of modern humans based on endocranial morphology [16].

One of the primary issues in endocranial polygon extraction from CT images of crania is the amount of manual operation required. Since the endocranial surfaces lie inside the crania, other surfaces, such as the exocranial surfaces, must be removed manually from the extracted isosurfaces. This is a laborious task because the surfaces lie close to each other, and manual removal of exocranial surfaces sometimes results in the inadvertent removal of endocranial surfaces as well. An alternative approach is slice-by-slice contouring, which is relatively easy per slice but still laborious, since the number of slices is usually large. Thus, endocast extraction becomes a bottleneck in digital anthropological research.

Extracting a meaningful region from geometric data is known as segmentation. This is a classic problem in image processing and geometric modeling, and a number of segmentation methods have been investigated. Commonly used segmentation methods find a discontinuity in the intensity values or geometric features (e.g., curvature) by solving energy minimization problems such as active contours [17], the level set method [18], graph cut algorithms [19], and variants thereof. These methods work well for scanned images when proper parameter settings or energy functions are designed. However, the parameter settings used in these methods are complicated. For example, Liu et al. introduced a method for extracting human bones based on level set functions [20]; however, this method is designed only for extracting pole-like bones and is not efficient for extracting endocranial polygons. Michikawa et al. introduced a method for extracting vocal tracts based on mathematical morphology [21]; however, this method assumes that the extracted region is almost closed, so extracting endocranial spaces, which are largely open, is difficult.

The present paper describes an automatic method for computing endocast shapes as polygonal data from CT images of human skulls. The proposed method is based on the observation that the endocranial region is the largest space in the skull. Based on this observation, the algorithm used in the proposed method is designed so that the background voxels are classified into the primary endocranial space and other smaller spaces. The proposed method consists of three major steps: binarization, segmentation, and polygonization. We first classify the CT images into the skull (foreground) and the background. Next, we extract the endocranial region from the input data based on watershed segmentation [22]. In this step, we first compute Euclidean distance fields from the binary images of the input data. The initial seed voxels of the endocranial region and other regions are assigned to the voxels with larger distance values. Segmentation is then performed by expanding the initial seed voxels based on the distance fields. Polygonization of the endocranial region is achieved by commonly used isosurface extraction methods [3] for the extracted region only.

One of the primary advantages of the proposed method is the automation of endocast extraction from human crania. The user needs to specify only two parameters: the size of the void structures (e.g., the diploic space and cracks) and the isovalue threshold for binarization. This means that the extracted results do not depend on the user's experience. In addition, the proposed method can extract closed and manifold surfaces of endocranial polygons. This is useful for various applications, including volume estimation and 3D printing, whereas manual operation requires time-consuming tasks such as hole filling and topological cleaning in order to generate completely closed manifold surfaces.

In the present study, we implemented the proposed method and applied the method to CT images of various types of crania, including fossil crania. The results demonstrate that the proposed method can automatically extract endocranial polygons from CT images.


Materials and methods

Given CT images of crania, the proposed method computes endocranial polygons of the crania. Fig 1 shows an overview of the proposed method. The proposed method consists of three major steps: binarization (Fig 1A–1C), segmentation (Fig 1C–1F), and polygonization (Fig 1F–1H).

Fig 1. Overview of the proposed method.

Note that the 2D images shown herein are cross-sections of 3D images.

The binarization step classifies the input CT image into voxels representing the cranium (foreground) and background voxels. We use a simple binarization with an appropriate threshold t, although other methods (e.g., automatic estimation by Otsu [23]) can also be used. Next, we apply the cavity detection method proposed in [24], which is based on the black (bottom) hat operator of mathematical morphology [25, 26], in order to fill small cavities in the cranium.
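
As an illustration of this step, the closing-based void filling can be sketched in 2D. This is a stand-in rather than the authors' C++ implementation: the paper operates on 3D volumes with a spherical structuring element of radius r, whereas the sketch below uses a square element, a toy image, and our own function names; outside the image is treated as foreground for simplicity.

```python
def binarize(img, t):
    """Classify pixels into cranium (1, value >= t) and background (0)."""
    return [[1 if v >= t else 0 for v in row] for row in img]

def _shifted(b, dy, dx):
    h, w = len(b), len(b[0])
    return [[b[y + dy][x + dx] if 0 <= y + dy < h and 0 <= x + dx < w else 0
             for x in range(w)] for y in range(h)]

def dilate(b, r):
    """Dilation by a (2r+1) x (2r+1) square structuring element."""
    h, w = len(b), len(b[0])
    out = [[0] * w for _ in range(h)]
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            s = _shifted(b, dy, dx)
            for y in range(h):
                for x in range(w):
                    out[y][x] |= s[y][x]
    return out

def erode(b, r):
    """Erosion as the complement of the dilation of the complement."""
    comp = [[1 - v for v in row] for row in b]
    return [[1 - v for v in row] for row in dilate(comp, r)]

def fill_small_voids(b, r):
    """Morphological closing fills cavities narrower than the element;
    the black hat (closing minus original) marks exactly the filled pixels."""
    closed = erode(dilate(b, r), r)
    blackhat = [[c - o for c, o in zip(crow, orow)]
                for crow, orow in zip(closed, b)]
    return closed, blackhat
```

For example, a one-pixel crack in a bone wall is filled by closing with r = 1, and the black hat marks exactly that pixel, which is how thin cracks and the diploic space are kept out of the background.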

The segmentation step classifies the background voxels of the binary image into endocast voxels and other voxels. The proposed method uses the watershed-based method [22] on the distance field [27] of the binary image computed in the previous step. Given the binary image and its distance field, we first assign the initial seeds for the endocast voxels and the other voxels (Fig 1E). Since the endocranial region is the largest empty space in the cranium, the center voxels of the endocast must have larger distance values. However, since the voxels with the largest distance values usually lie outside of the cranium, we first assign the "other" label to the edge voxels bi of the image (orange lines in Fig 1E) so that the center point of the endocast becomes the voxel with the maximum distance value. In addition, for each boundary voxel bi, we also assign the "other" label to each neighboring voxel vj that satisfies ||bi − vj|| ≤ s·d(bi), where d(bi) denotes the distance value at bi and s denotes a scaling factor (s < 1) for assigning the label to the voxels with larger distance values. Since the unlabeled voxel with the greatest distance value must lie at the center of the endocast, we assign the "endocast" label to this voxel e and to the nearby voxels vj that satisfy ||e − vj|| ≤ s·d(e). Watershed segmentation is then applied to the binary image by expanding the initial seed voxels based on the distance field. When the expansion stops, the background voxels have been decomposed into endocast voxels and other voxels.
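
This seeding-and-flooding procedure can be sketched in 2D with no dependencies. The sketch makes two simplifying assumptions that the comments flag: a 4-neighbour BFS distance stands in for the exact Euclidean transform of [27], and a simple deepest-first flood stands in for the immersion-based watershed of Vincent and Soille [22]; all function names are our own.

```python
import heapq
import math
from collections import deque

OTHER, ENDO = 1, 2

def distance_field(fg):
    """4-neighbour BFS distance from each background pixel to the nearest
    foreground pixel (a chamfer-style stand-in for the exact Euclidean
    distance transform [27])."""
    h, w = len(fg), len(fg[0])
    d = [[0] * w for _ in range(h)]
    seen = [[bool(v) for v in row] for row in fg]
    q = deque((y, x) for y in range(h) for x in range(w) if fg[y][x])
    while q:
        y, x = q.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if 0 <= ny < h and 0 <= nx < w and not seen[ny][nx]:
                seen[ny][nx] = True
                d[ny][nx] = d[y][x] + 1
                q.append((ny, nx))
    return d

def _stamp(label, fg, cy, cx, radius, value):
    """Give `value` to background pixels v with ||(cy, cx) - v|| <= radius."""
    h, w = len(label), len(label[0])
    r = int(radius)
    for y in range(max(0, cy - r), min(h, cy + r + 1)):
        for x in range(max(0, cx - r), min(w, cx + r + 1)):
            if not fg[y][x] and math.hypot(y - cy, x - cx) <= radius:
                label[y][x] = value

def segment(fg, d, s=0.9):
    h, w = len(fg), len(fg[0])
    label = [[0] * w for _ in range(h)]       # 0 = unassigned
    border = [(y, x) for y in range(h) for x in range(w)
              if not fg[y][x] and (y in (0, h - 1) or x in (0, w - 1))]
    for y, x in border:                       # image-edge voxels -> "other"
        label[y][x] = OTHER
    for y, x in border:                       # ...plus voxels within s*d(b)
        _stamp(label, fg, y, x, s * d[y][x], OTHER)
    # endocast seed: unassigned background voxel with the largest distance
    ey, ex = max(((y, x) for y in range(h) for x in range(w)
                  if not fg[y][x] and label[y][x] == 0),
                 key=lambda p: d[p[0]][p[1]])
    _stamp(label, fg, ey, ex, s * d[ey][ex], ENDO)
    label[ey][ex] = ENDO
    # flood: expand the seeds into the background, deepest voxels first
    heap = [(-d[y][x], y, x) for y in range(h) for x in range(w) if label[y][x]]
    heapq.heapify(heap)
    while heap:
        _, y, x = heapq.heappop(heap)
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if (0 <= ny < h and 0 <= nx < w
                    and not fg[ny][nx] and not label[ny][nx]):
                label[ny][nx] = label[y][x]
                heapq.heappush(heap, (-d[ny][nx], ny, nx))
    return label
```

On a toy image of a closed bone ring, the interior basin receives the "endocast" label and everything outside the ring receives the "other" label; for regions connected through open holes, the deepest-first flooding decides where the two labels meet on the distance field.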

The final step is polygonization of the endocranial surfaces from the CT images using an extended version of the partial polygonization method with mask images introduced in [21]. Prior to polygonization, we fill the voxels carrying the "other" label with the threshold value t used in the binarization step. This manipulation ensures that closed isosurfaces are obtained by the original Marching cubes algorithm.
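
The filling step amounts to a simple masking of the CT values: after "other" voxels are set to t, they lie on the inside of the isosurface at t, so no crossing remains between them and the bone, while any open hole between the endocast (values below t) and a filled region produces a crossing that seals the surface. A minimal 2D sketch of this masking (function name and layout are our own; the paper works on 3D volumes):

```python
def mask_for_polygonization(ct, label, t, other=1):
    """Fill voxels carrying the "other" label with the isovalue t so that
    the only isovalue crossings left for Marching cubes are those around
    the endocast, yielding a closed surface. Cranium and endocast voxels
    keep their original CT values for sub-voxel accuracy."""
    return [[t if lab == other else v for v, lab in zip(row, lrow)]
            for row, lrow in zip(ct, label)]
```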

Results and discussion


We implemented the above algorithm in C++ as a Windows binary. We used the Eigen library [28] for linear algebra computations, and the other parts were developed from scratch. We applied the algorithm to various types of crania, as summarized in Table 1.

Figs 2 through 7 show the results of the endocast extraction. Although imprints of sulci and gyri are not usually identifiable on the endocasts extracted from adult human crania, they are known to be more pronounced in macaques [13]. Our results demonstrate that identification of cortical features from the endocast morphology may be possible for macaques (Figs 5 and 6) but not for adult humans (Figs 3 and 4).


In the present study, we developed an automatic method for extracting endocranial surfaces from CT images in order to facilitate morphological analyses of fossil endocasts. As shown in Figs 2 through 7, the proposed method extracts only the endocranial surfaces from the CT images of human crania, while other parts are not polygonized. In particular, surface bumps on the endocranial surfaces are well preserved in all examples. This is because the polygonization method used in the present study inherits sub-voxel accuracy from the Marching cubes algorithm. Since other reconstruction methods also use this algorithm for polygonization, the resulting surfaces must be the same when the same threshold is given. In addition, the surface is guaranteed to be a closed two-manifold. These properties enable very efficient quantitative analysis and post-processing, because commonly used geometry processing tools assume that the input shape is manifold. Due to these two properties, the proposed method is easily combined with other geometry processing methods (e.g., mesh simplification [29]).
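
The sub-voxel accuracy mentioned above comes from the way Marching cubes places each surface vertex on a cell edge by linear interpolation of the isovalue crossing, rather than snapping it to a voxel corner. A one-line sketch of that interpolation (function name is our own):

```python
def edge_crossing(p0, p1, v0, v1, iso):
    """Position of the isovalue crossing on a cell edge from p0 (value v0)
    to p1 (value v1), by linear interpolation; the vertex lands between
    the voxel corners, which is the source of the sub-voxel accuracy."""
    a = (iso - v0) / (v1 - v0)
    return tuple(c0 + a * (c1 - c0) for c0, c1 in zip(p0, p1))
```

For instance, an edge running from a voxel with CT value -100 to one with value 300 crosses the isovalue 0 a quarter of the way along the edge.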

Note that the proposed method can also handle tilted models. For example, the M15 model (Fig 5) is largely tilted. Although conventional slice-by-slice contouring is difficult for such models, the proposed method can extract the endocast because the computations are performed in 3D.

We compared the results obtained using the proposed method with those obtained by manual operations. Fig 8 shows the results for KUMA3147 obtained by manual operations [15] (Fig 8A) and by the proposed method (Fig 8B). According to [15], the result shown in Fig 8A was created using medical imaging software (Analyze 9.0; Mayo Clinic, Biomedical Imaging Resource, Rochester, MN, USA) and reverse engineering software (RapidForm 2006; INUS Technology, Seoul, Korea), and the total working time for polygonization was two hours. Note that the polygonal models of [15] are pose-normalized, so we applied a shape registration method to them for the evaluation of geometric difference. We confirmed that no significant difference could be found between the two models, except in filled regions such as the foramen magnum (Fig 8C and 8D). The quantitative difference is very small (0.14 [mm] on average), and the maximum difference appears around the foramen magnum because these holes are filled according to different criteria. Fig 9 shows cross-sections of the CT image and the polygon models. These images show that our segmentation method sometimes expands outward around larger holes, which depends on the distance field used in the watershed computation. On the other hand, such expansion can also be observed in the polygonal models obtained by manual operation, as shown in Fig 9D. These differences show that no clear criterion for filling these holes exists. However, our method provides a consistent criterion for segmentation, so any operator can automatically compute equivalent results from the same CT images.

Fig 8. Manually obtained mesh and mesh obtained using the proposed method.

The color maps in C and D show the differences between the manually obtained mesh A and the mesh obtained using the proposed method B (Blue: 0 [mm], Green: 0.5 [mm], Red: ≥ 1.0 [mm]).

Fig 9. 2D comparison of cross-sections of the CT image with the polygonal model obtained by manual operation (green) and our result (yellow).

Yellow arrows show significant differences of polygons.

We also compared the present results with those obtained by a level set method using itk-SNAP [30], a popular segmentation tool in medical imaging. Fig 10 shows a result for the KUMA3008 model. As the figure shows, the extracted region protrudes from the foramen magnum (yellow dotted circles) because the CT values of the background and the endocast region are similar. In order to overcome this problem, the initial seed points must be carefully determined, but appropriate determination of the seeds is not easy and is usually time-consuming.

Fig 10. Result of endocast extraction using a level set method.

The cranium and extracted endocasts are drawn in gray and green, respectively.

The proposed method requires two major parameters in order to extract endocast models. The first parameter is the structuring element size, i.e., the radius r of the sphere used in the morphological closing of the binarization step. This radius is required for filling the small cavities in the cranium, and the extent of the bottleneck structures to be filled provides a guide for parameter tuning. We used r = 6 [voxels] for all experiments. The other parameter is the isovalue for the endocranial surface. The isovalue is a common parameter for creating polygon data from CT images, and the best threshold can easily be estimated using volume rendering software.
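
Where volume rendering software is not at hand, the automatic estimation by Otsu [23], mentioned above as an alternative in the binarization step, offers one way to pick the threshold. A minimal brute-force sketch on a flat list of CT values (our own function; practical implementations work on a histogram instead of re-scanning the data per candidate):

```python
def otsu_threshold(values):
    """Otsu's criterion: choose the threshold that maximizes the
    between-class variance of the two classes it induces."""
    n = len(values)
    best_t, best_var = None, -1.0
    for t in sorted(set(values))[1:]:
        lower = [v for v in values if v < t]
        upper = [v for v in values if v >= t]
        w0, w1 = len(lower) / n, len(upper) / n
        m0 = sum(lower) / len(lower)
        m1 = sum(upper) / len(upper)
        var = w0 * w1 * (m0 - m1) ** 2
        if var > best_var:
            best_var, best_t = var, t
    return best_t
```

On bimodal CT data (air around and inside the cranium versus bone), the maximizing threshold falls between the two modes.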

In addition, the extracted endocranial surface is robust to variation of the isovalue. Fig 11 shows the results for KUMA3008 and KUMA3147 obtained using different CT values, namely, -400, 0, and 400. No clear geometric differences were observed between these results. The geometric differences between these models are approximately one voxel pitch of the CT images (0.28 ± 0.37 [mm] for KUMA3008 and 0.23 ± 0.43 [mm] for KUMA3147).

Fig 11. Results for KUMA3008 and KUMA3147 obtained using different thresholds.

The proposed method can also be applied to non-human primates. Figs 5 and 6 show the results for the crab-eating monkey (Macaca fascicularis) models. These models have smaller endocasts, and the automatic initial seed assignment failed in both experiments; thus, the endocast is not always the largest empty space in the crania of non-human primates. For these examples, we provide an alternative approach to the initial seed assignment. Given the binary image and its distance field, we binarize the distance field by a threshold. The objective of this binarization is to obtain two connected components to be used for the "other" and "endocast" labels, and the guideline for choosing the threshold is to fill all of the bottlenecks connecting to the endocast space. We used [mm] for the M15 data and [mm] for the M16 data. Note that this is not necessary for the extraction from human crania because the endocranial region is the largest empty space in the cranium.
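
A 2D sketch of this alternative seeding: binarize the distance field at a threshold and label the connected components of the result, which then serve as the initial seed regions. The function name is our own, and a 4-connected flood fill stands in for 3D connected-component labelling.

```python
from collections import deque

def threshold_seeds(d, fg, td):
    """Label the 4-connected components of background pixels whose
    distance value is at least td. If td exceeds the width of every
    bottleneck opening into the endocranial space, the components
    separate cleanly into candidate "other" and "endocast" seeds."""
    h, w = len(d), len(d[0])
    comp = [[0] * w for _ in range(h)]
    n = 0
    for y in range(h):
        for x in range(w):
            if fg[y][x] or d[y][x] < td or comp[y][x]:
                continue
            n += 1                      # start a new component
            comp[y][x] = n
            q = deque([(y, x)])
            while q:
                cy, cx = q.popleft()
                for ny, nx in ((cy - 1, cx), (cy + 1, cx),
                               (cy, cx - 1), (cy, cx + 1)):
                    if (0 <= ny < h and 0 <= nx < w and not comp[ny][nx]
                            and not fg[ny][nx] and d[ny][nx] >= td):
                        comp[ny][nx] = n
                        q.append((ny, nx))
    return comp, n
```

In this toy example, a shallow pixel (distance 1) acts as the bottleneck: thresholding at 2 splits the background into two seed components on either side of it.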

The computation times of the experiments are summarized in Table 2. The experiments were conducted on a Windows PC with an Intel Core i7-3930K (3.2 GHz) processor, 64 GB of RAM, and an NVIDIA Quadro 4000 graphics processing unit. Although our implementation has not yet been fully optimized, the computation time was less than ten minutes for all examples. We expect that the computation time can be improved by, for example, exploiting the graphics processing unit. Although the computation time depends directly on the resolution of the CT images, we believe that the computation times for other samples will not exceed those in the experiments, because cranium models are usually scanned using medical CT scanners and the sizes of the CT images should therefore be similar.

The proposed method has three major limitations. First, the quality of the polygons depends largely on the results of binarization. Since the CT values of thin parts of the skull will be smaller than expected, it is difficult to determine a good threshold, and the binarization results may contain unexpected voids. Although the proposed polygonization scheme may fill such defects as they are, the scheme should be improved in the future. The second limitation is how to define the boundary surfaces of canal structures. The last limitation is that the proposed method may fail when the assumption that the endocast is the largest empty space in the CT images does not hold. In such cases, other empty regions may be extracted.


Conclusion

We have presented a method for computing endocranial surfaces from CT images of crania. One of the primary contributions of the present study is to automate endocast extraction using volumetric image analysis technology. The experimental results revealed that the proposed method could extract endocranial polygons from CT images of human crania within ten minutes using a common desktop PC. We expect that the proposed method will accelerate morphological analyses of fossil crania, such as the analysis of individual differences and inference of brain shapes.

The proposed method has the potential to extract other cavity structures in the human body (e.g., sinuses). In the future, we would like to extend the proposed method so that other anatomical features can be extracted. As such, we need to introduce other criteria in order to extract target features. In addition, we would like to address some limitations discussed in the previous section in order to allow accurate extraction of cranial surfaces.


Acknowledgments

The CT scan data of Mladec 1 were obtained from the digital archive of fossil hominoids of the University of Vienna, Austria. The other CT images used in the experiments are from the University of North Carolina (CT Head), the Laboratory of Physical Anthropology at Kyoto University (KUMA3008, KUMA3147), and the National Defense Medical College (M15, M16). The authors wish to express their sincere gratitude to Takeru Akazawa for giving us the opportunity to participate in this research. The authors would also like to thank Hideki Amano and Yusuke Morita for their assistance throughout the course of the present study, and Hajime Ishida, Ryosuke Kimura, Daisuke Kubo, and Yutaka Ohtake for their valuable comments.

Author Contributions

  1. Conceptualization: TM HS NO.
  2. Data curation: TM.
  3. Formal analysis: TM MM HS NO OK YK.
  4. Funding acquisition: NO.
  5. Investigation: TM MM HS NO OK YK.
  6. Methodology: TM MM HS NO OK YK.
  7. Project administration: HS NO.
  8. Resources: NO OK YK.
  9. Software: TM.
  10. Supervision: HS NO OK YK.
  11. Validation: TM MM HS NO OK YK.
  12. Visualization: TM MM HS.
  13. Writing – original draft: TM NO.
  14. Writing – review & editing: TM MM HS NO OK YK.


References

  1. Holloway RL, Broadfield DC, Yuan MS. The Human Fossil Record: Brain Endocasts, The Paleoneurological Evidence, Volume Three. John Wiley & Sons, Inc.; 2004.
  2. Zollikofer CPE, Ponce de León MS. Virtual Reconstruction: A Primer in Computer-Assisted Paleontology and Biomedicine. Wiley; 2005.
  3. Lorensen WE, Cline HE. Marching Cubes: A High Resolution 3D Surface Construction Algorithm. SIGGRAPH Computer Graphics. 1987 Aug;21(4):163–169.
  4. Kalvin AD, Dean D, Hublin JJ. Reconstruction of human fossils. IEEE Computer Graphics and Applications. 1995 Jan;15(1):12–15.
  5. Zollikofer CPE, Ponce de León MS, Martin RD, Stucki P. Neanderthal computer skulls. Nature. 1995 May;375(6529):283–285. pmid:7753190
  6. Gunz P, Mitteroecker P, Bookstein FL. Semilandmarks in Three Dimensions. In: Slice DE, editor. Modern Morphometrics in Physical Anthropology. Developments in Primatology: Progress and Prospects. Springer US; 2005. p. 73–98.
  7. Zollikofer CPE, Ponce de León MS, Lieberman DE, Guy F, Pilbeam D, Likius A, et al. Virtual cranial reconstruction of Sahelanthropus tchadensis. Nature. 2005 Apr;434(7034):755–759. pmid:15815628
  8. Suwa G, Asfaw B, Kono RT, Kubo D, Lovejoy CO, White TD. The Ardipithecus ramidus Skull and Its Implications for Hominid Origins. Science. 2009 Oct;326(5949):68–68e7. pmid:19810194
  9. Falk D. Hominin paleoneurology: where are we now? Progress in Brain Research. 2012;195:255–272. pmid:22230631
  10. Amano H, Kikuchi T, Morita Y, Kondo O, Suzuki H, Ponce de León MS, et al. Virtual reconstruction of the Neanderthal Amud 1 cranium. American Journal of Physical Anthropology. 2015;158(2):185–197. pmid:26249757
  11. Holloway RL. The Human Brain Evolving: A Personal Retrospective. Annual Review of Anthropology. 2008;37(1):1–19.
  12. Zollikofer CPE, Ponce de León MS. Pandora's growing box: Inferring the evolution and development of hominin brains from endocasts. Evolutionary Anthropology: Issues, News, and Reviews. 2013 Jan;22(1):20–33. pmid:23436646
  13. Kobayashi Y, Matsui T, Haizuka Y, Ogihara N, Hirai N, Matsumura G. Cerebral Sulci and Gyri Observed on Macaque Endocasts. In: Akazawa T, Ogihara N, Tanabe HC, Terashima H, editors. Dynamics of Learning in Neanderthals and Modern Humans Volume 2. Replacement of Neanderthals by Modern Humans Series. Springer Japan; 2014. p. 131–137.
  14. Bruner E. Geometric morphometrics and paleoneurology: brain shape evolution in the genus Homo. Journal of Human Evolution. 2004 Nov;47(5):279–303. pmid:15530349
  15. Morita Y, Amano H, Ogihara N. Three-dimensional endocranial shape variation in the modern Japanese population. Anthropological Science. 2015;123(3):185–191.
  16. Ogihara N, Amano H, Kikuchi T, Morita Y, Hasegawa K, Kochiyama T, et al. Towards digital reconstruction of fossil crania and brain morphology. Anthropological Science. 2015;123:57–68.
  17. Kass M, Witkin A, Terzopoulos D. Snakes: Active contour models. International Journal of Computer Vision. 1988 Jan;1(4):321–331.
  18. Sethian JA. Level set methods and fast marching methods: evolving interfaces in computational geometry, fluid mechanics, computer vision, and materials science. Cambridge University Press; 1999.
  19. Boykov Y, Kolmogorov V. An experimental comparison of min-cut/max-flow algorithms for energy minimization in vision. IEEE Transactions on Pattern Analysis and Machine Intelligence. 2004 Sep;26(9):1124–1137. pmid:15742889
  20. Liu X, Suzuki H, Ohtake Y, Michikawa T. Outermost Bone Surface Extraction from CT Images for Designing Customized Medical Equipment. In: Proceedings of 2010 Asian Conference on Design and Digital Engineering; 2010. p. 797–803.
  21. Michikawa T, Suzuki H, Kimura R. Digital Shape Reconstruction of Vocal Tracts from MRI Images. In: First International Symposium on Computing and Networking (CANDAR); 2013. p. 312–315.
  22. Vincent L, Soille P. Watersheds in Digital Spaces: An Efficient Algorithm Based on Immersion Simulations. IEEE Transactions on Pattern Analysis and Machine Intelligence. 1991 Jun;13(6):583–598.
  23. Otsu N. A Threshold Selection Method from Gray-Level Histograms. IEEE Transactions on Systems, Man, and Cybernetics. 1979 Jan;9(1):62–66.
  24. Michikawa T, Suzuki H, Hishida H, Inagaki K, Nakamura T. Efficient Void Extraction from CT images. In: Book of abstracts (Posters) in Tomography of Materials and Structures; 2013. p. 175–178.
  25. Meyer F. Contrast feature extraction. Quantitative Analysis of Microstructures in Material Sciences, Biology, and Medicine, Special Issue of Practical Metallography. 1977; p. 374–380.
  26. Bradski G, Kaehler A. Learning OpenCV: Computer Vision in C++ with the OpenCV Library. 2nd ed. O'Reilly Media, Inc.; 2013.
  27. Maurer CR Jr, Qi R, Raghavan V. A Linear Time Algorithm for Computing Exact Euclidean Distance Transforms of Binary Images in Arbitrary Dimensions. IEEE Transactions on Pattern Analysis and Machine Intelligence. 2003 Feb;25(2):265–270.
  28. Guennebaud G, Jacob B, et al. Eigen v3. 2010 [cited 10 June 2016]. Available:
  29. Garland M, Heckbert PS. Surface Simplification Using Quadric Error Metrics. In: Proceedings of the 24th Annual Conference on Computer Graphics and Interactive Techniques. SIGGRAPH'97. New York, NY, USA: ACM Press/Addison-Wesley Publishing Co.; 1997. p. 209–216.
  30. Yushkevich PA, Piven J, Hazlett HC, Smith RG, Ho S, Gee JC, Gerig G. User-guided 3D active contour segmentation of anatomical structures: Significantly improved efficiency and reliability. Neuroimage. 2006 Jul;31(3):1116–1128. pmid:16545965