
Rapid, without focus stacking, 3D photogrammetric digitization of cockroaches

Abstract

Natural history collections are seen as treasure troves we need to both preserve and study. Campaigns of 2D and 3D digitization have emerged in numerous institutions as an opportunity to maximize the dissemination of specimens while limiting the risks associated with their manipulation. 2D and especially 3D models can be used for various scientific purposes. Because of several obstacles (time, technical limitations, cost, etc.), the digitization of small and numerous objects, such as insect specimens, remains to be improved. Photogrammetry and µCT-scanning are two of the main methods for digitizing objects; photogrammetry is generally the less expensive of the two, but it remains time-consuming for small objects because focus stacking—which involves multiplying the number of shots—is strongly recommended to increase the depth of field. Here, we present a fast and inexpensive photogrammetric pipeline that generates 3D models of cockroaches of sufficient quality for geometric morphometric analyses. By focusing on a region of interest in the specimens—identified according to the goal of the digitization—the depth of field to be covered is smaller than the one encompassing the whole specimen. Thus, we eliminated the need for focus stacking. We produced 3D models for 62 species and compared 13 of the photogrammetric 3D models qualitatively and quantitatively with those obtained from µCT-scans of the same 13 species. We conclude that the 3D models produced with our pipeline are of sufficient quality to perform geometric morphometric analyses, which will be published elsewhere in a companion paper. Despite a few limitations, we hope that our pipeline will create opportunities for the study of small objects like insects, one of the most species-rich groups on Earth and in natural history collections.

Introduction

The digitization of natural history collections (NHC) is of major interest for the preservation of specimens and the dissemination of scientific knowledge [1–3]. NHC provide a vast record of the Earth’s biodiversity, which has been and continues to be used to address fundamental questions in ecology, systematics, and evolution [2,4]. Despite the practical and ethical restrictions imposed by the current biodiversity crisis, the importance of NHC cannot be overstated. Their importance has even been revitalized by, for example, the development of new tools and methods in sequencing or bioinformatics [5–7]. Among NHC, insects stand out for their diversity and wealth of species and specimens [5]. Insects are also of relatively small size, a well-known challenge for 3D digitization [8]. By providing three-dimensional (3D) models of insect specimens, digitization helps preserve fragile specimens by limiting unnecessary handling, while making them available to those who cannot physically access them [9,10]. These models can be used for morpho-anatomical observations and analyses, so that digitization contributes not only to sharing biodiversity data but also to entomological research and technological advances in 3D imaging and analysis [11,12].

Several techniques allow digitizing insect specimens, the main ones being X-ray micro-computed tomography (µCT-scan) and photogrammetry (also known as structure-from-motion photogrammetry) [13,14]. Aside from manipulation risks, both are relatively non-invasive, which is crucial to ensuring the durability and usefulness of NHC. However, both have their strengths and limitations. Photogrammetry relies on external images to create realistic 3D models of specimens, true to both forms and colors [13–17]. It is a relatively inexpensive method [18], valuable not only for visualization purposes, but also for digital preservation of specimens in collections—which can be revisited over time—or for precise and non-invasive morphometric analyses [10,13,19]. It is, however, of no use when internal structures need to be examined [14]. X-ray tomography, on the other hand, enables in-depth exploration of internal structures, including details of internal organs and soft parts [20]. But µCT-scans require a dedicated infrastructure, a lot of storage space and computing resources. In addition, pinned specimens may require special preparation or conditions to avoid artifacts caused by the pin: while it is best to remove the pin—which can be a delicate and thorny undertaking with old and fragile type specimens—it is also possible to adjust the intensity of the X-rays or use special algorithms after acquisition. Choosing between these techniques depends on the specific purpose of the digitization and the trade-offs between cost, resolution, and equipment availability [13,21].

As cost is a major constraint in academia, photogrammetry is a widespread solution, especially as many steps can be automated using specialized software [22,23]. It offers an efficient, non-invasive way of obtaining accurate information about the shape and structure of objects from photographs [24–26]. Briefly, photogrammetry can be broken down into five steps [24,26,27]:

1) Image acquisition: Photographs of the object are taken from different angles. This is the only ‘physical’ step, the subsequent steps being carried out using dedicated software for model reconstruction.

2) Identification of common points: The positions of the camera relative to the specimen are identified, either using known landmarks (control points) or common features automatically detected in the images.

3) 3D point extraction: The software uses the parallax information in the images to calculate the three-dimensional positions of points on the object. These points are then used to create a 3D point cloud representing the geometry of the object.

4) 3D modeling: From the point cloud or the depth maps, a three-dimensional surface is reconstructed to form the 3D model of the object.

5) Texturing: The original images are projected onto the surface of the 3D model, giving it a realistic texture.

The resulting model is a mesh of polygonal faces, or mesh surface. The accuracy of the results depends primarily on the quality of the images and their overlap.
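As an illustration of step 3, the parallax calculation reduces to classic two-view triangulation. The sketch below (Python with NumPy; the camera intrinsics, poses and point are synthetic values chosen for illustration, not parameters from our setup) recovers a 3D point from its projections in two images with known camera matrices:

```python
import numpy as np

# Hypothetical pinhole camera intrinsics (focal length 800 px, principal point 320,240).
K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])

def projection_matrix(K, R, t):
    """P = K [R | t], mapping homogeneous 3D points to image pixels."""
    return K @ np.hstack([R, t.reshape(3, 1)])

# Camera 1 at the origin; camera 2 offset along x (a small stereo baseline).
P1 = projection_matrix(K, np.eye(3), np.zeros(3))
P2 = projection_matrix(K, np.eye(3), np.array([-0.1, 0, 0]))

X_true = np.array([0.02, -0.01, 0.5])  # a point on the "specimen"

def project(P, X):
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]  # pixel coordinates

x1, x2 = project(P1, X_true), project(P2, X_true)

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation: solve A X = 0 by SVD."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

X_est = triangulate(P1, P2, x1, x2)
print(np.round(X_est, 6))  # recovers the original 3D point
```

In a real pipeline the camera matrices are unknown and are themselves estimated from the common points of step 2; the sketch only isolates the geometric core of step 3.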

However, for small organisms such as insects, photogrammetric digitization often requires cumbersome setups and pipelines, creating many obstacles to rapid and inexpensive digitization [22,28]. The low depth of field of high-magnification lenses is a major constraint, usually overcome by focus stacking to acquire an extended depth of field (EDOF) [27,29–32]. But this approach comes at the cost of rapidity, as focus stacking requires multiple pictures to be taken at different focal points, which significantly increases acquisition time. Here, we present a comparatively rapid and cost-efficient photogrammetric approach for insects. By focusing on a single part of the body (i.e., the region of interest; in this case, the cockroach pronotum), the depth of field that needs to be covered to obtain a sharp image is smaller than the one encompassing the whole specimen. This reduction eliminates the need for focus stacking while still capturing the rest of the insect, albeit at a lower quality. Importantly, this pipeline can produce high-quality 3D models of the structure of interest that realistically capture the complex shape of these small organisms and, because of its relative rapidity, models that can be used for comparative analyses on large samples. To validate this pipeline, we compared qualitatively and quantitatively the 3D models obtained with our approach to those obtained by µCT-scans for a few specimens.

Materials and methods

Biological context and region of interest

Our final goal, as reported in an ecomorphological study [33], is to investigate morphological adaptations to fossorial habits in cockroaches. We focused on the pronotum, the first dorsal thoracic segment, because the pronotum is of systematic importance, is a functionally versatile structure in cockroaches, and can have a complex shape (Fig 1). The pronotum is notably used in some species to dig in the ground or to make galleries in wood [34,35]. Studying the pronotum of cockroaches in 3D, an unprecedented undertaking, allows a richer and more precise characterization of its shape than is usually done in systematics [36–38].

Fig 1. The pronotum, the first dorsal thoracic segment, is a functionally versatile structure in cockroaches.

In numerous species, it covers most of, if not all, the head. From left to right: Rhabdoblatta sp. and Laxta sp. (Australia), Angustonicus sp. (New Caledonia). Credits: Frédéric Legendre.

https://doi.org/10.1371/journal.pone.0336893.g001

We selected 62 species of cockroaches from the collections of the Muséum national d’Histoire naturelle (MNHN, Paris), sampling the three known cockroach superfamilies: Blaberoidea, Blattoidea and Corydioidea (Table 1). Species were selected according to their habitats and availability in the MNHN collection [33]. The size of the specimens ranged from 5 to 45 mm, with four species smaller than 10 mm. Sixteen species had more or less distinct and numerous bristles, especially around the pronotum; 26 species were reflective, 15 of which were black. Bristles, reflective surfaces, and black coloration are three challenging conditions for obtaining 3D models by photogrammetry [13,18,21,25,27,31,39].

Table 1. Number of species sampled within each superfamily to generate photogrammetric 3D models (N = 62). The sizes of the specimens as well as their familial and subfamilial attributions are provided. When three species or more were sampled in a subfamily, only the smallest (min) and largest (max) sizes are reported. Each * indicates a microtomographic acquisition for one species (N = 13).

https://doi.org/10.1371/journal.pone.0336893.t001

Microtomographic acquisitions were obtained for 13 of the 62 species, for comparison purposes, using an EasyTom 150 microtomograph (X-ray source between 40 and 150 kV, power up to 75 W, resolution up to 5 µm) at the Montpellier Ressources Imagerie (MRI) platform. The size of the specimens varied from 5 to 15 mm, and the settings varied accordingly: the focal spot diameter ranged from 8 to 20 µm, the power from 10 to 30 W, and the voxel size from 6 to 19 µm. 3D models were reconstructed using the X-Act software (RX Solution, Chavanod, France), with acquisitions configured at 720 projections over 360°. Some of the photogrammetry-challenging specimens (black, reflective or with bristles) were among these 13 species (Table 1).

Photogrammetry set-up installation

All pictures were taken with a Nikon D7500 DSLR (Digital Single Lens Reflex) camera equipped with an AF-S DX Micro NIKKOR 40 mm f/2.8G lens, all mounted on a tripod (Fig 2). Depending on the size of the specimen, one or more Kenko extension tubes (12, 20, 36 mm) were added to the lens to increase magnification. We used a fixed focal length and manual focus. A Hoya circular polarizing filter was attached to the lens to reduce reflections on the specimens. The photographs were taken in a Godox LST40 mini photo studio (60 x 60 x 60 cm), lit by three LED strips with a color temperature of 5600 K, adjustable in intensity, and a Dörr DLP1000 LED panel, adjustable in temperature and intensity [13]. These LEDs provided uniform lighting and, to reduce cast shadows, all the lamps were covered with white paper or fabric. The whole set-up is portable.

Fig 2. Experimental setup and photogrammetric acquisition principle.

(1) Set-up installation with the camera (a) mounted on a tripod (k) and connected to a computer and its dedicated software (b), and the specimen (h) placed on a sample holder (i) inside the mini-photo studio (g) and above the center of a turntable (l) controlled by Bluetooth; c) laser, d) LED panel, e) LED strip, f) lens, j) scale, m) USB cable. (2) Photograph of the mini-studio installation. (3) Schematic diagram illustrating photogrammetric acquisition using three shooting angles (90°, 130° and 160°) between the vertical axis of the sample and the camera.

https://doi.org/10.1371/journal.pone.0336893.g002

Specimens were mounted on holders—we designed and produced three different sizes of holders to accommodate differences in specimen size—and placed on a Foldio 360 turntable, which was controlled and connected to a smartphone via Bluetooth. A millimeter scale was added to the sample holder. Once mounted, the exact position of the specimen was adjusted to its center of rotation, allowing an accurate 360° view of the entire specimen. The rotation axis of the specimen was identified with a laser pointer positioned vertically (at 90°) above the center of the turntable. The digital camera was set up in portrait mode because of the elongated form of these insects—thus maximizing the number of pixels representing the specimens—and connected via a USB cable to Nikon Camera Control Pro 2 version 4.0.11 (Nikon Corporation).

The cost of the full set-up is itemized in Table 2. It amounts to ca. US$3,500 and can be reduced by using free software [17,40].

Table 2. The detailed cost of our 3D insect modeling system.

https://doi.org/10.1371/journal.pone.0336893.t002

Step 1: Image acquisition process.

Once the insect position has been adjusted to its center of rotation, image acquisition can begin. The camera was focused on the pronotum of the cockroaches, the region of interest. Before each series of pictures, the mounting and center of rotation of the specimen were checked, as well as the reflection of light on the specimen. The optimum aperture, shutter speed, white balance and sensitivity settings were determined using the digital camera’s options and refined using the Nikon Camera Control software: usually an aperture of f/22 to allow a wide depth of field, a shutter speed between 1/10 s and 1/2 s to compensate for the small aperture, and a sensitivity of ISO 64–100 for good definition and low noise. Mirror-up mode (Mup) was also activated to reduce blur. The exposure was manually adjusted according to the morphological features of the specimen.

To optimize the alignment process during reconstruction and guarantee a detailed 3D model, good overlap between images is required. After a few tests with two to four series of photographs at different angles, three series were found to be necessary and sufficient for accurate modelling in our case [see also 41]. The choice of angles depends on the size of the specimen, the distance at which the image is taken, the digitization equipment used, and the region of interest. Here, three series of photographs were taken at three different angles between the vertical axis of the cockroach and the lens: 90°, 130° and 160° (Fig 3). Each series covered a full rotation (360°) of the specimen. For the first two angles (90° and 130°), the turntable rotated 8 degrees between each shot; for the third angle (at 160°, because we focus on the pronotum, in the upper part of the specimen), the turntable rotated 15 degrees between each shot—15 degrees was a good compromise here between quality and time, and has also been identified elsewhere as the optimal lowest value [41]. This resulted in a total of 114 pictures taken over a period of 25 minutes. Pictures were saved in Nikon’s raw (.NEF) and compressed JPEG Fine formats. Photographs in JPEG format were used to control the images obtained during acquisition, while those in RAW format were used for post-processing. To maximize reconstruction quality, a 90° photograph of the specimen in dorsal view (RAW format) was selected and, in post-production, we adjusted sharpness, brightness and fill light using the free software RawTherapee 5.8 (Horváth and RawTherapee Development Team). The retouched profile was then applied to the full set of photos of the same specimen. All photographs in RAW format were exported in JPEG format for 3D model reconstruction.
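The shot count follows directly from the step sizes of the three series; the trivial computation below (Python) reproduces the 114 pictures per specimen:

```python
# Shot count implied by the acquisition protocol: full 360° rotations at
# three camera angles, with 8° turntable steps for the 90° and 130° series
# and 15° steps for the 160° series.
series = {90: 8, 130: 8, 160: 15}  # camera angle -> turntable step (degrees)
shots = {angle: 360 // step for angle, step in series.items()}
total = sum(shots.values())
print(shots, total)  # {90: 45, 130: 45, 160: 24} 114
```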

Fig 3. Workflow overview using a specimen of the species Galiblatta cribrosa Hebard, 1926, with the software used and an estimate of the time required for each step.

Time estimations were averaged from the metadata of the files generated at each step. 1. Detail of the physical setup shown in Fig 2; 2. Creation of a mask on a post-processed JPEG photo; 3b. Alignment of the camera, represented by blue rectangles for the three viewing angles; 3c. 3D mesh from dense point cloud; 3d. Model texture generation from photo color; 3e. Scale for subsequent measurements; 3f. Homogenization of the number of faces in the dataset (reduced to 700,000 faces); 3g. Isolation of the region of interest for future analysis (here, the pronotum).

https://doi.org/10.1371/journal.pone.0336893.g003

Step 2: Creating masks.

The integration of masks in 3D photogrammetric reconstruction provides several significant advantages [22,41,42]. The purpose of a mask is to hide the background and isolate the subject by accurately delimiting the areas of interest in each image, thereby excluding any unwanted elements and enhancing reconstruction accuracy. Masks also simplify the common-point identification step, because unwanted elements are excluded, thus optimizing the photo alignment process. Overall, using masks in 3D photogrammetric reconstruction reduces memory and computing power requirements, speeds up the reconstruction process, and improves accuracy by eliminating superfluous information.

To create a mask for a given specimen, we imported five photographs from different angles and with the highest sharpness into Adobe Photoshop Elements 2021 (Adobe Systems, USA). On each photo, the specimen was selected with high precision using the “magnetic lasso” tool. A layer mask was then created from this selection and exported as a binary (black & white) image in JPEG format. Finally, the mask files were imported into Metashape using the mask import function.
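For specimens that contrast strongly with a uniform background, a simple global threshold can approximate such binary masks automatically. The sketch below (Python with NumPy, on a synthetic grayscale image) is an illustrative alternative to the manual selection used in our pipeline, not part of it:

```python
import numpy as np

# Toy grayscale "photo": bright background (value 230) with a darker
# specimen-shaped blob (value 60). Thresholding works when the background
# is uniform and much lighter than the subject.
img = np.full((100, 100), 230, dtype=np.uint8)
img[30:70, 40:80] = 60  # the "specimen"

threshold = 128
mask = np.where(img < threshold, 255, 0).astype(np.uint8)  # white = subject

# Metashape expects one binary (black & white) mask image per photo;
# here we just check that the mask isolates the blob.
print(mask[50, 50], mask[10, 10])  # 255 0
```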

Step 3: 3D reconstruction and isolation of the region of interest.

Although free alternatives exist [e.g., 17,43,44], 3D reconstructions were performed with Agisoft Metashape Professional version 2.0, widely used by the scientific community [e.g., 17,45], following the workflow of [28]:

Step 3a: For each sample, the set of photographs and corresponding masks are imported into the photogrammetric processing software Agisoft Metashape Professional. Note that there was no specific calibration step, as the camera and its focal length were automatically recognized by Metashape.

Step 3b: To position and orient the photographs in space and thus visualize an initial point cloud (sparse cloud or tie point cloud) representing the sample, an alignment is performed after importing the masks and associating them with the dataset. The internal and external orientation (IO and EO) parameters of the images are calculated automatically in Metashape: during the photo alignment procedure, Metashape estimates the interior and exterior camera orientation parameters, including nonlinear radial distortions. To successfully estimate these parameters, approximate focal length information (in pixels) is required; it is extracted automatically from the EXIF metadata.

The alignment is performed with a medium precision parameter, which speeds up the process (Fig 3). The model is then filtered to minimize reconstruction uncertainty and correct projection errors. Camera positions are optimized, exported and re-imported with the “highest” accuracy option. As the camera positions have already been determined, this method is much faster than aligning directly with a high accuracy setting. Once the alignment is complete, the calculated camera positions and a tie point cloud are displayed.

Step 3c: A dense point cloud is then generated, representing the 3D geometry of the sample. The quality and filter parameters are set to “Ultra High” and “Mild”, respectively. The “Mild” setting is required for subsequent mesh reconstruction based on depth maps—each pixel contains a depth value representing the distance between the camera and the object surface. These depth maps are then transformed into partial point clouds for each image and merged into a single point cloud. In this way, Metashape can construct a mesh of numerous triangles connecting the points in the dense cloud. Using depth maps as a data source allows more efficient use of 2D image information and requires fewer resources compared to point cloud-based reconstruction. This mesh is created using the “Build mesh” option with the following parameters: strict volumetric masks enabled, quality set to “Ultra High”, and limited to 1,000,000 faces.
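The depth-map back-projection at the heart of this step can be sketched as follows (Python with NumPy; the camera intrinsics and the flat toy depth map are hypothetical values for illustration, not Metashape outputs):

```python
import numpy as np

# Back-project a depth map into a partial point cloud: each pixel (u, v)
# with depth z maps to a 3D point via the pinhole model
#   x = (u - cx) * z / fx,  y = (v - cy) * z / fy.
fx = fy = 800.0        # hypothetical focal length in pixels
cx, cy = 32.0, 24.0    # hypothetical principal point
H, W = 48, 64
depth = np.full((H, W), 0.5)  # toy depth map: a plane 0.5 units from the camera

u, v = np.meshgrid(np.arange(W), np.arange(H))  # pixel coordinate grids
z = depth
x = (u - cx) * z / fx
y = (v - cy) * z / fy
points = np.stack([x, y, z], axis=-1).reshape(-1, 3)  # one 3D point per pixel

print(points.shape)  # (3072, 3)
```

In the real pipeline one such partial cloud is produced per image and the clouds are merged; working from depth maps lets the software exploit the 2D image grid directly, which is why it needs fewer resources than point-cloud-based reconstruction.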

Step 3d (optional): A model texture is constructed using the original photographs. This step creates a textured model that retains the visual details of the scene.

Step 3e: Photography, and by extension photogrammetry, does not provide information about the scale of the object or scene being digitized, yet scale is pivotal for geometric morphometric analyses. Thus, with the “Add marker” function, we manually defined the scale by positioning two control markers on four to six photographs of the 90° series, which helps to minimize parallax problems. These markers were placed on the millimeter scale visible in the photographs. The distance between the two markers was entered in the interface and the scale information updated. The model then integrates its true scale, which we validated by measuring the specimens.
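The scaling operation itself reduces to a single ratio between a known physical distance and the corresponding distance in model units. A minimal sketch, with hypothetical marker coordinates rather than values from our models:

```python
import math

# Two markers placed on the millimetre scale, in arbitrary model units
# (hypothetical coordinates for illustration).
marker_a = (12.4, 3.1, 0.8)
marker_b = (15.9, 3.0, 0.9)
known_distance_mm = 10.0  # true distance read off the millimetre scale

model_distance = math.dist(marker_a, marker_b)
scale = known_distance_mm / model_distance  # millimetres per model unit

# Any measurement taken on the model is multiplied by this factor to get mm.
length_units = 7.2
print(round(length_units * scale, 3))
```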

Step 3f: The 3D model is then exported in PLY format and decimated to 700,000 faces—a figure previously tested and found to be a good compromise between mesh size and fidelity of the 3D model—using the 3D mesh processing software Geomagic Wrap 2021 (Geomagic Company, United States). The purpose of this step is to standardize all the data and to speed up model landmarking by generating smaller files.

Step 3g: By focusing the camera specifically on the pronotum, we ensured that it was captured with optimum sharpness. We then isolated the pronotums for downstream analyses: they were cut out using Geomagic Wrap, and the parts of the 3D model not belonging to the pronotum were manually removed.

For all the 3D reconstructions, we used a computer with two processors at 3.8 GHz, 96 GB of RAM and a Radeon Pro WX 5100 graphics card.

Model landmarking

Although 3D models of whole specimens were generated, 3D geometric morphometric analyses [46] were conducted only on the pronotum, the region of interest. For a given pronotum, we used two 3D anatomical landmarks and 14 curve sliding semi-landmarks to represent its contour [47], combined with a mesh of 311 surface sliding semi-landmarks to capture its shape [48]. First, we built a template to be used as a reference for setting landmarks on the 26 3D pronotums obtained (13 from each acquisition method). The template was based on the specimen of Galiblatta cribrosa Hebard, 1926, whose pronotum shape is the closest to the average shape of the sample at hand. The landmarks were always placed in the same order (as in the template) to preserve the correspondence of landmarks from one specimen to the next. The template surface sliding semi-landmarks were then projected and slid onto the other models [49] using the R packages Morpho and rgl [50,51]. Note that potential operator bias was previously estimated with a repeatability analysis (N = 5) performed on three species (Angustonicus sp. Grandcolas, 1997, Hemelytroblatta cerverae (Bolívar, 1886) and Salganea raggei Roth, 1979); it was found to be negligible compared to the biological variability.

Quality of the 3D models: Quantitative comparison using geometric morphometrics

Of the 62 digitized specimens, 13 were acquired using both photogrammetry and computed microtomography (µCT-scan). For these specimens, we performed geometric morphometric analyses on the region of interest to assess whether the method of acquisition (photogrammetry or µCT-scan) could affect the downstream analyses. In other words, we evaluated the quality of the 3D models generated by our photogrammetric pipeline through visual comparisons of model pairs and a quantitative approach. This allowed us to assess the potential of our insect photogrammetric models for interspecific studies of evolutionary morphology [24,52].

To quantify the differences between the two approaches, pairs of 3D models were aligned in MeshLab (version 2020.07) [53]. During this process, control points are defined on both models, and an alignment algorithm uses these points to achieve precise alignment. MeshLab’s 3D comparison function was then used to calculate the distances between the models, generating a deviation map: the “Distance from reference mesh” filter computes the distance per vertex between a target mesh and a reference mesh. This distance was stored as a per-vertex quality value, and the vertices were then colored according to this distance.
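The per-vertex distance computation behind such deviation maps can be sketched as follows (Python with NumPy, on synthetic vertex sets; note that MeshLab's filter measures point-to-surface distance, which this simpler vertex-to-vertex version only approximates):

```python
import numpy as np

# Per-vertex deviation between a target and a reference model: for each
# target vertex, the distance to the nearest reference vertex.
rng = np.random.default_rng(0)
reference = rng.random((500, 3))                      # toy reference vertices
target = reference + rng.normal(0, 0.001, (500, 3))   # slightly perturbed copy

# Brute-force nearest neighbour (fine for small meshes; a KD-tree such as
# scipy.spatial.cKDTree would be needed for real models with ~10^6 vertices).
d2 = ((target[:, None, :] - reference[None, :, :]) ** 2).sum(axis=2)
per_vertex = np.sqrt(d2.min(axis=1))

print(per_vertex.mean())  # small: the two "models" nearly coincide
```

Coloring each vertex by its `per_vertex` value is exactly what produces a deviation map like Fig 5.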

To assess a potential bias due to the two methods of acquisition, we carried out a Principal Component Analysis (PCA) on the Procrustes coordinates obtained with the gpagen function of the geomorph R package [54]. For each species, we thereby visualized the variability between the two modes of acquisition, as well as the interspecific variability. Finally, a MANOVA (multivariate analysis of variance; [55]) was used to assess the relationship between geometric shape and digitization technique.
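This shape-space analysis can be sketched in a few lines (Python with NumPy, on synthetic landmark configurations; for simplicity this performs an ordinary Procrustes fit onto one reference configuration rather than the full generalized Procrustes analysis of gpagen, and does not exclude reflections):

```python
import numpy as np

# Synthetic landmark data: 10 "specimens", 16 landmarks each, generated as
# noisy copies of one base configuration.
rng = np.random.default_rng(1)
n_specimens, n_landmarks = 10, 16
base = rng.random((n_landmarks, 3))
shapes = np.array([base + rng.normal(0, 0.02, base.shape)
                   for _ in range(n_specimens)])

def procrustes_fit(X, ref):
    """Center, scale to unit centroid size, and rotate X onto ref (SVD)."""
    X = X - X.mean(axis=0)
    X = X / np.linalg.norm(X)
    U, _, Vt = np.linalg.svd(X.T @ ref)
    return X @ U @ Vt

ref = shapes[0] - shapes[0].mean(axis=0)
ref = ref / np.linalg.norm(ref)
aligned = np.array([procrustes_fit(s, ref) for s in shapes])

# PCA of the flattened Procrustes coordinates via SVD.
flat = aligned.reshape(n_specimens, -1)
flat = flat - flat.mean(axis=0)
U, S, Vt = np.linalg.svd(flat, full_matrices=False)
scores = U * S                    # PC scores, one row per specimen
explained = S**2 / (S**2).sum()   # proportion of variance per axis

print(scores.shape)
```

Plotting the first two columns of `scores`, with each species appearing once per acquisition technique, gives a figure analogous to Fig 6.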

Results and discussion

We produced a set of 71 pronotum models in 3D (58 by photogrammetry and 13 by µCT-scan) for 62 species from the collections of the MNHN, representing 9 of the 13 Blattodea families. A gallery of all the 3D models obtained by photogrammetry and µCT-scans is available at 10.6084/m9.figshare.27325605. This link also contains a spreadsheet file with several metrics, including post-adjustment statistics associated with each model.

As already underlined, the photographic set-up is portable, inexpensive, and provides a fast workflow [18]. The hardware installation requires relatively little physical space, and the full set-up—computer not included—costs around 3,500 USD (including 650 USD for software, although free alternatives do exist [17,40]; Table 2). It is also fast: the reconstruction of each 3D model in ultra-high quality with our computer configuration and Metashape took an average of 4 hours and 41 minutes (steps 3b–3d), and much of the software chain is automated (dense cloud, mesh and texture creation), requiring only minimal supervision. Model file sizes averaged 23 MB, ranging from 13.06 to 74.82 MB, which is worth considering when storage capacity and online sharing are limited [13]. The number of vertices in the reconstructed raw models averaged 1,289,763, ranging from 475,600 to 2,518,759.

3D entomology models without focus stacking for comparative morphology

Focus stacking is typically used in photogrammetry for small objects [21,31,39,56]. It consists of combining several images taken from the same position but with different focus settings in order to increase the depth of field. Its main drawback, however, is the increased acquisition time, as taking pictures at multiple focal points is time-consuming. In addition, focus stacking requires specialized equipment [56], and some automated photogrammetry systems may not be compatible with it. Furthermore, the process usually involves post-processing to merge the images, requiring additional time.

By opting for a photogrammetric method that excludes focus stacking and focusing on a precise area, we achieved a satisfactory compromise between quality and convenience for small and complex subjects. The quality was assessed visually, as well as quantitatively, by comparing the 3D models produced by photogrammetry and µCT-scan for 13 species [24,52].

Visual inspection of the 3D models suggests that some specimens proved more challenging to reconstruct by photogrammetry than others (Fig 4). For instance, the 3D models of species with bristles on the edges of the pronotum (e.g., Heterogamisca kruegeri (Salfi, 1927), Hemelytroblatta africana Linnaeus, 1758) show some irregularities: the surface is granular, and the contours are not precise enough for landmarking and 3D morphometric analyses. The color of the pronotum can also be troublesome [13], as in Phortioeca peruana Saussure, 1862, whose pronotum is dark brown except on the anterior margin, where it is light brown or even transparent. In addition, its pronotal surface is punctate. This results in a 3D pronotum model with poorly defined edges and an irregular surface lacking smoothness. A similar granular, irregular surface was obtained for Compsagis lesnei Chopard, 1952, a smaller species (total length of 14.1 mm) with a punctate pronotum.

Fig 4. Pronotums of eight species digitized by photogrammetry and microtomography (µCt-scan).

A) Four challenging species for photogrammetry: Phortioeca peruana (Blaberidae), Compsagis lesnei (Blaberidae), Cryptocercus punctulatus (Cryptocercidae) and Hemelytroblatta africana (Corydiidae). B) Four species with 3D models of comparable quality using both methods: Panesthia australis (Blaberidae), Heterogamisca bolivari (Corydiidae), Cryptocercus parvus (Cryptocercidae), Heterogamisca kruegeri (Corydiidae). Scale bars: 2 mm.

https://doi.org/10.1371/journal.pone.0336893.g004

This visual impression was confirmed by comparing the mesh distances calculated in MeshLab between pairs of models (Figs 5 and S1; S1 Table). For the 13 species, the average mesh distance ranged from 0.006 mm to 0.352 mm (with an across-species average of 0.124 mm). When considering only the six least challenging species, this across-species average dropped to 0.057 mm, in line with values obtained in another study on 19 bat skulls (0.054 mm) [52]. The lowest distances (e.g., 0.006 mm and 0.012 mm in Cryptocercus parvus and Alloblatta nugax, respectively) suggest that the 3D models generated through photogrammetry are of high quality, especially for non-challenging species.

Fig 5. Histograms (left) of the calculated distances between a target mesh (photogrammetric model) and a reference mesh (µCT-scan model).

These color-coded distances were used to color the pronotum (right), here of Cryptocercus parvus. The scale on the left shows that, even in the very limited region shown in warm colors, the difference between the two models is extremely small. Scale bar = 1 mm.

https://doi.org/10.1371/journal.pone.0336893.g005

To further assess the quality of the photogrammetric models and their usefulness for comparative morphometrics, we performed a PCA (Fig 6) in which each species is plotted twice, once for each acquisition technique. The first two axes represented 80% of the variability. For most species, the pairs of models were very close to each other, although a few appeared further apart. Unsurprisingly, the latter corresponded to challenging species for photogrammetry (e.g., Phortioeca peruana, Compsagis lesnei, Cryptocercus punctulatus Scudder, 1862). Note that for species like Hemelytroblatta africana, bristles are not captured during µCT-scan acquisition, which largely contributes to the difference between the two acquisition techniques. Nonetheless, the acquisition technique was found to be non-significant with this set of 13 species, which included several photogrammetrically challenging species (MANOVA: F = 1.1444, p = 0.6408). This suggests that the differences induced by using two methods to reconstruct 3D models are negligible, and that the variability observed between specimens has biological causes rather than stemming from a methodological bias. We conclude that, especially for non-challenging species, the 3D models obtained with our photogrammetric pipeline are of good enough quality to be used in comparative morphometric analyses.

Fig 6. PCA computed to compare the two digitization techniques, µCT-scan and photogrammetry.

For each species, the distance between the two models is small, in most cases smaller than the interspecific distances, suggesting a negligible effect of the acquisition method.

https://doi.org/10.1371/journal.pone.0336893.g006

Limitations: A heterogeneous model quality

The main advantage of this method, its speed, comes with a drawback: the quality of the model is heterogeneous. By focusing on a specific area of the specimen for downstream geometric morphometric analyses [33], quality is here maximized on the pronotum. It deteriorates further away from the pronotum, which is particularly problematic for delicate structures like bristles, antennae and legs. While the overall 3D models produced are satisfactory for dissemination purposes or for studying conspicuous morphological features, they would probably not be of sufficient quality for geometric morphometric analyses outside the region of interest.

In addition, our pipeline does not solve common problems in photogrammetry. For instance, modeling wings by photogrammetry can be complex when they are transparent or weakly contrasted, while reflective or deep-black surfaces can interfere with the detection of key points by photogrammetric algorithms [57]. Likewise, hard-to-reach areas can be poorly modeled if the camera cannot directly capture them, which might then require additional acquisitions [13]. Our pipeline was not meant to address these issues.

Conclusion

3D models have opened up a wide range of potential analyses and applications, including the possibility of conducting geometric morphometric analyses. But, regarding photogrammetry, the examples are much more common for organisms bigger than insects [e.g., 44,58–61]. The main reason for this is probably that, for small objects, obtaining several decent models takes a long time because of the need for focus stacking. We developed a methodology that generates 3D models fast enough to gather a comparative dataset useful for the ecomorphological study of insects [33], a type of dataset that had so far mainly been accessible for other organisms or structures [e.g., 52,62,63].

By comparing photogrammetric and microtomographic 3D models, we showed that our inexpensive and time-saving pipeline produces models not only in large quantities but also of sufficient quality to perform geometric morphometric analyses. The negligible difference between 3D models generated by the two techniques suggests that datasets combining models from these two acquisition modes can be used in morphometric analyses [40]. Given the interest in these analyses—consider for instance the ca. 2,000 citations of the package geomorph [54]—we hope that our pipeline will generate opportunities for the study of small objects like insects, one of the most species-rich groups on Earth [64]. Finally, because 3D photogrammetric models are textured, making them available could contribute to various dissemination purposes and allow taxonomists to access specimens remotely.

Supporting information

S1 Fig. Color-coded representation of the distances calculated between a target mesh (photogrammetric model) and a reference mesh (µCT-scan model) for the pronotums of 13 cockroach species.

SD = standard deviation; scale bars = 1 mm.

https://doi.org/10.1371/journal.pone.0336893.s001

(TIF)

S1 Table. Distances (in millimeters) between a target mesh (photogrammetric model) and a reference mesh (µCT-scan model) computed for each pair of models.

Species are ordered according to the absolute value of average distance between meshes.

https://doi.org/10.1371/journal.pone.0336893.s002

(DOCX)

S2 Table. Summary of 3D reconstruction parameters obtained using two methods: photogrammetry and micro-computed tomography (µCT-scan).

Together, these data summarize the imaging and processing parameters used to generate the 3D models analyzed in this study.

https://doi.org/10.1371/journal.pone.0336893.s003

(XLSX)

Acknowledgments

We acknowledge Renaud Lebrun of the MRI platform member of the national infrastructure France-BioImaging. We thank Sylvain Gerber and Romain Garrouste (ISYEB, MNHN) for fruitful discussions about our project and photogrammetry. We also thank Steven Ahoua for his internship under JB and FL supervision, during which the first photogrammetric tests were conducted.

References

  1. Page LM, MacFadden BJ, Fortes JA, Soltis PS, Riccardi G. Digitization of Biodiversity Collections Reveals Biggest Data on Biodiversity. BioScience. 2015;65(9):841–2.
  2. Hedrick BP, Heberling JM, Meineke EK, Turner KG, Grassa CJ, Park DS, et al. Digitization and the Future of Natural History Collections. BioScience. 2020;70(3):243–51.
  3. Holmes MW, Hammond TT, Wogan GOU, Walsh RE, LaBarbera K, Wommack EA, et al. Natural history collections as windows on evolutionary processes. Mol Ecol. 2016;25(4):864–81. pmid:26757135
  4. Troudet J, Vignes-Lebbe R, Grandcolas P, Legendre F. The Increasing Disconnection of Primary Biodiversity Data from Specimens: How Does It Happen and How to Handle It? Syst Biol. 2018;67(6):1110–9. pmid:29893962
  5. Weirauch C, Cranston PS, Simonsen TJ, Winterton SL. New ways for old specimens – museomics is transforming the field of systematic entomology. Antenna. 2020;44(1):22–4.
  6. Buerki S, Baker WJ. Collections-based research in the genomic era. Biol J Linn Soc. 2015;117(1):5–10.
  7. Hunt R, Reyes-Hernández JL, Shaw JJ, Solodovnikov A, Pedersen KS. Integrating Deep Learning Derived Morphological Traits and Molecular Data for Total-Evidence Phylogenetics: Lessons from Digitized Collections. Syst Biol. 2025;74(3):453–68. pmid:39826140
  8. Guerra MG, Galantucci LM, Lavecchia F, De Chiffre L. Reconstruction of small components using photogrammetry: a quantitative analysis of the depth of field influence using a miniature step gauge. Metrol Measure Syst. 2021;28(2):323–323.
  9. Short AEZ, Dikow T, Moreau CS. Entomological Collections in the Age of Big Data. Annu Rev Entomol. 2018;63:513–30. pmid:29058981
  10. Mantle BL, La Salle J, Fisher N. Whole-drawer imaging for digital management and curation of a large entomological collection. Zookeys. 2012;(209):147–63. pmid:22859885
  11. Bai M, Yang X. A review of three-dimensional (3D) geometric morphometrics and its application in entomology. Acta Entomol Sinica. 2014;57(9):1105–11.
  12. Tatsuta H, Takahashi KH, Sakamaki Y. Geometric morphometrics in entomology: Basics and applications. Entomol Sci. 2017;21(2):164–84.
  13. Brecko J, Mathys A. Handbook of best practice and standards for 2D+ and 3D imaging of natural history collections. European J Taxonomy. 2020;623:1–115.
  14. Ijiri T, Todo H, Hirabayashi A, Kohiyama K, Dobashi Y. Digitization of natural objects with micro CT and photographs. PLoS One. 2018;13(4):e0195852. pmid:29649299
  15. Remondino F, El‐Hakim S. Image‐based 3D Modelling: A Review. Photogram Record. 2006;21(115):269–91.
  16. Luhmann T, Robson S, Kyle S, Boehm J. Close-range photogrammetry and 3D imaging. Photogramm Eng Remote Sens. 2015;81(4):273–4.
  17. Mallison H, Wings O. Photogrammetry in paleontology – a practical guide. J Paleontol Tech. 2014;12:1–31.
  18. Mathys A, Brecko J, Semal P. Cost Analyse of 3D Digitisation Techniques. Digital Presen Preserv Cult Sci Heritage. 2014;4:206–12.
  19. Mathys A, Pollet Y, Gressin A, Muth X, Brecko J, Dekoninck W, et al. Sphaeroptica: A tool for pseudo-3D visualization and 3D measurements on arthropods. PLoS One. 2024;19(10):e0311887. pmid:39441808
  20. Keklikoglou K, Faulwetter S, Chatzinikolaou E, Wils P, Brecko J, Kvaček J, et al. Micro-computed tomography for natural history specimens: a handbook of best practice protocols. European J Taxonomy. 2019;(522):1–55.
  21. Mathys A, Brecko J, Van den Spiegel D, Semal P. 3D and challenging materials: Guidelines for different 3D digitisation methods for museum collections with varying material optical properties. Digital Heritage. 2015. p. 19–26. https://doi.org/10.1109/DigitalHeritage.2015.7413827
  22. Plum F, Labonte D. scAnt – an open-source platform for the creation of 3D models of arthropods (and other small objects). PeerJ. 2021;9:e11155. pmid:33954036
  23. Muthu S, Wedutenko T, Tong J, Nguyen C, Petersson L. Towards end-to-end automatic insect handling and insect scanning. In: 2023 International Conference on Digital Image Computing: Techniques and Applications (DICTA). IEEE; 2023. p. 168–175. https://doi.org/10.1109/DICTA60407.2023.00031
  24. Evin A, Souter T, Hulme-Beaman A, Ameen C, Allen R, Viacava P, et al. The use of close-range photogrammetry in zooarchaeology: Creating accurate 3D models of wolf crania to study dog domestication. J Archaeol Sci Rep. 2016;9:87–93.
  25. Roscian M, Herrel A, Cornette R, Delapré A, Cherel Y, Rouget I. Underwater photogrammetry for close-range 3D imaging of dry-sensitive objects: The case study of cephalopod beaks. Ecol Evol. 2021;11(12):7730–42. pmid:34188847
  26. Leménager M, Burkiewicz J, Schoen DJ, Joly S. Studying flowers in 3D using photogrammetry. New Phytol. 2023;237(5):1922–33. pmid:36263728
  27. Nguyen CV, Lovell DR, Adcock M, La Salle J. Capturing natural-colour 3D models of insects for species discovery and diagnostics. PLoS One. 2014;9(4):e94346. pmid:24759838
  28. Ströbel B, Schmelzle S, Blüthgen N, Heethoff M. An automated device for the digitization and 3D modelling of insects, combining extended-depth-of-field and all-side multi-view imaging. Zookeys. 2018;(759):1–27. pmid:29853774
  29. Nguyen C, Lovell D, Oberprieler R, Jennings D, Adcock M, Gates-Stuart E, et al. Virtual 3D Models of Insects for Accelerated Quarantine Control. In: Proceedings of the IEEE International Conference on Computer Vision Workshops. 2013. p. 161–7.
  30. Adcock M, Nguyen C, Lovell D, La Salle J. Accelerating entomology with Web3D insects. In: Proceedings of the 19th International ACM Conference on 3D Web Technologies, Web3D ’14. New York, NY, USA: Association for Computing Machinery; 2014. p. 143. https://doi.org/10.1145/2628588.2635851
  31. Brecko J, Mathys A, Dekoninck W, Leponce M, VandenSpiegel D, Semal P. Focus stacking: Comparing commercial top-end set-ups with a semi-automatic low budget approach. A possible solution for mass digitization of type specimens. Zookeys. 2014;(464):1–23. pmid:25589866
  32. Qian J, Dang S, Wang Z, Zhou X, Dan D, Yao B, et al. Large-scale 3D imaging of insects with natural color. Opt Express. 2019;27(4):4845–57. pmid:30876094
  33. Bignon B, Berger J, Delapré A, Grandcolas P, Robillard T, Bornberg-Bauer E, et al. In good form: morphological adaptations to burrowing lifestyles in cockroaches (Dictyoptera: Blattodea). Forthcoming.
  34. Bell WJ, Roth LM, Nalepa CA. Cockroaches: Ecology, Behavior, and Natural History. JHU Press; 2007.
  35. Djernæs M. Biodiversity of Blattodea – the cockroaches and termites. Insect Biodivers Sci Soc. 2018;2:359–87.
  36. González MM, del Carmen Valverde A, Iglesias MS, Crespo FA. Morphometrics confirms the conspecificity between Blaptica dubia (Serville) and B. interior Hebard (Blattodea: Blaberidae). Zool Syst. 2019;44(2):111–22.
  37. Zhang M, Ruan Y, Wan X, Tong Y, Yang X, Bai M. Geometric morphometric analysis of the pronotum and elytron in stag beetles: insight into its diversity and evolution. Zookeys. 2019;833:21–40. pmid:31015774
  38. Doğan Sarikaya A, Okutaner AY, Sarikaya Ö. Geometric morphometric analysis of pronotum shape in two isolated populations of Dorcadion anatolicum Pic, 1900 (Coleoptera: Cerambycidae) in Turkey. Turkish J Entomol. 2019;43:263–70.
  39. Li H, Nguyen C. Perspective-consistent multifocus multiview 3D reconstruction of small objects. In: 2019 Digital Image Computing: Techniques and Applications (DICTA). Perth, WA, Australia: IEEE; 2019. p. 1–8. https://doi.org/10.1109/DICTA47822.2019.8946006
  40. Zhang C, Maga AM. An Open-Source Photogrammetry Workflow for Reconstructing 3D Models. Integr Org Biol. 2023;5(1):obad024. pmid:37465202
  41. Bisson-Larrivée A, LeMoine J-B. Photogrammetry and the impact of camera placement and angular intervals between images on model reconstruction. Digital Appl Archaeol Cul Heritage. 2022;26:e00224.
  42. Sathirasethawong C, Sun C, Lambert A, Tahtali M. Foreground Object Image Masking via EPI and Edge Detection for Photogrammetry with Static Background. In: Bebis G, Boyle R, Parvin B, Koracin D, Ushizima D, Chai S, et al., editors. Advances in Visual Computing: 14th International Symposium on Visual Computing, ISVC 2019, Lake Tahoe, NV, USA, October 7–9, 2019, Proceedings, Part II. Cham: Springer International Publishing; 2019. p. 345–57. https://doi.org/10.1007/978-3-030-33723-0
  43. Falkingham P. Acquisition of high resolution three-dimensional models using free, open-source, photogrammetric software. Palaeontol Electron. 2012.
  44. Medina JJ, Maley JM, Sannapareddy S, Medina NN, Gilman CM, McCormack JE. A rapid and cost-effective pipeline for digitization of museum specimens with 3D photogrammetry. PLoS One. 2020;15(8):e0236417. pmid:32790700
  45. James N, Adkinson A, Mast A. Rapid imaging in the field followed by photogrammetry digitally captures the otherwise lost dimensions of plant specimens. Appl Plant Sci. 2023;11(5):e11547. pmid:37915433
  46. Rohlf FJ, Marcus LF. A revolution in morphometrics. Trends Ecol Evol. 1993;8(4):129–32.
  47. Gunz P, Mitteroecker P, Bookstein FL. Semilandmarks in Three Dimensions. In: Slice DE, editor. Modern Morphometrics in Physical Anthropology. Boston, MA: Springer US; 2005. p. 73–98. https://doi.org/10.1007/0-387-27614-9_3
  48. Gunz P, Mitteroecker P. Semilandmarks: a method for quantifying curves and surfaces. Hystrix It J Mamm. 2013;24:103–9.
  49. Varón-González C, Whelan S, Klingenberg CP. Estimating Phylogenies from Shape and Similar Multidimensional Data: Why It Is Not Reliable. Syst Biol. 2020;69(5):863–83. pmid:31985800
  50. Adler D, Murdoch D, Nenadic O, Urbanek S, Chen M, Gebhardt A. rgl: 3D visualization device system (OpenGL). 2013.
  51. Schlager S, Jefferis G, Ian D, Schlager MS. Package ‘Morpho’. 2019.
  52. Giacomini G, Scaravelli D, Herrel A, Veneziano A, Russo D, Brown RP, et al. 3D Photogrammetry of Bat Skulls: Perspectives for Macro-evolutionary Analyses. Evol Biol. 2019;46(3):249–59.
  53. Cignoni P, Callieri M, Corsini M, Dellepiane M, Ganovelli F, Ranzuglia G. MeshLab: an Open-Source Mesh Processing Tool. In: Eurographics Italian Chapter Conference; 2008. p. 129–36.
  54. Adams DC, Otárola‐Castillo E. geomorph: an R package for the collection and analysis of geometric morphometric shape data. Methods Ecol Evol. 2013;4(4):393–9.
  55. French A, Macedo M, Poulsen J, Waterson T, Yu A. Multivariate Analysis of Variance (MANOVA). San Francisco State University.
  56. Doan T-N, Nguyen CV. A low-cost digital 3D insect scanner. Inf Process Agricult. 2024;11(3):337–55.
  57. Chitsaz N, Marian R, Chahl J. Experimental method for 3D reconstruction of Odonata wings (methodology and dataset). PLoS One. 2020;15(4):e0232193. pmid:32348334
  58. Remondino F, Rizzi A, Girardi S, Petti FM, Avanzini M. 3D Ichnology—recovering digital 3D models of dinosaur footprints. Photogramm Record. 2010;25(131):266–82.
  59. Lallensack JN, van Heteren AH, Wings O. Geometric morphometric analysis of intratrackway variability: a case study on theropod and ornithopod dinosaur trackways from Münchehagen (Lower Cretaceous, Germany). PeerJ. 2016;4:e2059. pmid:27330855
  60. Muñoz-Muñoz F, Quinto-Sánchez M, González-José R. Photogrammetry: a useful tool for three-dimensional morphometric analysis of small mammals. J Zool Syst Evol Res. 2016;54(4):318–25.
  61. Marcy AE, Fruciano C, Phillips MJ, Mardon K, Weisbecker V. Low resolution scans can provide a sufficiently accurate, cost- and time-effective alternative to high resolution scans for 3D shape analyses. PeerJ. 2018;6:e5032. pmid:29942695
  62. Beltran RS, Ruscher-Hill B, Kirkham AL, Burns JM. An evaluation of three-dimensional photogrammetric and morphometric techniques for estimating volume and mass in Weddell seals Leptonychotes weddellii. PLoS One. 2018;13(1):e0189865. pmid:29320573
  63. Giacomini G, Herrel A, Chaverri G, Brown RP, Russo D, Scaravelli D, et al. Functional correlates of skull shape in Chiroptera: feeding and echolocation adaptations. Integr Zool. 2022;17(3):430–42. pmid:34047457
  64. Aberlenc H-P, Albouy V, Barthélémy D, Beaucournu J-C, Blandin P, Cliquennois N, et al. Les Insectes du Monde. Biodiversité. Classification. Clés de détermination des familles. Versailles, Montpellier & Plaissan: Quae & Museo éditions; 2020. Tome 1, 1192 p.; Tome 2, 656 p.