
Segmentation study of nanoparticle topological structures based on synthetic data

Abstract

Nanoparticles have broad applications in materials mechanics, medicine, energy, and other fields. The ordered arrangement of nanoparticles is very important for fully understanding their properties and functionalities. However, in materials science, the acquisition of training images requires many professionals and the labor cost is extremely high, so training samples are usually scarce. In this study, a segmentation method for nanoparticle topological structures based on synthetic data (SD) is proposed, which aims to solve the small-data problem in the materials field. Our findings reveal that SD generated by rendering software, combined with merely 15% authentic data (AD), yields better performance when training a deep learning model. The trained U-Net model achieves a Miou of 0.8476, an accuracy of 0.9970, a Kappa of 0.8207, and a Dice coefficient of 0.9103. Compared with data augmentation alone, our approach yields a 1% improvement in the Miou metric. These results show that the proposed strategy can achieve better prediction performance without increasing the cost of data acquisition.

Introduction

Nanomaterials exhibit distinctive characteristics and possess a broad spectrum of applications, showcasing their impact even at the minutest scales within realms such as cosmetics, textiles, and food industries. Furthermore, their significance extends across a diverse array of technologies, encompassing domains like medicine, electronics, and energy, where they assume pivotal roles [1–5]. Precise comprehension and manipulation of nanomaterial structures are imperative for harnessing their unique attributes effectively. The attributes of nanoparticles, including their dimensions, morphology, and surface chemistry, not only influence product quality [6, 7] but also hold paramount importance in assessing their interactions with molecules, cells, and broader biological systems. These attributes are integral for conducting comprehensive evaluations of environmental and human health risks associated with nanomaterials [8].

Nanoparticles serve as the fundamental constituents of nanomaterials, and scrutinizing their structure is imperative for elucidating the properties and functionalities of such materials. Electron Microscopy (EM) [9] stands as the predominant technique for characterizing particle structure, including Transmission Electron Microscopy (TEM) [10], Scanning Electron Microscopy (SEM) [11], and Atomic Force Microscopy (AFM) [12]. Various methods [13–19] have been proposed for automating the analysis of SEM and TEM images. Batuhan Yildirim et al. [20] introduced an automated Bayesian deep learning approach for electron microscope image analysis, facilitating the extraction of quantitative metrics such as particle size. Khuram Faraz et al. [21] devised a deep learning-based method coupled with computer vision for tracking nanoparticles in environmental transmission electron microscopy (ETEM) sequences, enabling objective and robust analysis of dynamic events, particularly relevant to heterogeneous catalytic reactions. Paul Monchot et al. [22] addressed particle size characterization in scanning electron microscope (SEM) images using the Mask R-CNN deep learning algorithm, surpassing the limitations of conventional image processing methods and offering a high-performance solution for automated processing chains. Zhijian Sun et al. [23] achieved precise nanoparticle segmentation with a lightweight deep learning network (NSNet), facilitating rapid and accurate statistical analysis of nanoparticle morphology in complex SEM/TEM images. D.J. Groom et al. [24] explored an automatic particle pickup method based on variance-mixed-mean local thresholding, effectively enhancing nanoparticle segmentation accuracy in transmission electron microscope (TEM) images by reducing false detections and omissions. Bastian Rühle et al. [25] demonstrated automatic segmentation of agglomerated, non-spherical nanoparticles in scanning electron microscope images, eliminating the necessity for large-scale manually labeled training datasets.

In recent years, owing to the continuous evolution of deep learning and machine learning methodologies, there has emerged the capability to precisely extract feature information from images for nanoparticle analysis utilizing these sophisticated techniques [11, 26, 27]. Nevertheless, as the majority of these methodologies rely on supervised learning paradigms, a substantial amount of human effort is necessitated for data preparation, which is crucial for model training. The primary challenge lies in acquiring a representative dataset of nanoparticle images. While approaches such as "exact learning" [28], "transfer learning" [29], and data augmentation techniques mitigate the need for extensive training data, they still entail significant human intervention in data curation. Moreover, these methods often introduce errors, consume considerable time, and incur high costs. To alleviate this predicament, a recent trend involves the utilization of synthetic SEM and TEM images as training data for deep learning-driven nanoparticle analysis. For instance, Binbin Lin et al. [30] utilized the Mask R-CNN algorithm, particularly for segmentation, employing GeoDict software to synthesize a considerable quantity of nanowires. Meanwhile, Leonid Mill et al. [31] leveraged rendering software to create lifelike synthetic training data, crucial for training cutting-edge deep neural networks. This innovation enables automated and high-throughput particle detection across various imaging techniques. Simultaneously, Antón Cid-Mejías et al. [32] accomplished the successful detection, segmentation, orientation inference, and three-dimensional reconstruction of nanoparticles within microscope images by employing artificially synthesized image datasets resembling authentic nanoparticle photographs. This methodology presents a groundbreaking approach for swift and precise nanoparticle characterization. In another study, Lehan Yao et al. [33] seamlessly integrated liquid-phase transmission electron microscopy with a U-Net neural network-based analysis framework, thereby automating the efficient analysis of nanoparticle behavior in liquid-phase TEM videos by simulating training data from TEM images. This approach divulges pivotal insights into the dynamics of synthetic and biological nanomaterials at the nanoscale. Additionally, Simon Müller et al. [34] employed a 3D U-Net architecture to reliably segment volumetric images of electrodes, remedying the deficiencies of traditional methods in scenarios with inadequate contrast. The network's performance was enhanced by synthesizing learning data, and it was successfully applied to segment X-ray tomographic microscopy images of graphite-silicon composite electrodes, enabling statistical analysis of microstructural evolution during battery operation. Moreover, Leonid Mill et al. [35] introduced SYNTA as an innovative approach to providing training data for deep learning systems by generating synthetic, lifelike biomedical images. Demonstrating versatility in muscle fiber and histological section analysis, they showcased robust segmentation tasks achievable on previously unseen AD using solely synthetic training data, potentially expediting biomedical image analysis. Furthermore, Boyuan Ma et al. [36] proposed a transfer learning strategy addressing the challenge of limited or simulated data by integrating real and simulated data and expanding training through data mining. In a grain image segmentation task, a model trained with only 35% AD alongside acquired SD achieved segmentation performance comparable to a model trained with all AD.

In summary, current nanoparticle research predominantly emphasizes the segmentation analysis of nanoparticles themselves, with less attention directed towards the segmentation study of nanoparticle structure. While SD offers a swift and efficient means of exploring nanoparticle structure, existing synthetic datasets often rely on expensive professional material modeling software to accurately replicate real TEM images. In our study, we eschew costly professional material modeling software and instead utilize the most basic 3D modeling software to generate synthetic images. The experimental findings indicate that a mere 15% of AD suffices for improved nanoparticle segmentation outcomes, achieving a Dice coefficient of up to 0.91. This experiment underscores the feasibility of reasonably evaluating the performance of a model or experimental system based on real-life scenes by leveraging a small amount of AD. Concurrently, employing abundant SD enables the extension of dataset scale and diversity, covering a broader array of scenarios and feature combinations, thereby enhancing the generalizability of research findings. This integrated approach, leveraging both real and SD, presents an effective and cost-efficient method for investigating nanoparticle structures, facilitating the attainment of desired experimental outcomes and the advancement of scientific research.

Materials and methods

Our objective is to derive diverse insights into nanoparticle structures from high-resolution microscopic images acquired via electron or ion microscopy, necessitating a representative quantity of training data. However, prevailing hardware constraints pose a significant challenge as accessible microscope image data typically falls short in supporting robust training of deep convolutional neural networks (CNNs). A solution advocated in this study involves amalgamating a limited amount of AD with a substantial volume of SD to attain the requisite sample size and diversity for experimental purposes. This strategy not only mitigates research expenses but also safeguards the credibility and robustness of experiments.

Dataset establishment

The genuine dataset utilized in this investigation originates from BOIKO’s [37] ordered dataset 1, acquired via electron microscopy and comprising 750 images. These images depict nanoparticles adhering to a carbon surface in an organized fashion, revealing various geometrical patterns. Observations include the presence of curved and straight line formations, individual nanoparticles detached from surrounding structures, contrasting gradations from light to dark, and the presence of large luminous particles, often forming extensive arrays or contaminants. These characteristics are exemplified by the AD depicted in Fig 1. In our study, our primary focus was on segmenting nanoparticle structures with circular formations. To streamline the analysis process and minimize ambiguity, we deliberately selected images exhibiting clearly defined circular rings, as the inclusion of mixed image types would complicate the analysis. This selection criterion is illustrated in Fig 1(a).

For the SD, we employed K-3D [38], a basic 3D modeling software package, as a rendering tool to generate images portraying nanoparticle structures closely resembling their real counterparts. As illustrated by the two synthesized data forms depicted in Fig 1(b), the first image meticulously mimics AD to provide an authentic reflection of the real-world scene, whereas the second image is synthesized with the objective of optimizing nanoparticle circle segmentation by simplifying image components, thereby yielding more precise results.

In this study, the real and synthetic datasets undergo initial labeling with the labelme tool, followed by extensive data augmentation. These augmentations encompass diverse transformations, including random horizontal flipping, vertical flipping, mirror symmetry, affine transformations, rotations, Gaussian noise addition, contrast modifications, scale transformations, panning, and more. From the augmented datasets, 400 samples are designated for training, while an additional 100 samples are reserved for testing, constituting the experimental dataset. Each image has a resolution of 321×321 pixels.
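As an illustration, a few of the augmentations listed above (flips, 90-degree rotations, Gaussian noise, contrast scaling) can be sketched with NumPy. This is a minimal sketch, not the exact pipeline used in the study; the probabilities and noise/contrast parameters are assumptions.

```python
import numpy as np

def augment(img: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    """Apply a random subset of simple augmentations to a grayscale patch."""
    out = img.astype(np.float32)
    if rng.random() < 0.5:                       # random horizontal flip
        out = out[:, ::-1]
    if rng.random() < 0.5:                       # random vertical flip
        out = out[::-1, :]
    if rng.random() < 0.5:                       # random 90/180/270-degree rotation
        out = np.rot90(out, k=int(rng.integers(1, 4)))
    if rng.random() < 0.5:                       # additive Gaussian noise (assumed sigma)
        out = out + rng.normal(0.0, 5.0, size=out.shape)
    if rng.random() < 0.5:                       # contrast modification about the mean
        out = (out - out.mean()) * rng.uniform(0.8, 1.2) + out.mean()
    return np.clip(out, 0.0, 255.0)

rng = np.random.default_rng(0)
patch = rng.integers(0, 256, size=(321, 321)).astype(np.float32)  # 321x321 as in the study
aug = augment(patch, rng)
print(aug.shape)  # (321, 321)
```

The geometric transforms must be applied identically to the image and its label mask; the photometric ones (noise, contrast) only to the image.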

Within the training dataset, genuine and synthesized data are randomly sampled from pools comprising 510 and 850 instances, respectively, post data augmentation, in accordance with specific proportions. The selection of 100 authentic data points within the test set was conducted at random from a pool of 510 samples. Illustrative instances featuring real and synthetic data images alongside their corresponding labeled representations are depicted in Fig 2. Specifically, Fig 2(a) presents the unprocessed images of genuine and synthesized data, while Fig 2(b) showcases their labeled counterparts.
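The training-set composition described above can be sketched as follows. The pool sizes (510 AD, 850 SD) and the 400-image training set are taken from the study; the file names and the 60/340 split used here are illustrative (the ratio was varied during the experiments).

```python
import random

def build_training_set(real_pool, synth_pool, n_real, n_synth, seed=0):
    """Randomly draw n_real authentic and n_synth synthetic samples
    and shuffle them into one mixed training set."""
    rng = random.Random(seed)
    batch = rng.sample(real_pool, n_real) + rng.sample(synth_pool, n_synth)
    rng.shuffle(batch)
    return batch

real_pool = [f"real_{i}.png" for i in range(510)]    # 510 augmented AD images
synth_pool = [f"synth_{i}.png" for i in range(850)]  # 850 augmented SD images
train = build_training_set(real_pool, synth_pool, n_real=60, n_synth=340)
print(len(train))  # 400
```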

Fig 2. Illustrative comparison between AD and SD images and their labels.

(a) Original images; (b) Labeled images.

https://doi.org/10.1371/journal.pone.0311228.g002

Model selection

For the purpose of target segmentation and scene reconstruction, the detection of nanoparticles is imperative. We opted to employ the U-Net [39] model as the framework responsible for detecting the circular structure of each nanoparticle (e.g., see Fig 3), thereby facilitating a comparison with data augmentation algorithms. U-Net is a supervised learning approach widely employed in materials and medical image processing [11, 40, 41]. Operating as an encoder-decoder network, U-Net's encoder conducts repeated convolution and pooling operations on the input image to extract and condense feature information, while the decoder up-samples and deconvolves the encoded features to gradually restore the image's spatial resolution. At the midpoint between the encoder and decoder lies the bottleneck layer, where information from the lower layers is extracted and shared as output, preserving crucial information. To prevent the loss of low-level detail during encoding, U-Net employs skip connections, which transfer features from the downsampling path to the upsampling path, transmitting bottom-layer information directly to the top layers and thus enhancing the network's pixel localization accuracy. During training, the models underwent co-training using batch gradient descent on mini-batches of 10 images, with the ratio of real to SD not fixed. To ensure fairness, all models were trained with the same methodology, detailed in the "Training setup" section.

Evaluation indicators

Nanoparticle circular structures exemplify a segmentation task wherein a proficient algorithm must precisely detect and segment these circular structures in each image. Subsequent to segmentation, researchers extract and scrutinize the properties of the segmented nanoparticle circular structures to discern the correlation between microstructure and macroscopic material properties. In practical applications, various forms of noise may be introduced during sample preparation, significantly impacting the segmentation of nanoparticle circular structures. To effectively assess the algorithm’s performance, we employed several metrics, including Mean Intersection Over Union (MIoU) [42, 43], Kappa coefficient (Kappa) [44], Accuracy (Acc) [43], and Dice similarity index [45].

(1) MIoU [42, 43] (Mean Intersection Over Union) is a standard metric for semantic segmentation that calculates the average of the ratio of intersection to union across all k+1 categories. Below is its mathematical expression:

MIoU = \frac{1}{k+1} \sum_{i=0}^{k} \frac{p_{ii}}{\sum_{j=0}^{k} p_{ij} + \sum_{j=0}^{k} p_{ji} - p_{ii}} (1)

In this context, “i” represents the true value, “j” represents the predicted value, and “pij” denotes the number of pixels that predict “i” to “j”. Thus, the numerator denotes the intersection between the true label and the predicted result, while the denominator signifies their union.
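A minimal NumPy sketch of Eq. (1), accumulating the p_ij pixel counts into a confusion matrix and averaging the per-class IoU. This is illustrative, not the evaluation code used in the study.

```python
import numpy as np

def mean_iou(true: np.ndarray, pred: np.ndarray, num_classes: int) -> float:
    """MIoU from the confusion matrix p, where p[i, j] counts pixels of
    true class i predicted as class j (Eq. 1)."""
    p = np.zeros((num_classes, num_classes), dtype=np.int64)
    np.add.at(p, (true.ravel(), pred.ravel()), 1)   # accumulate p_ij counts
    inter = np.diag(p).astype(np.float64)           # p_ii: intersection per class
    union = p.sum(axis=1) + p.sum(axis=0) - inter   # row sum + col sum - p_ii
    return float(np.mean(inter / np.maximum(union, 1)))

true = np.array([[0, 0], [1, 1]])
pred = np.array([[0, 1], [1, 1]])
print(round(mean_iou(true, pred, 2), 4))  # → 0.5833  (mean of IoU 1/2 and 2/3)
```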

(2) The Kappa coefficient [44] serves as a metric for evaluating consistency and can additionally gauge the efficacy of classification. Consistency refers to the alignment between model predictions and actual classification outcomes. The Kappa coefficient ranges from -1 to 1, with 1 denoting perfect consistency, 0 indicating agreement no better than chance, and -1 representing complete inconsistency. Generally, a Kappa coefficient between 0.4 and 0.6 signifies moderate agreement for most tasks, while values exceeding 0.6 indicate strong agreement. The formula for the Kappa coefficient is:

Kappa = \frac{p_0 - p_e}{1 - p_e} (2)

Here, "p0" denotes the observed proportion of agreement, i.e., the fraction of the total sample for which the model's predictions align exactly with the actual observations. "pe" represents the proportion of stochastic agreement, i.e., the agreement between predictions and observations that would be anticipated under complete randomness. In a dichotomous scenario, "pe" is computed by multiplying the marginal probabilities of each category and summing the two products.
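In the dichotomous case described above, Kappa can be computed directly from the four cells of a binary confusion matrix; a minimal sketch follows (the counts are made up for illustration).

```python
def cohen_kappa(tp: int, fp: int, fn: int, tn: int) -> float:
    """Cohen's Kappa (Eq. 2) from a binary confusion matrix.
    p0 is the observed agreement; pe is the chance agreement, obtained by
    multiplying the marginal probabilities of each class and summing."""
    n = tp + fp + fn + tn
    p0 = (tp + tn) / n
    pe = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / (n * n)
    return (p0 - pe) / (1.0 - pe)

# Illustrative counts: 40 true positives, 5 false positives,
# 10 false negatives, 45 true negatives.
print(round(cohen_kappa(tp=40, fp=5, fn=10, tn=45), 4))  # → 0.7
```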

(3) Acc [43] serves as a metric for assessing the efficacy of a classification model, representing the percentage of correct predictions made by the model out of the total number of predictions. Greater accuracy indicates a more proficient classifier. The formula for calculating accuracy is:

Acc = \frac{pred}{all} (3)

In this context, "pred" represents the count of samples correctly classified by the model, while "all" denotes the total number of sample predictions made by the model.
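On label maps, Eq. (3) reduces to the fraction of matching pixels; a minimal NumPy sketch (illustrative, not the study's evaluation code):

```python
import numpy as np

def pixel_accuracy(true: np.ndarray, pred: np.ndarray) -> float:
    """Acc = correctly classified pixels / total pixels (Eq. 3)."""
    return float((true == pred).mean())

true = np.array([[0, 0], [1, 1]])
pred = np.array([[0, 1], [1, 1]])
print(pixel_accuracy(true, pred))  # → 0.75  (3 of 4 pixels match)
```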

(4) The Dice similarity index [45] functions as a measure of set similarity, employed to quantify the similarity between two samples. Frequently utilized for assessing the efficacy of segmentation algorithms, this index yields a score ranging from 1 (indicating optimal segmentation) to 0 (representing poor segmentation). Below is its mathematical expression:

Dice = \frac{2\,|pred \cap true|}{|pred| + |true|} (4)

In this context, "pred" refers to the set of predicted values, while "true" represents the set of true values. The numerator is the intersection of "pred" and "true," multiplied by 2 to compensate for the double counting of common elements in the denominator, which is the sum of the sizes of the two sets.
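For binary segmentation masks, Eq. (4) can be sketched directly with NumPy (illustrative only):

```python
import numpy as np

def dice(true: np.ndarray, pred: np.ndarray) -> float:
    """Dice = 2 * |pred ∩ true| / (|pred| + |true|) for binary masks (Eq. 4)."""
    inter = np.logical_and(true, pred).sum()
    return float(2.0 * inter / (true.sum() + pred.sum()))

true = np.array([[1, 1], [0, 1]])
pred = np.array([[1, 0], [0, 1]])
print(round(dice(true, pred), 4))  # → 0.8  (2*2 / (3 + 2))
```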

Results and discussion

Throughout the training regimen, the data underwent random sampling. The training iterations spanned 100,000 cycles, with an initial learning rate of 0.01. Stochastic Gradient Descent (SGD) served as the optimizer, the DiceLoss function was adopted as the loss function, and the batch size was set to 16. The deep learning platform employed for this experiment was Baidu PaddlePaddle, and detailed specifications of the hardware environment are documented in Table 1. The construction, training, and testing of the network relied exclusively on the PaddleSeg framework [46] throughout the experiment. This framework provided comprehensive support, enhancing the efficiency and controllability of the experimental procedure.
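For reference, a PaddleSeg-style configuration capturing the settings above (U-Net, SGD with initial learning rate 0.01, DiceLoss, batch size 16, 100,000 iterations) might look as follows. This is a hedged sketch: the exact key names, the learning-rate scheduler, and the class count are assumptions and should be checked against the PaddleSeg version used.

```yaml
batch_size: 16            # as stated above
iters: 100000             # training iterations

model:
  type: UNet
  num_classes: 2          # assumed: background + nanoparticle ring

optimizer:
  type: sgd               # Stochastic Gradient Descent

lr_scheduler:
  type: PolynomialDecay   # assumed; the paper only states the initial rate
  learning_rate: 0.01

loss:
  types:
    - type: DiceLoss
  coef: [1]
```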

To ascertain the ideal proportion of authentic to SD within the training dataset, initial experiments were performed exclusively on a dataset comprising solely AD, serving as a control condition for this investigation. Following this, trials were undertaken using a blend of AD and SD in varying ratios to ascertain the optimal fusion of the two. Subsequently, a suite of semantic segmentation models founded upon nanoparticle architectures underwent scrutiny, culminating in the selection of the U-Net model as the primary training framework. This iterative process was aimed at identifying the most efficacious data amalgamation for training, ensuring the proficient acquisition of knowledge pertaining to nanoparticle structures.

The findings from five experimental iterations conducted on a dataset of 400 AD images are shown in Table 2. The averaged metrics are a Miou of 0.8362, Acc of 0.9967, Kappa of 0.8046, and Dice coefficient of 0.9023.

Table 3 illustrates the outcomes of experiments conducted on proportional blends of AD and SD. Examination of the table reveals four experimental cohorts whose Miou metrics surpass the Miou of the control group, set at 0.8362. These four cohorts are as follows: 60 AD and 340 SD, with Miou of 0.8517, Acc of 0.9971, Kappa of 0.8264, and Dice coefficient of 0.9132; 85 AD and 315 SD, yielding Miou of 0.8391, Acc of 0.9967, Kappa of 0.8089, and Dice of 0.9045; 90 AD and 310 SD, resulting in Miou of 0.8460, Acc of 0.9970, Kappa of 0.8186, and Dice of 0.9093; and finally, 100 AD and 300 SD, with Miou of 0.8394, Acc of 0.9968, Kappa of 0.8093, and Dice coefficient of 0.9047.

Table 4 presents four experimental configurations that surpassed the control Miou. Five experiments were conducted for each of these configurations, and the resulting metrics were averaged. Notably, in the case of the experiment employing 60 instances of AD paired with 340 instances of SD, the Miou reached 0.8476, Acc attained 0.9970, Kappa achieved 0.8207, and Dice coefficient reached 0.9103. This particular experimental cohort exhibited the most superior performance, displaying a Miou increase of 0.0114 compared to the control group while utilizing the least amount of AD. Consequently, it was concluded that the optimal segmentation of the nanoparticle circular structure was achieved through the utilization of 60 instances of AD paired with 340 instances of SD.

Table 4. Results of five experiments for the four best results groups.

https://doi.org/10.1371/journal.pone.0311228.t004

Fig 4 depicts the line graph corresponding to the attainment of a Miou value of 0.8533 with 60 instances of AD and 340 instances of SD. Examination of the graph reveals the convergence of the Miou metric. The light blue line delineates the results recorded at intervals of 1000 iterations, while the dark blue line represents the smoothed curve derived from these results.

Fig 4. Miou line plot for 60 AD and 340 SD images (Miou = 0.8533).

https://doi.org/10.1371/journal.pone.0311228.g004

Fig 5 presents the prediction maps generated from an evaluation employing 60 AD instances and 340 SD instances, yielding a Miou score of 0.8533. Within the results, we have chosen four distinct images showcasing nanoparticle circular structures for analysis. Upon visual inspection, image a, characterized by a relatively uncomplicated background and simplistic line segments, exhibits a prediction that appears less optimal compared to image b. This discrepancy in prediction quality could stem from two potential factors: firstly, the possibility of nanoparticle adhesion, and secondly, despite the background simplicity, the presence of numerous scattered nanoparticles surrounding the central circle in image a. Conversely, image b portrays a background featuring intricate lines formed by nanoparticles, stark light-dark contrast, and prominent large particles. Despite the minor prominence of the nanoparticle circles within the overall composition, the predicted results exhibit minimal noise. Image c accentuates heightened nanoparticle adhesion compared to the surrounding area, evidenced by nearly every nanoparticle adhering to its neighbors. Meanwhile, image d presents a scenario where two circular structures overlap, with the larger circle enclosing the smaller one. Notably, our segmentation model adeptly distinguishes and delineates these structures. In summary, while image a’s background simplicity hints at potential factors influencing differences in prediction outcomes such as nanoparticle adhesion and fragmented background particles, images b and d underscore the model’s capacity to accurately predict and delineate nanoparticle structures even amidst complex backgrounds. Image c further highlights the prevalence of robust nanoparticle adhesion.

Fig 5. Prediction maps for 60 AD and 340 SD images (Miou = 0.8533).

https://doi.org/10.1371/journal.pone.0311228.g005

After the ratio of real to SD was determined, ten semantic segmentation models (Deeplabv3 [47], PspNet [48], CCNet [49], FastFCN [50], PFPNNet [51], GINet [52], ENCNet [53], BiseNet [54], DaNet [55], and U2Net [56]) were evaluated; the experimental results are shown in Table 5.

Table 5. Evaluation results of ten semantic segmentation models.

https://doi.org/10.1371/journal.pone.0311228.t005

Conclusion

In the realm of nanomaterials, researchers often rely on microscopy techniques to capture images of nanoparticle structures. However, this approach [8, 30, 32] presents several drawbacks, including its cumbersome nature, time-intensive process, and high associated costs. Particularly when dealing with a substantial volume of experimental data, the feasibility of employing microscopy techniques becomes significantly constrained. Hence, we propose a nanoparticle structure segmentation methodology grounded in SD. This approach entails constructing a dataset comprising a modest quantity of AD supplemented by a substantial volume of SD, thereby ensuring the requisite sample size and diversity essential for segmenting nanoparticle structures. By leveraging a limited quantity of authentic data, we can effectively assess the performance of the model or experimental system within realistic scenarios. Simultaneously, the integration of copious SD enhances dataset size and diversity, encompassing a broader array of scenarios and feature combinations, consequently rendering the results more universally applicable and robust. Among the 20 experimental iterations conducted, our findings indicate that employing 60 instances of authentic data and 340 instances of SD yields optimal results, surpassing those of the control group (comprising exclusively AD) across all assessment metrics. Lastly, we subjected an additional ten models to scrutiny for comparative analysis, ultimately selecting the U-Net model as the most suitable for nanoparticle structure segmentation, boasting Miou, Acc, Kappa, and Dice coefficients of 0.8476, 0.9970, 0.8207, and 0.9103, respectively.

The integration of authentic and synthetic datasets offers a potent and cost-efficient avenue for investigating nanoparticle structures, facilitating the attainment of desired experimental outcomes and the progression of scientific inquiry. Leveraging synthetic datasets for nanoparticle structure analysis circumvents numerous drawbacks, enhancing productivity, while deep learning methodologies exhibit superior adaptability, refining the techniques and processes employed in TEM nanoparticle image segmentation research.

Supporting information

S1 Data. Availability of data and material.

https://doi.org/10.1371/journal.pone.0311228.s001

(ZIP)

References

  1. 1. Dreaden EC, Alkilany AM, Huang X, Murphy CJ, El-Sayed MA. The golden age: gold nanoparticles for biomedicine. Chem Soc Rev. 2012;41(7):2740–79. Epub 2011/11/24. pmid:22109657
  2. 2. Lohse SE, Murphy CJ. Applications of colloidal inorganic nanoparticles: from medicine to energy. J Am Chem Soc. 2012;134(38):15607–20. Epub 2012/09/01. pmid:22934680
  3. 3. Sun Q, Wang YA, Li LS, Wang D, Zhu T, Xu J, et al. Bright, multicoloured light-emitting diodes based on quantum dots. Nature Photonics. 2007;1(12):717–22.
  4. 4. Vance ME, Kuiken T, Vejerano EP, McGinnis SP, Hochella MF Jr., Rejeski D, et al. Nanotechnology in the real world: Redeveloping the nanomaterial consumer products inventory. Beilstein J Nanotechnol. 2015;6:1769–80. Epub 2015/10/02. pmid:26425429
  5. 5. Zhang Z, Wang J, Nie X, Wen T, Ji Y, Wu X, et al. Near infrared laser-induced targeted cancer therapy using thermoresponsive polymer encapsulated gold nanorods. J Am Chem Soc. 2014;136(20):7317–26. Epub 2014/04/30. pmid:24773323
  6. 6. Kongkanand Anusorn, Tvrdy Kevin, Takechi Kensuke, Kuno Masaru, and Kamat Prashant V. Quantum Dot Solar Cells: Tuning Photoresponse through Size and Shape Control of CdSe-TiO2 Architecture. Journal of the American Chemical Society, 2008, 130(12), 4007–4015. pmid:18311974
  7. 7. Mackey MA, Ali MR, Austin LA, Near RD, El-Sayed MA. The most effective gold nanorod size for plasmonic photothermal therapy: theory and in vitro experiments. J Phys Chem B. 2014;118(5):1319–26. pmid:24433049
  8. 8. Mulhopt S, Diabate S, Dilger M, Adelhelm C, Anderlohr C, Bergfeldt T, et al. Characterization of Nanoparticle Batch-To-Batch Variability. Nanomaterials (Basel). 2018;8(5). pmid:29738461
  9. 9. Pu Y, Niu Y, Wang Y, Liu S, Zhang B. Statistical morphological identification of low-dimensional nanomaterials by using TEM. Particuology. 2022;61:11–7.
  10. 10. Horwath JP, Zakharov DN, Mégret R, Stach EA. Understanding important features of deep learning models for segmentation of high-resolution transmission electron microscopy images. npj Computational Materials, 2020, 6, 108.
  11. 11. Bals J, Epple M. Deep learning for automated size and shape analysis of nanoparticles in scanning electron microscopy. RSC Adv. 2023;13(5):2795–802. Epub 2023/02/10. pmid:36756420
  12. 12. Bai H, Wu S. Deep-learning-based nanowire detection in AFM images for automated nanomanipulation. Nanotechnology and Precision Engineering. 2021;4(1).
  13. 13. Lee B, Yoon S, Lee JW, Kim Y, Chang J, Yun J, et al. Statistical Characterization of the Morphologies of Nanoparticles through Machine Learning Based Electron Microscopy Image Analysis. ACS Nano. 2020;14(12):17125–33. Epub 2020/11/25. pmid:33231065
  14. 14. Meng Y, Zhang Z, Yin H, Ma T. Automatic detection of particle size distribution by image analysis based on local adaptive canny edge detection and modified circular Hough transform. Micron. 2018;106:34–41. Epub 2018/01/06. pmid:29304431
  15. 15. Masubuchi S, Watanabe E, Seo Y, Okazaki S, Sasagawa T, Watanabe K, et al. Deep-learning-based image segmentation integrated with optical microscopy for automatically searching for two-dimensional materials. npj 2D Materials and Applications. 2020;4(1).
  16. 16. Hojat N., Gentile P., Ferreira A.M. et al. Automatic pore size measurements from scanning electron microscopy images of porous scaffolds. Porous Mater 30, 2023, 93–101.
  17. Lin R, Zhang R, Wang C, Yang XQ, Xin HL. TEMImageNet training library and AtomSegNet deep-learning models for high-precision atom segmentation, localization, denoising, and deblurring of atomic-resolution images. Sci Rep. 2021;11(1):5386. Epub 2021/03/10. pmid:33686158
  18. Li W, Field KG, Morgan D. Automated defect analysis in electron microscopic images. npj Computational Materials. 2018;4(1).
  19. Grulke EA, Wu X, Ji Y, Buhr E, Yamamoto K, Song NW, et al. Differentiating gold nanorod samples using particle size and shape distributions from transmission electron microscope images. Metrologia. 2018;55(2):254–67. Epub 2018/02/28. pmid:32410745
  20. Yildirim B, Cole JM. Bayesian Particle Instance Segmentation for Electron Microscopy Image Quantification. J Chem Inf Model. 2021;61(3):1136–49. pmid:33682402
  21. Faraz K, Grenier T, Ducottet C, Epicier T. Deep learning detection of nanoparticles and multiple object tracking of their dynamic evolution during in situ ETEM studies. Sci Rep. 2022;12(1):2484. Epub 2022/02/17. pmid:35169206
  22. Monchot P, Coquelin L, Guerroudj K, Feltin N, Delvallee A, Crouzier L, et al. Deep Learning Based Instance Segmentation of Titanium Dioxide Particles in the Form of Agglomerates in Scanning Electron Microscopy. Nanomaterials (Basel). 2021;11(4). Epub 2021/05/01. pmid:33918779
  23. Sun Z, Shi J, Wang J, Jiang M, Wang Z, Bai X, et al. A deep learning-based framework for automatic analysis of the nanoparticle morphology in SEM/TEM images. Nanoscale. 2022;14(30):10761–72. pmid:35790114
  24. Groom DJ, Yu K, Rasouli S, Polarinakis J, Bovik AC, Ferreira PJ. Automatic segmentation of inorganic nanoparticles in BF TEM micrographs. Ultramicroscopy. 2018;194:25–34. pmid:30056278
  25. Ruhle B, Krumrey JF, Hodoroaba VD. Workflow towards automated segmentation of agglomerated, non-spherical particles from electron microscopy images using artificial neural networks. Sci Rep. 2021;11(1):4942. pmid:33654161
  26. Lin Z, Chou WC, Cheng YH, He C, Monteiro-Riviere NA, Riviere JE. Predicting Nanoparticle Delivery to Tumors Using Machine Learning and Artificial Intelligence Approaches. Int J Nanomedicine. 2022;17:1365–79. pmid:35360005
  27. Ke W, Crist RM, Clogston JD, Stern ST, Dobrovolskaia MA, Grodzinski P, Jensen MA. Trends and patterns in cancer nanotechnology research: A survey of NCI's caNanoLab and nanotechnology characterization laboratory. Adv Drug Deliv Rev. 2022;191:114591. Epub 2022 Nov 1. pmid:36332724
  28. Maier AK, Syben C, Stimpel B, Wurfl T, Hoffmann M, Schebesch F, et al. Learning with Known Operators reduces Maximum Training Error Bounds. Nat Mach Intell. 2019;1(8):373–80. pmid:31406960
  29. Pan SJ, Yang Q. A Survey on Transfer Learning. IEEE Transactions on Knowledge and Data Engineering. 2010;22(10):1345–59.
  30. Lin B, Emami N, Santos DA, Luo Y, Banerjee S, Xu B-X. A deep learned nanowire segmentation model using synthetic data augmentation. npj Computational Materials. 2022;8(1).
  31. Mill L, Wolff D, Gerrits N, Philipp P, Kling L, Vollnhals F, et al. Synthetic Image Rendering Solves Annotation Problem in Deep Learning Nanoparticle Segmentation. Small Methods. 2021;5(7):e2100223. pmid:34927995
  32. Cid-Mejias A, Alonso-Calvo R, Gavilan H, Crespo J, Maojo V. A deep learning approach using synthetic images for segmenting and estimating 3D orientation of nanoparticles in EM images. Comput Methods Programs Biomed. 2021;202:105958. Epub 2021/02/16. pmid:33588253
  33. Yao L, Ou Z, Luo B, Xu C, Chen Q. Machine Learning to Reveal Nanoparticle Dynamics from Liquid-Phase TEM Videos. ACS Cent Sci. 2020;6(8):1421–30. pmid:32875083
  34. Muller S, Sauter C, Shunmugasundaram R, Wenzler N, De Andrade V, De Carlo F, et al. Deep learning-based segmentation of lithium-ion battery microstructures enhanced by artificially generated electrodes. Nat Commun. 2021;12(1):6205. pmid:34707110
  35. Mill L, Aust O, Ackermann JA, Burger P, Pascual M, Palumbo-Zerr K, et al. SYNTA: A novel approach for deep learning-based image analysis in muscle histopathology using photo-realistic synthetic data. arXiv preprint. 2022. arXiv:2207.14650. https://doi.org/10.48550/arXiv.2207.14650
  36. Ma B, Wei X, Liu C, Ban X, Huang H, Wang H, et al. Data augmentation in microscopic images for material data mining. npj Computational Materials. 2020;6(1).
  37. Boiko DA, Pentsak EO, Cherepanova VA, Ananikov VP. Electron microscopy dataset for the recognition of nanoscale ordering effects and location of nanoparticles. Sci Data. 2020;7(1):101. pmid:32214102
  38. K-3D, version 0.8.0.1. SourceForge; April 2010. https://sourceforge.net/projects/k3d/postdownload. Accessed 20 June 2023.
  39. Ronneberger O, Fischer P, Brox T. U-Net: Convolutional Networks for Biomedical Image Segmentation. In: Medical Image Computing and Computer-Assisted Intervention—MICCAI 2015. Lecture Notes in Computer Science. 2015. p. 234–41.
  40. Wu J, Xu D, Yang C, Gui W. Ingot oxide slag detection using two-stage UNet network based on mixed supervised learning. Neural Computing and Applications. 2023;35(25):18277–92.
  41. Ibtehaz N, Rahman MS. MultiResUNet: Rethinking the U-Net architecture for multimodal biomedical image segmentation. Neural Netw. 2020;121:74–87. Epub 2019/09/20. pmid:31536901
  42. Minaee S, Boykov Y, Porikli F, Plaza A, Kehtarnavaz N, Terzopoulos D. Image Segmentation Using Deep Learning: A Survey. IEEE Trans Pattern Anal Mach Intell. 2022;44(7):3523–42. pmid:33596172
  43. Ulku I, Akagündüz E. A Survey on Deep Learning-based Architectures for Semantic Segmentation on 2D Images. Applied Artificial Intelligence. 2022;36(1).
  44. McHugh ML. Interrater reliability: the kappa statistic. Biochem Med (Zagreb). 2012;22(3):276–82.
  45. Wang R, Lei T, Cui R, Zhang B, Meng H, Nandi AK. Medical image segmentation using deep learning: A survey. IET Image Processing. 2022;16(5):1243–67.
  46. Liu Y, Chu L, Chen G, Wu Z, Chen Z, Lai B, Hao Y. PaddleSeg: A high-efficient development toolkit for image segmentation. 2021. https://doi.org/10.48550/arXiv.2101.06175.
  47. Chen L-C, Papandreou G, Schroff F, Adam H. Rethinking Atrous Convolution for Semantic Image Segmentation. 2017. arXiv:1706.05587.
  48. Zhao H, Shi J, Qi X, Wang X, Jia J. Pyramid Scene Parsing Network. CVPR 2017. 2017:2881–90. https://doi.org/10.48550/arXiv.1612.01105.
  49. Huang Z, Wang X, Wei Y, Huang L, Shi H, Liu W, Huang TS. CCNet: Criss-Cross Attention for Semantic Segmentation. Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV). 2019:603–12. https://doi.org/10.48550/arXiv.1811.11721.
  50. Wang H, Miao F. Building extraction from remote sensing images using deep residual U-Net. European Journal of Remote Sensing. 2022;55(1):71–85.
  51. Kim S-W, Kook H-K, Sun J-Y, Kang M-C, Ko S-J. Parallel Feature Pyramid Network for Object Detection. Proceedings of the European Conference on Computer Vision (ECCV). 2018. p. 234–50.
  52. Wu T, Lu Y, Zhu Y, Zhang C, Wu M, Ma Z, Guo G. GINet: Graph Interaction Network for Scene Parsing. Computer Vision–ECCV 2020: 16th European Conference. 2020:34–51. https://doi.org/10.48550/arXiv.2009.06160.
  53. Cheng G, Lai P, Gao D, Han J. Class attention network for image recognition. Science China Information Sciences. 2023;66(3).
  54. Yu C, Wang J, Peng C, Gao C, Yu G, Sang N. BiSeNet: Bilateral Segmentation Network for Real-time Semantic Segmentation. Proceedings of the European Conference on Computer Vision (ECCV). 2018:334–49. https://doi.org/10.48550/arXiv.1808.00897.
  55. Xue H, Liu C, Wan F, Jiao J, Ji X, Ye Q. DANet: Divergent Activation for Weakly Supervised Object Localization. Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV). 2019. p. 6589–98.
  56. Qin X, Zhang Z, Huang C, Dehghan M, Zaiane OR, Jagersand M. U2-Net: Going Deeper with Nested U-Structure for Salient Object Detection. Pattern Recognition. 2020;106:107404.