
Edge Artificial Intelligence (AI) for real-time automatic quantification of filariasis in mobile microscopy

  • Lin Lin,

    Roles Conceptualization, Data curation, Formal analysis, Investigation, Methodology, Software, Validation, Visualization, Writing – original draft, Writing – review & editing

    Affiliations Spotlab, Madrid, Spain, Biomedical Image Technologies, ETSI Telecomunicación, Universidad Politécnica de Madrid, Madrid, Spain, CIBER de Bioingeniería, Biomateriales y Nanomedicina, Instituto de Salud Carlos III, Madrid, Spain

  • Elena Dacal,

    Roles Conceptualization, Data curation, Formal analysis, Funding acquisition, Investigation, Methodology, Project administration, Validation, Visualization, Writing – original draft, Writing – review & editing

    Affiliation Spotlab, Madrid, Spain

  • Nuria Díez,

    Roles Data curation, Project administration, Writing – review & editing

    Affiliation Spotlab, Madrid, Spain

  • Claudia Carmona,

    Roles Data curation, Investigation, Resources, Validation, Writing – review & editing

    Affiliation Malaria and Emerging Parasitic Diseases Laboratory, National Microbiology Centre, Instituto de Salud Carlos III—Madrid, Madrid, Spain

  • Alexandra Martin Ramirez,

    Roles Data curation, Resources, Validation, Writing – review & editing

    Affiliations Malaria and Emerging Parasitic Diseases Laboratory, National Microbiology Centre, Instituto de Salud Carlos III—Madrid, Madrid, Spain, Centro de Investigación Biomédica en Red de Enfermedades Infecciosas (CIBERINFEC) Instituto de Salud Carlos III—Madrid, Madrid, Spain

  • Lourdes Barón Argos,

    Roles Data curation, Investigation, Resources, Validation, Writing – review & editing

    Affiliation Malaria and Emerging Parasitic Diseases Laboratory, National Microbiology Centre, Instituto de Salud Carlos III—Madrid, Madrid, Spain

  • David Bermejo-Peláez,

    Roles Conceptualization, Data curation, Investigation, Software, Visualization, Writing – review & editing

    Affiliation Spotlab, Madrid, Spain

  • Carla Caballero,

    Roles Software, Writing – review & editing

    Affiliation Spotlab, Madrid, Spain

  • Daniel Cuadrado,

    Roles Resources, Software, Writing – review & editing

    Affiliation Spotlab, Madrid, Spain

  • Oscar Darias-Plasencia,

    Roles Resources, Software, Writing – review & editing

    Affiliation Spotlab, Madrid, Spain

  • Jaime García-Villena,

    Roles Resources, Software, Writing – review & editing

    Affiliation Spotlab, Madrid, Spain

  • Alexander Bakardjiev,

    Roles Software, Writing – review & editing

    Affiliation Spotlab, Madrid, Spain

  • Maria Postigo,

    Roles Conceptualization, Funding acquisition, Project administration, Writing – review & editing

    Affiliation Spotlab, Madrid, Spain

  • Ethan Recalde-Jaramillo,

    Roles Software, Supervision, Writing – review & editing

    Affiliations Spotlab, Madrid, Spain, Biomedical Image Technologies, ETSI Telecomunicación, Universidad Politécnica de Madrid, Madrid, Spain, CIBER de Bioingeniería, Biomateriales y Nanomedicina, Instituto de Salud Carlos III, Madrid, Spain

  • Maria Flores-Chavez,

    Roles Resources, Writing – review & editing

    Affiliations Malaria and Emerging Parasitic Diseases Laboratory, National Microbiology Centre, Instituto de Salud Carlos III—Madrid, Madrid, Spain, Fundación Mundo Sano, Madrid, Spain

  • Andrés Santos,

    Roles Funding acquisition, Supervision, Writing – review & editing

    Affiliations Biomedical Image Technologies, ETSI Telecomunicación, Universidad Politécnica de Madrid, Madrid, Spain, CIBER de Bioingeniería, Biomateriales y Nanomedicina, Instituto de Salud Carlos III, Madrid, Spain

  • María Jesús Ledesma-Carbayo ,

    Roles Conceptualization, Funding acquisition, Investigation, Methodology, Supervision, Writing – review & editing

    mj.ledesma@upm.es (MJL-C); jmrubio@isciii.es (JMR); miguel@spotlab.ai (ML-O)

    Affiliations Biomedical Image Technologies, ETSI Telecomunicación, Universidad Politécnica de Madrid, Madrid, Spain, CIBER de Bioingeniería, Biomateriales y Nanomedicina, Instituto de Salud Carlos III, Madrid, Spain

  • José M. Rubio ,

    Roles Conceptualization, Funding acquisition, Investigation, Methodology, Supervision, Validation, Writing – review & editing

    mj.ledesma@upm.es (MJL-C); jmrubio@isciii.es (JMR); miguel@spotlab.ai (ML-O)

    Affiliations Malaria and Emerging Parasitic Diseases Laboratory, National Microbiology Centre, Instituto de Salud Carlos III—Madrid, Madrid, Spain, Centro de Investigación Biomédica en Red de Enfermedades Infecciosas (CIBERINFEC) Instituto de Salud Carlos III—Madrid, Madrid, Spain

  • Miguel Luengo-Oroz

    Roles Conceptualization, Funding acquisition, Investigation, Methodology, Project administration, Supervision, Validation, Writing – review & editing

    mj.ledesma@upm.es (MJL-C); jmrubio@isciii.es (JMR); miguel@spotlab.ai (ML-O)

    Affiliation Spotlab, Madrid, Spain


Abstract

Filariasis, a neglected tropical disease caused by roundworms, is a significant public health concern in many tropical countries. Microscopic examination of blood samples can detect and differentiate parasite species, but it is time consuming and requires expert microscopists, a resource that is not always available. In this context, artificial intelligence (AI) can assist in the diagnosis of this disease by automatically detecting and differentiating microfilariae. In line with the target product profile for lymphatic filariasis defined by the World Health Organization, we developed an edge AI system that runs on a smartphone whose camera is aligned with the ocular of an optical microscope and that detects and differentiates filarial species in real time without an internet connection. Our object detection algorithm, based on the Single-Shot Detection (SSD) MobileNet V2 model, was developed with 115 cases: 85 cases with 1903 fields of view and 3342 labels for model training, and 30 cases with 484 fields of view and 873 labels for model validation prior to clinical validation. It detects microfilariae at 10x magnification and distinguishes four species at 40x magnification: Loa loa, Mansonella perstans, Wuchereria bancrofti, and Brugia malayi. We validated our augmented microscopy system in the clinical environment by replicating the diagnostic workflow, encompassing examinations at 10x and 40x with the assistance of the AI models, analyzing 18 samples with the AI running on a mid-range smartphone. The system achieved an overall precision of 94.14%, recall of 91.90%, and F1 score of 93.01% for the screening algorithm, and 95.46%, 97.81%, and 96.62%, respectively, for the species differentiation algorithm. This innovative solution has the potential to support filariasis diagnosis and monitoring, particularly in resource-limited settings where access to expert technicians and laboratory equipment is scarce.

Author summary

Filariasis is a common tropical infectious disease. Depending on the parasite, it causes lymphoedema, elephantiasis, itching, blindness, and other conditions. It is estimated that more than 1 billion people require preventive chemotherapy to stop the spread of this infection. The diagnosis of this disease is made through microscopic examination of a blood smear by a human expert, who is not always available. In this study we propose an edge Artificial Intelligence (AI) system that detects and quantifies four species of microfilariae (Loa loa, Mansonella perstans, Wuchereria bancrofti and Brugia malayi) using the camera of a smartphone attached to an optical microscope with a 3D-printed adapter. The system works in real time and does not need internet connectivity, as the AI models run locally on a mid-range smartphone. We have replicated the diagnostic workflow that is typically performed by an expert microscopist, augmented by the support of the AI system.

1. Introduction

Filariasis is a tropical infectious disease caused by roundworms (Phylum Nematoda). At least eight filarial worms are hosted in humans. These are the causative agents of four types of disease: lymphatic filariasis, caused by Wuchereria bancrofti, Brugia malayi, and Brugia timori; onchocerciasis, caused by Onchocerca volvulus; loiasis, caused by Loa loa; and mansonellosis, caused by Mansonella perstans, Mansonella ozzardi, and Mansonella streptocerca. Among these, lymphatic filariasis and onchocerciasis have significant clinical and public health implications and are included in the World Health Organization (WHO) list of Neglected Tropical Diseases, while loiasis and mansonellosis have historically received much less attention [1–3].

In 2000, the WHO launched the Global Programme for the Elimination of Lymphatic Filariasis (GPELF), which set the goal of eliminating lymphatic filariasis as a public health problem in 58 countries by 2030 [4]. The program achieved a considerable reduction, but there are still 863 million people in 50 countries who require preventive chemotherapy (PC) [5]. Similarly, onchocerciasis affects over 20.9 million people, with at least 220 million in need of PC [6]. However, L. loa infection is hindering the elimination of lymphatic filariasis and onchocerciasis, as the elimination programmes for these diseases rely on ivermectin in mass drug administration (MDA), and ivermectin causes severe adverse effects in individuals with elevated levels of L. loa in the blood [7,8].

Studies have reported that M. perstans causes the most prevalent filariasis in Africa, with more than 100 million people estimated to be infected and 600 million living in 33 high-risk countries [9]; yet it is one of the most neglected filariases [2,3,10], and there are no control programs for it.

Correct diagnosis and appropriate treatment are paramount for the effective control and elimination of these parasites, and the approach depends on the filarial species. In addition to the ongoing elimination programmes for lymphatic filariasis and onchocerciasis, there have been increasing calls in recent years for treatment and control programmes for mansonellosis and loiasis [11–13]. WHO recommends the Alere Filariasis Test Strip (FTS) for all areas endemic for W. bancrofti and the Brugia Rapid Test for all areas endemic for Brugia spp. However, these tests are species-specific and do not account for co-infections [14]. Molecular diagnostic methods have also been applied in surveillance studies with good results, but they cannot be performed on site [15]. Microscopy remains the most widely used technique for all filarial species, enabling the detection of microfilariae in blood smears or skin snips. The routine examination consists of screening at low magnification (10x) and then using higher magnification (e.g., 40x) to identify the species. The sample must be scanned completely at 10x magnification before it can be reported as negative [16]. Nonetheless, diagnosis by microscopy is time-consuming and requires experienced microscopists, whose availability is not always assured [17,18]. In that sense, different studies have revealed the importance of mobile health (mHealth) in bringing diagnostics to the point of care and scaling access in low- and middle-income countries (LMICs) [19–21]. Notably, several investigations have reported the use of mobile microscopy for parasite detection, such as the LoaScope, a point-of-care microscope that automatically detects L. loa microfilariae in blood smears from video [22,23], or the SchistoScope, a mobile phone microscope for the screening of Schistosoma haematobium [24].

A possible tool to address the lack of trained specialists is the detection of parasites in microscopy images using Artificial Intelligence (AI). AI is revolutionizing the medical field and can be applied in different medical subfields [25,26]. The development of AI algorithms for microscopy depends on the digitization of samples, which can be done using digital microscopes with embedded cameras, or by converting a conventional optical microscope into a digital microscope using mobile phones or other image acquisition modules.

In a recent review by Fan et al. focusing on AI applications for peripheral blood films, 95 studies addressed malaria, 81 leukaemia, 72 leukocytes, 25 mixed cell types, 15 erythrocytes, and 1 myelodysplastic syndrome. Beyond the scope of peripheral blood films, limited attention was given to babesiosis, leishmaniasis, trypanosomiasis, etc. However, no work specifically addressing filariasis was identified in this review [27]. Beyond that review, our research found numerous studies reporting the detection of parasites in microscopy images, revealing the potential of AI for this task. Quinn et al. created one of the first deep learning algorithms, a four-layer convolutional neural network (CNN) trained from scratch for malaria image classification. For that, they used a 3D-printed adapter that aligns the mobile phone camera with the microscope eyepiece [28]. Davidson et al. presented a three-phase analysis to detect and count malaria parasites and their life cycle stages. The first phase detects red blood cells using the Faster R-CNN object detection algorithm, the second crops each detected cell and feeds it to a ResNet50 that classifies whether the cell is infected, and the third classifies the life cycle stage of the infected cell using a ResNet-34. They achieved 98.5% average precision in detecting RBCs, 99.8% in classifying the detected cells as infected or uninfected, and a mean square error of 0.23 in the stage classification. Images in this study were acquired by manually aligning the mobile phone camera with the microscope eyepiece [29]. Similarly, Holmström et al. presented a deep learning algorithm for the detection of soil-transmitted helminths (STH) and Schistosoma haematobium with a custom microscope scanner and the commercially available image analysis software platform WebMicroscope [30]. Dacal et al. presented an object detection algorithm for STH using Single-Shot multibox Detection (SSD) that runs on a smartphone [31]. Oyibo et al. presented an automated microscope with an image segmentation algorithm using a U-Net architecture for Schistosoma haematobium [32]. Dedhiya et al. introduced the first study that uses machine learning on thermal imaging to predict the viability of Onchocerca worms. In this work, they used five separate random forest classifiers, and the final classification was obtained using a voting mechanism [33]. D'Ambrosio et al. presented an algorithm that detects L. loa microfilariae in video by subtracting subsequent frames of the video, generating a single difference image, and using a local peak-finding routine to find microfilariae. They correlated the automated counts with manual counts and achieved 94% specificity and 100% sensitivity [22]. Elvana et al. presented a lymphatic filariasis detection system using a CNN, achieving an accuracy of 70% [34]. As far as we know, there have been very few attempts to deploy deep learning edge-AI systems capable of supporting, in real time and without connectivity, the analysis of optical microscopy images for filaria detection with species differentiation, and more broadly for NTD diagnostics.

The objective of this study is to propose, develop, and pilot a system for the real-time, automatic detection and quantification of filariasis using an edge AI model. The proposed system aims to assist in the screening and species differentiation of four worm species (L. loa, M. perstans, W. bancrofti and B. malayi) in blood smears. To this end, we propose a pipeline with the following modules: digitization of smear samples with smartphones coupled to a microscope through a 3D-printed device; sample analysis and data labeling on a telemedicine platform for the training of an AI algorithm; and integration of the trained algorithm on the smartphone to assist diagnosis, with validation of the model in a clinical environment.

2. Materials and methods

2.1 Ethics statement

Ethical approval was obtained from the Research Ethics Committee (REC) Instituto de Salud Carlos III, Spain (CEI PI 74_2020).

2.2 Overview of the methodology

The study was conducted in two distinct phases. The initial phase involved digitizing blood smear samples to construct the database for the development of the AI algorithms, with 115 samples. In the subsequent phase, the AI model was integrated into the smartphone and a pilot study was conducted to evaluate the AI's performance in real-world settings with a new dataset of 18 samples. The study design schema is presented in Fig 1.

All preparations included in the study were appropriately stained, positive and with well-preserved parasite morphology. Samples with varying levels of parasitemia and species were chosen based on results obtained from polymerase chain reaction (PCR) and/or conventional microscopy, ensuring the collection of both positive and negative fields of view. Additionally, the staining type and sample preparation details were systematically compiled.

2.3 Creating a Filariasis differentiation AI model

2.3.1 Digitizing samples.

In the initial phase, a total of 115 sample smears from 115 different subjects were collected from the sample collection of the Malaria and Emerging Protozoa Unit of the Instituto de Salud Carlos III (Spain). All preparations had been previously anonymized without the possibility of reverse coding. Of these, 112 were stained with Giemsa and 3 with Panopticon. The case distribution is presented in Table 1.

Table 1. Cases included in the first phase and the training-validation split.

https://doi.org/10.1371/journal.pntd.0012117.t001

Images were digitized simulating the real diagnostic workflow, with a system previously described in Dacal et al. [31]. Briefly, this system uses a 3D-printed device that couples a mobile phone to a conventional optical microscope by aligning the smartphone camera with the ocular of the microscope to acquire images, converting any conventional microscope into a digital microscope. Following the conventional workflow, the analyst scanned the samples at 10x magnification and captured photos of fields containing structures compatible with filarial parasites. Subsequently, the objective was switched to 40x magnification, and photos of each detected parasite were taken. Slides were digitized using five different smartphone models: Huawei Ascend G7 (n = 95 cases), Redmi Note 7 (n = 13 cases), Samsung Galaxy A32 (n = 5 cases), LG X Power K220 (n = 1 case), and Huawei Nova 5T (n = 1 case). In total, 873 FoVs (images) at 10x and 1514 FoVs (images) at 40x were captured.

To evaluate the AI model's capacity to generalize and to address the issue of overfitting while ensuring accurate performance reporting, a case-level split was employed. This approach ensures that all images from the same case belong to the same dataset, whether used for training the AI model or validating its performance. The split is carried out after labeling the images to guarantee that all species are represented in both the training and validation sets. The cases were distributed randomly, striving to achieve an 80%-20% split between the two sets.
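As a minimal sketch of this case-level split, assuming scikit-learn is available (the paper does not state which tooling was used), grouping by case identifier prevents images from one case leaking across the two sets; all variable names here are illustrative:

```python
# Hedged sketch: case-level 80%-20% split so all images of a case stay together.
from sklearn.model_selection import GroupShuffleSplit

def split_by_case(image_paths, labels, case_ids, seed=42):
    """Split images into training/validation sets without splitting cases."""
    splitter = GroupShuffleSplit(n_splits=1, test_size=0.2, random_state=seed)
    train_idx, val_idx = next(splitter.split(image_paths, labels, groups=case_ids))
    return ([image_paths[i] for i in train_idx],
            [image_paths[i] for i in val_idx])
```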

2.3.2 Labeling data.

All acquired images were transferred from the smartphone to a telemedicine platform via the mobile network, where they are stored and presented in an easy-to-use dashboard that allows their visualization, management, and labeling (Fig 2). In this web platform, standard clinical and analysis protocols were translated into digital tasks adapted to the clinical case and disease under study.

Fig 2. The telemedicine platform facilitates image visualization, management, and labeling.

When an AI algorithm is deployed, analysts have the option to review the predictions rather than starting the labeling process from scratch.

https://doi.org/10.1371/journal.pntd.0012117.g002

The annotation protocol was based on the placement of bounding boxes around the identified parasites. All visible parasites in each image were labeled by two analysts and reviewed by an expert. At 10x magnification, as the species cannot be identified, all detected parasites were assigned to a generic microfilariae class. A total of 2293 parasites were located in 873 images. At 40x magnification, the parasite species were annotated with their corresponding class. A total of 1651 parasites were tagged in 1514 images. In addition, some artifacts with an appearance similar to the parasites were labeled, which serve as hard negatives for algorithm training.

The labeled data was divided into a training set for model development and a validation set for selecting the best model, as shown in Table 2. The training set for 10x images consists of 1965 microfilariae from 700 images, while the validation set for 10x images contains 328 microfilariae from 173 images (FoVs). In the training set for 40x images, there are 906 L. loa, 378 M. perstans, 35 W. bancrofti, and 58 B. malayi parasites from 1203 images, while the validation set includes 138 L. loa, 102 M. perstans, 29 W. bancrofti, and 5 B. malayi parasites from 311 images belonging to 30 cases.

Table 2. Label distribution of microfilaria species in the training and validation sets.

https://doi.org/10.1371/journal.pntd.0012117.t002

2.3.3 Creating the AI model.

A requirement for our AI model is that it can work offline or in limited-bandwidth settings. To fulfill this requirement, we selected a lightweight model that can run on a smartphone in real time without an internet connection. Given the multifaceted nature of the task, encompassing object localization, classification, and counting, an object detection algorithm is an appropriate solution. Specifically, we employed the Single-Shot Detection (SSD) MobileNet V2 detection model with a feature pyramid network as feature extractor, shared box prediction, and focal loss [35–37].

The SSD is a real-time object detection and localization algorithm comprising two fundamental components: feature map extraction and the application of convolutional filters to detect objects. A simplified representation of SSD is illustrated in Fig 3. The feature extraction process leverages MobileNet V2, which encompasses a total of 52 convolutional layers. MobileNet V2 is structured around 16 bottleneck residual blocks, with each block having two 2D convolutional layers and one depthwise convolution layer. The output of MobileNet V2 then undergoes refinement through five additional feature blocks. These supplementary layers are designed to combine features from earlier layers, characterized by a low level of semantic information but a high spatial resolution, with later layers that possess high semantic information but a reduced spatial resolution. This fusion of features is facilitated through lateral connections, ultimately enhancing object detection accuracy. Finally, the processed data is directed through the convolutional box predictor to generate both bounding box predictions and class predictions.

Fig 3. Simplified SSD MobileNet v2 detection architecture.

MobileNet V2 initiates with one convolutional layer, succeeded by a depthwise convolutional layer and another convolutional layer. It is subsequently followed by 16 bottleneck residual blocks (green), each comprising two 2D convolutional layers and one depthwise convolutional layer, concluding with an additional convolutional layer. The SSD feature extractor is further enhanced by integrating five additional feature extractor blocks (purple). The resultant features are passed through a convolutional box predictor block (gray), which is responsible for predicting both the location and class of each detection.

https://doi.org/10.1371/journal.pntd.0012117.g003

In the context of object detection, CNN often generates thousands of candidate regions. However, only a few regions actually contain objects of interest, while the majority represent background elements. This class imbalance presents a significant challenge, as it can lead to training inefficiency. Notably, easy negatives, which correspond to background regions, constitute a substantial proportion of the total candidate regions, potentially overwhelming the loss function used during training. To mitigate this class imbalance issue, focal loss emerges as an enhanced alternative to the conventional cross-entropy loss. Focal loss addresses class imbalance by assigning higher weights to hard-to-classify examples and down-weighting easier examples. This strategic adjustment helps focus the learning process on challenging cases, thereby improving the efficiency and effectiveness of object detection models.
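For illustration, the binary (sigmoid) form of focal loss can be written compactly in TensorFlow; this is a generic sketch of the loss described above, not the exact implementation inside the Object Detection API, and the alpha and gamma values shown are the commonly used defaults:

```python
import tensorflow as tf

def sigmoid_focal_loss(y_true, logits, alpha=0.25, gamma=2.0):
    """Per-anchor sigmoid focal loss; y_true holds 0/1 labels as floats."""
    probs = tf.sigmoid(logits)
    ce = tf.nn.sigmoid_cross_entropy_with_logits(labels=y_true, logits=logits)
    # p_t: probability the model assigns to the ground-truth class of each anchor
    p_t = y_true * probs + (1.0 - y_true) * (1.0 - probs)
    alpha_t = y_true * alpha + (1.0 - y_true) * (1.0 - alpha)
    # (1 - p_t)^gamma down-weights easy examples, focusing training on hard ones
    return alpha_t * tf.pow(1.0 - p_t, gamma) * ce
```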

The TensorFlow Object Detection application programming interface (API), whose code is publicly available on GitHub [38,39], was used for model training because TensorFlow natively optimizes models for execution on mobile phones and edge devices. Given the relatively small size of our dataset, we used a model pre-trained on the COCO image database [40] and fine-tuned it for this use case. The models were trained on Amazon SageMaker, using an NVIDIA T4 GPU with 16 GB of memory.
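A minimal sketch of this fine-tuning setup, assuming the TF2 Object Detection API utilities and a COCO-pretrained SSD MobileNet V2 FPNLite checkpoint from the TensorFlow model zoo; the paths and the exact base checkpoint are illustrative rather than the authors' exact configuration:

```python
from object_detection.utils import config_util

BASE = "ssd_mobilenet_v2_fpnlite_640x640_coco17_tpu-8"  # hypothetical path
configs = config_util.get_configs_from_pipeline_file(f"{BASE}/pipeline.config")

# Adapt the COCO-pretrained detector to the four microfilaria species
configs["model"].ssd.num_classes = 4
configs["train_config"].batch_size = 2
configs["train_config"].fine_tune_checkpoint = f"{BASE}/checkpoint/ckpt-0"
configs["train_config"].fine_tune_checkpoint_type = "detection"

pipeline_proto = config_util.create_pipeline_proto_from_configs(configs)
config_util.save_pipeline_config(pipeline_proto, "training/")
# Training is then launched with the API's model_main_tf2.py script.
```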

Two distinct algorithms were developed. The first algorithm, designed for screening at 10x magnification, focuses solely on detecting the presence of microfilariae. The second algorithm, developed for microfilaria species differentiation at 40x magnification, aims to classify the detected microfilariae into four species: L. loa, M. perstans, W. bancrofti, and B. malayi.

Given the alignment of the smartphone with the microscope eyepiece, the area visualized by the mobile phone is limited to a circular region, as depicted in Fig 2. In order to exclude non-informative regions (e.g., black areas) and to present other relevant information on the mobile phone screen (e.g., label count, AI activation, etc.), we decided to use square images instead of rectangular images.

For the species differentiation algorithm, which works at 40x magnification, we first identify the circular region within the image and extract a square image encompassing the entire field of view, as illustrated in Fig 4A. Subsequently, the cropped region is resized to 640x640 pixels. The reviewed data was split into two sets at case level as described above.

Fig 4.

(a): example input of the species differentiation algorithm. (b): the green rectangle represents the sliding window size.

https://doi.org/10.1371/journal.pntd.0012117.g004
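One way to implement the circular field-of-view extraction described above is a Hough circle transform; the following OpenCV sketch is illustrative (the paper does not specify the detection method used), and the Hough parameters would need tuning per device:

```python
import cv2
import numpy as np

def crop_field_of_view(image_bgr, out_size=640):
    """Locate the circular FoV and return the enclosing square, resized."""
    gray = cv2.medianBlur(cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY), 5)
    h, w = gray.shape
    circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=2, minDist=w,
                               param1=100, param2=30,
                               minRadius=min(h, w) // 4,
                               maxRadius=min(h, w) // 2)
    if circles is None:  # fall back to the full frame if no circle is found
        return cv2.resize(image_bgr, (out_size, out_size))
    x, y, r = np.round(circles[0, 0]).astype(int)
    # Square that encloses the whole circular field of view
    x0, x1 = max(x - r, 0), min(x + r, w)
    y0, y1 = max(y - r, 0), min(y + r, h)
    return cv2.resize(image_bgr[y0:y1, x0:x1], (out_size, out_size))
```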

As Table 2 reflects, this dataset contains 1044 L. loa and 480 M. perstans, but only 64 W. bancrofti and 63 B. malayi in total. To address the imbalanced nature of the dataset, the minority classes (W. bancrofti, B. malayi, and some L. loa) were oversampled by generating mosaic images: 320x320-pixel patches containing at least one parasite are cropped, and 4 of them are blended to create a new 640x640-pixel image (see Fig 5). After augmentation, the training set contains 1116 L. loa, 378 M. perstans, 480 W. bancrofti and 533 B. malayi parasites. Additional image augmentation was applied during training to enhance the model's robustness: random horizontal and vertical flips, 90-degree rotation with 50% probability, random cropping ensuring that the cropped image retains at least 80% of the original area, random brightness adjustment within a 30% range, random hue adjustment within a 10% range, and random saturation adjustment with a saturation factor between 0.8 and 1.25. The model was trained with a batch size of 2. Training employed the momentum optimizer, initialized with a learning rate of 0.01. To enhance training dynamics, a cosine decay schedule was employed, spanning a total of 50,000 training steps. This training required approximately 5 hours to complete.

Fig 5. Mosaic augmentation.

The original image is 640x640 pixels. For each image, we randomly select a 320x320-pixel area that contains at least one parasite (green rectangle); using 4 cropped areas, we compose a new mosaic image.

https://doi.org/10.1371/journal.pntd.0012117.g005
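A simplified sketch of this mosaic composition is shown below; it assumes images of at least 640x640 pixels with bounding boxes given as (x_min, y_min, x_max, y_max) pixel coordinates, and it omits the bookkeeping needed to shift the box annotations into the mosaic:

```python
import random
import numpy as np

def patch_with_parasite(image, boxes, patch=320):
    """Crop a patch x patch region that contains at least one box center."""
    h, w = image.shape[:2]
    x_min, y_min, x_max, y_max = random.choice(boxes)
    cx, cy = (x_min + x_max) // 2, (y_min + y_max) // 2
    # Jitter the window while keeping the chosen box center inside it
    x0 = int(np.clip(cx - patch // 2 + random.randint(-80, 80), 0, w - patch))
    y0 = int(np.clip(cy - patch // 2 + random.randint(-80, 80), 0, h - patch))
    return image[y0:y0 + patch, x0:x0 + patch]

def make_mosaic(samples, patch=320):
    """Tile 4 random (image, boxes) samples into one 640x640 mosaic image."""
    mosaic = np.zeros((2 * patch, 2 * patch, 3), dtype=np.uint8)
    for i, (image, boxes) in enumerate(random.sample(samples, 4)):
        r, c = divmod(i, 2)
        mosaic[r * patch:(r + 1) * patch, c * patch:(c + 1) * patch] = \
            patch_with_parasite(image, boxes, patch)
    return mosaic
```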

For the screening algorithm, which works at 10x magnification, a different image cropping strategy was implemented than in the species differentiation algorithm. Given the relatively small size of the parasite at 10x magnification, its visualization and detection pose challenges for both human analysts and AI systems, necessitating the use of zoom. To optimize the visibility of the parasite and maximize its size in the image, we decided to crop the original image to the square inscribed within the circle, as depicted in Fig 4B. As can be seen, a single crop of the inner square leaves some valuable information out. To overcome this limitation, we employed a sliding window technique, in which 4 patches were generated for each image, ensuring that all the information within the field of view is represented. The patches were then resized to 640x640 pixels to fit the input requirements of the network. The same data augmentation was applied as in the species differentiation algorithm. After data augmentation, the number of microfilariae in the training set increased from 328 to 10847, whereas the validation set was unmodified. The model was trained with a batch size of 2. Training employed the momentum optimizer, initialized with a learning rate of 0.01. To enhance training dynamics, a cosine decay schedule was employed, spanning a total of 20,000 training steps. This training required approximately 2 hours to complete.
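The four-patch sliding-window cropping can be sketched as follows, with the circle center (cx, cy) and radius r coming from a detection step such as the Hough transform sketched earlier; the exact window placement used by the authors is not specified, so this shows one arrangement whose four windows jointly cover the full circular field of view:

```python
import cv2
import numpy as np

def sliding_window_patches(image_bgr, cx, cy, r, out_size=640):
    """Return 4 overlapping inscribed-square patches covering the circular FoV."""
    side = int(r * np.sqrt(2))  # side of the square inscribed in the circle
    h, w = image_bgr.shape[:2]
    patches = []
    # Anchor one window at each corner of the square circumscribing the circle;
    # since side > r, the four windows together cover the whole field of view.
    for ox, oy in [(cx - r, cy - r), (cx + r - side, cy - r),
                   (cx - r, cy + r - side), (cx + r - side, cy + r - side)]:
        x0 = int(np.clip(ox, 0, w - side))
        y0 = int(np.clip(oy, 0, h - side))
        patch = image_bgr[y0:y0 + side, x0:x0 + side]
        patches.append(cv2.resize(patch, (out_size, out_size)))
    return patches
```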

2.4 Validation of the AI model in a lab setting

To assess the usability and performance of the proposed system within the clinical workflow, a lab validation was conducted. For that, we first deployed the model on the smartphone and then piloted the AI-assisted diagnosis workflow.

2.4.1 Deployment and integration of technology.

The AI model was optimized for mobile using post-training quantization, a conversion technique that reduces model size while also improving CPU and hardware accelerator latency, with little degradation in model accuracy. The model is exported in the TFLite format, with a size of approximately 13 megabytes, and can be run on a mid-range smartphone in real time. The execution time on a BQ Aquaris X2 using the CPU is 1400 milliseconds for a single image, while a Samsung S9, utilizing the GPU, accomplishes the task in 610 milliseconds.
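The conversion step can be sketched with the standard TensorFlow Lite converter; the SavedModel path and output filename are illustrative, and the authors' exact converter settings are not specified:

```python
import tensorflow as tf

converter = tf.lite.TFLiteConverter.from_saved_model("exported_model/saved_model")
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # post-training quantization
tflite_model = converter.convert()

with open("filaria_detector.tflite", "wb") as f:
    f.write(tflite_model)

# On-device style invocation via the TFLite interpreter
interpreter = tf.lite.Interpreter(model_path="filaria_detector.tflite")
interpreter.allocate_tensors()
```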

To facilitate the process of digitization and AI-assisted analysis, a customized Android application was developed. The application is not specific to this research but is a proprietary platform that can be downloaded from the Google Play store. This application records both clinical data and images. While the user visualizes the image on the mobile phone screen, the selected AI algorithm (screening or species differentiation, depending on the magnification used, 10x or 40x) runs in real time, generating predictions for the corresponding frame and outlining the detected parasites with bounding boxes. When the user takes a photo, both the image and the prediction are saved, and the parasite detection counter is incremented regardless of whether the prediction is correct. If the user finds parasites not detected by the AI algorithm, they can tap the button of the corresponding label to increase the count for that parasite. Once the analysis is finished, this information is uploaded to the telemedicine platform, allowing users to review and correct the predictions and share information. Fig 6 shows how the smartphone is attached to the conventional microscope and the screening and species differentiation algorithms running on the smartphone. With the AI running in real time, the analyst moves the sample and analyzes it with AI assistance [41,42].

Fig 6.

(a) smartphone attached to the conventional microscope with a 3D printed adapter. (b) screening algorithm working on the smartphone. (c) species differentiation working on the smartphone.

https://doi.org/10.1371/journal.pntd.0012117.g006

2.4.2 Experiment: Pilot replicating the diagnostic workflow.

To assess the performance of the AI models, a real-time pilot study was conducted to evaluate the effectiveness of the edge AI system in assisting parasite detection through the mobile application.

To pilot the proposed system, the algorithm developed in the first phase was integrated into the mobile phone and the telemedicine platform for validation. Fig 7 represents the ideal workflow for all selected samples (N = 18): with the AI algorithm operating in real time, the analyst examines the complete sample using the 10x objective. Depending on whether the algorithm detects a parasite, different actions are taken. When a parasite is identified by the AI algorithm, the analyst captures a photo (the detected parasites are automatically counted by the app) and switches to the 40x objective. At this point, the species differentiation algorithm is activated to discern the specific species of the detected parasites, and a photo is taken to count the parasites. In cases where parasites are present on the screen but not detected by the screening AI, the analyst manually adds them to the count by tapping on the corresponding label. Since the mobile application did not allow modification of incorrect predictions, both the images and the mobile predictions were uploaded to the telemedicine platform for further correction and validation. The results were independently reviewed by two analysts: analyst A, a junior researcher in parasitology, who analyzed images in real time using the mobile application; and analyst B, an expert in microscopy of infectious diseases, who only reviewed the digitized images on the telemedicine platform.

Fig 7. Schema representing the validation workflow of AI-assisted filariae detection.

At least 3 images of negative fields were acquired for each sample.

https://doi.org/10.1371/journal.pntd.0012117.g007

The evaluation of the algorithm's performance was based on precision (P), which measures the proportion of correctly identified objects among all objects predicted by the model; recall (R), which measures the proportion of correctly identified objects among all ground truth objects; and the F1 score, a combined metric that takes both precision and recall into account to provide a single value representing overall performance. Object detection algorithms have capabilities that go beyond classification algorithms, being able to detect multiple objects as well as their location and size within the image, in the form of bounding boxes. Therefore, to compute these metrics, additional considerations must be put in place. Each proposed bounding box with a confidence score greater than 50% is considered a true positive (TP) if its intersection over union with the ground truth is greater than 0.5 and the class is correct. Conversely, if the predicted area corresponds to an artifact or to another parasite class, it is considered a false positive (FP). Furthermore, ground truth boxes not proposed by the algorithm were categorized as false negatives (FN). True negatives (TN) were not computed, as all areas without predictions are considered TN.

$$P = \frac{TP}{TP + FP} \quad (1)$$

$$R = \frac{TP}{TP + FN} \quad (2)$$

$$F1 = \frac{2 \cdot P \cdot R}{P + R} \quad (3)$$
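The matching rule above can be made concrete with the following sketch, where predictions are (box, class, score) tuples and ground truths are (box, class) tuples; the greedy highest-score-first matching shown is a common convention that the paper does not spell out:

```python
def iou(a, b):
    """Intersection over union of two (x0, y0, x1, y1) boxes."""
    ix0, iy0 = max(a[0], b[0]), max(a[1], b[1])
    ix1, iy1 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix1 - ix0) * max(0, iy1 - iy0)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / float(union)

def evaluate(predictions, ground_truths, score_thr=0.5, iou_thr=0.5):
    """Compute precision, recall, and F1 under the IoU > 0.5 matching rule."""
    tp, fp, matched = 0, 0, set()
    for box, cls, score in sorted(predictions, key=lambda p: -p[2]):
        if score <= score_thr:
            continue
        hit = next((i for i, (gt_box, gt_cls) in enumerate(ground_truths)
                    if i not in matched and gt_cls == cls
                    and iou(box, gt_box) > iou_thr), None)
        if hit is None:
            fp += 1          # artifact or wrong class: false positive
        else:
            matched.add(hit)
            tp += 1
    fn = len(ground_truths) - len(matched)   # missed ground-truth boxes
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1
```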

3. Results

3.1 Evaluation of the AI model performance

The performance assessment of the model was conducted on the validation set with 30 cases as described in Table 1. The screening algorithm, designed to work with 10x magnification, achieved a precision of 88.17%, recall of 91.62%, and an F1 score of 89.85%. On the other hand, the species differentiation algorithm achieved a weighted precision of 84.08%, recall of 95.33%, and an F1 score of 94.70%. Breaking down the results per class, the precision rates were 94.85% for L. loa, 97.03% for M. perstans, 94.00% for W. bancrofti, and 66.67% for B. malayi. The corresponding recall rates were 93.48%, 96.08%, 97.92%, and 92.31% respectively.

The resulting confusion matrix of the species differentiation algorithm, which works at 40x magnification, is presented in Table 3. It should be noted that the AI algorithm was not specifically trained with artifact labels. To avoid confusion with artifacts, areas of the image that may look like a parasite (e.g., mycelium, fibers) were included in the training dataset as negative examples without specifying a class. However, in order to increase performance, additional classes could be created for the different structures that might lead to false positives, such as hair or mycelium.

Table 3. Confusion matrix of the species differentiation algorithm (40x) on the validation set; each row represents the ground truth, and each column the prediction.

The model may predict artifacts as parasites (false positives), but the analysts did not label all artifacts.

https://doi.org/10.1371/journal.pntd.0012117.t003

3.2 Validation of the AI-assisted mobile app

For the pilot study, a total of 18 independent samples, from subjects different from those used for training and validation, were analyzed with AI assistance on the mobile phone by analyst A. In total, 452 fields of view at 10x magnification and 624 fields of view at 40x magnification were analyzed on the mobile phone, uploaded to the telemedicine platform, and reviewed by another analyst, generating the ground truth used to evaluate the model performance in real time.

To assess the potential benefits of reviewing AI analyses and their impact on inter-observer variability and analysis time, we shuffled and split the uploaded images into four groups: 10x magnification with and without AI assistance (232 images with AI and 220 images without AI), and 40x magnification with and without AI assistance (320 images with AI and 304 images without AI). Both analysts analyzed the two groups without AI assistance (10x without AI and 40x without AI) from scratch, and analyzed the two groups with AI assistance (10x with AI and 40x with AI) by reviewing the predictions generated by the AI on the smartphone. This allowed us to investigate the potential time-saving benefits and the potential reduction in inter-observer variability provided by AI assistance.

3.2.1 Real-time AI performance.

The analysis conducted by analyst B, who has greater expertise than analyst A, was considered the ground truth for our evaluation. According to analyst B, at 10x magnification, out of the 452 digitized images examined, 280 were identified as positive, indicating the presence of at least one parasite, while 172 were determined to be negative. The parasite counts reported by analyst B and by the AI for each image are significantly correlated, with a Pearson correlation coefficient of 0.984. At the parasite level, the screening algorithm achieved an overall performance of 94.14% precision, 91.90% recall, and 93.01% F1-score. In the context of differentiating between parasite species, according to analyst B, out of the 624 images assessed, 511 were classified as positive and 113 as negative. The parasite counts reported by analyst B and by the AI for each image are also significantly correlated for the species differentiation algorithm, with a Pearson correlation coefficient of 0.953. The AI algorithm demonstrated an overall precision of 95.46%, recall of 97.81%, and F1-score of 96.62% in this regard. The per-class precision values were 98.80% for L. loa, 60.00% for M. perstans, 100.00% for W. bancrofti, and 58.97% for B. malayi. The corresponding recall rates were 98.50%, 100.00%, 76.00%, and 100.00%, respectively. Table 4 presents the confusion matrix of the AI model in relation to analyst B's analysis.

Table 4. Performance of the AI algorithm in the pilot study using the mobile phone.

Each row represents the ground truth and each column represents the AI prediction.

https://doi.org/10.1371/journal.pntd.0012117.t004

3.2.2 Inter-observer variability.

To assess inter-observer variability, we compared the total number of parasites detected in each sample by analyst B and analyst A, both with and without AI assistance; the results are presented in Table 5. A two-tailed t-test was used to analyze statistical significance. Additionally, we compared the parasite count of each image generated by the AI system with the count provided by analyst B (considered the ground truth). This comparison allowed us to analyze the performance and agreement between the analysts and the AI system in identifying and quantifying parasites.

Table 5. Inter-observer agreement on detected parasites when analyzing with and without AI assistance, for the two analysts and the AI.

The two-tailed t-test indicates that the analyses of analysts A and B are significantly correlated. * p-value < 0.05.

https://doi.org/10.1371/journal.pntd.0012117.t005
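As a sketch of these comparisons, assuming SciPy and paired per-sample counts (the paper does not state whether the t-test was paired or independent; a paired test is shown here), the statistics can be computed as follows with placeholder data:

```python
from scipy import stats

counts_analyst_a = [12, 0, 35, 4, 150, 7]   # hypothetical per-sample counts
counts_analyst_b = [11, 0, 33, 5, 148, 6]

r, r_p = stats.pearsonr(counts_analyst_a, counts_analyst_b)
t, t_p = stats.ttest_rel(counts_analyst_a, counts_analyst_b)  # paired, two-tailed
print(f"Pearson r = {r:.3f} (p = {r_p:.3g}); t-test p = {t_p:.3g}")
```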

The results reported by both analysts are strongly correlated. The Pearson correlation coefficient between analyst A and analyst B when analyzing without AI is 0.990 for the screening algorithm and 0.992 for the species differentiation algorithm. Similarly, when analyzing with AI assistance, the correlation coefficients are 0.994 and 0.997, respectively. Notably, the correlation coefficients are slightly higher when analysts utilize AI assistance during their assessments.

Furthermore, there is a high correlation between the parasite counts reported by the AI model and analyst B: the minimum Pearson correlation coefficient observed in this comparison was 0.928, indicating strong agreement between their reported counts.

3.2.3 Analysis time.

In addition to evaluating the performance of the edge AI system, we also analyzed the potential time-saving effect of AI assistance on the telemedicine platform. We compared the time required for analysts to review AI predictions with the time needed to label from scratch. Both analysts were asked to review 524 images without AI assistance and then 552 different images with AI assistance. As shown in Table 6, for analyst A, the analysis time decreased significantly, from an average of 23.5 seconds per image to just 3.5 seconds per image when utilizing AI assistance. However, it should be noted that the analysis time for analyst B remained unchanged.

Table 6. Average analysis time per image, in seconds, for both analysts with and without AI assistance.

https://doi.org/10.1371/journal.pntd.0012117.t006

4. Discussion and conclusion

This study introduces the first real-time edge AI deployment on smartphones to assist in the screening and species differentiation of filarial samples in mobile microscopy, validated in a clinical setting. To create and validate the AI-powered mobile application, we proposed a methodology that encompasses an image digitization system, a telemedicine platform to visualize and annotate images, a training and deployment pipeline, and an Android application to deploy the AI models.

Diagnosis is an essential part of monitoring the effect of MDA, which is a recommended strategy to control or eliminate several neglected tropical diseases, including filariasis. Microscopy is a widely used technique for filariasis diagnosis, as it can distinguish parasite species, but it requires expert microscopists and is time-consuming. Numerous studies have incorporated AI to aid in the diagnosis of microscopic images, targeting mostly malaria image analysis [43,44] (a recent review identified 95 publications for malaria [27]); more recently, works have also appeared that deal with STH and schistosomiasis [24,32,45–48], leishmaniasis [49], Chagas disease [50,51], etc. The number of studies on these topics is still limited, but they have yielded promising outcomes, aiming to facilitate the diagnosis of these diseases in LMICs. For example, Yu et al. evaluated a malaria screener employing a custom CNN to detect Plasmodium falciparum using a smartphone on both thin and thick blood smears. Developed with 150 patients and 50 healthy subjects, the model achieved an accuracy of 74.1% compared to expert microscopy on a test set of 190 patients, meeting the WHO level 3 requirement for parasite detection [43,52,53]. Armstrong et al. proposed an object detection algorithm using ResNet101 to detect Schistosoma eggs on a Google Pixel 4. Developed with 205 patients, the model achieved a sensitivity of 91%, a specificity of 85%, and an inference time of 6 s [24]. Li et al. presented a study with 1122 patients and a total of 22,444 images. Their model, trained with 15,700 images from 785 patients, detects visible components in human feces using an object detection algorithm with ResNet 152 as backbone, achieving a mean average precision of 92.16% and a mean average recall of 93.56% on a test set of 6740 images from 337 patients [46]. Gonçalves et al. proposed the use of a U-Net to segment human visceral leishmaniasis parasites in bone marrow samples. Developed with 150 images containing 559 parasites (70% for training, 10% for validation and 20% for testing), the model achieved a Dice coefficient of 80.4% [49]. Morais et al. proposed the detection of Chagas parasites using a graph-based segmentation algorithm and a random forest. Developed with 33 mouse samples containing 1314 parasites (80% for training and 20% for testing), the model achieved a precision of 87.6% and a recall of 90.5% [50]. Very few studies have attempted to automate filarial parasite detection, and those detect microfilariae without distinguishing species [22,34]. The only preliminary proof-of-concept study that uses AI to detect microfilariae was proposed by Elvana et al., who used a small database of 210 images and a custom CNN with 8 convolutional layers, achieving 70% accuracy [34].

With respect to these prior works, our proposal replicates the full diagnostic workflow, including the 10x and 40x examinations, and successfully distinguishes between different microfilariae species, making it particularly valuable in co-endemic areas where multiple species are prevalent. Our system also operates in real time (610 milliseconds on a Samsung S9) without an internet connection, enabling its deployment at the point of care, and it does not rely on expensive or hard-to-find hardware, as it can be used with any conventional microscope and low- to middle-end mobile phones, making it accessible and affordable. The system is easily scalable, as it is deployed on smartphones.

The AI system that we propose follows the conventional workflow, screening the sample at 10x magnification and differentiating species at 40x magnification. Hence, two algorithms, one for each use case, were developed using 85 samples; they were first validated on 30 samples to assess model performance and then deployed in the clinical environment to evaluate the usability of the whole system. The validation in the clinical environment was conducted by analyzing 18 samples with the AI model running on a mobile phone in real time, achieving an overall precision of 94.14%, recall of 91.90%, and F1 score of 93.01% for the screening algorithm, and 95.46%, 97.81%, and 96.62%, respectively, for the species differentiation algorithm.

In the comparison of inter-observer variability and analysis time, we found that with AI assistance the correlation between the two analysts increased slightly, and the analysis time was reduced for the junior researcher in parasitology while it remained unchanged for the expert in infectious disease microscopy.

It is important to highlight that our AI algorithm does not incorporate all filarial species detectable in blood: B. timori and M. ozzardi samples were not available. The former is important to monitor in order to decide when to stop the existing mass drug administration program for lymphatic filariasis [54]. The latter shares an overlapping geographical distribution with M. perstans [55]. Since our model did not include M. ozzardi, it cannot differentiate M. ozzardi from M. perstans, but the screening algorithm should still detect the microfilaria even if it is M. ozzardi. With the increasing calls for mansonellosis treatment and control programs [11,12], the inclusion of these species would further improve the utility of our AI model. It should also be noted that our study has a limited sample size in general, especially for W. bancrofti and B. malayi (35 and 58 training labels, respectively). Despite this, our algorithm achieved high precision and recall, even though the performance fluctuates considerably for the minority classes. Additionally, the fact that all samples come from a single research center may introduce bias and reduce the generalizability of our algorithm, which may perform worse on samples from other centers due to differences in sample preparation and other factors. To address these limitations, future research will pursue a multi-centric study, training and validating on samples from different research centers, involving more analysts, and including the B. timori and M. ozzardi species. Such an extensive validation process would help to assess the robustness and generalizability of the AI system across various real-world settings and conditions to guarantee readiness for deployment to local health centres.

In conclusion, the presented system can assist the diagnosis of filariasis in resource-constrained settings, particularly where healthcare workers are scarce, by transforming any optical microscope into an intelligent point-of-care device. This approach could reduce dependency on highly specialized personnel by empowering community health workers to contribute to filariasis control. Additionally, the system's telemedicine platform provides the opportunity to seek second opinions and perform quality control in cases of diagnostic uncertainty, enhancing overall accuracy. The platform can also be used for epidemiological surveillance, contributing to tracking and monitoring the prevalence and distribution of filariasis. Furthermore, our system can be expanded to other neglected tropical diseases by collecting samples of other diseases, with the vision of creating a universal AI model for parasite detection. We also believe that future AI support systems will be multi-modal, incorporating a wide range of clinical inputs from diverse data sources beyond imaging, such as medical text or speech, enhancing accuracy and generating comprehensible diagnostic reports [56,57]. The current AI revolution in medicine should also be viewed as an opportunity for NTDs.

References

1. Metzger WG, Mordmüller B. Loa loa-does it deserve to be neglected? Lancet Infect Dis. 2014;14: 353–357. pmid:24332895
2. Raccurt CP, Brasseur P, Boncy J. Mansonelliasis, a neglected parasitic disease in Haiti. Mem Inst Oswaldo Cruz. 2014;109: 709–711. pmid:25317697
3. Lima NF, Veggiani Aybar CA, Dantur Juri MJ, Ferreira MU. Mansonella ozzardi: a neglected New World filarial nematode. Pathog Glob Health. 2016;110: 97–107. pmid:27376501
4. World Health Organization, editor. Ending the neglect to attain the Sustainable Development Goals: a road map for neglected tropical diseases 2021–2030. World Health Organization; 2021.
5. Local Burden of Disease 2019 Neglected Tropical Diseases Collaborators. The global distribution of lymphatic filariasis, 2000–18: a geospatial analysis. Lancet Glob Health. 2020;8: e1186–e1194. pmid:32827480
6. GBD 2017 Disease and Injury Incidence and Prevalence Collaborators. Global, regional, and national incidence, prevalence, and years lived with disability for 354 diseases and injuries for 195 countries and territories, 1990–2017: a systematic analysis for the Global Burden of Disease Study 2017. Lancet. 2018;392: 1789–1858. pmid:30496104
7. Gardon J, Gardon-Wendel N, Demanga-Ngangue, Kamgno J, Chippaux JP, Boussinesq M. Serious reactions after mass treatment of onchocerciasis with ivermectin in an area endemic for Loa loa infection. Lancet. 1997;350: 18–22. pmid:9217715
8. Beng AA, Esum ME, Deribe K, Njouendou AJ, Ndongmo PWC, Abong RA, et al. Mapping lymphatic filariasis in Loa loa endemic health districts naïve for ivermectin mass administration and situated in the forested zone of Cameroon. BMC Infect Dis. 2020;20: 284. pmid:32299374
9. Simonsen PE, Onapa AW, Asio SM. Mansonella perstans filariasis in Africa. Acta Trop. 2011;120 Suppl 1: S109-20. pmid:20152790
10. Ta-Tang T-H, Crainey JL, Post RJ, Luz SL, Rubio JM. Mansonellosis: current perspectives. Res Rep Trop Med. 2018;9: 9–24. pmid:30050351
11. Ferreira MU, Crainey JL, Gobbi FG. The search for better treatment strategies for mansonellosis: an expert perspective. Expert Opin Pharmacother. 2023;24: 1685–1692. pmid:37477269
12. Ta-Tang T-H, Luz SLB, Crainey JL, Rubio JM. An overview of the management of mansonellosis. Res Rep Trop Med. 2021;12: 93–105. pmid:34079424
13. Jacobsen KH, Andress BC, Bhagwat EA, Bryant CA, Chandrapu VR, Desmonts CG, et al. A call for loiasis to be added to the WHO list of neglected tropical diseases. Lancet Infect Dis. 2022;22: e299–e302. pmid:35500592
14. World Health Organization, editor. Diagnostic Test For Surveillance Of Lymphatic Filariasis. World Health Organization; 2021. p. 16.
15. Moya L, Herrador Z, Ta-Tang TH, Rubio JM, Perteguer MJ, Hernandez-González A, et al. Evidence for suppression of onchocerciasis transmission in Bioko Island, Equatorial Guinea. PLoS Negl Trop Dis. 2016;10: e0004829. pmid:27448085
16. Mathison BA, Couturier MR, Pritt BS. Diagnostic identification and differentiation of microfilariae. J Clin Microbiol. 2019;57. pmid:31340993
17. Petti CA, Polage CR, Quinn TC, Ronald AR, Sande MA. Laboratory medicine in Africa: a barrier to effective health care. Clin Infect Dis. 2006;42: 377–382. pmid:16392084
18. Global strategy on human resources for health: Workforce 2030. 2020.
19. McCool J, Dobson R, Whittaker R, Paton C. Mobile Health (mHealth) in Low- and Middle-Income Countries. Annu Rev Public Health. 2022;43: 525–539. pmid:34648368
20. Saeed MA, Jabbar A. "Smart diagnosis" of parasitic diseases by use of smartphones. J Clin Microbiol. 2018;56. pmid:29046408
21. Feroz A, Jabeen R, Saleem S. Using mobile phones to improve community health workers performance in low-and-middle-income countries. BMC Public Health. 2020;20: 49. pmid:31931773
22. D'Ambrosio MV, Bakalar M, Bennuru S, Reber C, Skandarajah A, Nilsson L, et al. Point-of-care quantification of blood-borne filarial parasites with a mobile phone microscope. Sci Transl Med. 2015;7: 286re4. pmid:25947164
23. Pion SD, Nana-Djeunga H, Niamsi-Emalio Y, Chesnais CB, Deléglise H, Mackenzie C, et al. Implications for annual retesting after a test-and-not-treat strategy for onchocerciasis elimination in areas co-endemic with Loa loa infection: an observational cohort study. Lancet Infect Dis. 2020;20: 102–109. pmid:31676244
24. Armstrong M, Harris AR, D'Ambrosio MV, Coulibaly JT, Essien-Baidoo S, Ephraim RKD, et al. Point-of-Care Sample Preparation and Automated Quantitative Detection of Schistosoma haematobium Using Mobile Phone Microscopy. Am J Trop Med Hyg. 2022. pmid:35344927
25. Topol EJ. High-performance medicine: the convergence of human and artificial intelligence. Nat Med. 2019;25: 44–56. pmid:30617339
26. Cai L, Gao J, Zhao D. A review of the application of deep learning in medical image classification and segmentation. Ann Transl Med. 2020;8: 713. pmid:32617333
27. Fan BE, Yong BSJ, Li R, Wang SSY, Aw MYN, Chia MF, et al. From microscope to micropixels: A rapid review of artificial intelligence for the peripheral blood film. Blood Rev. 2023; 101144. pmid:38016837
28. Quinn JA, Nakasi R, Mugagga PKB, Byanyima P, Lubega W, Andama A. Deep Convolutional Neural Networks for Microscopy-Based Point of Care Diagnostics. 2016; 1–12.
29. Davidson MS, Andradi-Brown C, Yahiya S, Chmielewski J, O'Donnell AJ, Gurung P, et al. Automated detection and staging of malaria parasites from cytological smears using convolutional neural networks. Biol Imaging. 2021;1: e2. pmid:35036920
30. Holmström O, Linder N, Ngasala B, Mårtensson A, Linder E, Lundin M, et al. Point-of-care mobile digital microscopy and deep learning for the detection of soil-transmitted helminths and Schistosoma haematobium. Glob Health Action. 2017;10: 1337325. pmid:28838305
31. Dacal E, Bermejo-Peláez D, Lin L, Álamo E, Cuadrado D, Martínez Á, et al. Mobile microscopy and telemedicine platform assisted by deep learning for the quantification of Trichuris trichiura infection. PLoS Negl Trop Dis. 2021;15: e0009677. pmid:34492039
32. Oyibo P, Jujjavarapu S, Meulah B, Agbana T, Braakman I, van Diepen A, et al. Schistoscope: An Automated Microscope with Artificial Intelligence for Detection of Schistosoma haematobium Eggs in Resource-Limited Settings. Micromachines (Basel). 2022;13. pmid:35630110
33. Dedhiya R, Kakileti ST, Deepu G, Gopinath K, Opoku N, King C, et al. Evaluation of Non-Invasive Thermal Imaging for Detection of Viability of Onchocerciasis Worms. Annu Int Conf IEEE Eng Med Biol Soc. 2022;2022: 3518–3521. pmid:36086671
34. Elvana A, Suryanto ED. Lymphatic filariasis detection using image analysis. EAI; 2022.
35. Lin T-Y, Dollar P, Girshick R, He K, Hariharan B, Belongie S. Feature pyramid networks for object detection. 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). IEEE; 2017. pp. 936–944. https://doi.org/10.1109/CVPR.2017.106
36. Liu W, Anguelov D, Erhan D, Szegedy C, Reed S, Fu C-Y, et al. SSD: Single shot multibox detector. In: Leibe B, Matas J, Sebe N, Welling M, editors. European Conference on Computer Vision (ECCV). Cham: Springer International Publishing; 2016. pp. 21–37. https://doi.org/10.1007/978-3-319-46448-0_2
37. Sandler M, Howard A, Zhu M, Zhmoginov A, Chen L-C. MobileNetV2: inverted residuals and linear bottlenecks. 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition. IEEE; 2018. pp. 4510–4520. https://doi.org/10.1109/CVPR.2018.00474
38. Abadi M, Barham P, Chen J, Chen Z, Davis A, Dean J, et al. TensorFlow: A system for large-scale machine learning. 2016; 21.
39. TensorFlow object detection. [cited 9 Nov 2023]. Available: https://github.com/tensorflow/models/tree/master/research/object_detection
40. Lin T-Y, Maire M, Belongie S, Hays J, Perona P, Ramanan D, et al. Microsoft COCO: common objects in context. In: Fleet D, Pajdla T, Schiele B, Tuytelaars T, editors. European Conference on Computer Vision (ECCV). Cham: Springer International Publishing; 2014. pp. 740–755. https://doi.org/10.1007/978-3-319-10602-1_48
41. Spotlab. AI copilot for neglected tropical diseases microscopy diagnosis with limited connectivity: video 1. In: YouTube [Internet]. 7 Feb 2024 [cited 7 Feb 2024]. Available: https://youtu.be/dqEKL5HMK6s?si=aK0rj4YtPFIOV-3r
42. Spotlab. AI copilot for neglected tropical diseases microscopy diagnosis with limited connectivity: video 2. In: YouTube [Internet]. 7 Feb 2024 [cited 7 Feb 2024]. Available: https://youtu.be/YjXL5FBacA8?si=4Xr7H0rEouTO9TyC
43. Yu H, Mohammed FO, Abdel Hamid M, Yang F, Kassim YM, Mohamed AO, et al. Patient-level performance evaluation of a smartphone-based malaria diagnostic application. Malar J. 2023;22: 33. pmid:36707822
44. Horning MP, Delahunt CB, Bachman CM, Luchavez J, Luna C, Hu L, et al. Performance of a fully-automated system on a WHO malaria microscopy evaluation slide set. Malar J. 2021;20: 110. pmid:33632222
45. Meulah B, Bengtson M, Lieshout LV, Hokke CH, Kreidenweiss A, Diehl J-C, et al. A review on innovative optical devices for the diagnosis of human soil-transmitted helminthiasis and schistosomiasis: from research and development to commercialization. Parasitology. 2022; 1–13. pmid:36683384
46. Li Q, Li S, Liu X, He Z, Wang T, Xu Y, et al. FecalNet: Automated detection of visible components in human feces using deep learning. Med Phys. 2020;47: 4212–4222. pmid:32583463
47. Ward P, Dahlberg P, Lagatie O, Larsson J, Tynong A, Vlaminck J, et al. Affordable artificial intelligence-based digital pathology for neglected tropical diseases: A proof-of-concept for the detection of soil-transmitted helminths and Schistosoma mansoni eggs in Kato-Katz stool thick smears. PLoS Negl Trop Dis. 2022;16: e0010500. pmid:35714140
48. Ward PK, Roose S, Ayana M, Broadfield LA, Dahlberg P, Kabatereine N, et al. A comprehensive evaluation of an artificial intelligence based digital pathology to monitor large-scale deworming programs against soil-transmitted helminths: a study protocol. medRxiv. 2023.
49. Gonçalves C, Borges A, Dias V, Marques J, Aguiar B, Costa C, et al. Detection of Human Visceral Leishmaniasis Parasites in Microscopy Images from Bone Marrow Parasitological Examination. Appl Sci. 2023;13: 8076.
50. Morais MCC, Silva D, Milagre MM, de Oliveira MT, Pereira T, Silva JS, et al. Automatic detection of the parasite Trypanosoma cruzi in blood smears using a machine learning approach applied to mobile phone images. PeerJ. 2022;10: e13470. pmid:35651746
51. Pereira A, Pyrrho A, Vanzan D, Mazza L, Gomes JG. Deep Convolutional Neural Network applied to Chagas Disease Parasitemia Assessment. Anais do 14 Congresso Brasileiro de Inteligência Computacional. ABRICOM; 2020. pp. 1–8. https://doi.org/10.21528/CBIC2019-119
52. Yang F, Poostchi M, Yu H, Zhou Z, Silamut K, Yu J, et al. Deep Learning for Smartphone-Based Malaria Parasite Detection in Thick Blood Smears. IEEE J Biomed Health Inform. 2020;24: 1427–1438. pmid:31545747
53. Rajaraman S, Jaeger S, Antani SK. Performance evaluation of deep neural ensembles toward malaria parasite detection in thin-blood smear images. PeerJ. 2019;7: e6977. pmid:31179181
54. Diagnostic Test For Lymphatic Filariasis To Support Decisions For Stopping Triple-therapy Mass Drug Administration. World Health Organization; 2021. p. 17.
55. World Health Organization. Bench Aids for the Diagnosis of Filarial Infections. World Health Organization; 1997. p. 6.
56. Moor M, Banerjee O, Abad ZSH, Krumholz HM, Leskovec J, Topol EJ, et al. Foundation models for generalist medical artificial intelligence. Nature. 2023;616: 259–265. pmid:37045921
57. Rajpurkar P, Lungren MP. The current and future state of AI interpretation of medical images. N Engl J Med. 2023;388: 1981–1990. pmid:37224199