Abstract
Mosquito-borne diseases impose a substantial burden on public health worldwide. The viruses that cause these diseases alter the behavioural traits of mosquitoes, including locomotion and feeding. Understanding these traits can help improve existing epidemiological models and develop effective mosquito traps. However, the flight behaviour of mosquitoes is difficult to study because of their small size, complicated poses, and seemingly random movement patterns. Currently, no open-source tool is available that can detect and track resting or flying mosquitoes. This paper presents a detection and trajectory estimation method based on the Mask RCNN algorithm and spline interpolation, which can efficiently detect mosquitoes and track their trajectories with high accuracy. The method does not require special equipment and works well even with low-resolution videos. Considering the small size of mosquitoes, the detection performance of the proposed method is validated using a tracker error and a custom metric comprising the mean distance between estimated and ground-truth positions, the pooled standard deviation, and the average accuracy. The results showed that the proposed method could successfully detect and track flying (≈ 96% accuracy) as well as resting (100% accuracy) mosquitoes. Performance can degrade in the presence of occlusions and background clutter. Overall, this research provides an efficient open-source tool to facilitate further examination of mosquito behavioural traits.
Citation: Javed N, Paradkar PN, Bhatti A (2023) Flight behaviour monitoring and quantification of aedes aegypti using convolution neural network. PLoS ONE 18(7): e0284819. https://doi.org/10.1371/journal.pone.0284819
Editor: Shrisha Rao, International Institute of Information Technology, INDIA
Received: October 24, 2022; Accepted: April 10, 2023; Published: July 20, 2023
Copyright: © 2023 Javed et al. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Data Availability: https://github.com/Nouman-ML/Supp-Data.
Funding: The study was partly funded by CSIRO strategic funding provided to P.N.P. and N.J. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.
Competing interests: The authors declare no competing interests.
Introduction
According to the World Health Organisation (WHO), mosquito-borne diseases are the most dangerous diseases among all vector-borne diseases [1], mainly due to the sheer number of people affected. Mosquito-borne diseases such as malaria, dengue, and yellow fever impact human health with high morbidity and mortality. These pathogens also affect the behaviours of mosquitoes [2], including locomotion [3–5], oviposition preferences [6], fertility [7] and feeding [8, 9]. Moreover, recent research has shown that vector-borne viruses can also infect and significantly impact the vector nervous system [10–12]. Monitoring mosquitoes’ flight trajectories can help in understanding and defining their locomotion behaviour, which can ultimately assist in determining their fitness, improving existing epidemiological models [13] and developing effective mosquito traps [14].
Initially, studies of mosquito behaviour relied on manual observations by researchers [15, 16]. However, this is a very resource-expensive method and limits the number of individuals that can be simultaneously monitored. Moreover, some behavioural investigations require continuous observations, making the monitoring process laborious and time-consuming [17, 18]. Recently, the development of high-quality cameras has made automatic monitoring of objects possible through object detection techniques [19]. However, object detection methods assume that the objects of concern in each frame are of significant size and have high contrast relative to the background [20]. In reality, mosquitoes are small and follow seemingly arbitrary movement patterns with speed variations [21]. In addition to these challenges, mosquitoes also take on different apparent shapes as they exhibit different poses during random motion [22].
In recent times, artificial intelligence (AI) has played a vital role in transforming visualisation [23–26]. AI mimics human intelligence procedures through different algorithms built into a dynamic computing environment [27]. It consists of several subsets, including machine learning (ML), natural language processing (NLP), expert systems, and computer vision [28]. Machine learning focuses on building programs that learn from data and improve their accuracy automatically over time [29]. Machine learning can be divided into unsupervised and supervised learning. In supervised learning, machine programs learn the relations between inputs and outputs through the analysis of defined outputs of interest [30]. In contrast, unsupervised learning learns relations in data without depending on the external association of interest definitions [31].
Deep learning is a subset of machine learning that has set exciting new trends over the years. In deep learning, machines are programmed to learn relations from large quantities of raw data [32]. One important subset of AI is computer vision, which mimics human visual perception and reasoning capabilities [33]. Modern computer vision techniques rely heavily on machine learning and, specifically, deep learning algorithms. Over the past decade, many algorithms and techniques have been developed to detect and monitor objects using computer vision. Region-based Convolutional Neural Network (RCNN) models are among them and have played a key role in object detection.
In the past, machine learning-based models have been applied to different aspects of mosquito research, such as detecting breeding grounds [34, 35] and identifying gender [36]. In addition, machine learning applications have also been reported in mosquito control [37]. There are also some commercially available tools for detecting mosquito flight behaviour [38]. However, to the best of the authors’ knowledge, no research has been found where machine learning models were used to track the trajectory of tiny flying objects like mosquitoes. Machine learning models are now being used to detect other small objects, such as cell nuclei [39], demonstrating their capability. Considering this potential, it is hypothesised that machine learning-based models can also help track the trajectories of flying mosquitoes.
Taking into consideration the importance of understanding mosquitoes’ behavioural activities and the unavailability of any open-source mosquito detection and tracking tool, a method using the Mask RCNN algorithm and spline interpolation is presented here, which can efficiently detect mosquitoes and track their trajectories with higher accuracy. Additionally, it does not require any special high-quality setup and works excellently, even on low-resolution videos.
Materials and methods
Mosquitoes maintenance
Aedes aegypti mosquito colonies originating from Brisbane (provided by Prof. Ary Hoffman) were kept in the laboratory. Mosquitoes were maintained by artificial blood-feeding with chicken blood. Colony temperature was maintained at 27°C with humidity ranging between 60–70% under a diurnal 12 h:12 h day:night light cycle.
Cage and feeding
Aedes aegypti females were kept in a transparent plexiglass cage of dimensions 30×30×30 cm for video recording. Mosquitoes were provided with sugar water ad libitum (Fig 1A).
(a) The experimental setup consisted of a plexiglass cage, fabric net, sugar water bottle, mosquitoes, and a camera. The number of mosquitoes in each recording was different. (b) The trajectory estimation was based on the Mask RCNN framework and cubic spline interpolation. The training images were fed into the Mask RCNN framework. Mask RCNN uses RoIAlign to preserve spatial information; RoIAlign applies bilinear interpolation to create fixed-size feature maps. The RoIAlign output is fed into the mask head, which consists of two convolutional layers. Masks are thereby generated for each RoI, giving pixel-to-pixel segmentation of the images. The video sequence data were then processed using the trained model, and coordinates were extracted. Finally, cubic spline interpolation was applied to fill in the missing data smoothly.
Mosquitoes data recording and selection
Recording began one week post-emergence; six videos were captured in total, each around 1 minute long and containing a different batch of mosquitoes. Images from two videos with 5 and 24 mosquitoes were used to extract the training images, while images from one video with five mosquitoes were used for validation. The remaining three videos, with 5 to 27 mosquitoes, were used for testing. The videos were recorded under lights using a Flea3 camera [40]. From the testing videos, three video sequences of around 9 seconds (≈ 540 frames) each, containing resting and flying mosquitoes, were used for the analysis. The sequence durations were selected by considering light consistency and the number of flying mosquitoes and their flight patterns (covering different flight trajectories), as mosquitoes spend the majority of their time at rest. The frame size of the video sequences was 640×512 pixels, and the frame rate was 60 frames per second.
Training and validation data
Training and validation were performed using 100 images extracted from the training and validation videos. Of these 100 images, 80 were used for training and 20 for validation. In total, we trained for 25 epochs, and the detection threshold was kept at 70%, meaning proposals with confidence below 0.7 were ignored. Training and validation annotations were created with the VGG Image Annotator [41] in the form of .json files.
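As an illustrative sketch only, a single-class Matterport Mask RCNN configuration matching these settings might look like the fragment below; the class name and per-epoch step count are assumptions, not the authors' published configuration.

```python
from mrcnn.config import Config

class MosquitoConfig(Config):
    """Single-class training configuration (background + mosquito)."""
    NAME = "mosquito"               # hypothetical configuration name
    NUM_CLASSES = 1 + 1             # background + mosquito
    BACKBONE = "resnet101"          # ResNet101 backbone, as described in the paper
    DETECTION_MIN_CONFIDENCE = 0.7  # ignore proposals below the 70% threshold
    IMAGES_PER_GPU = 1
    STEPS_PER_EPOCH = 80            # assumed: one pass over the 80 training images
```

Such a subclass would then be passed to Matterport's `mrcnn.model.MaskRCNN` in training mode.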
Groundtruth data collection
The ground-truth values were collected manually by reading the pixel coordinates at the cursor position in a GitHub-based image viewer [42].
Interpolation
Considering its simplicity and usefulness, the SRS1 Cubic Spline function (Version 2.5), a Microsoft Excel add-in [43], was used to perform the cubic spline interpolation. A cubic spline interpolates a smooth curve that passes directly through all points in the data set. Cubic spline interpolation makes the resultant curve smooth and continuous at each data point by fitting a series of cubic polynomials; the fitting process matches the first and second derivatives of adjacent polynomials at each data point and imposes boundary conditions at the endpoints of the curve.
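The same gap-filling can be sketched in Python with SciPy's `CubicSpline`; the frame numbers and coordinates below are illustrative values, not data from the study.

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Detected (frame, x, y) samples; frames 3-4 are missing, e.g. due to clutter.
frames = np.array([0, 1, 2, 5, 6, 7])
xs = np.array([100.0, 108.0, 118.0, 150.0, 160.0, 168.0])
ys = np.array([200.0, 205.0, 212.0, 230.0, 233.0, 235.0])

# Fit one cubic spline per axis; the spline passes through every known point.
cs_x = CubicSpline(frames, xs)
cs_y = CubicSpline(frames, ys)

# Evaluate on the full frame range to fill the missing frames smoothly.
all_frames = np.arange(0, 8)
filled = np.column_stack([all_frames, cs_x(all_frames), cs_y(all_frames)])
```

The interpolant reproduces the known samples exactly and supplies smooth estimates for the missing frames in between.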
Metric based evaluation
Performance metrics are powerful tools for evaluating the usability of any product, and measuring performance is key to evaluating how well an algorithm performs its function. Considering the small size of mosquitoes, the performance of the proposed system is evaluated using a custom metric with three indicators: the mean of distances between positions, the pooled standard deviation, and the average accuracy. The mean of distances between positions is the mean difference, in pixels, between the estimated central positions of mosquitoes and the ground-truth centroids in each frame. The formula is derived from the L2-norm distance, also known as the Euclidean distance. In our scenario, the L2-norm distance computes the square root of the sum of the squared differences between the positions of mosquitoes across the frames (Eq 1), which is then used to calculate the mean distance (Eq 2):

d_f = sqrt((p_f − x_f)² + (q_f − y_f)²)  (1)

mean distance = (1/n) Σ_{f=1}^{n} d_f  (2)

where d_f is the distance between the estimated x-axis pixel position (p_f) and the ground-truth x-axis pixel position (x_f), and between the estimated y-axis pixel position (q_f) and the ground-truth y-axis pixel position (y_f). The value of f denotes the frame number, and n the total number of frames.
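Under these definitions, Eqs 1 and 2 can be sketched in plain Python, with positions given as (x, y) pixel tuples:

```python
import math

def frame_distances(est, gt):
    # Eq 1: per-frame Euclidean (L2) distance between the estimated
    # centre position and the ground-truth centroid.
    return [math.hypot(p - x, q - y) for (p, q), (x, y) in zip(est, gt)]

def mean_distance(est, gt):
    # Eq 2: mean of the per-frame distances over n frames.
    d = frame_distances(est, gt)
    return sum(d) / len(d)
```

For example, estimates [(0, 0), (3, 4)] against ground truth [(0, 0), (0, 0)] give per-frame distances [0, 5] and a mean distance of 2.5 pixels.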
Pooled standard deviation is the weighted average of the standard deviations between the estimated and ground-truth trajectory data for all mosquitoes present in a video sequence, while accuracy is defined by how closely the trajectory points captured by the Mask RCNN algorithm and interpolation match the ground-truth trajectory points. The accuracy tolerance was set at 8 pixels: if an estimated point was within 8 pixels of the ground-truth centroid (in absolute value on both the x-axis and y-axis), it counted towards the accuracy, as expressed in expression (3). This tolerance reflects the fact that a mosquito is not a single-pixel organism and could be sitting in any position, so the exact centroid could not be estimated. The value of 8 pixels was chosen as the average length of 10 randomly selected mosquitoes, corresponding to 1.25% of the x-axis and 1.56% of the y-axis. The logical expression considered for the accuracy is:

|p_f − x_f| ≤ 8 AND |q_f − y_f| ≤ 8  (3)
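As an illustrative sketch, the pooled standard deviation and the 8-pixel accuracy criterion can be computed as below; the standard pooled form, weighting each mosquito's deviation by its frame count, is an assumption about the exact weighting used.

```python
import math

def pooled_std(stds, counts):
    # Weighted (pooled) standard deviation across mosquitoes: each
    # mosquito i contributes its trajectory deviation s_i weighted
    # by its number of frames n_i.
    num = sum((n - 1) * s ** 2 for s, n in zip(stds, counts))
    den = sum(n - 1 for n in counts)
    return math.sqrt(num / den)

def accuracy(est, gt, tol=8):
    # Expression (3): a point counts as correct if it lies within
    # `tol` pixels of the ground-truth centroid on both axes.
    hits = sum(1 for (p, q), (x, y) in zip(est, gt)
               if abs(p - x) <= tol and abs(q - y) <= tol)
    return 100.0 * hits / len(est)
```

For instance, an estimate 2 pixels off on both axes counts as a hit, while one 20 pixels off on the x-axis does not.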
The custom metric performs the quantitative assessment: it takes the estimated positions and compares them with the ground-truth values to calculate the indicator values.
Tracker error based verification
Tracker error is used to perform an in-depth performance evaluation of an algorithm. In our experiments, the tracker error is the point-to-point difference, in pixels, between the estimated centre positions of mosquitoes and the ground-truth centroids in each frame, calculated using Eq 1. In the tracker error-based verification, only flying mosquitoes were considered, as the accuracy for sitting mosquitoes was 100% in all case scenarios.
Programming and computational system
All programming and computational analyses were performed on a laptop computer running 64-bit Windows 10 Pro with an Intel i7-10510U processor (1.80 GHz base, 2.30 GHz boost) and 16 GB of DDR4 RAM. The method was implemented in Python 3.7, OpenCV 3.3.1, Jupyter Notebook 6.4.2, and Tensorflow 1.14.0. The details of other libraries and their versions are available in the Requirements File in S1 File.
Trajectory estimation using machine learning algorithm
Mosquito detection and trajectory estimation were performed with a custom-developed technique that uses the Mask RCNN algorithm and spline interpolation. Mask RCNN is a deep neural network that extracts different objects from an input image or video. It is an extension of Faster RCNN and uses the open-source Keras and Tensorflow libraries. The Mask RCNN model used in the experiment is based on a Feature Pyramid Network (FPN) with a ResNet101 backbone [44]. A Feature Pyramid Network is a feature extractor that creates multiple feature map layers with quality information. ResNet101 is a convolutional neural network with 101 layers; these layers improve accuracy and performance, as each layer can learn complex features such as detecting edges and identifying textures. The trajectory estimation method involving the Mask RCNN framework is shown in Fig 1B.
Matterport Mask RCNN’s existing model (based on the MSCOCO dataset) [45] could not track the mosquitoes; therefore, custom training was performed for mosquito detection (location), mosquito localisation (extent), and instance segmentation (boundary identification at the pixel level) on mosquito-containing images. Jupyter Notebook was used to run the code, perform validation, and load the videos into the algorithm. The available Mask RCNN code could process images and spot the locations of objects in each image; however, when feeding videos through OpenCV, we needed to automatically extract the pixel locations of the corresponding mosquitoes in each frame to draw the trajectories with less manual work. The existing code was therefore modified to automatically identify the location of each mosquito in each frame. The method monitors the connectivity of mosquito pixel locations in consecutive frames, as well as the trajectory’s direction, to ensure that a detection is the continuation of the previous trajectory. If a mosquito was detected in a frame, the algorithm looked for the same mosquito in the next frame at the nearest distance by comparing the x-axis and y-axis pixel locations of all mosquitoes with those in the previous frame. The algorithm stored the data for each mosquito in text files. For instance, if the detected pixel location of mosquito 2 in frame 400 is 123 in width and 249 in height (x-axis and y-axis values, respectively), the algorithm compares it with the locations of all mosquitoes in the previous frame and, based on the differences in distance, stores it in the text file for mosquito 2. Data that the algorithm misses due to background clutter can be filled in through interpolation, which generates the missing data smoothly from known data points.
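A minimal sketch of this nearest-distance association, omitting the direction check the method also applies, could look like:

```python
import math

def associate(prev_positions, detections):
    """Assign each detection in the current frame to the track whose
    previous-frame position is nearest (simplified: no direction check)."""
    assignments = {}
    for det in detections:
        track_id = min(
            prev_positions,
            key=lambda t: math.hypot(prev_positions[t][0] - det[0],
                                     prev_positions[t][1] - det[1]),
        )
        assignments[track_id] = det
    return assignments
```

For example, with previous positions {1: (120, 250), 2: (400, 90)} and detections [(123, 249), (398, 93)], the detection at (123, 249) is stored as the continuation of mosquito 2's counterpart track 1, since it is only about 3 pixels away.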
In our case, the data obtained from the algorithm was missing mosquito locations in frames where the mosquitoes encountered background clutter and light reflection. Mosquitoes follow arbitrary flight patterns; therefore, a nonlinear curve-fitting method, cubic spline interpolation, which worked well in our scenario, was used to fill in the data for frames where mosquito locations were not detected. Cubic spline interpolation is a mathematical technique generally used to generate new data points within the boundaries of known data: unique cubic polynomials are fitted between each pair of data points, with the condition that the resulting curve is continuous and smooth. Fig 2 shows the impact of spline interpolation on a mosquito flight trajectory: accuracy was 84.57% without spline interpolation and 98.69% with it.
The top two charts show the trajectory with and without interpolation individually, while the bottom graphs show them together with the ground-truth trajectories. As the green dotted circled areas show, spline interpolation helped fill the missing points and achieve continuous tracking with higher accuracy.
Results
This section presents the experimental results obtained after feeding the videos to the algorithm and applying spline interpolation to its output. The results come from three different video sequences and are presented as case scenarios according to the number of flying mosquitoes and the total number of mosquitoes present in the cage.
Case scenario 1: Two flying mosquitoes and five total mosquitoes in the cage
In the video sequence of scenario one, the total number of mosquitoes was 5, of which 2 were flying. Fig 3 shows the flight trajectories and rest positions of the mosquitoes present in video sequence one. Mosquito 2 kept flying within a limited area, while mosquito 1 covered most of the cage area.
Flying mosquitoes’ flight starting points are shown with dots, while the flight endpoints are shown with arrows. Mosquitoes in the ‘rest’ position are shown with filled marker dots. To distinguish the estimated trajectories (discussed in the next section) from the ground-truth trajectories, the mosquito names for estimated trajectories are marked with asterisks. Different colours are also used to distinguish the mosquitoes from each other.
Case scenario 2: Three flying mosquitoes and six total mosquitoes in the cage
In the video sequence of scenario two, the number of flying mosquitoes was three, while the total number of mosquitoes was 6. Fig 4 shows the flight trajectories and rest positions of the mosquitoes present in video sequence two. In video 2, mosquito 3’s flight covered the largest area of the cage compared with mosquitoes 1 and 2.
Case scenario 3: One flying mosquito and twenty-seven total mosquitoes in the cage
In the video sequence of scenario three, the number of flying mosquitoes was 1, while the total number of mosquitoes was 27. Fig 5 shows the flight trajectory and rest positions of the mosquitoes present in video sequence three. Mosquito 1 started its flight from almost the middle of the cage and flew past most of the sitting mosquitoes.
Performance evaluation
The proposed method was validated by using a custom metric and tracker error analysis. The following subsections present the metric-based verification and tracker error-based analysis.
Metric based evaluation
Case scenario 1: Two flying mosquitoes and five total mosquitoes in the cage.
In video sequence 1, the mean distance between positions (the distance between the estimated central positions of the mosquitoes and the ground-truth centroids) was 0.66 pixels, while the pooled standard deviation, the combined standard deviation of all mosquitoes present in video 1, was 0.79. There were two flying mosquitoes in video sequence 1. The detection accuracies for mosquitoes 1 and 2 were 100% and 98.69%, respectively, and the overall accuracy for flying mosquitoes was 99.35% (Table 1). The accuracy for mosquitoes in the rest position was 100%, and the combined accuracy for flying and sitting mosquitoes was 99.73%. Fig 6 compares the estimated trajectories (for flying mosquitoes) and positions (for sitting mosquitoes) with the ground-truth trajectories and positions. The proposed method successfully detected and tracked both flying and sitting mosquitoes, with only minor differences between the trajectories. The areas with small differences between the estimated and ground-truth trajectories consist of data points where the mosquitoes encountered background clutter (cage boundary, feeding bottle, dark patches on the background fabric net) and light reflection.
Case scenario 2: Three flying mosquitoes and six total mosquitoes in the cage.
In video sequence 2, the mean distance between positions was 1.83 pixels, while the pooled standard deviation was 4.63. There were three flying mosquitoes in video sequence 2. The detection accuracies for mosquitoes 1, 2, and 3 were 91.34%, 99.31%, and 89.62%, respectively, while the overall accuracy for flying mosquitoes was 93.42% (Table 2). The accuracy for mosquitoes in the rest position was 100%, and the combined accuracy for flying and sitting mosquitoes was 96.73%. Fig 7 compares the estimated flight trajectories and sitting positions with the ground-truth flight trajectories and sitting positions.
Case scenario 3: One flying mosquito and twenty-seven total mosquitoes in the cage.
In video sequence 3, the mean distance between positions was 1.47 pixels, while the pooled standard deviation was 0.54. There was one flying mosquito in video sequence 3. The detection accuracy for flying mosquitoes was 95.58%. The accuracy for mosquitoes in the rest position was 100%. The combined accuracy for flying and sitting mosquitoes was 99.83% (Table 3). Fig 8 shows the comparison between estimated flight trajectories and sitting positions and ground truth flight trajectories and sitting positions.
Tracker error based verification
Case scenario 1: Two flying mosquitoes and five total mosquitoes in the cage.
In video sequence one, the total number of frames was 538. Fig 9 shows the tracker error for video sequence one, where mosquitoes 1 and 2 were flying. For mosquito 1, the tracking error was very low in all frames, showing that its trajectory was tracked very accurately. For mosquito 2, the tracking errors were negligible in most areas; however, some minor differences can be observed around frames 304 to 310, due to the flight of mosquito 2 across the dark background patches.
Case scenario 2: Three flying mosquitoes and six total mosquitoes in the cage.
In video sequence two, the total number of frames was 578. For mosquito number 2, the tracking errors were very low, while for mosquitoes 1 and 3, they were high in a few frames. Fig 10 shows the tracker error for flying mosquitoes 1, 2, and 3. For mosquito 1, some differences between estimated and ground truth can be observed around frames 293 to 297 and 337 to 365, while for mosquito 3, differences can be observed around frames 13 to 35 and 57 to 83. Higher tracker error for mosquito 3 around frames 57 to 83 was due to its continuous flight in background dark net folds.
Case scenario 3: One flying mosquito and twenty-seven total mosquitoes in the cage.
In video sequence three, the total number of frames was 541. For mosquito 1, some differences can be observed around frames 113 to 117 and 125 to 129 (Fig 11). These differences were due to its flight around dark lines of cage boundary and feeding sugar bottle. The overall results showed that this method could very precisely track the trajectory of mosquitoes.
Discussion
Although the method has shown excellent results in tracking mosquito trajectories, it also has some limitations. If there are significant gaps between the data points, for reasons including background distortion and light reflection, cubic spline interpolation can make the interpolated values inaccurate by several orders of magnitude, eventually making the curve too complex to be useful for prediction. In such scenarios, other interpolation methods might be considered, such as linear interpolation or lower-order polynomial interpolation.
In the case of occlusions (mosquitoes crossing each other), if the mosquitoes deviate only slightly after the occlusion, they can still be detected successfully through the model and interpolation, by checking the connectivity of mosquito pixel locations in consecutive frames and the direction of the trajectory. If the deviation is at a higher angle, manual observation of the crossing mosquitoes is required for the frames where they cross. To make manual correction easy, a frame-number locating feature was added to the code: the output text files contain the frame numbers along with the axis data, so errors are easy to locate and correct manually. The algorithm can also generate more than one file per mosquito, depending on background distortion and light reflection; however, combining the data from different files is straightforward by inspecting the first and last frame numbers and axis values of each file.
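Assuming a simple comma-separated line format of frame, x, y (the exact format of the output text files is an assumption for illustration), the fragments for one mosquito can be combined in frame order like so:

```python
def merge_track_files(paths):
    """Read per-mosquito track fragments and merge them in frame order.
    Assumes each line is 'frame,x,y' (hypothetical file format)."""
    rows = []
    for path in paths:
        with open(path) as f:
            for line in f:
                frame, x, y = line.strip().split(",")
                rows.append((int(frame), float(x), float(y)))
    rows.sort(key=lambda r: r[0])  # order by frame number
    return rows
```

The result is a single frame-ordered list of (frame, x, y) rows regardless of the order in which the fragment files are given.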
This work can benefit studies of mosquito flight behaviour monitoring and quantification, as the trained model performs well with similar setups, or even with slight changes to the setup. When the model was applied to another video (see Video 4 in S1 File) with a slightly different setup, a smaller 25×25×25 cm cage, no sugar water bottle, and a white non-fabric background, the method still detected all the mosquitoes. However, bespoke training for different setups can improve the results further.
Conclusions
Detection and flight tracking are important in studying the behavioural traits of mosquitoes. Small size, complicated poses, and seemingly arbitrary movement patterns create many challenges for successful mosquito tracking. This paper presents a trajectory extraction method that utilises the Mask RCNN detection algorithm and cubic spline interpolation for standard laboratory environment videos. Three case scenarios covering different flight trajectories were used for verification. Metric and tracker error-based verification showed that the presented method is an excellent option for mosquito monitoring and could efficiently track the mosquitoes present in a video, even when their texture is similar to the background. The results were comparable to the manually calculated ground-truth values, and the average accuracies of the three case scenarios were 96.62%, 96.71%, and 99.83%, respectively. Performance can be improved further by increasing the number of training images.
This algorithm is one step towards developing an automatic mosquito behaviour monitoring system. The development of such methods is vital for determining the fitness of infected or modified mosquitoes and will be useful in vector-borne disease modelling and the development of novel mosquito traps.
Acknowledgments
The authors acknowledge the capabilities of the Australian Centre for Disease Preparedness (grid.413322.5) in undertaking this research, including infrastructure by the National Collaborative Research Infrastructure Strategy (NCRIS). We also thank Dr. Julie Gaburro for providing the raw videos of mosquitoes.
References
- 1. World Health Organization. Global technical strategy for malaria 2016–2030. 2015.
- 2. Javed N, Bhatti A, Paradkar PN. Advances in Understanding Vector Behavioural Traits after Infection. Pathogens. 2021;10(11):1376. pmid:34832532
- 3. Gaburro J, Bhatti A, Harper J, Jeanne I, Dearnley M, Green D, et al. Neurotropism and behavioral changes associated with Zika infection in the vector Aedes aegypti. Emerg Microbes Infect. 2018;7(1):68. pmid:29691362
- 4. Lima-Camara TN, Bruno RV, Luz PM, Castro MG, Lourenço-de-Oliveira R, Sorgine MHF, et al. Dengue infection increases the locomotor activity of Aedes aegypti females. PloS one. 2011;6(3):e17690–e. pmid:21408119
- 5. Tallon AK, Lorenzo MG, Moreira LA, Martinez Villegas LE, Hill SR, Ignell R. Dengue infection modulates locomotion and host seeking in Aedes aegypti. PLoS Negl Trop Dis. 2020;14(9):e0008531–e. pmid:32911504
- 6. Gaburro J, Paradkar PN, Klein M, Bhatti A, Nahavandi S, Duchemin J-B. Dengue virus infection changes Aedes aegypti oviposition olfactory preferences. Scientific Reports. 2018;8(1):13179. pmid:30181545
- 7. Resck MEB, Padilha KP, Cupolillo AP, Talyuli OAC, Ferreira-de-Brito A, Lourenço-de-Oliveira R, et al. Unlike Zika, Chikungunya virus interferes in the viability of Aedes aegypti eggs, regardless of females’ age. Scientific Reports. 2020;10(1):13642. pmid:32788625
- 8. Platt KB, Linthicum KJ, Myint KSA, Innis BL, Lerdthusnee K, Vaughn DW. Impact of Dengue Virus Infection on Feeding Behavior of Aedes aegypti. The American Journal of Tropical Medicine and Hygiene. 1997;57(2):119–25. pmid:9288801
- 9. Sim S, Ramirez JL, Dimopoulos G. Dengue virus infection of the Aedes aegypti salivary gland and chemosensory apparatus induces genes that modulate infection and blood-feeding behavior. PLoS Pathog. 2012;8(3):e1002631. pmid:22479185
- 10. Bhatti A, Lee KH, Garmestani H, Lim CP. Emerging Trends in Neuro Engineering and Neural Computation: Springer; 2017.
- 11. Gaburro J, Bhatti A, Sundaramoorthy V, Dearnley M, Green D, Nahavandi S, et al. Zika virus-induced hyper excitation precedes death of mouse primary neuron. Virology journal. 2018;15(1):1–13.
- 12. Gaburro J, Nahavandi S, Bhatti A. Insects Neural Model: Potential Alternate to Mammals for Electrophysiological Studies. Emerging Trends in Neuro Engineering and Neural Computation: Springer; 2017. p. 119–30.
- 13. Coluzzi M. Heterogeneities of the malaria vectorial system in tropical Africa and their significance in malaria epidemiology and control. Bulletin of the World Health Organization. 1984;62(Suppl):107. pmid:6335681
- 14. Torr S, Della Torre A, Calzetta M, Costantini C, Vale G. Towards a fuller understanding of mosquito behaviour: use of electrocuting grids to compare the odour‐orientated responses of Anopheles arabiensis and An. quadriannulatus in the field. Medical and veterinary entomology. 2008;22(2):93–108. pmid:18498608
- 15. Charlwood J. Infra-red TV for watching mosquito behaviour in the ’dark’. Transactions of the Royal Society of Tropical Medicine and Hygiene. 1974;68:264. pmid:4153581
- 16. Healy TP, Copland MJ. Activation of Anopheles gambiae mosquitoes by carbon dioxide and human breath. Med Vet Entomol. 1995;9(3):331–6. pmid:7548953
- 17. Charlwood J, Jones M. Mating behaviour in the mosquito, Anopheles gambiae s.l.: I. Close range and contact behaviour. Physiological Entomology. 1979;4(2):111–20.
- 18. Costantini C, Gibson G, Sagnon NF, Torre AD, Brady J, Coluzzi M. Mosquito responses to carbon dioxide in a West African Sudan savanna village. Medical and veterinary entomology. 1996;10(3):220–7.
- 19. Gaburro J, Duchemin J-B, Paradkar PN, Nahavandi S, Bhatti A. Assessment of ICount software, a precise and fast egg counting tool for the mosquito vector Aedes aegypti. Parasites & Vectors. 2016;9(1):590. pmid:27863526
- 20. Patin F. An introduction to digital image processing [online]. 2003 [Available from: http://www.programmersheaven.com/articles/patin/ImageProc.pdf].
- 21. Cribellier A, van Erp JA, Hiscox A, Lankheet MJ, van Leeuwen JL, Spitzen J, et al. Flight behaviour of malaria mosquitoes around odour-baited traps: capture and escape dynamics. Royal Society open science. 2018;5(8):180246. pmid:30225014
- 22. Liu MZ, Vosshall LB. General visual and contingent thermal cues interact to elicit attraction in female Aedes aegypti mosquitoes. Current Biology. 2019;29(13):2250–7. e4. pmid:31257144
- 23. AlShamsi M, Salloum SA, Alshurideh M, Abdallah S. Artificial intelligence and blockchain for transparency in governance. Artificial Intelligence for Sustainable Development: Theory, Practice and Future Applications: Springer; 2021. p. 219–30.
- 24. Matthews G, Hancock PA, Lin J, Panganiban AR, Reinerman-Jones LE, Szalma JL, et al. Evolution and revolution: Personality research for the coming world of robots, artificial intelligence, and autonomous systems. Personality and individual differences. 2021;169:109969.
- 25. Nawaz MS, Fournier-Viger P, Shojaee A, Fujita H. Using artificial intelligence techniques for COVID-19 genome analysis. Applied Intelligence. 2021:1–18. pmid:34764587
- 26. Yang SJ, Ogata H, Matsui T, Chen N-S. Human-centered artificial intelligence in education: Seeing the invisible through the visible. Computers and Education: Artificial Intelligence. 2021;2:100008.
- 27. Shabbir J, Anwer T. Artificial intelligence and its role in near future. arXiv preprint arXiv:180401396. 2018.
- 28. Jiang F, Jiang Y, Zhi H, Dong Y, Li H, Ma S, et al. Artificial intelligence in healthcare: past, present and future. Stroke and vascular neurology. 2017;2(4). pmid:29507784
- 29. Géron A. Hands-on machine learning with Scikit-Learn, Keras, and TensorFlow: Concepts, tools, and techniques to build intelligent systems: O’Reilly Media; 2019.
- 30. Schuld M. Supervised learning with quantum computers: Springer; 2018.
- 31. Ghahramani Z, editor. Unsupervised learning. Summer School on Machine Learning; 2003: Springer.
- 32. Najafabadi MM, Villanustre F, Khoshgoftaar TM, Seliya N, Wald R, Muharemagic E. Deep learning applications and challenges in big data analytics. Journal of big data. 2015;2(1):1–21.
- 33. Szeliski R. Computer vision: algorithms and applications: Springer Science & Business Media; 2010.
- 34. Bravo DT, Lima GA, Alves WAL, Colombo VP, Djogbenou L, Pamboukian SVD, et al. Automatic detection of potential mosquito breeding sites from aerial images acquired by unmanned aerial vehicles. Computers, Environment and Urban Systems. 2021;90:101692.
- 35. Passos WL, Araujo GM, de Lima AA, Netto SL, da Silva EA. Automatic detection of Aedes aegypti breeding grounds based on deep networks with spatio-temporal consistency. Computers, Environment and Urban Systems. 2022;93:101754.
- 36. Kittichai V, Pengsakul T, Chumchuen K, Samung Y, Sriwichai P, Phatthamolrat N, et al. Deep learning approaches for challenging species and gender identification of mosquito vectors. Scientific reports. 2021;11(1):4838. pmid:33649429
- 37. Joshi A, Miller C. Review of machine learning techniques for mosquito control in urban environments. Ecological Informatics. 2021;61:101241.
- 38. Spitzen J, Takken W. Keeping track of mosquitoes: a review of tools to track, record and analyse mosquito flight. Parasites & Vectors. 2018;11(1):123. pmid:29499744
- 39. Chen K, Zhang N, Powers L, Roveda J, editors. Cell Nuclei Detection and Segmentation for Computational Pathology Using Deep Learning. 2019 Spring Simulation Conference (SpringSim); 2019 Apr 29–May 2.
- 40. Edmundoptics. Flea®3 FL3-U3-13E4M-C 1/1.8" Monochrome USB 3.0 Camera 2022 [Available from: https://www.edmundoptics.com/p/flea3-fl3-u3-13e4m-c-118-monochrome-usb-30-camera-/29803/#].
- 41. Dutta A, Zisserman A. The VIA Annotation Software for Images, Audio and Video. 2019.
- 42. Yangcha. Iview-Display image, show mouse position and pixel values from web browser 2020 [Available from: https://github.com/yangcha/iview#readme].
- 43. Srs1 Software. SRS1 Cubic Spline for Excel 2022 [Available from: https://www.srs1software.com/SRS1CubicSplineForExcel.aspx].
- 44. He K, Gkioxari G, Dollár P, Girshick R, editors. Mask R-CNN. Proceedings of the IEEE international conference on computer vision; 2017.
- 45. Abdulla W. Mask R-CNN for object detection and instance segmentation on Keras and TensorFlow 2017 [Available from: https://github.com/matterport/Mask_RCNN].