Peer Review History
Original Submission: November 21, 2023
PONE-D-23-38812
Insect Detect: An open-source DIY camera trap for automated insect monitoring
PLOS ONE

Dear Dr. Sittinger,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE's publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

Please submit your revised manuscript by Feb 09 2024 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

Please include the following items when submitting your revised manuscript:
If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: https://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols.

We look forward to receiving your revised manuscript.

Kind regards,
Ramzi Mansour
Academic Editor
PLOS ONE

Journal Requirements:

When submitting your revision, we need you to address these additional requirements.

1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf.

2. Did you know that depositing data in a repository is associated with up to a 25% citation advantage (https://doi.org/10.1371/journal.pone.0230416)? If you’ve not already done so, consider depositing your raw data in a repository to ensure your work is read, appreciated and cited by the largest possible audience. You’ll also earn an Accessible Data icon on your published paper if you deposit your data in any participating repository (https://plos.org/open-science/open-data/#accessible-data).

3. Please note that PLOS ONE has specific guidelines on code sharing for submissions in which author-generated code underpins the findings in the manuscript. In these cases, all author-generated code must be made available without restrictions upon publication of the work.
Please review our guidelines at https://journals.plos.org/plosone/s/materials-and-software-sharing#loc-sharing-code and ensure that your code is shared in a way that follows best practice and facilitates reproducibility and reuse.

4. Please review your reference list to ensure that it is complete and correct. If you have cited papers that have been retracted, please include the rationale for doing so in the manuscript text, or remove these references and replace them with relevant current references. Any changes to the reference list should be mentioned in the rebuttal letter that accompanies your revised manuscript. If you need to cite a retracted article, indicate the article’s retracted status in the References list and also include a citation and full reference for the retraction notice.

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. Is the manuscript technically sound, and do the data support the conclusions? The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Yes
Reviewer #2: Partly

**********

2. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes
Reviewer #2: N/A

**********

3. Have the authors made all data underlying the findings in their manuscript fully available? The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository.
For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes
Reviewer #2: Yes

**********

4. Is the manuscript presented in an intelligible fashion and written in standard English? PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes
Reviewer #2: Yes

**********

5. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: Comments to the Authors:

The aim of this paper is to present the Insect Detect DIY camera trap, a low-cost and customizable automated monitoring system for flower-visiting insects, utilizing off-the-shelf hardware and open-source software, allowing for large data collection and reliable insect activity estimation. The paper provides substantial value to the scientific community, as it addresses the current hot topic of obtaining reliable abundance counts of insects, which holds crucial importance for biodiversity conservation. However, the readability could be enhanced, particularly concerning the main goal of the study, which revolves around the detection, tracking, and classification algorithms and their corresponding results. For better understanding of the study, it would be helpful to clarify the points below.

1. Consider using full names Bombus, Coccinellidae, Coleoptera, Graphosoma etc.
for Figures, Tables and within the text, as it improves the overall readability of the study, instead of “bee_bombus”, “beetle_cocci”, “bug”, “bug_grapho”, etc.

2. The frequent mention of Python scripts, R scripts, or output files using their specific output format (.csv) in the text can be reduced to enhance readability. Instead, it is recommended to use a simpler phrase such as "The data output for automated insect monitoring" rather than specifying "The data output from the Python script for automated insect monitoring."

3. Similar to the statement in lines 442-444, "with the default settings (“-min_tracks 3” and “-max_tracks 1800”) to exclude all tracking IDs with less than three or more than 1,800 images," there are instances where the text reads like camera trap documentation, which is somewhat understandable. However, a paper presenting impressive results like this should strive for good readability.

4. The paper includes a valuable GitHub repository, which is highly commendable. Therefore, it is suggested to remove all comments regarding code or scripts from the manuscript, including the entire section 2.4. Data Visualization, to enhance readability.

5. I hope I didn't miss it, but I was curious why the focus was solely on hoverflies in the results and particularly in the discussion, considering that the camera and model classified other intriguing insects as well. It may be beneficial to address this in the introduction, as it was rather surprising.

6. Starting at Fig. 6 (Fig. 8, S2 Table, line 274), the text “The model was trained on a custom dataset with 21,000 images” is repeated four times throughout the manuscript. I think it is enough to mention it once in detail and the other times in a shortened version.

7. Fig. 2 is highly impressive. I would suggest that, since it falls under the processing pipeline section, some essential details about the YOLO detection model and its hyperparameters could be included.
Additionally, it might be beneficial to incorporate information about the post-processing classification stage, the EfficientNet, and a few minor but significant hyperparameters.

8. If I understood correctly, the YOLO models for insect detection were trained on 1,335 images for 300 epochs, while the EfficientNet model for insect classification was trained on 21,000 images for 20 epochs. It would be helpful if you could provide a clearer explanation for this choice of experimental setup, particularly addressing and discussing any concerns regarding potential overfitting when training with just 1,335 images for 300 epochs.

9. Additionally, it could be beneficial to mention that ResNet50, the YOLO standard backbone, and EfficientNet were employed in the study, with the final decision to use EfficientNet based on its superior performance.

10. The caption for Fig. 8 is too lengthy; consider providing a more detailed explanation in Section 3.2 on Insect Classification Validation.

Reviewer #2: This paper presents an automated, do-it-yourself camera trap system for monitoring flower-visiting insects. The system consists of two components: a real-time camera with a deep learning-based object detector that identifies and captures insect images on an artificial platform, and an insect classification model that identifies species from captured images. The camera trap's accuracy was tested in a controlled laboratory experiment using hoverflies as a test species. The classification model was validated using images captured by deploying the camera trap at test sites. Additionally, the authors conducted a brief case study demonstrating the system's capabilities by analyzing data related to hoverfly behavior. The authors employed appropriate methods and provided detailed documentation and guidance to ensure the camera trap's reproducibility for non-expert users, encouraging citizen science engagement in insect monitoring. This is a significant contribution of the paper.
However, I believe some issues related to the camera trap's evaluation should be addressed before accepting the paper for publication. If these concerns can be satisfactorily addressed, I recommend accepting the paper for publication in PLoS ONE.

--------------------
Major comments
--------------------

(1) A key strength of this paper, compared to the existing literature discussed in the introduction, lies in its development of a camera trap system built with open-source software. This system, accessible to non-experts in computer science or engineering, addresses a significant gap in the field. However, the introduction currently lacks a clear explanation of this gap and its significance. To provide readers with a better understanding of this paper's contributions, I recommend the authors include a concise description of this research gap before introducing current research at Line 93.

(2) While the introduction mentions real-time processing, it doesn't fully justify its preference over offline processing. Providing more detail on this aspect would benefit the reader's understanding of the proposed system's necessity.

(3) The abstract (Line 34) and Discussion section (Line 587) state that "on-device detection and tracking reliably estimated insect activity/abundance...". I am a bit confused by this statement, as under the insect track evaluation results (Line 390) I did not find any quantitative metrics that were calculated to measure the reliability or accuracy of insect tracking to back up the above-mentioned statement. While there are standard methods to evaluate the accuracy of a multi-object tracking problem (e.g. Luiten et al., "HOTA: A Higher Order Metric for Evaluating Multi-Object Tracking", IJCV 2020), I do not think a complete evaluation of tracking is necessary, as the final measurement of the system is the number of insects landed on the platform.
However, as the reliability and accuracy of insect counting is integral to the system, some sort of quantitative metric is necessary. The authors have attempted to address this by comparing the total number of automatically detected insect tracks to those observed visually. I do not agree that this is an appropriate way to measure the reliability, because the metric itself is susceptible to misleading results due to potential cancellation of false positives and negatives. Alternatively, I recommend presenting separate counts for True Positives, False Positives, and False Negatives generated by the on-device tracking. These counts can then be used to calculate Precision, Recall, and F-score for the camera trap's detection of hoverflies (https://en.wikipedia.org/wiki/Precision_and_recall). Similar metrics have been used by other research cited in this paper (e.g. 17, 48). Presenting these final average metrics (Precision, Recall, F-score) in the abstract and Discussion/Conclusion sections would provide a more quantitative and reliable measure of the system's detection accuracy.

(4) In the Introduction (Line 102), Methods (Line 191) and Discussion (Line 588), the authors state that "...the whole pipeline runs at ~12.5 fps (1080p HQ frame resolution), which is sufficient to reliably track most insects...". However, the evaluations presented in the insect tracking section (Line 320) utilize 4K HQ frames. This inconsistency raises concerns about which resolution was used for the actual classification task, as the authors mention both 1080p and 4K image synchronization throughout the manuscript. Since image resolution can significantly impact classification accuracy, it's crucial to clarify this point. If the camera trap system and its classification model were indeed tested on 4K captured (and cropped) images, reporting the processing speed for 4K frames would be more accurate and reflect the true operational speed of the system.
Additionally, a brief discussion on the trade-offs between speed and resolution, and how these choices affect the monitoring of different insect species, would be valuable for readers.

(5) I appreciate the detailed hardware assembly instructions provided in the documentation. However, it's currently unclear how the individual components connect to each other. To address this, I suggest incorporating a simple hardware schematic (complementing Figure 1) to visually illustrate the connections between each component.

(6) Using artificial flower visits as a proxy for insect counts may not be as accurate as direct flower observations. This is because insect visitation to artificial flowers can be influenced by various characteristics of the flowers themselves. Please include a brief discussion on how the choice of platform materials and colors could affect capture rates, thereby improving the reliability of this method.

(7) Lines 87-89: The detection rate and accuracy of a deep learning model depend on various factors, including its architecture and the quality of the training dataset. Hence, the use of deep learning-based models for insect detection can result in false negative detections, leading to underestimating insect counts. Please briefly mention this in the introduction and further discuss how this drawback could be overcome in the Discussion section.

--------------------
Minor Comments
--------------------

Presentation Structure: The authors have presented this study in easy-to-understand, clear language. However, the structure of the manuscript was confusing to me. I would like to propose that the authors consider restructuring the manuscript's Methods and Materials and Results sections. Here, the Methods and Materials section would contain two subsections on the Camera Trap and the Insect Classification model and only contain the methodology associated with each.
The Results section (or Experiments and Results) would contain all the experimental evaluations, including results of the YOLO models, the classification model, field deployment etc., divided among two subsections on the Camera Trap and the Insect Classification model.

Line 24: Not all traditional monitoring methods (e.g. focal flower observations or quadrat observations, transect walks) may provide data with high taxonomic resolution.

Lines 68 and 71: Other research that uses motion detection for insect monitoring includes “van der Voort, Genevieve E., et al. ‘Continuous video capture, and pollinia tracking, in Platanthera (Orchidaceae) reveal new insect visitors and potential pollinators.’ PeerJ 10 (2022): e13191.” and “Steen, Ronny. ‘Diel activity, frequency and visit duration of pollinators in focal plants: in situ automatic camera monitoring and data processing.’ Methods in Ecology and Evolution 8.2 (2017): 203-213.”

Line 106: Please provide an appropriate reference and data on the speed of the hoverfly species.

Line 123: Is 91 Wh the combined capacity of the two batteries?

Line 127: What are the dimensions of the platform?

Line 131: I suggest including the component list and associated cost values also in the supporting materials. This is because the cost provided in the manuscript may change over time.

Line 158: It is unclear what other types of homogeneous backgrounds the YOLO model was tested with. Could you please clarify?

Line 161: Why were the metrics not calculated for the test split?

Line 178: Please provide references for the Kalman filter and the Hungarian algorithm.

Line 191: [See Major comments 3 and 4]

Line 199: Please include the type of metadata recorded in the figure caption.

Line 205: Please include more information on how the power consumption was measured. What device did you use to measure the energy consumption? Under what ambient conditions (temperature, humidity) was the test conducted? Was the solar panel connected to the system during this test?
Were the 5 insects tracked simultaneously or sequentially during the test? Also, what was the camera resolution?

Line 208: Was the estimate of 20 hours calculated considering the threshold value mentioned in line 210?

Line 229: Could you please explain why YOLOv5 was first applied to the captured images prior to classification, instead of directly using EfficientNet-B0 on the captured images?

Line 234: Please provide a reference for the EfficientNet-B0 model.

Line 259: Could you please explain why high inference speed is critical for this step? As the images are classified offline and the main aim is to achieve the best classification accuracy, shouldn't accuracy be prioritized over inference speed?

Line 313: Please change “mm” to millimeters.

Line 324: Please provide the video settings used by the smartphone camera, including its resolution and framerate. Also, could you please provide more detail on the speed the smartphone camera video was played at (e.g. 50% of original speed)?

Line 364: Please explain why a threshold of 70% was used. Why not set a lower value, allowing more data to be recorded?

Line 416: I suggest the authors include the S3 Table in the manuscript, as it reflects the performance of the classification model on real-world data. Also, Table 2 can be moved to the Supplementary materials.

Line 516: I suggest the authors present the analysis of hoverfly behavior under a subsection “Example data analysis” or “Case study”.

Line 579: Could you please provide any references to support the statement that the artificial flower platform can be standardized similarly to yellow pan traps? [Also see Major comment 6]

Line 582: Please change “camera trap system” to “camera trap hardware”, as the software system was not evaluated for monitoring insects visiting real flowers or in outdoor settings.

Line 583: I agree with the authors that the presented hardware system can be used to monitor insects visiting real flowers.
However, it is unclear how the software solution will translate to monitoring insect visits to real flowers. Basic monitoring of insect visits to real flowers requires detecting insects in an image, obtaining their coordinates, tracking their position and movements with changes in the environment and their posture, and comparing insect coordinates with flower coordinates to identify flower visits. Could you please mention how the presented methods can achieve these requirements, or alternatively present these requirements as future work? This could be an expansion of the discussion presented under the Insect detection and tracking subsection in the Discussion section.

Line 603: Under the Insect detection and tracking section, please include a discussion on how the methods presented in this study can be implemented with different IoT platforms and how the development of more efficient computational platforms can leverage the results of this study.

Lines 609, 611, 625: The studies cited in [45,46,48] can also be discussed in the introduction section.

Line 611: Other research that used motion enhancement includes “Ratnayake, Malika Nisal, Adrian G. Dyer, and Alan Dorin. ‘Tracking individual honeybees among wildflower clusters with computer vision-facilitated pollinator monitoring.’ PLoS ONE 16.2 (2021): e0239504.”

Lines 620-622: Currently there is research being conducted on re-identification of insects (see Borlinghaus, Parzival, Frederic Tausch, and Luca Rettenberger. “A Purely Visual Re-ID Approach for Bumblebees (Bombus terrestris).” Smart Agricultural Technology 3 (2023): 100135). Please discuss the possibility of using a similar mechanism in the proposed study to improve its sampling accuracy.

Fig 2: Please label the purple line from the Script node to the cropped detections.

Fig 4: Please indicate the start and end of the recording period.

Fig 6 and Fig 8: Please label the color bar.

Fig 7: I suggest removing the “15 min recording” from the axis titles to simplify the plot.
Also rename the y axis to “Ground truth” or “Manual video observations” and the x axis to “Camera trap recordings”.

Fig 11: I suggest moving this figure to the supplementary materials. The results presented in 11F are a bit confusing, as the camera traps were not all deployed for the same period of time. I recommend removing plot 11F. Could you please provide more information on how sunshine was measured? Please adjust the secondary y axis scale to match that of the primary y axis. Also include a legend describing what each line in the plot represents.

Fig 12: Please make all y axis scales in 12A-12E the same to enable easy comparison of data across camera traps.

Fig 14: As the recording time per day varies between camera traps, I suggest normalizing the number of unique tracking IDs by the recording time. Please make all y axis scales in 14A-14E the same to enable easy comparison of data across camera traps.

Fig 15: As the discussion does not extensively analyze the relationship between rainfall and hoverfly activity, this figure can be removed.

**********

6. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files. If you choose “no”, your identity will remain anonymous but your review may still be made public. Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No
Reviewer #2: No

**********

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]
While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.
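Reviewer #2's major comment (3) asks for Precision, Recall, and F-score computed from separate True Positive, False Positive, and False Negative counts instead of a simple comparison of totals. As a minimal sketch of the calculation the reviewer describes (the counts below are hypothetical, not taken from the manuscript):

```python
def detection_metrics(tp: int, fp: int, fn: int) -> tuple[float, float, float]:
    """Precision, recall and F-score from raw counts of true positives,
    false positives and false negatives (guarding against zero division)."""
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f_score = (2 * precision * recall / (precision + recall)
               if (precision + recall) else 0.0)
    return precision, recall, f_score

# Hypothetical counts for one session: 90 tracks matched to observed visits,
# 10 spurious tracks, 20 observed visits with no matching track.
precision, recall, f_score = detection_metrics(tp=90, fp=10, fn=20)
print(f"precision={precision:.3f} recall={recall:.3f} F={f_score:.3f}")
# → precision=0.900 recall=0.818 F=0.857
```

Here a true positive would be an on-device track matched to a visually observed insect visit, a false positive an unmatched track, and a false negative an observed visit with no corresponding track; unlike comparing totals, these counts cannot cancel each other out.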
Revision 1
PONE-D-23-38812R1
Insect Detect: An open-source DIY camera trap for automated insect monitoring
PLOS ONE

Dear Dr. Sittinger,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE's publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

Please submit your revised manuscript by Mar 29 2024 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

Please include the following items when submitting your revised manuscript:
If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: https://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols.

We look forward to receiving your revised manuscript.

Kind regards,
Ramzi Mansour
Academic Editor
PLOS ONE

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the “Comments to the Author” section, enter your conflict of interest statement in the “Confidential to Editor” section, and submit your "Accept" recommendation.

Reviewer #1: (No Response)
Reviewer #2: All comments have been addressed

**********

2. Is the manuscript technically sound, and do the data support the conclusions? The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Yes
Reviewer #2: Yes

**********

3.
Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes
Reviewer #2: N/A

**********

4. Have the authors made all data underlying the findings in their manuscript fully available? The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes
Reviewer #2: Yes

**********

5. Is the manuscript presented in an intelligible fashion and written in standard English? PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes
Reviewer #2: Yes

**********

6. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: The paper titled "Insect Detect: An Open-Source DIY Camera Trap for Automated Insect Monitoring" represents a commendable effort, offering a hardware solution for insect counting in natural environments and their classification into taxonomic orders, thereby providing significant value.
This contribution to the scientific community, encompassing both the hardware solution and the open-source code, underscores its importance. However, regrettably, the text itself presents challenges in readability. It's not the language per se, but rather the overall structure and organization of section headings that can be perplexing. In my view, the authors should strongly consider revising some section headings and streamlining certain portions of the paper to enhance its accessibility and, consequently, its impact as a valuable scientific resource. Below, I outline specific questions regarding these aspects.

• Figure 5 could benefit from improvement. It's unclear why there are x and y dimensions labeled for all images, especially considering they are of different sizes. Shouldn't the images be resized to a consistent size? Removing the axes would free up space, allowing for slightly larger images and thus improving visibility.

• In the paragraph beginning at line 285, you mention that the dataset is divided into 80% for training, 10% for validation, and 10% for testing. Could you clarify how this division is achieved? Is it done randomly or as 80%, 10%, 10% for each class?

• In the paragraph starting at line 285, you mention that you are utilizing the YOLOv5 classifier. Could you provide insights into any differences between training a typical ResNet50 versus a YOLOv5 ResNet50? Your commentary on this topic would be greatly appreciated.

• In the paragraph starting at line 285, you mention using an image size of 128x128 for insect classification and 320x320 for initial insect detection. While the results are promising at these resolutions, could you comment on whether using a larger image size would further improve results? Additionally, considering hardware constraints, especially if this is not being done on the Pi module, how does this impact your choice of image size?

• I find the section headings confusing, particularly regarding the processing pipeline.
Starting with section 2.2 on software, why is insect detection (2.2.1) not within the processing pipeline (2.2.2)? In my opinion, the software section should be renamed "Processing Pipeline", or better yet, remove the hardware and software distinctions entirely and replace them with more thematically fitting headings, such as 2.1. Pi module - Insect Detect and 2.2. Processing Pipeline.

• In line 183, Table 1, there are results for insect detection in the methods section. Shouldn't these results be in the Results section instead?

Reviewer #2: This paper introduces an automated, do-it-yourself camera trap system for studying flower-visiting insects. The system comprises two key components: a real-time camera equipped with a deep learning-based object detector, which identifies and captures images of insects landing on an artificial platform, and an insect classification model that analyzes the captured images to identify the species of each insect. The authors have adequately addressed the comments and concerns raised in the previous review, incorporating the necessary changes into the manuscript. I encourage the authors to continue updating and maintaining the documentation and software associated with this research, as it provides a valuable tool for researchers and citizen scientists engaged in insect monitoring studies.

**********

7. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files. If you choose “no”, your identity will remain anonymous but your review may still be made public. Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No
Reviewer #2: No

**********

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site.
Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step. |
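Reviewer #1's question about the 80/10/10 split distinguishes a single random shuffle of the whole dataset from a stratified split that keeps the 80/10/10 proportions within each class. The manuscript excerpt here does not say which the authors used; purely as an illustration of the stratified option, a per-class split could be sketched as below. The `stratified_split` helper and the file names are hypothetical, not taken from the Insect Detect code.

```python
import random
from collections import defaultdict

def stratified_split(samples, train_frac=0.8, val_frac=0.1, seed=42):
    """Split (item, label) pairs into train/val/test per class (stratified)."""
    rng = random.Random(seed)
    by_class = defaultdict(list)
    for item, label in samples:
        by_class[label].append(item)
    splits = {"train": [], "val": [], "test": []}
    for label, items in sorted(by_class.items()):
        rng.shuffle(items)                      # shuffle within each class
        n_train = int(len(items) * train_frac)  # 80% of this class
        n_val = int(len(items) * val_frac)      # 10% of this class
        splits["train"] += [(i, label) for i in items[:n_train]]
        splits["val"] += [(i, label) for i in items[n_train:n_train + n_val]]
        splits["test"] += [(i, label) for i in items[n_train + n_val:]]
    return splits

# demo with an imbalanced two-class dataset (hypothetical file names)
data = [(f"wasp_{i}.jpg", "wasp") for i in range(60)] + \
       [(f"fly_{i}.jpg", "fly") for i in range(40)]
splits = stratified_split(data)
```

With 60 images of one class and 40 of another, each class contributes 80/10/10 of its own images, so even rarer classes are guaranteed to appear in the validation and test sets; a single random split over the pooled dataset cannot guarantee this.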
| Revision 2 |
|
Insect Detect: An open-source DIY camera trap for automated insect monitoring PONE-D-23-38812R2

Dear Dr. Sittinger,

We're pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements. Within one week, you'll receive an e-mail detailing the required amendments. When these have been addressed, you'll receive a formal acceptance letter and your manuscript will be scheduled for publication. An invoice for payment will follow shortly after the formal acceptance.

To ensure an efficient process, please log into Editorial Manager at http://www.editorialmanager.com/pone/, click the 'Update My Information' link at the top of the page, and double check that your user information is up-to-date. If you have any billing related questions, please contact our Author Billing department directly at authorbilling@plos.org.

If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact. If they'll be preparing press materials, please inform our press team as soon as possible, and no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

Kind regards,
Ramzi Mansour
Academic Editor
PLOS ONE |
| Formally Accepted |
|
PONE-D-23-38812R2 PLOS ONE

Dear Dr. Sittinger,

I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now being handed over to our production team.

At this stage, our production department will prepare your paper for publication. This includes ensuring the following:
* All references, tables, and figures are properly cited
* All relevant supporting information is included in the manuscript submission
* There are no issues that prevent the paper from being properly typeset

If revisions are needed, the production department will contact you directly to resolve them. If no revisions are needed, you will receive an email when the publication date has been set. At this time, we do not offer pre-publication proofs to authors during production of the accepted work. Please keep in mind that we are working through a large volume of accepted articles, so please give us a few weeks to review your paper and let you know the next and final steps.

Lastly, if your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

If we can help with anything else, please email us at customercare@plos.org. Thank you for submitting your work to PLOS ONE and supporting open access.

Kind regards,
PLOS ONE Editorial Office Staff
on behalf of Dr. Ramzi Mansour
Academic Editor
PLOS ONE |
Open letter on the publication of peer review reports
PLOS recognizes the benefits of transparency in the peer review process. Therefore, we enable the publication of all of the content of peer review and author responses alongside final, published articles. Reviewers remain anonymous, unless they choose to reveal their names.
We encourage other journals to join us in this initiative. We hope that our action inspires the community, including researchers, research funders, and research institutions, to recognize the benefits of published peer review reports for all parts of the research system.
Learn more at ASAPbio.