Peer Review History
Original Submission (May 15, 2025)
Dear Dr. Deady,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE's publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

Please submit your revised manuscript by Aug 02 2025 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.
If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: https://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols.

We look forward to receiving your revised manuscript.

Kind regards,
Fatma Refaat Ahmed, Ph.D.
Academic Editor
PLOS ONE

Journal Requirements:

When submitting your revision, we need you to address these additional requirements.

1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf

2. We note that the original protocol that you have uploaded as a Supporting Information file contains an institutional logo. As this logo is likely copyrighted, we ask that you please remove it from this file and upload an updated version upon resubmission.

3. Thank you for stating the following in the Competing Interests section: "MD, DM, DAJC, SBH developed the Build Back Better app, they receive no benefit from this program." Please confirm that this does not alter your adherence to all PLOS ONE policies on sharing data and materials, by including the following statement: "This does not alter our adherence to PLOS ONE policies on sharing data and materials." (as detailed online in our guide for authors http://journals.plos.org/plosone/s/competing-interests). If there are restrictions on sharing of data and/or materials, please state these. Please note that we cannot proceed with consideration of your article until this information has been declared. Please include your updated Competing Interests statement in your cover letter; we will change the online submission form on your behalf.

4. In the online submission form, you indicated that "Data cannot be shared publicly as this was not clearly stated within the original ethics. However, deidentified data can be made available upon reasonable request to the authors." All PLOS journals now require all data underlying the findings described in their manuscript to be freely available to other researchers, either 1. In a public repository, 2. Within the manuscript itself, or 3. Uploaded as supplementary information. This policy applies to all data except where public deposition would breach compliance with the protocol approved by your research ethics board. If your data cannot be made publicly available for ethical or legal reasons (e.g., public availability would compromise patient privacy), please explain your reasons on resubmission and your exemption request will be escalated for approval.

5. Please include captions for your Supporting Information files at the end of your manuscript, and update any in-text citations to match accordingly. Please see our Supporting Information guidelines for more information: http://journals.plos.org/plosone/s/supporting-information.
Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. Is the manuscript technically sound, and do the data support the conclusions?
Reviewer #1: No
Reviewer #2: Yes
Reviewer #3: Partly
Reviewer #4: Yes

2. Has the statistical analysis been performed appropriately and rigorously?
Reviewer #1: Yes
Reviewer #2: Yes
Reviewer #3: No
Reviewer #4: No

3. Have the authors made all data underlying the findings in their manuscript fully available? (The PLOS Data policy)
Reviewer #1: No
Reviewer #2: Yes
Reviewer #3: No
Reviewer #4: No

4. Is the manuscript presented in an intelligible fashion and written in standard English?
Reviewer #1: Yes
Reviewer #2: Yes
Reviewer #3: Yes
Reviewer #4: Yes

Reviewer #1: Thanks for the opportunity to review this manuscript. Overall, the science is fine and it is well written. I have significant concerns about the rationale for the study and the interpretation of the findings, as follows:

The introduction, and therefore the discussion, fails to address the well-known engagement problem in digital mental health. There are hundreds of thousands of apps, fewer than 5% are used at 30 days, and there are well-known severe issues with engagement in digital mental health apps (see https://pubmed.ncbi.nlm.nih.gov/29871870/). The rationale for this study must be made in light of this problem. What did this study do to address the known challenges of engagement, and where it did not address these issues, why?

The manuscript conflates the methods of past studies and this study. Was co-design done as part of this project? If so, please elaborate on how, when, what, etc. in the methods and results. If not, please move the co-design findings out of the method and into the introduction, clearly expand on how the co-design was conducted, and critically, why? Headgear was an effective intervention; you co-designed a new intervention (why? how?) that was no longer effective.
This must be explained in the introduction and revisited in the discussion. The main point missing from the discussion is how co-design improvements to Headgear resulted in a non-effective intervention. Please interrogate the co-design approach, the changes that were made, and what the broader digital mental health literature can tell us to explain this unexpected result.

The power calculation describes a prevention study; the title and method describe a treatment study. In addition, why wasn't the Headgear study used to power this study?

The study cycles between the terms usability, satisfaction, usage, and acceptability with inconsistency (see https://pubmed.ncbi.nlm.nih.gov/30914003/). The aims in the introduction need to be clearer about how these constructs will be assessed and what thresholds for satisfaction and usability will be set. It is not sufficient to say satisfaction was higher; what was the pre-study threshold?

The method describes qualitative data that are not presented, and the results for all 12 user engagement and app-related feedback questions are not presented. Table 6 presents means that are difficult to interpret: what does a few hundred opens actually mean?

Reviewer #2: Thank you for the opportunity to review the manuscript entitled "Efficacy of a smartphone app to improve mental health among emergency service workers: A randomised controlled trial". It is well written and addresses an important topic. Below are some comments for consideration:

Lines 79-84: Few studies have been conducted on emergency service workers (ESW) using digital interventions. What might be the reasons for this? Is it due to the demanding nature of their work, which may limit the use of electronic devices during work hours? This is a crucial point, as it could impact adherence to the intervention. More evidence is needed to explore behavioral habits regarding technology use as a tool for addressing mental health issues in this group.
Line 88: Please provide a brief overview of the acceptability, acceptance, and preferences of ESW regarding the pilot study outcomes. What changes were implemented from the pilot results to inform the randomized controlled trial (RCT)?

Lines 130-132: All study participants were instructed to download the Build Back Better smartphone app, including a login code tailored to their group allocation. Can participants log in on multiple devices, as long as they enter the correct code? If so, this raises concerns about the potential for participants to access the app on multiple devices or on devices belonging to others, which could lead to contamination.

Line 159: Regarding the content of the intervention (1. mindfulness, 2. healthy coping, 3. managing thoughts, and 4. valued actions): why were these aspects selected? How was this content validated? Please describe the intervention in detail, including session structure, duration, and how participants can seek support or ask questions related to this content.

Lines 166-167: The app provides links and phone numbers to various mental health and workplace support services. What is the purpose of including this content? How can you determine whether the effectiveness observed in participants is attributable to the app intervention or whether it results from direct contact with mental health services?

Line 177: What are the psychometric properties of the instruments used in the study?

Table 6: In the app activity usage patterns, the monthly open frequency (average number of times an activity is accessed) is reported. How much time do participants spend on the app? What measures are in place to ensure participants engage with all the content within the app?

Reviewer #3: The present paper reports on a two-arm repeated-measures randomized trial aimed at improving mental health among emergency service workers.
The intervention arm includes the full version of the Build Back Better app, while the control arm involves symptom tracking only. Each participant is assessed at three time points: baseline, 4 weeks post-baseline, and 3-month follow-up. The manuscript has several methodological concerns that warrant attention:

Sample Size Calculation in Repeated Measures Design: Sample size estimation for repeated-measures trials is complex and typically requires specifying the intra-class correlation (ICC) or within-subject correlation. This critical parameter appears to be missing in the current manuscript. Without it, the sample size justification is incomplete.

Multiple Primary Outcomes: Pages 8-10 describe six different outcomes, all of which seem to be treated as primary. This raises serious concerns about inflated Type I error rates due to multiple comparisons. Unless corrected (e.g., through statistical adjustment or designating a single primary outcome), the study is likely underpowered to detect significant effects across all six measures. The absence of a clearly defined primary outcome is a major design flaw.

High Dropout Rate and Underpowering: The reported dropout rate of 30% is substantial. While the authors have attempted to adjust the sample size accordingly, Figure 1 (CONSORT diagram) shows only a total of 171 participants completing the trial. If this is the number used for analysis at the primary endpoint, the study may be underpowered even for a single outcome. Additionally, Figure 1 is difficult to read and should be revised for clarity.

Handling of Dropout and Partial Data: It is unclear how data from participants who dropped out but provided partial data were handled. Were these data included in any form of mixed-effects model or imputed? Clarification is needed on how missing data were addressed, particularly for a longitudinal design.
Power Analysis Inconsistency: On page 16, the authors report testing an interaction term for statistical significance, yet the power analysis on page 12 appears to be based on detecting a main effect only. A valid power analysis must align with the actual statistical model used in the analysis. If the interaction term is of interest, it should have been included in the original power estimation.

Conclusion: I believe the paper has the potential to make a valuable contribution, but in its current form it requires major revision. A well-defined primary outcome and a properly aligned power analysis would go a long way in strengthening the manuscript. Additionally, clarifying how missing data were handled and revising the presentation of the CONSORT diagram would enhance transparency and rigor.

Reviewer #4: This paper reports a primary analysis of an RCT comparing a digital intervention with a monitoring-only digital intervention for emergency service workers. There was limited evidence for differential efficacy between the two groups, with both showing comparable positive outcomes across a range of primary and secondary outcomes. The paper is well written; however, in my opinion, deeper reflection is needed on the dropout and engagement patterns, and on what this means for digital interventions in this space (e.g., early intervention/indicated prevention in ESWs).

Major Comments

- Method: Please explain the rationale for choosing a self-initiated mood/activity log rather than a protocol typically used in experience sampling studies (i.e., questionnaires sent at fixed/random/semi-random times each day). Whilst event-related approaches are common for behaviours like alcohol use, they are less common for internal experiences like mood, and studies usually have low compliance in these designs. This could also be a Discussion point for low engagement in both conditions.

- Data analysis, page 13: The analysis plan includes multiple testing. What is the rationale for not correcting for multiple tests? Does controlling for multiple testing influence the pattern of results?

- Data analysis, page 13, lines 289-291: Explain the thresholds for non-engagers, minimal engagers, and engagers. They seem quite arbitrary (e.g., why not use a continuous variable?), and it would be worth including this as a broader point in the discussion (including the lack of guidelines around what acceptable engagement actually is in digital interventions and monitoring; e.g., see the experience sampling literature).

- Discussion: The authors acknowledge in the Discussion the high amount of dropout from baseline to T1 and T2, and the low engagement, in both conditions (but particularly in the full intervention condition). Related to the point above, perhaps the instruction given to participants influenced engagement (page 6, line 132: "Pts were asked to use the app consistently for 30 days"). On the one hand, this instruction is vague. On the other hand, if pts were instructed to use the app/s flexibly, perhaps the low compliance reflects just that (e.g., they used it when they needed to). The Discussion can be enhanced by elaborating on these points in the context of the specific intervention/s but also more broadly in terms of what it means for "preventive" digital interventions (or interventions more generally) for this population.

- Discussion: The disconnect between very low engagement and relatively positive app feedback is a surprising result which has not been adequately addressed. I recommend adding a discussion point about why this disconnect might be present in this group and whether positive feedback is enough to recommend broader use/scale-up (i.e., what can we really do with this information given that the full intervention was not really used and did not improve most outcomes compared to the control?). Relatedly, is this disconnect common in digital intervention studies, or within this sample?
Minor Comments

- The authors mention in the introduction and methods that the app was co-designed with ESWs. Please specify the level of co-design according to established models: e.g., did they actively collaborate with and make decisions together with the research team, or was it still a top-down approach to development (with researchers/clinicians making the final decisions)? This could add to the authors' point in the Discussion about app development and dropout/engagement.

- Page 4, lines 78-79: "However, less is known on the equivalence of app-based interventions compared to browser-based digital programs [21]." This makes it seem like the trial will be comparing these two variants or focusing on browser-based programs. I recommend removing or revising so that it aligns with the approach of the current study.

- Method: Please include examples of the mood and activity tracking questions (e.g., in the supplementary materials).

- Methods: Usability and acceptability are broad terms that are defined differently across studies and within the implementation literature. Please define these concepts and link them to previous literature. Relatedly, please describe how the engagement and app-related feedback questionnaire was developed (was it based on a previous study? Was it developed as a bespoke questionnaire?).

- Results: Given that the purpose of the RCT is to compare the effect of the two groups across time, rather than within-group changes, I suggest summarising these results clearly first.

- Page 16, line 341: In the primary outcomes section, it is written that the interaction "seldom" reached significance. No statistical tests for the interaction were significant; please reword to remove ambiguity.

- Discussion, page 29: Please revise this sentence so that the meaning is clear: "Nevertheless, although both mindfulness and behavioural activation formed part of the Build Back Better content, as part of the series of skills presented and may not have been engaged with in sufficient depth to elicit effect."

Do you want your identity to be public for this peer review? If published, this will include your full peer review and any attached files. If you choose "no", your identity will remain anonymous but your review may still be made public. For information about this choice, including consent withdrawal, please see our Privacy Policy.
Reviewer #1: No
Reviewer #2: Yes: Cho Lee Wong
Reviewer #3: No
Reviewer #4: No

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, log in and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.
Revision 1
Dear Dr. Deady,

Please submit your revised manuscript by Dec 20 2025 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.
If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: https://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols.

We look forward to receiving your revised manuscript.

Kind regards,
Fatma Refaat Ahmed, Ph.D.
Academic Editor
PLOS ONE

Journal Requirements:

1. If the reviewer comments include a recommendation to cite specific previously published works, please review and evaluate these publications to determine whether they are relevant and should be cited. There is no requirement to cite these works unless the editor has indicated otherwise.

2. Please review your reference list to ensure that it is complete and correct. If you have cited papers that have been retracted, please include the rationale for doing so in the manuscript text, or remove these references and replace them with relevant current references. Any changes to the reference list should be mentioned in the rebuttal letter that accompanies your revised manuscript. If you need to cite a retracted article, indicate the article's retracted status in the References list and also include a citation and full reference for the retraction notice.

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

Reviewer #5: All comments have been addressed
Reviewer #6: (No Response)

2. Is the manuscript technically sound, and do the data support the conclusions?
Reviewer #5: Yes
Reviewer #6: Partly

3. Has the statistical analysis been performed appropriately and rigorously?
Reviewer #5: Yes
Reviewer #6: N/A

4. Have the authors made all data underlying the findings in their manuscript fully available? (The PLOS Data policy)
Reviewer #5: No
Reviewer #6: Yes

5. Is the manuscript presented in an intelligible fashion and written in standard English?
Reviewer #5: Yes
Reviewer #6: Yes

Reviewer #5: Thank you for the opportunity to conduct a second-look review. I was asked to comment on a manuscript that has already undergone extensive peer review, and I have therefore focused my assessment on whether the authors have addressed the editor's and reviewers' first-round comments. It would be unfair of me to provide a fresh round of comments at this stage. The authors have, in my view, largely addressed the prior comments through concrete textual revisions, clearer reporting, and additional analyses. A small number of issues are acknowledged rather than fully resolved.

Specific points addressed:
- Formatting, file naming, and figure/table placement have been corrected.
- The data availability statement has been clarified and the authors request an ethics-based exemption; I note this requires an editorial decision, as it isn't clear why the committee are blocking public sharing.
- The description of the power analysis has been aligned with the tests actually conducted; explicit between-group results at follow-ups have been added.
- Missing-data handling is now described (linear mixed models with maximum likelihood under a MAR assumption).
- The results and discussion have been adjusted to acknowledge low engagement and minimal between-group differences, with implications for unguided digital interventions made more explicit.

On balance, the first-round comments have been adequately addressed. I recommend acceptance subject to the editorial team being satisfied with the data availability wording and any minor textual edits they may request.
Reviewer #6: The authors have made a commendable effort to address the concerns raised in the previous review round, and the manuscript has been improved. However, some questions remain.

- Please clarify which software or method was used for the power analysis (e.g., G*Power, an R package, or other).

- The use of linear mixed models (LMMs) with maximum likelihood estimation is methodologically appropriate for repeated-measures data; however, the authors indicate that models "did not adjust for baseline scores." The rationale for this decision should be provided.

- Little's MCAR test was non-significant (p = 1.00), suggesting that there was no systematic missingness. However, attrition rates differed significantly between conditions at both follow-ups. This discrepancy indicates that data were unlikely to be missing completely at random (MCAR).

- Including descriptive comparisons of baseline characteristics across engagement groups (non-engagers, minimal engagers, and engagers) would enhance the interpretability of the findings and provide important context regarding potential differences in initial symptom severity or demographics.

- The discussion section is well written, appropriately acknowledging the null results and relevant contextual factors affecting the target population. In fact, the authors noted that the active control condition (mood and behavior tracking) is an important feature of this trial and likely contributed to improvements in both arms. The implications of using an active comparator deserve greater discussion.

- Exploratory findings, particularly the benefits for PTSD symptoms and help-seeking participants, should be presented more cautiously, given the small subsample and missing data.

- Finally, the conclusion that unguided digital interventions may not suit this population is reasonable but could be complemented by more constructive recommendations for future development (hybrid or guided models? adaptive tailoring based on symptom severity?).
Do you want your identity to be public for this peer review? If published, this will include your full peer review and any attached files. If you choose "no", your identity will remain anonymous but your review may still be made public. For information about this choice, including consent withdrawal, please see our Privacy Policy.
Reviewer #5: Yes: Dr Daniel Leightley
Reviewer #6: No

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

To ensure your figures meet our technical requirements, please review our figure guidelines: https://journals.plos.org/plosone/s/figures. You may also use PLOS's free figure tool, NAAS, to help you prepare publication-quality figures: https://journals.plos.org/plosone/s/figures#loc-tools-for-figure-preparation. NAAS will assess whether your figures meet our technical requirements by comparing each figure against our figure specifications.
Revision 2
Efficacy of a smartphone app to improve mental health among emergency service workers: A randomised controlled trial (PONE-D-25-23028R2)

Dear Dr. Deady,

We're pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements. Within one week, you'll receive an e-mail detailing the required amendments. When these have been addressed, you'll receive a formal acceptance letter and your manuscript will be scheduled for publication.

An invoice will be generated when your article is formally accepted. Please note, if your institution has a publishing partnership with PLOS and your article meets the relevant criteria, all or part of your publication costs will be covered. Please make sure your user information is up-to-date by logging into Editorial Manager and clicking the 'Update My Information' link at the top of the page. For questions related to billing, please contact billing support.

If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact. If they'll be preparing press materials, please inform our press team as soon as possible, and no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

Kind regards,
Fatma Refaat Ahmed, Ph.D.
Academic Editor
PLOS ONE

Additional Editor Comments (optional):

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

Reviewer #6: All comments have been addressed
Reviewer #7: All comments have been addressed

2. Is the manuscript technically sound, and do the data support the conclusions?
Reviewer #6: Yes
Reviewer #7: Yes

3. Has the statistical analysis been performed appropriately and rigorously?
Reviewer #6: Yes
Reviewer #7: Yes

4. Have the authors made all data underlying the findings in their manuscript fully available? (The PLOS Data policy)
Reviewer #6: Yes
Reviewer #7: Yes

5. Is the manuscript presented in an intelligible fashion and written in standard English?
Reviewer #6: Yes
Reviewer #7: Yes

Reviewer #6: The authors have addressed all the comments and concerns raised in the prior review round. Clarifications regarding the power analysis, baseline adjustment rationale, and handling of missing data have been included. They have also added descriptive comparisons of engagement groups as requested, which improves the contextualization of the results. The manuscript is clearly written, technically sound, and the conclusions are appropriately matched to the data. Moreover, they have also made the study dataset available. I have no further concerns, and I consider the manuscript suitable for publication.

Reviewer #7: I have been invited to conduct a review on this paper, which has already undergone two previous rounds of review with other reviewers. Therefore, I am focusing my comments and ratings on whether the authors have addressed previous comments or concerns from the previous reviewers. In my opinion, the authors have adequately addressed the previous comments, with a minimal data set now being made publicly available. I recommend acceptance.

Do you want your identity to be public for this peer review? If published, this will include your full peer review and any attached files. If you choose "no", your identity will remain anonymous but your review may still be made public. For information about this choice, including consent withdrawal, please see our Privacy Policy.
Reviewer #6: No
Reviewer #7: No
Formally Accepted
PONE-D-25-23028R2

Dear Dr. Deady,

I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now being handed over to our production team.

At this stage, our production department will prepare your paper for publication. This includes ensuring the following:
* All references, tables, and figures are properly cited
* All relevant supporting information is included in the manuscript submission
* There are no issues that prevent the paper from being properly typeset

You will receive further instructions from the production team, including instructions on how to review your proof when it is ready. Please keep in mind that we are working through a large volume of accepted articles, so please give us a few days to review your paper and let you know the next and final steps.

Lastly, if your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

You will receive an invoice from PLOS for your publication fee after your manuscript has reached the completed accept phase. If you receive an email requesting payment before acceptance or for any other service, this may be a phishing scheme. Learn how to identify phishing emails and protect your accounts at https://explore.plos.org/phishing. If we can help with anything else, please email us at customercare@plos.org.

Thank you for submitting your work to PLOS ONE and supporting open access.

Kind regards,
PLOS ONE Editorial Office Staff
on behalf of Dr. Fatma Refaat Ahmed
Academic Editor
PLOS ONE
Open letter on the publication of peer review reports
PLOS recognizes the benefits of transparency in the peer review process. Therefore, we enable the publication of all of the content of peer review and author responses alongside final, published articles. Reviewers remain anonymous, unless they choose to reveal their names.
We encourage other journals to join us in this initiative. We hope that our action inspires the community, including researchers, research funders, and research institutions, to recognize the benefits of published peer review reports for all parts of the research system.
Learn more at ASAPbio.