Peer Review History
| Original Submission: March 14, 2025 |
|---|
|
PONE-D-25-06596
Validation of Mentalization Scale (MENT-S) in francophone control and clinical samples
PLOS ONE

Dear Dr. Descartes,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE's publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

Please submit your revised manuscript by Jun 17 2025 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

Please include the following items when submitting your revised manuscript:
If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: https://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols.

We look forward to receiving your revised manuscript.

Kind regards,
Marco Innamorati
Academic Editor
PLOS ONE

Journal Requirements:

When submitting your revision, we need you to address these additional requirements.

1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf and https://journals.plos.org/plosone/s/file?id=ba62/PLOSOne_formatting_sample_title_authors_affiliations.pdf

2. We note that you have indicated that there are restrictions to data sharing for this study. For studies involving human research participant data or other sensitive data, we encourage authors to share de-identified or anonymized data. However, when data cannot be publicly shared for ethical reasons, we allow authors to make their data sets available upon request. For information on unacceptable data access restrictions, please see http://journals.plos.org/plosone/s/data-availability#loc-unacceptable-data-access-restrictions.
Before we proceed with your manuscript, please address the following prompts:

a) If there are ethical or legal restrictions on sharing a de-identified data set, please explain them in detail (e.g., data contain potentially identifying or sensitive patient information, data are owned by a third-party organization, etc.) and who has imposed them (e.g., a Research Ethics Committee or Institutional Review Board, etc.). Please also provide contact information for a data access committee, ethics committee, or other institutional body to which data requests may be sent.

b) If there are no restrictions, please upload the minimal anonymized data set necessary to replicate your study findings to a stable, public repository and provide us with the relevant URLs, DOIs, or accession numbers. Please see http://www.bmj.com/content/340/bmj.c181.long for guidelines on how to de-identify and prepare clinical data for publication. For a list of recommended repositories, please see https://journals.plos.org/plosone/s/recommended-repositories. You also have the option of uploading the data as Supporting Information files, but we would recommend depositing data directly to a data repository if possible.

Please update your Data Availability statement in the submission form accordingly.

3. We note that your Data Availability Statement is currently as follows: "All relevant data are within the manuscript and its Supporting Information files." Please confirm at this time whether or not your submission contains all raw data required to replicate the results of your study. Authors must share the "minimal data set" for their submission. PLOS defines the minimal data set to consist of the data required to replicate all study findings reported in the article, as well as related metadata and methods (https://journals.plos.org/plosone/s/data-availability#loc-minimal-data-set-definition).
For example, authors should submit the following data:

- The values behind the means, standard deviations and other measures reported;
- The values used to build graphs;
- The points extracted from images for analysis.

Authors do not need to submit their entire data set if only a portion of the data was used in the reported study. If your submission does not contain these data, please either upload them as Supporting Information files or deposit them to a stable, public repository and provide us with the relevant URLs, DOIs, or accession numbers. For a list of recommended repositories, please see https://journals.plos.org/plosone/s/recommended-repositories.

If there are ethical or legal restrictions on sharing a de-identified data set, please explain them in detail (e.g., data contain potentially sensitive information, data are owned by a third-party organization, etc.) and who has imposed them (e.g., an ethics committee). Please also provide contact information for a data access committee, ethics committee, or other institutional body to which data requests may be sent. If data are owned by a third party, please indicate how others may request data access.

4. Please amend your list of authors on the manuscript to ensure that each author is linked to an affiliation. Authors' affiliations should reflect the institution where the work was done (if authors moved subsequently, you can also list the new affiliation, stating "current affiliation: ..." as necessary).

5. We note you have included a table to which you do not refer in the text of your manuscript. Please ensure that you refer to Table 1 in your text; if accepted, production will need this reference to link the reader to the table.

6. We notice that your supplementary tables 5 and 6 are included in the manuscript file. Please remove them and upload them with the file type 'Supporting Information'. Please ensure that each Supporting Information file has a legend listed in the manuscript after the references list.
Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Partly
Reviewer #2: Yes
Reviewer #3: Partly

**********

2. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes
Reviewer #2: Yes
Reviewer #3: Yes

**********

3. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data (e.g. participant privacy or use of data from a third party), those must be specified.

Reviewer #1: Yes
Reviewer #2: Yes
Reviewer #3: No

**********

4. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes
Reviewer #2: Yes
Reviewer #3: No

**********

5.
Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: This study evaluated the psychometric properties of the French version of the Mentalization Scale (MentS) in both community and clinical samples. A total of 711 participants, including individuals with borderline personality disorder (BPD), ADHD, and co-occurring BPD and ADHD, completed the scale. Confirmatory factor analysis supported a 27-item, three-factor structure (mentalizing self, mentalizing others, and motivation to mentalize) as optimal for both groups. According to the Authors, the French version of the MentS is suitable for research and clinical use.

The topic of the paper is both timely and interesting. However, there are several important points that require attention. I recommend a thorough and very careful revision of the manuscript to address these issues and strengthen the overall quality of the work.

In the Introduction, the Authors present the RFS scale in a somewhat cursory manner. It would have been beneficial to provide a more comprehensive overview of the RFS, including discussion of related instruments such as the RFQ-8. Notably, the authors do not address several critical issues, frequently highlighted in the literature, regarding the challenges of measuring mentalization with the RFS. As a result, the transition to the MentS scale feels abrupt and insufficiently justified. Furthermore, it is important to acknowledge that, both before and after the development of the MentS, other instruments for assessing mentalization have been introduced. Including references to these alternative measures would have strengthened the context and provided a more balanced perspective.
In the Introduction (lines 80–92), the authors assert that Fonagy and Luyten have addressed and resolved the methodological limitations outlined earlier (lines 77–79). They also cite encouraging data supporting the francophone version of the RFQ. However, the subsequent transition to introducing the Mentalization Scale (MentS) feels abrupt and insufficiently justified. While the MentS is indeed a valuable tool for multidimensional mentalization assessment, the authors do not adequately articulate its comparative advantages over existing measures. A more thorough discussion is needed to clarify why the MentS was selected over alternative instruments, particularly in light of its strengths (e.g., multidimensional structure) and potential limitations. Highlighting how the MentS addresses gaps left by other tools would provide a stronger rationale for its use and enhance the coherence of this section.

In my view, the description of the MentS in the manuscript is insufficient and does not offer readers a clear understanding of the instrument. The structure and dimensions of the MentS are not adequately defined, while greater emphasis is placed on the interpretation of scores. Providing a more detailed explanation of the scale's underlying structure and its specific dimensions would significantly enhance the reader's comprehension and better contextualize the meaning and relevance of the reported scores.

I found the section of the Introduction addressing convergent validity (lines 128 to 134) to be somewhat confusing, possibly due to an oversimplification of the authors' argument. The formulation of multiple hypotheses in the latter part of the introduction is particularly perplexing, as several of these hypotheses, especially the one regarding a negative correlation between the MentS-S scale and self-reported childhood trauma, are not logically substantiated within the text.
Additionally, I am concerned by the decision to assess childhood trauma using a self-report measure, as this approach raises questions about the validity of retrospective reporting.

A seemingly minor point concerns the way references are cited within the text. While I understand that the authors may have chosen this citation format to simplify the preparation of the reference list, it does not conform to any recognized citation style typically used in scientific writing. Adhering to a standard citation style is important for clarity and consistency throughout the manuscript.

Method

In describing the sample, the authors state that they recruited participants for the control group via the Prolific website. Based on my understanding, Prolific is a platform that compensates research participants for their time and contributions. While this approach facilitates rapid and diverse recruitment, it does raise the question of whether such a method increases the likelihood of enrolling so-called "professional participants": individuals who frequently participate in online studies for compensation. It is still possible for samples to include individuals with substantial prior experience in online research, which may influence their responses or introduce certain biases. While Prolific offers valuable tools to mitigate the risk of recruiting predominantly "professional participants," researchers should remain mindful of this potential limitation and consider reporting the level of participant experience in their sample description.

Including a brief description of the MentS and reporting Cronbach's alpha values at the end of the Participants paragraph is unusual and disrupts the logical flow of the manuscript. The Participants section should focus on describing the sample's characteristics and recruitment methods, as well as any relevant demographic information.
Details about the instruments used, including their structure, dimensions, and psychometric properties such as reliability (e.g., Cronbach's alpha), are more appropriately placed in the Materials or Measures section. In summary, the inclusion of the MentS description and its Cronbach's alpha values in the Participants paragraph is misplaced. This information would be more appropriately presented in the section dedicated to describing the study's materials or measures, where readers expect to find details about the instruments and their psychometric properties.

Additional measures

The section describing additional measures administered alongside the MentS lacks clarity regarding the rationale for selecting these instruments. It is important for the authors to explain why each measure was included and how it relates to the study's objectives or hypotheses. Without this context, readers may find it difficult to understand the relevance and contribution of these additional assessments.

Furthermore, the information about the translation into French, which appears at the end of this paragraph, seems out of place. Details about translation procedures are typically presented in a dedicated section on instrument adaptation or within the description of the specific measure being translated. Placing this information in the "Additional Measures" section disrupts the logical flow and organization of the manuscript. To improve clarity and coherence, the authors should clearly justify the inclusion of each additional measure and relocate the translation details to a more appropriate section of the manuscript.

As a minor point, it should be noted that the authors used the RFQ-8, the short form of the Reflective Functioning Questionnaire, rather than the longer version of the instrument.
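[For context, Cronbach's alpha, the internal-consistency statistic discussed above, is computed from a respondents-by-items score matrix as sketched below. This is a generic illustration with toy data, not the authors' analysis code (the study's analyses were run in Jamovi).]

```python
from statistics import variance

def cronbach_alpha(rows):
    """Cronbach's alpha for a list of respondents' item-score lists."""
    k = len(rows[0])                                 # number of items
    cols = list(zip(*rows))                          # item-wise score columns
    item_vars = sum(variance(col) for col in cols)   # sum of per-item sample variances
    total_var = variance([sum(r) for r in rows])     # variance of the total scale score
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Toy data: four respondents answering three perfectly consistent items,
# so alpha equals 1.0.
scores = [[1, 1, 1], [2, 2, 2], [3, 3, 3], [4, 4, 4]]
print(cronbach_alpha(scores))  # -> 1.0
```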
Statistical analysis

Based on the results of the confirmatory factor analysis (CFA) conducted on both the control and clinical samples, the authors decided to eliminate item 25 from their version of the MentS. After removing item 25, the authors used this abbreviated version of the MentS for subsequent reliability analyses. They then computed the associations between the measures of interest in both samples.

The lack of clarity regarding the MentS scoring system in the manuscript creates significant ambiguity, particularly in interpreting the meaning of positive or negative values. The fact that the authors used the Jamovi software for statistical analyses should have been stated at the beginning of the paragraph, not at the end of the section dedicated to the CFA.

Discussion

In this section of the manuscript, the authors' extensive focus on findings derived from the Childhood Trauma Questionnaire (CTQ) short form, a 28-item self-report measure, warrants further scrutiny. Although the CTQ is a validated instrument, its brevity and reliance on retrospective self-reporting raise concerns about its ability to comprehensively assess complex and multifaceted experiences of childhood maltreatment. The CTQ evaluates five subscales (physical, emotional, and sexual abuse, along with emotional and physical neglect), yet such a condensed format may lack the depth needed to capture the nuances of traumatic experiences, including contextual factors, chronicity, and subjective impact. Given these limitations, the heavy emphasis on CTQ-based results in the discussion may inadvertently oversimplify the interpretation of childhood trauma's role in the study's outcomes.

It is worth noting that self-report measures are vulnerable to recall bias, social desirability bias, and underreporting, especially for stigmatized experiences like abuse. Participants may consciously or unconsciously minimize or deny traumatic events.
The CTQ quantifies maltreatment severity but provides no qualitative insights into the lived experience of trauma. Critical factors such as developmental timing, relational dynamics, and coping mechanisms remain unaddressed. Finally, the CTQ's subscales (physical/emotional/sexual abuse, physical/emotional neglect) show high intercorrelations, making it difficult to isolate specific trauma types. This overlap complicates interpretations of how distinct maltreatment experiences relate to outcomes like mentalization deficits.

The brief mention of the study's limitations in the final section of the discussion is inadequate. A thorough and transparent discussion of limitations is essential for contextualizing the findings, acknowledging potential biases, and guiding future research. Simply referencing these issues without elaboration does not provide readers with a clear understanding of how the study's design, measures, or sample characteristics may have influenced the results. Expanding this section to address specific methodological constraints, such as the reliance on self-report measures, sample representativeness, or the generalizability of the findings, would strengthen the manuscript and enhance its scientific rigor.

Reviewer #2: Although the authors justify model refinements (e.g., residual correlations and removal of item 25), a brief expanded discussion on the clinical interpretation and usability of the shortened 27-item scale would benefit readers. Implications for practitioners using this tool in diagnostic or therapeutic settings could be elaborated, especially given the growing interest in mentalization in clinical psychology. A few language polishing points (minor grammar or flow) may help improve readability, though these are not major.

Reviewer #3: The present study explores the psychometric properties of the francophone translation of the MentS, performing a confirmatory factor analysis and evaluating test-retest reliability. The study is sound.
However, I would like to draw your attention to some areas for improvement.

First of all, the acronym MENT-S appears in the title, while MentS is used throughout the manuscript. I recommend choosing one version and using it consistently. In general, I suggest reviewing spelling according to a consistent language style (British or American English, e.g., behavior / behaviour). I also recommend that the authors revise the writing and the English language throughout the manuscript, as some sentences are difficult to understand or seem incomplete, and there are a few mistakes. Below are some examples:

- Line 22 ("it's" should be "its"?)
- Lines 33–34 (revise sentence structure)
- Line 48 ("may more likely to report increased")
- Line 58 ("The centrality of mentalizing in human" → "mentalization")
- Lines 150–151 (combine the two sentences)
- Lines 161–162 (combine the two sentences)
- Line 165 ("sample composed of" → "sample was composed of")
- Line 252 ("analysis were" → "analysis was")
- Lines 308–309 (split the sentence using a full stop → ". Results ...")
- Line 352 (some values are reported as r=xxx, others as r = xxx; decide on spacing and ensure consistency)

I also recommend avoiding the use of asterisks in the text to indicate p-values (it is acceptable in tables, but in the text it is always preferable to report values fully, e.g., p < 0.001). Furthermore, in table notes, when acronyms are explained, the initial letters of the words should be capitalized (e.g., ADHD = Attention Deficit Hyperactivity Disorder; RMSEA = Root Mean Square Error of Approximation).

Abstract: I suggest reporting the test-retest reliability values in parentheses and naming at least the constructs that were assessed when referring to "additional measures" (line 40).

Methods:

- Lines 173–175 would be more appropriate in the Results section.
- I would move the following line (inclusion criteria at the end of line 169).
- Before describing the various measurement instruments, insert a subheading titled Measures.
- The section on missing data should be placed under Data Analysis and should specify whether any missing data were present. Also, provide a reference for the rule "If less than 30% of item values were missing".
- Lines 291–292: specify why the non-parametric Spearman coefficient was used (were the variables not normally distributed?). Additionally, there is a lack of detail regarding outliers, multicollinearity, heteroscedasticity, etc.

While the general approach to the statistical analyses is sound and a comprehensive set of fit indices was reported, several statistical concerns and methodological considerations should be addressed to enhance the rigor and interpretability of the results.

- The use of ML estimation may not be optimal given that questionnaire data are typically ordinal in nature. ML assumes multivariate normality and continuous data, which is often violated in Likert-type scales. A more appropriate estimation method would be Weighted Least Squares Mean and Variance adjusted (WLSMV), which is robust for ordinal variables and commonly recommended in such contexts.
- The final model incorporates 24 correlated residuals, which is a substantial number relative to the total number of items (28). While the justification provided (i.e., similar item wording) is acknowledged, such an extensive modification raises concerns about overfitting and may artificially inflate model fit. Correlated errors should be added sparingly and only when strong theoretical justification is present. I note that you have acknowledged this point in the limitations of the study.
- After introducing model modifications and removing item 25, it would be important to statistically compare the models (e.g., using chi-square difference tests or ΔCFI/ΔRMSEA) to support the decision to retain the revised structure.
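[As a generic sketch of the nested-model comparison suggested above, the chi-square difference test can be computed as follows. The fit statistics below are hypothetical placeholders, not values from the manuscript, and the Δχ² test applies only to nested models fitted to the same set of items.]

```python
from scipy.stats import chi2

def chi_square_difference(chisq_constrained, df_constrained, chisq_free, df_free):
    """Chi-square difference (likelihood-ratio) test for two nested CFA models.
    The constrained model is the one with fewer free parameters (more df)."""
    delta_chisq = chisq_constrained - chisq_free
    delta_df = df_constrained - df_free
    p_value = chi2.sf(delta_chisq, delta_df)  # upper-tail probability
    return delta_chisq, delta_df, p_value

# Hypothetical fit statistics: original model vs. a model with three added
# residual correlations (hence 3 fewer degrees of freedom).
d_chi, d_df, p = chi_square_difference(812.4, 347, 650.1, 344)
print(f"delta chi2 = {d_chi:.1f}, delta df = {d_df}, p = {p:.2e}")

# Complementary descriptive check: a change in CFI larger than about .01 is
# commonly taken to signal a meaningful difference in fit (Cheung & Rensvold, 2002).
cfi_original, cfi_modified = 0.91, 0.95
print(f"delta CFI = {cfi_modified - cfi_original:.2f}")
```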
Additionally, cross-validation using split samples or bootstrapping could help assess the stability of the modified model.

- You briefly mention the use of Spearman's rho due to non-normal distributions but, as previously mentioned, other essential assumptions such as outliers, multicollinearity, and heteroscedasticity are not addressed. Clarification on these points would strengthen the statistical validity of the analyses.

Given the large number of residual correlations (24 pairs) needed to improve model fit in the confirmatory factor analysis, I wonder whether you considered using an Exploratory Structural Equation Modeling (ESEM) approach. ESEM could have allowed for more flexibility in modeling item cross-loadings without relying on post-hoc correlated error terms, and might have provided a better representation of the underlying structure. Including a rationale for not choosing this approach, or a brief discussion of its potential relevance, would strengthen the methodological justification of the CFA strategy adopted.

**********

6. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files. If you choose "no", your identity will remain anonymous but your review may still be made public. Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No
Reviewer #2: No
Reviewer #3: No

**********

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]
While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step. |
| Revision 1 |
|
Validation of the Mentalization Scale (Ment-S) in francophone control and clinical samples
PONE-D-25-06596R1

Dear Dr. Descartes,

We're pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements. Within one week, you'll receive an e-mail detailing the required amendments. When these have been addressed, you'll receive a formal acceptance letter and your manuscript will be scheduled for publication.

An invoice will be generated when your article is formally accepted. Please note, if your institution has a publishing partnership with PLOS and your article meets the relevant criteria, all or part of your publication costs will be covered. Please make sure your user information is up-to-date by logging into Editorial Manager at Editorial Manager® and clicking the 'Update My Information' link at the top of the page. For questions related to billing, please contact billing support.

If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact. If they'll be preparing press materials, please inform our press team as soon as possible, and no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

Kind regards,
Marco Innamorati
Academic Editor
PLOS ONE

Additional Editor Comments (optional):

Reviewer #1:
Reviewer #3:

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1.
If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the "Comments to the Author" section, enter your conflict of interest statement in the "Confidential to Editor" section, and submit your "Accept" recommendation.

Reviewer #1: All comments have been addressed
Reviewer #3: All comments have been addressed

**********

2. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: (No Response)
Reviewer #3: Yes

**********

3. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: (No Response)
Reviewer #3: Yes

**********

4. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data (e.g. participant privacy or use of data from a third party), those must be specified.

Reviewer #1: (No Response)
Reviewer #3: No

**********

5. Is the manuscript presented in an intelligible fashion and written in standard English?
PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: (No Response)
Reviewer #3: Yes

**********

6. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: (No Response)
Reviewer #3: (No Response)

**********

7. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files. If you choose "no", your identity will remain anonymous but your review may still be made public. Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No
Reviewer #3: No

********** |
| Formally Accepted |
|
PONE-D-25-06596R1
PLOS ONE

Dear Dr. Descartes,

I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now being handed over to our production team.

At this stage, our production department will prepare your paper for publication. This includes ensuring the following:

* All references, tables, and figures are properly cited
* All relevant supporting information is included in the manuscript submission
* There are no issues that prevent the paper from being properly typeset

You will receive further instructions from the production team, including instructions on how to review your proof when it is ready. Please keep in mind that we are working through a large volume of accepted articles, so please give us a few days to review your paper and let you know the next and final steps.

Lastly, if your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

You will receive an invoice from PLOS for your publication fee after your manuscript has reached the completed accept phase. If you receive an email requesting payment before acceptance or for any other service, this may be a phishing scheme. Learn how to identify phishing emails and protect your accounts at https://explore.plos.org/phishing. If we can help with anything else, please email us at customercare@plos.org.

Thank you for submitting your work to PLOS ONE and supporting open access.

Kind regards,
PLOS ONE Editorial Office Staff
on behalf of Dr. Marco Innamorati
Academic Editor
PLOS ONE |
Open letter on the publication of peer review reports
PLOS recognizes the benefits of transparency in the peer review process. Therefore, we enable the publication of all of the content of peer review and author responses alongside final, published articles. Reviewers remain anonymous, unless they choose to reveal their names.
We encourage other journals to join us in this initiative. We hope that our action inspires the community, including researchers, research funders, and research institutions, to recognize the benefits of published peer review reports for all parts of the research system.
Learn more at ASAPbio.