Peer Review History
Original Submission: September 23, 2020
PONE-D-20-29968
Developing and Validating an Individualized Breast Cancer Risk Prediction Model for Women Attending Breast Cancer Screening
PLOS ONE

Dear Dr. Román,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

Please submit a point-by-point response to each of the reviewers' comments. Pay special attention to the comments regarding clarification of the motivation for this paper, including how it will add to the literature and move the field of breast cancer risk prediction forward. In addition, there were numerous comments about the statistical methods that need more detail and clarification in the manuscript.

Please submit your revised manuscript by Dec 17 2020 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

Please include the following items when submitting your revised manuscript:
If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: http://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols

We look forward to receiving your revised manuscript.

Kind regards,
Erin J A Bowles
Academic Editor
PLOS ONE

Journal Requirements: When submitting your revision, we need you to address these additional requirements.

1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf

2. We note that you have indicated that data from this study are available upon request. PLOS only allows data to be available upon request if there are legal or ethical restrictions on sharing data publicly. For information on unacceptable data access restrictions, please see http://journals.plos.org/plosone/s/data-availability#loc-unacceptable-data-access-restrictions. In your revised cover letter, please address the following prompts:

a) If there are ethical or legal restrictions on sharing a de-identified data set, please explain them in detail (e.g., data contain potentially identifying or sensitive patient information) and who has imposed them (e.g., an ethics committee). Please also provide contact information for a data access committee, ethics committee, or other institutional body to which data requests may be sent.
b) If there are no restrictions, please upload the minimal anonymized data set necessary to replicate your study findings as either Supporting Information files or to a stable, public repository and provide us with the relevant URLs, DOIs, or accession numbers. Please see http://www.bmj.com/content/340/bmj.c181.long for guidelines on how to de-identify and prepare clinical data for publication. For a list of acceptable repositories, please see http://journals.plos.org/plosone/s/data-availability#loc-recommended-repositories. We will update your Data Availability statement on your behalf to reflect the information you provide.

3. One of the noted authors is a group or consortium [BELE and IRIS Study Groups]. In addition to naming the author group, please list the individual authors and affiliations within this group in the acknowledgments section of your manuscript. Please also indicate clearly a lead author for this group along with a contact email address.

4. Your ethics statement should only appear in the Methods section of your manuscript. If your ethics statement is written in any section besides the Methods, please move it to the Methods section and delete it from any other section. Please ensure that your ethics statement is included in your manuscript, as the ethics statement entered into the online submission form will not be published alongside your manuscript.

5. We note you have included a table to which you do not refer in the text of your manuscript. Please ensure that you refer to Table 4 in your text; if accepted, production will need this reference to link the reader to the Table.

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. Is the manuscript technically sound, and do the data support the conclusions? The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions.
Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Yes
Reviewer #2: Partly

**********

2. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes
Reviewer #2: I Don't Know

**********

3. Have the authors made all data underlying the findings in their manuscript fully available? The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes
Reviewer #2: No

**********

4. Is the manuscript presented in an intelligible fashion and written in standard English? PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes
Reviewer #2: Yes

**********

5. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: The manuscript reports the creation of a new breast cancer risk prediction model developed among a large mammography cohort in Spain.
The aim of the study, rationale, and potential implications of their new risk model are poorly outlined, making it difficult to evaluate the paper's impact. This also makes it difficult to evaluate whether the methods are appropriate to answer their question.

I found the introduction failed to clearly communicate the motivation for the study. The authors first say that existing breast cancer risk prediction models did not specifically focus on women eligible for mammography screening (Line 83). However, the Gail model and BCSC models were both developed among women undergoing mammography screening. Granted, these studies were from the US, where women begin screening at a younger age, but they account for age in the risk estimates. So I think this motivation for a new risk model is weak.

Next, the introduction says that only one model was focused on personalizing screening, and that it only provided short-term risk estimates and does not account for time-varying covariates. So the authors say there is a need for short-term and long-term risk estimates from large cohorts. However, short-term vs. long-term is not defined, nor are the time-varying covariates that may be important to consider. Also, if we are interested in personalized screening, does short-term or long-term risk matter more? Which should we use for changing screening behavior? There is no discussion of this important point. Then, in the final paragraph of the introduction, the authors state that the aim is to estimate biennial risk of breast cancer, which suggests the focus is on short-term risk.

A better motivation for this risk model is the inclusion of detailed information on BBD type and mammographic findings, which the existing risk models tend not to use. I do think that would be valuable.
Reading between the lines, it seems like you do not have sufficient information to run the other risk models (reproductive risk factors, breast density), so you wanted to make a model using the variables that you do have. I think this is fine, but I would clearly communicate that. One reason the existing risk models have not been well implemented for personalized screening is that it is difficult to collect all of the necessary risk factors in practice, so I think a simpler model like this could be useful, particularly if it performs as well as other models that include many variables that may not be available. This point is not at all addressed in the paper.

When we get to the methods, the model is a Cox model with time-varying covariates. Risk predictions for 2-year intervals out to 20 years are validated. In the results, the 10-year risk prediction is reported first, which was surprising, as I was expecting the 2-year risk estimate to be the focus from the intro. The performance is evaluated for 2-year intervals, and AUC is moderate and calibration is good. However, it is unclear how you would apply this 2-year model in practice. The 2-year performance is pretty good. Do you evaluate a woman's 2-year risk, and then alter the screening interval accordingly? Do you look at the 10- or 20-year risk? It's unclear why having risk estimates for every 2-year interval is more useful than selecting one interval.

The discussion does not elaborate on the clinical impact of this model. It also does not compare the model's performance to performance estimates for existing models in the literature. And the model performance was not directly compared to that of existing models, which would be very helpful. It is also concerning that breast density was not available, as it would be very important to determine whether the mammography findings provided additional predictive value beyond density, or if they were more predictive than density.
This is an important point that could not be addressed in the study. In the end, the paper provides another breast cancer risk prediction model that appears about the same as other models in terms of discriminatory accuracy, which is moderate. This may be useful for some institutions where other risk factors are not available, but it doesn't really move the needle in terms of improving our ability to identify high-risk women who need more intensive screening.

Reviewer #2:

General

This paper develops a breast cancer risk model for women aged 50-69y, using data from a cohort of 121,969 women attending 2y mammography screening 1995-2015 at a centre in Barcelona, Spain. 60% of the sample are used to develop a model based on age, family history, benign breast disease, and mammographic features associated with abnormalities reported by interpreting radiologists. Performance is assessed in a hold-out (40%) sample using calibration coefficients and AUC. The authors conclude they have developed a model for short- and long-term risk assessment that could be used to guide screening strategies. It appears to be a valuable cohort and data set. The most interesting aspect of the analysis to me was the use of the radiological abnormalities for risk assessment.

Major

1. It is not clear that the model is suitable for long-term predictions. You only evaluated it when the variables were updated through time (every two years)? For example, I doubt that the mammographic abnormalities used are long-term predictors? I would expect that they indicate that some cancers were missed at the screen. No data are presented to assess this?

2. The abstract reports that the model is "validated" and "could help to guiding individualized screening strategies". This seems too strong. For example, you have not applied the model to a different setting than the one in which it was developed, nor tried it out in a different time epoch.
Further, some of the results are at odds with the literature, including the risk associated with proliferative benign disease, which seems too high. Such aspects would make me wary about proposing it as anything more than a working model to be tested / improved.

3. The statistical methods are inefficient. Rather than split the data, one could have considered cross-validation or bootstrapping to estimate optimism in the estimates, for example. There is very little apparent model / variable selection done in this work. Why did you split the data in this way?

4. Some of the commentary on previous work does not seem right; specific comments below.

5. I didn't follow all the methods used; some clarification would be helpful. Specific comments below.

6. It is a shame that the data cannot be made available due to confidentiality. There are precedents for researchers releasing such data used to fit risk models. For example, you can access a modified version of the BCSC data used for their model, where categories have been coded (e.g. not individual year of age). I'd encourage the authors to consider trying to do this if at all possible. What are the confidentiality issues here? It would also be worth making your code available, for transparency of statistical methods used.

Minor

1. Introduction: "To date, only one model has specifically aimed to predict a women's individual risk..". There are probably hundreds, and you have referenced more than one. I don't follow what you mean by this.

2. Methods para 1. National screening started 2006. Confirm that the organised program started in Barcelona in 1995? What was coverage through time in your cohort?

3. Methods para 2. Centres routinely gather info on family history. How do they do this? e.g. self-report for family history? This stayed the same 1995-2015?

4. Very little missing data leading to exclusions, but do you know the reasons?

5. Why did you define family history in the way chosen? Before or after looking at the data?

6. Putting atypia with usual type will bias your risk estimate, and make it less useable in practice. It seems a bad thing to do for the utility of the final model and needs more acknowledgement (and ideally do something to rectify it).

7. Do you know the reasons for unknown biopsy pathology results? Related to epoch?

8. Invasive / DCIS. Would it be useful to assess heterogeneity of results by this? It would be particularly interesting regarding calcifications as a risk factor. At the very least I think it would be helpful to provide information on the number invasive / DCIS by age and calendar time of entry.

9. Did you consider other risk factors for your model, or only those reported? If others, which ones?

10. What is a partly conditional Cox model? Is it just a Cox model?

11. How did you incorporate changing risk factors through time? As a time-dependent covariate?

12. What robust confidence intervals (method)?

13. Explain more your "at risk" definition. I don't follow "2 years after the last mammographic examination for follow-up of interval cancer cases".

14. What were the reasons for censoring? How many for each reason? e.g. Did anyone die? What if a woman did not attend her screening visit? What if she was older than 69y?

15. How did you model age? In piecewise constant 5y intervals? Why?

16. Did your AUC consider follow-up time? e.g. Some will have entered the cohort later than others. It appears that you look at 2y risk as predictor and yes/no cancer in that period. Multiple values for each person. Did you adjust for loss of independence due to this? Standard Hanley and McNeil would not?

17. Please report actual p-values, not p<0.05 etc.

18. Please report confidence intervals on calibration coefficients.

19. In the text it appears a lot of women had biopsies with unknown diagnosis (almost one quarter?). Why so many? When was this? At entry or at any time throughout follow-up?

20. The distribution of 10y risk shows none with >8% 10y risk. This is a cutoff used by clinical guidelines in the UK to identify women at high risk. Why none? Is the model useful for its intended purpose if no women at high risk are identified?

21. Discussion, line 238. ".case-control design... may overestimate.." Why?

22. Discussion, BBD. Several models include this, not only one. For example, the IBIS model you reference; BCRAT includes information on biopsies; there are others.

23. Table 1, p-value < 0.05 for all: a bit meaningless. Suggest either dropping completely, or putting the actual p-values in the table.

24. Almost 30% of women had a mammographic abnormality. Is this consistent with what you'd expect? Can you put this into context? Does this mean BI-RADS category 2+? Did you look at risk based on BI-RADS 3+ (i.e. recalled or not)?

25. Table 4. I don't think you give sufficient detail for me to know how you calculated this table (methods). In particular, how did you estimate expected risk? Is it based on updating risk factors through time? Please provide enough detail in the methods for reproducibility.

26. Finally, it is worth verifying you have included everything in the TRIPOD checklist.

**********

6. PLOS authors have the option to publish the peer review history of their article. If published, this will include your full peer review and any attached files. If you choose "no", your identity will remain anonymous but your review may still be made public. Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No
Reviewer #2: No

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]
While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.
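Reviewer #2's comment 16 asks whether the AUC accounted for differing follow-up times. The scheme the authors later describe (at each horizon, restrict to women followed at least that long, then compare predicted risk against the cancer indicator) can be sketched in a few lines of Python. This is a toy illustration, not the study's code; all field names and numbers are invented:

```python
# Horizon-specific AUC: at each horizon h (years), keep only women
# followed at least h years, then compute the AUC (Mann-Whitney
# statistic) of the model's predicted risk against the h-year
# cancer indicator.

def auc(scores_pos, scores_neg):
    """Probability a random case outranks a random non-case (ties count 0.5)."""
    wins = sum((p > n) + 0.5 * (p == n) for p in scores_pos for n in scores_neg)
    return wins / (len(scores_pos) * len(scores_neg))

def auc_by_horizon(women, horizons):
    """women: dicts with 'risk', 'followup_years', and 'event_year'
    ('event_year' is None if no cancer was diagnosed)."""
    out = {}
    for h in horizons:
        eligible = [w for w in women if w["followup_years"] >= h]
        pos = [w["risk"] for w in eligible
               if w["event_year"] is not None and w["event_year"] <= h]
        neg = [w["risk"] for w in eligible
               if w["event_year"] is None or w["event_year"] > h]
        if pos and neg:
            out[h] = auc(pos, neg)
    return out

# Toy example: four women, horizons of 2 and 4 years.
women = [
    {"risk": 0.08, "followup_years": 6, "event_year": 3},
    {"risk": 0.02, "followup_years": 6, "event_year": None},
    {"risk": 0.05, "followup_years": 4, "event_year": 2},
    {"risk": 0.01, "followup_years": 2, "event_year": None},
]
print(auc_by_horizon(women, [2, 4]))
```

Note that, as the reviewer points out, this simple pooling treats repeated observations per woman as independent; a clustered or time-dependent ROC method would be needed to address that.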
Revision 1
PONE-D-20-29968R1
Developing and Validating an Individualized Breast Cancer Risk Prediction Model for Women Attending Breast Cancer Screening
PLOS ONE

Dear Dr. Román,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

Please address all remaining comments from Reviewer #2. Please also do a thorough read of the manuscript and correct all typographical errors (there are several).

Please submit your revised manuscript by Apr 09 2021 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

Please include the following items when submitting your revised manuscript:
If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: http://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols

We look forward to receiving your revised manuscript.

Kind regards,
Erin J A Bowles
Academic Editor
PLOS ONE

Journal Requirements: Please review your reference list to ensure that it is complete and correct. If you have cited papers that have been retracted, please include the rationale for doing so in the manuscript text, or remove these references and replace them with relevant current references. Any changes to the reference list should be mentioned in the rebuttal letter that accompanies your revised manuscript. If you need to cite a retracted article, indicate the article's retracted status in the References list and also include a citation and full reference for the retraction notice.

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the "Comments to the Author" section, enter your conflict of interest statement in the "Confidential to Editor" section, and submit your "Accept" recommendation.

Reviewer #1: All comments have been addressed
Reviewer #2: (No Response)

**********

2. Is the manuscript technically sound, and do the data support the conclusions?
The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Yes
Reviewer #2: Yes

**********

3. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes
Reviewer #2: Yes

**********

4. Have the authors made all data underlying the findings in their manuscript fully available? The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes
Reviewer #2: Yes

**********

5. Is the manuscript presented in an intelligible fashion and written in standard English? PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes
Reviewer #2: Yes

**********

6. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics.
(Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: The authors have responded to the reviewer comments in an appropriate way; in particular, they have added clarifications to their statements and analytic decisions.

Reviewer #2: Many thanks to the authors for addressing my review. A few small clarification issues from the last review below.

--

6. It is a shame that the data cannot be made available due to confidentiality. There are precedents for researchers releasing such data used to fit risk models. For example, you can access a modified version of the BCSC data used for their model, where categories have been coded (e.g. not individual year of age). I'd encourage the authors to consider trying to do this if at all possible. What are the confidentiality issues here? It would also be worth making your code available, for transparency of statistical methods used.

Response: We thank the reviewer for this contribution. We have uploaded the database to the Harvard Dataverse online repository. The data is accessible with DOI: https://doi.org/10.7910/DVN/3T7HCH

- Fantastic, thank you. Would it also be possible to make available the analysis code used for this paper, for complete reproducibility?

12. What robust confidence intervals (method)?

Response: In particular, we used the robust standard error reported by the Huber sandwich estimator to create the robust confidence intervals. This is a standard estimation method to obtain robust estimates and is the one reported both by the standard Cox function (coxph function of the R package survival) and by the model used for the partly conditional Cox model (partlyconditional package). More information on this package can be found at: https://github.com/mdbrown/partlyconditional

- Did you include this in the text?

14. What were the reasons for censoring? How many for each reason? e.g. Did anyone die? What if a woman did not attend her screening visit? What if she was older than 69y?
Response: We thank the reviewer for this comment. As we extracted our data directly from the population-based screening program databases, we are assured of having the maximum follow-up of these women. We collected, in a comprehensive and systematic way, all the participations these women have made in the breast cancer screening program. However, we cannot know exactly the cause of each loss to follow-up. As we do all the analysis by screening participation and not by woman, taking into account the time between mammograms, there is no problem for the analysis if a woman skips one of the screening tests but returns to the screening later. Of the 121,969 women in our cohort, 63,694 had their last mammogram within the last two years of the study (2014 and 2015); these women were censored at the end of the study period. Of the remaining women, 20,436 had their last mammogram at age 68 or 69, so these women completed their screening process during the study follow-up. Of the remaining 37,839, we know that the majority are women who decided not to participate in the 2014-2015 round or who changed health areas and are therefore not in our study population. Regarding women who die, the screening program does not keep an exhaustive record of this cause, so they appear to us as non-participating women.

- Did you include this in the text?

15. How did you model age? In piecewise constant 5y intervals? Why?

Response: To build the model we used age as a quantitative variable. By using a partly conditional Cox model we performed the analysis by screening participation instead of by woman. Therefore, risk factors were incorporated through time as time-dependent covariates, age included. However, to present the results in an easy way, we showed age in piecewise constant 5-year intervals in the tables. We believe this is a commonly used way to present results (e.g. BCSC; see Tice JA, Miglioretti DL, Li CS et al.
Breast Density and Benign Breast Disease: Risk Assessment to Identify Women at High Risk of Breast Cancer. J Clin Oncol. 2015;33(28):3137-43).

- Did you include this in the text? (If you made your analysis code available this would be even more transparent.)

16. Did your AUC consider follow-up time? e.g. Some will have entered the cohort later than others. It appears that you look at 2y risk as predictor and yes/no cancer in that period. Multiple values for each person. Did you adjust for loss of independence due to this? Standard Hanley and McNeil would not?

Response: We thank the reviewer for this interesting comment. Yes, we do consider follow-up time at every horizon for every individual in the cohort. In Table 3 (previously Table 4), we estimated the AUC for every 2-year interval: we estimated the 20-year risk AUC for those individuals followed for 20 years, the 18-year risk AUC for those followed for at least 18 years, the 16-year risk AUC for those followed for at least 16 years, and so on. The AUC for each time horizon was estimated with all the women in the validation cohort followed at least that time, using the predicted risk of the model and whether or not she had developed a tumor at the specific time horizon.

- Thank you. But is the risk score a time-varying covariate over the 20y horizon, or do you use the baseline assessment? The two are not the same, and it would be good to clarify this in the paper's methods.

25. Table 4. I don't think you give sufficient detail for me to know how you calculated this table (methods). In particular, how did you estimate expected risk? Is it based on updating risk factors through time? Please provide enough detail in the methods for reproducibility.

Response: We thank the reviewer for this important observation. We agree with this comment; in both Tables 3 and 4 we must explain how we estimate the expected risk. We have tried to clarify it by adding the following sentence to the methods.
New text, methods (underlined): The expected breast cancer rate was calculated as the average of the model-predicted risk for each woman in a specific subgroup.

- Can you say more about this? Predicted risk of breast cancer to what time? There are different ways to do this, cf. https://doi.org/10.1214/19-STS729. I assume cumulative hazard to the time of each measurement of predictors / event / censoring, but it would be useful to clarify.

**********

7. PLOS authors have the option to publish the peer review history of their article. If published, this will include your full peer review and any attached files. If you choose "no", your identity will remain anonymous but your review may still be made public. Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No
Reviewer #2: No
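The expected-to-observed comparison discussed in item 25 above (expected rate = the average model-predicted risk in a subgroup, per the authors' added methods sentence; observed rate = the subgroup's proportion of cancers) can be sketched in Python. This is an illustrative toy example, not the study's code, and all numbers are invented:

```python
# Calibration in a subgroup: the expected rate is the mean of the
# model-predicted risks, the observed rate is the proportion of women
# who developed cancer, and their ratio (E/O) equals 1 under perfect
# calibration in that subgroup.

def expected_observed(records):
    """records: list of (predicted_risk, developed_cancer) pairs."""
    expected = sum(risk for risk, _ in records) / len(records)
    observed = sum(1 for _, cancer in records if cancer) / len(records)
    return expected, observed, (expected / observed if observed else float("inf"))

# Toy subgroup: predicted 10-year risks and outcomes for five women.
subgroup = [(0.04, True), (0.02, False), (0.06, True), (0.01, False), (0.03, False)]
expected, observed, ratio = expected_observed(subgroup)
print(f"expected={expected:.3f} observed={observed:.3f} E/O={ratio:.2f}")
```

As the reviewer's follow-up notes, the substantive question is to which time point each predicted risk is accumulated (e.g. cumulative hazard up to each covariate measurement, event, or censoring time); the arithmetic above applies whichever definition is chosen.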
Revision 2
Developing and Validating an Individualized Breast Cancer Risk Prediction Model for Women Attending Breast Cancer Screening
PONE-D-20-29968R2

Dear Dr. Román,

We’re pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements. Within one week, you’ll receive an e-mail detailing the required amendments. When these have been addressed, you’ll receive a formal acceptance letter and your manuscript will be scheduled for publication. An invoice for payment will follow shortly after the formal acceptance.

To ensure an efficient process, please log into Editorial Manager at http://www.editorialmanager.com/pone/, click the 'Update My Information' link at the top of the page, and double check that your user information is up-to-date. If you have any billing-related questions, please contact our Author Billing department directly at authorbilling@plos.org.

If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact. If they’ll be preparing press materials, please inform our press team as soon as possible -- no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

Kind regards,
Erin J A Bowles
Academic Editor
PLOS ONE
Formally Accepted
PONE-D-20-29968R2
Developing and validating an individualized breast cancer risk prediction model for women attending breast cancer screening

Dear Dr. Román:

I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now with our production department.

If your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information please contact onepress@plos.org.

If we can help with anything else, please email us at plosone@plos.org. Thank you for submitting your work to PLOS ONE and supporting open access.

Kind regards,
PLOS ONE Editorial Office Staff
on behalf of Dr. Erin J A Bowles
Academic Editor
PLOS ONE
Open letter on the publication of peer review reports
PLOS recognizes the benefits of transparency in the peer review process. Therefore, we enable the publication of all of the content of peer review and author responses alongside final, published articles. Reviewers remain anonymous, unless they choose to reveal their names.
We encourage other journals to join us in this initiative. We hope that our action inspires the community, including researchers, research funders, and research institutions, to recognize the benefits of published peer review reports for all parts of the research system.
Learn more at ASAPbio.