Peer Review History
| Original Submission September 18, 2020 |
|---|
|
PONE-D-20-29493 The School Attachment Monitor - a novel computational tool for assessment of attachment in middle childhood. PLOS ONE Dear Dr. Minnis, Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process. Both reviewers agree that the language style needs to be revised thoroughly and, even more importantly, both raise methodological questions that need to be addressed in a revision that I look forward to reading. Please be aware that machine learning methods are still new to the field and need to be explained in more detail than more common methods. Please submit your revised manuscript by Feb 14 2021 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file. Please include the following items when submitting your revised manuscript:
If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter. If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: http://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols We look forward to receiving your revised manuscript. Kind regards, Svenja Taubner Academic Editor PLOS ONE Journal Requirements: When submitting your revision, we need you to address these additional requirements. 1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf and 2. In your Data Availability statement, you have not specified where the minimal data set underlying the results described in your manuscript can be found. PLOS defines a study's minimal data set as the underlying data used to reach the conclusions drawn in the manuscript and any additional data required to replicate the reported study findings in their entirety. All PLOS journals require that the minimal data set be made fully available. For more information about our data policy, please see http://journals.plos.org/plosone/s/data-availability. Upon re-submitting your revised manuscript, please upload your study’s minimal underlying data set as either Supporting Information files or to a stable, public repository and include the relevant URLs, DOIs, or accession numbers within your revised cover letter. 
For a list of acceptable repositories, please see http://journals.plos.org/plosone/s/data-availability#loc-recommended-repositories. Any potentially identifying patient information must be fully anonymized. Important: If there are ethical or legal restrictions to sharing your data publicly, please explain these restrictions in detail. Please see our guidelines for more information on what we consider unacceptable restrictions to publicly sharing data: http://journals.plos.org/plosone/s/data-availability#loc-unacceptable-data-access-restrictions. Note that it is not acceptable for the authors to be the sole named individuals responsible for ensuring data access. We will update your Data Availability statement to reflect the information you provide in your cover letter. 3. Please amend the manuscript submission data (via Edit Submission) to include author Rui Huan. 4. We note that Figure 4 and Supplementary figure include an image of a participant in the study. As per the PLOS ONE policy (http://journals.plos.org/plosone/s/submission-guidelines#loc-human-subjects-research) on papers that include identifying, or potentially identifying, information, the individual(s) or parent(s)/guardian(s) must be informed of the terms of the PLOS open-access (CC-BY) license and provide specific permission for publication of these details under the terms of this license. Please download the Consent Form for Publication in a PLOS Journal (http://journals.plos.org/plosone/s/file?id=8ce6/plos-consent-form-english.pdf). The signed consent form should not be submitted with the manuscript, but should be securely filed in the individual's case notes. Please amend the methods section and ethics statement of the manuscript to explicitly state that the patient/participant has provided consent for publication: “The individual in this manuscript has given written informed consent (as outlined in PLOS consent form) to publish these case details”. 
If you are unable to obtain consent from the subject of the photograph, you will need to remove the figure and any other textual identifying information or case descriptions for this individual. 5. Please include captions for your Supporting Information files at the end of your manuscript, and update any in-text citations to match accordingly. Please see our Supporting Information guidelines for more information: http://journals.plos.org/plosone/s/supporting-information. [Note: HTML markup is below. Please do not edit.] Reviewers' comments: Reviewer's Responses to Questions Comments to the Author 1. Is the manuscript technically sound, and do the data support the conclusions? The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented. Reviewer #1: Partly Reviewer #2: Partly ********** 2. Has the statistical analysis been performed appropriately and rigorously? Reviewer #1: Yes Reviewer #2: I Don't Know ********** 3. Have the authors made all data underlying the findings in their manuscript fully available? The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified. Reviewer #1: No Reviewer #2: Yes ********** 4. 
Is the manuscript presented in an intelligible fashion and written in standard English? PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here. Reviewer #1: Yes Reviewer #2: Yes ********** 5. Review Comments to the Author Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters) Reviewer #1: First of all, I applaud the authors for the incredible amount of effort that has been put into this project. While the authors’ work could prove a valuable asset in making attachment research more feasible in large-scale studies, there are nevertheless several important limitations that I feel need to be addressed. First, the language style of the manuscript seems loose. Bullet points are used frequently and there are a lot of expressions that give the manuscript a feel of more ordinary speech rather than formal writing. An example of this is the end of the first paragraph on page 15: …” childhood attachment research (20), probably due to the technical challenges of doing so.” Here the speculation at the end of the sentence is not further contextualized or substantiated, and also does not serve a particular purpose. In the same vein, the next paragraph continues with “Around the same time,…” followed by a direct quotation that could have been paraphrased instead. There are several more examples like this throughout the manuscript. If possible, I would also integrate most of the bullet points into the text to make it more seamless. In the methods section, there are several core bits of information missing.
While the development of the procedure is for the most part well described, important details such as a confusion matrix of both the manual ratings of SAM vs MCAST and the manual SAM ratings vs the algorithm’s ratings should be added. This would allow the reader to more accurately gauge where the strengths and weaknesses of the different rating systems lie, where they overestimate one attachment classification over the other, and so on. Also, the machine learning algorithm, along with its tuning parameters, is not described at all. While this might be because of issues with the overall length of the manuscript, in this case please provide an appendix with the appropriate information. From the text presented it is unclear which kind of algorithm was used (neural nets? random forests? gradient boosting? If so, which variant?), as well as how the hyperparameters were set up and what the end values were for those hyperparameters. Also, there should be a sentence justifying choosing leave-one-out cross-validation over k-fold cross-validation, along with a sentence in the limitations section on how this method of cross-validation is probably overestimating the generalizability of the algorithm because of the small sample size and lack of a separate validation set. I hope the authors find these comments helpful and not too discouraging, as this is excellent work which just needs some polishing. Reviewer #2: I am very sympathetic to the exciting and pioneering line of research presented in this manuscript. I feel strongly that it has the potential to "push the envelope" within the attachment research and allow these concepts to leave a greater empirical footprint than they currently do and more informatively enter common parlance for clinical assessments. In that sense, I share the authors' enthusiasm about their truly impressive research and validation efforts emerging from many years of arduous and no doubt painstaking development.
However, that said, based on the style of this manuscript, I would like to caution the authors about underestimating the difficulty of this task and the importance of subjecting their work to an exceedingly high standard of theoretical and empirical scrutiny, especially if they intend to advance the bold claim of having developed a "new attachment measure" including an algorithm-based rating system premised on machine-learning. If they do not do so, I predict a lukewarm acceptance of their work within the attachment community and this, of course, would be inimical to their objective. While I understand the aim to promote the new instrument, I had a number of concerns regarding (a.) the lack of theoretical and methodological rigour in evaluating the SAM, especially in the results and discussion and (b.) the language used throughout the manuscript which I feel needs to be considerably toned down against the backdrop of what I would still consider to be preliminary findings. My hunch is that more cautious language and a theoretical section that offers some thoughts on the boundaries inherent in computerizing such an essentially interpersonal task as story-completion at such a young age would ultimately convince more readers. I will spell out below what exactly I mean by this in the order of importance. 1. From a theoretical perspective the discussion would benefit substantially from discussing two related issues at length: First, as far as computerised administration using the SAM is concerned, what drawbacks do the authors expect in which populations? More specifically, to the extent that story-completion is inherently an interpersonal paradigm providing access to attachment representations (and builds on the cooperative principle and the associated Gricean maxims), is it actually realistic to assume that the same communication patterns will emerge when children tell their stories to a computer vs. an experimenter?
In fact, is it conceivable that some attachment patterns (especially avoidant children) are blurred due to this less interpersonal mode of measuring attachment, because the procedure is less anxiety provoking and/or creates fewer opportunities for co-regulation by the experimenter? Moreover, most story-completion procedures use a highly interactive warm-up phase as well as interactive prompts in the event that children avoid certain core issues or dilemmas in the story. Children's responsiveness to these interactive prompts is a key factor in administering the task (because the child gets the sense of a real play interaction) as well as in coding the task. From this vantage point, is it very far-fetched to assume that the absence of these features may result in a very different measure compared to other story-completion measures? To be clear, I am not arguing that the SAM does not measure an interesting and important construct, but I would be hesitant, based on a single study, to make strong claims that we are tapping into parallel (though perhaps related) processes with SAM and the MCAST. To me, the high levels of concordance between the two measures are not sufficient to make such a claim, especially in light of the partial non-independence of the ratings (see below). Second, as far as the machine learning algorithm is concerned, I found myself wondering whether any measure based on measurement of movement and distances can ever do more than provide a behavioural "correlate" or "marker" of attachment representations, as indexed by story-completions. The point is that the machine-learning algorithm captures an entirely different level of information and is "ignorant" of the meaning and representations the child conveys in the story.
While it is fascinating that there may be some overlap between content and behavior, I would shy away from equating the two as the authors do when they refer to the machine-learning algorithm as an automated means of classifying or rating "attachment". Analogously, I would venture that no neuroscientist would equate certain patterns of neural activity with an attachment classification, but would rather and more conservatively refer to physiological outcomes, correlates or markers (depending on the state of evidence, see Cacioppo et al., 2017, Handbook of Psychophysiology). I would therefore strongly urge the authors to adopt a similar, more conservative terminology in describing the results of their machine learning algorithm, especially as it remains unclear if the results generalise to other (especially at-risk) samples and also show overlap with other attachment measures that have received more validation and cover a similar age-range (e.g., CAI; see Jewell et al., 2019). 2. With regard to the methods and results, I felt many important questions were left unanswered: a.) on p. 12 the authors state that to their surprise children were "perfectly capable of a) understanding and using a two dimensional plan version of the dolls-house". This is obviously a casual observation, but what exactly does "perfectly capable" mean? Did the order of presentation (fuzzy felt map first group vs. MCAST first group) influence the quality of narration or not? After all, it is conceivable that those who first received the MCAST had warmed up more effectively and therefore were better able to "step it up a notch" and also narrate stories in the two-dimensional space of the fuzzy felt map. The same question applies later on again when the authors counterbalanced SAM and MCAST in their n=61 sample for Phase 2. Did order influence their results in any way in either of these samples? b.) On p.
14 the authors report this result: "The overall attachment classification for SAM did not differ depending on which version of the prototype was administered: (Spearman’s rho (116) = 0.065, N.S.)." Maybe I missed something, but I thought attachment classification and prototype version were both categorical measures. Is Spearman's rho appropriate for categorical measures? c.) Information on the sample is very sparse. Table 1 does not divide the information on age and gender-distribution up by Phase of the Study. Therefore, I could not glean whether any age-group or gender was under- or over-represented during any phase of the study. d.) As far as I could tell, for the manual rating in some cases the expert rater was consulted for the SAM and MCAST on the same child? Hence, coding on these cases was not independent (even if the ratings were completed on different days) and I would like to see the results before the discordance was abolished with the help of the expert rater - after all, this is more likely to reflect the result that would be obtained outside of a validation study. e.) Two children were excluded from the algorithm-based approach due to insufficient data quality. I think it is justified to argue that these children should be included when reporting the results because they would ultimately lead to discordance (similar to "cannot classify" in the AAI). f.) The information provided on the algorithm is very sparse. Instead, the authors cite a conference paper by Roffo and colleagues that cannot be accessed online. Specifically, it remained unclear to me what "use of the entire dataset" meant. Did the algorithm also have access to and use information other than the videos, i.e., did it "know" the attachment classification of the child and try to predict it from the information in the video? Or did the algorithm "learn" based purely on the information drawn from the videos?
The authors must provide many more details in an Appendix, or I will not be in a good position to make a call as to the significance of their findings. g.) Was age, gender or deprivation index score related to the concordance between MCAST and SAM and between machine-learning algorithm and MCAST? 3. Comment on Language: The manuscript is replete with comments, such as this: "From the user perspective, these modifications simplified SAM set-up compared to CMCAST since story stems could potentially be delivered on any school computer (laptop or PC) installed with our SAM software, and the only other specific “kit” required would be the dolls-house mat, furniture, computerised dolls and the webcam-like camera with depth sensor. These are light and highly portable compared to the MCAST original set-up." As a whole I couldn't escape the feeling that this manuscript is too oriented to promoting the new measure and explaining the practicalities of this measure. As a consequence, I got the sense of a user manual or even a commercial website without the necessary self-critical, dispassionate attitude necessary for a rigorous empirical examination. I think it would already help a great deal, as I mentioned above, if the authors delved more into the difficulties and risks inherent in transferring an interactive story-completion task to a computerized environment and then subjecting the video-based data to a computerized algorithm. Also, it would be important to acknowledge that the most authoritative review of attachment measures in the field actually states that middle childhood and adolescence are phases for which "no gold-standard measure exist" (p. 72). Hence, calling the MCAST a gold-standard does not seem appropriate, and comparing the results of the SAM with other more dimensional story-completion procedures, such as the ASCT or the MSSB, would make sense. 4.
Finally, regarding clinical utility of the SAM, I have reservations about believing that the SAM would indeed yield comparable levels of storytelling to MCAST (or other story-stem techniques) in non-typically developing children. My primary concerns, some of which I am reiterating here, would be that (a.) the cognitive demands placed on the child in the SAM presumably exceed those required for the MCAST (b.) the task cannot be tailored to the child's individual skill level (c.) no warm-up procedure could be implemented, and (d.) to the extent that story-telling reflects an interpersonal process between the child and the experimenter the SAM presumably only captures this aspect insufficiently (as mentioned above, but potentially even more relevant in clinical populations). As for the machine-learning based algorithm, its clinical utility will be enhanced substantially if it proves sensitive to identifying behavioural correlates of disorganization. ********** 6. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files. If you choose “no”, your identity will remain anonymous but your review may still be made public. Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy. Reviewer #1: No Reviewer #2: No [NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.] While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. 
To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step. |
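Editor's illustration of Reviewer #1's methodological request above: a confusion matrix of two categorical rating systems is simply a cross-tabulation, ideally accompanied by a chance-corrected agreement statistic such as Cohen's kappa. The sketch below uses invented placeholder labels and ratings, not the study's data:

```python
from collections import Counter

def confusion_matrix(labels, rater_a, rater_b):
    """Cross-tabulate two raters' categorical classifications (rows: rater_a)."""
    counts = Counter(zip(rater_a, rater_b))
    return [[counts[a, b] for b in labels] for a in labels]

def cohens_kappa(labels, rater_a, rater_b):
    """Chance-corrected agreement between two categorical raters."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    count_a, count_b = Counter(rater_a), Counter(rater_b)
    expected = sum(count_a[lab] * count_b[lab] for lab in labels) / n ** 2
    return (observed - expected) / (1 - expected)

# Hypothetical ratings (S = secure, I = insecure, D = disorganised):
labels = ["S", "I", "D"]
mcast = ["S", "S", "I", "S", "D", "I", "S", "I"]
sam = ["S", "S", "I", "I", "D", "I", "S", "S"]
print(confusion_matrix(labels, mcast, sam))        # [[3, 1, 0], [1, 2, 0], [0, 0, 1]]
print(round(cohens_kappa(labels, mcast, sam), 3))  # 0.579
```

Such a table makes visible exactly what the reviewer asks for: which classifications the two rating systems confuse, and in which direction.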
| Revision 1 |
|
PONE-D-20-29493R1 The School Attachment Monitor - a novel computational tool for assessment of attachment in middle childhood. PLOS ONE Dear Dr. Minnis, Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process. The reviewer is an expert in story stem assessments and made excellent points to revise the manuscript. I agree with the points that the limitations of a solely behavioral assessment need to be addressed more thoroughly and that readers should get more detailed information on the algorithms behind your analysis. I feel confident that the manuscript will be a valuable contribution to the PLOS One readership after this revision. Please submit your revised manuscript by Jul 02 2021 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file. Please include the following items when submitting your revised manuscript:
If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter. If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: http://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols. We look forward to receiving your revised manuscript. Kind regards, Svenja Taubner Academic Editor PLOS ONE Journal Requirements: Please review your reference list to ensure that it is complete and correct. If you have cited papers that have been retracted, please include the rationale for doing so in the manuscript text, or remove these references and replace them with relevant current references. Any changes to the reference list should be mentioned in the rebuttal letter that accompanies your revised manuscript. If you need to cite a retracted article, indicate the article’s retracted status in the References list and also include a citation and full reference for the retraction notice. [Note: HTML markup is below. Please do not edit.] Reviewers' comments: Reviewer's Responses to Questions Comments to the Author 1. 
If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the “Comments to the Author” section, enter your conflict of interest statement in the “Confidential to Editor” section, and submit your "Accept" recommendation. Reviewer #2: (No Response) ********** 2. Is the manuscript technically sound, and do the data support the conclusions? The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented. Reviewer #2: Yes ********** 3. Has the statistical analysis been performed appropriately and rigorously? Reviewer #2: Yes ********** 4. Have the authors made all data underlying the findings in their manuscript fully available? The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified. Reviewer #2: No ********** 5. Is the manuscript presented in an intelligible fashion and written in standard English? PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here. 
Reviewer #2: Yes ********** 6. Review Comments to the Author Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters) Reviewer #2: I appreciate the authors' thorough and interesting response letter. In my view, although many of their points are well taken, too little of the pros and cons discussed in the response letter has made its way into the revised manuscript. In fact, I firmly believe that the manuscript will make a more lasting impact if the authors show readers that they are aware of these issues, by unpacking the concerns that may be raised by their task and automated ratings and pitting their own counter-arguments against these possible concerns. I do not think glossing over these issues will do them any favours. Essentially, I am thus advocating for significantly extending the discussion and limitations sections by addressing the following issues: 1. Ever since narrative measures became popular in attachment research in the mid-1980s with Mary Main's move to the level of representation, they have been considered windows to children's internal representations. In so doing, they involve tapping into children's processes of meaning-making, by which I mean they are thought to provide access to the child's subjective perspective on interactions and relationships with their caregivers and themselves. As such, narrative measures put us in a much better position to understand what Bowlby conceived of as the "goal-corrected partnership" that eventually develops between caregivers and children and involves a basic capacity for perspective-taking.
From this perspective, a purely proximity-seeking and movement-based algorithm carries the inherent risk of implying that we can safely ignore this information and a sole focus on behavior (akin to the Strange Situation) will do. Do the authors really want to suggest this, i.e., that the same rating principles apply to story-completions as they do to the Strange Situation? Forgive me for saying so, but that would somehow feel a bit like putting the narrator in a Skinner box. From their response letter, however, I gathered that they are not advancing such a bold claim, but, rather, alerting readers to the possibility that the assessment of movement and behavioural manipulation of the figures may offer valuable insights with attachment-relevant implications. 2. While the authors nicely illustrate the warm-up procedure used in the SAM, the lack of standardized prompting procedures (which, by the way, is a standardized way of tailoring the task to the individual child) remains an important weakness of the SAM that should be addressed in the discussion. Notably, for administration, prompting by the experimenter arguably increases the engagement with the task, establishing the give-and-take nature of interactive play between the experimenter and the child. More importantly, for rating, children who avoid a story-theme even after a standardized prompt are typically rated as higher in avoidance or denial, therefore providing evidence for disentangling attachment-avoidance and attachment-resistance. Moreover, this procedure is considered particularly important for young children so as to help distinguish potential lack of comprehension from avoidant manoeuvres. 3. I concur with the authors that readers of their manuscript will presumably mostly be psychologists and psychiatrists. However, I do not go along with their decision to therefore dispense with a balanced view on the limitations of the algorithm.
In fact, I feel they are under an obligation to do so (in non-technical terms) in a separate paragraph in the discussion. For example, it is crucial to address whether their statistical leave-one-out cross-validation approach may overestimate the generalisability of their algorithm (e.g., due to overfitting, sample size issues and a lack of a validation set), and I would venture that most statistically well-trained psychologists will be able to grasp the gist of these issues and therefore be put in a better position to reach an informed decision as to whether they should use the SAM or not. Finally, I do not understand why the authors cannot provide anonymised secondary data (e.g., spreadsheet codes derived from SAM vs. MCAST vs. the algorithm, age, gender) which could be useful for meta-analyses. ********** 7. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files. If you choose “no”, your identity will remain anonymous but your review may still be made public. Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy. Reviewer #2: No [NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.] While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free.
Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step. |
| Revision 2 |
|
The School Attachment Monitor - a novel computational tool for assessment of attachment in middle childhood. PONE-D-20-29493R2 Dear Dr. Minnis, We’re pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements. Within one week, you’ll receive an e-mail detailing the required amendments. When these have been addressed, you’ll receive a formal acceptance letter and your manuscript will be scheduled for publication. An invoice for payment will follow shortly after the formal acceptance. To ensure an efficient process, please log into Editorial Manager at http://www.editorialmanager.com/pone/, click the 'Update My Information' link at the top of the page, and double check that your user information is up-to-date. If you have any billing related questions, please contact our Author Billing department directly at authorbilling@plos.org. If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact. If they’ll be preparing press materials, please inform our press team as soon as possible -- no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org. Kind regards, Svenja Taubner Academic Editor PLOS ONE Additional Editor Comments (optional): Reviewers' comments: |
| Formally Accepted |
|
PONE-D-20-29493R2 The School Attachment Monitor - a novel computational tool for assessment of attachment in middle childhood. Dear Dr. Minnis: I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now with our production department. If your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information please contact onepress@plos.org. If we can help with anything else, please email us at plosone@plos.org. Thank you for submitting your work to PLOS ONE and supporting open access. Kind regards, PLOS ONE Editorial Office Staff on behalf of Dr. Svenja Taubner Academic Editor PLOS ONE |
Open letter on the publication of peer review reports
PLOS recognizes the benefits of transparency in the peer review process. Therefore, we enable the publication of all of the content of peer review and author responses alongside final, published articles. Reviewers remain anonymous, unless they choose to reveal their names.
We encourage other journals to join us in this initiative. We hope that our action inspires the community, including researchers, research funders, and research institutions, to recognize the benefits of published peer review reports for all parts of the research system.
Learn more at ASAPbio.