Peer Review History
Original Submission: April 7, 2020
PONE-D-20-09557
RNAthor – fast, accurate normalization, visualization and statistical analysis of RNA probing data resolved by capillary electrophoresis
PLOS ONE

Dear Dr. Szachniuk,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE's publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process. The revised manuscript should address all the critical points raised by all reviewers.

Please submit your revised manuscript by Jul 24 2020 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

Please include the following items when submitting your revised manuscript:
If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter.

Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: http://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols

We look forward to receiving your revised manuscript.

Kind regards,
Danny Barash
Academic Editor
PLOS ONE

Journal Requirements: When submitting your revision, we need you to address these additional requirements. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf.

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. Is the manuscript technically sound, and do the data support the conclusions? The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: No
Reviewer #2: Partly

**********

2. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: N/A
Reviewer #2: N/A

**********

3. Have the authors made all data underlying the findings in their manuscript fully available? The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes
Reviewer #2: Yes

**********

4. Is the manuscript presented in an intelligible fashion and written in standard English? PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes
Reviewer #2: Yes

**********

5. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: Experimental data from RNA structure probing assays, in the form of reactivities to structure-sensitive reagents, can be integrated with RNA structure-prediction algorithms to improve prediction accuracy. To this end, the raw data are first processed through assay-specific pipelines to obtain per-nucleotide reactivities.
The reactivities are normalized and become the input to a structure-prediction algorithm, along with the sequences of the RNAs of interest. SHAPE-based probing followed by capillary electrophoresis is the traditional way to perform such experiments, and ShapeFinder is a popular tool for estimating reactivities from the electrophoresis data. RNAthor by Gumna et al. primarily serves the purpose of normalizing the reactivities from ShapeFinder and saving files that can be input to RNAstructure. It performs additional visualizations and some statistical tests.

Normalization and exploratory visualization of data are important steps of data analysis, and experimental biologists often struggle with them. Hence, a good interactive web application for this would be a great development. Normalization of such data must account for poor-quality information for some nucleotides. Gumna et al. have implemented their empirically developed approach to identify and exclude such nucleotides from analysis. This is the primary technical contribution in this manuscript. However, the approach has not been validated or compared with existing approaches. Tests against some benchmark data and performance assessment are required. Besides, the authors have not described the logic behind their exclusion criteria but simply stated them as mathematical rules. Of the three criteria, the one I could interpret (filtering negative reactivities) is common practice in the field. The other two criteria have cutoff values for comparison of peak areas, but it has not been ascertained that they are optimal. They have simply been stated as prior empirical knowledge. Hence, the novelty is low.

Additionally, the authors claim that RNAthor "significantly reduces the time required for data analysis". However, this is not substantiated by any of the results. I'd rather say that an interactive application such as RNAthor could save the time spent writing scripts for data analysis. The current wording would be appropriate if the manuscript presented algorithmic advances that reduce the time complexity of the analysis.

Further, the authors have motivated the manuscript in several places by claiming that SHAPE followed by capillary electrophoresis and analyzed by ShapeFinder is a widely recommended approach to study RNA structures. However, the references to support these assertions are often old papers from one particular lab. Hence, the application seems to be of limited interest.

Following are some other comments:

1. It would be nice to have a standalone version of the application.
2. What happens to data uploaded on the server if the user doesn't make an account? Is it also stored on your server for months? Please comment on whether there are any data security issues.
3. Page 10, line 207 says "averaged normalized SHAPE data with standard deviation". Is this referring to the average across nucleotides or across samples?
4. Page 11, lines 232-239 describe additional statistics computed by RNAthor. However, the purpose behind these specific statistical tests and what users could do with them is missing.
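To make the kind of rule-based exclusion discussed in this review concrete, here is a minimal sketch combining the three criteria the reviewer describes (negative reactivities plus two peak-area comparisons). The cutoff values and argument names below are illustrative placeholders, not the values or interface implemented in RNAthor.

```python
import numpy as np

def flag_unreliable(reactivity, bg_area, signal_area,
                    bg_ratio_cutoff=1.5, min_signal_area=0.05):
    """Flag nucleotides to exclude from downstream analysis.

    Rule 1 (negative reactivity) is common practice; the two
    area-based cutoffs are illustrative placeholders, NOT the
    values used by RNAthor.
    """
    reactivity = np.asarray(reactivity, dtype=float)
    bg_area = np.asarray(bg_area, dtype=float)
    signal_area = np.asarray(signal_area, dtype=float)

    negative = reactivity < 0                            # rule 1: negative reactivity
    high_bg = bg_area > bg_ratio_cutoff * signal_area    # rule 2: background dominates signal peak
    weak = signal_area < min_signal_area                 # rule 3: signal peak too weak to quantify
    return negative | high_bg | weak

# Example: mark excluded nucleotides with NaN before normalization
react = np.array([0.8, -0.1, 0.3, 1.2])
bg = np.array([0.1, 0.05, 0.9, 0.2])
sig = np.array([0.9, 0.02, 0.4, 1.3])
clean = np.where(flag_unreliable(react, bg, sig), np.nan, react)
```

Validating such rules would then amount to asking whether downstream results are robust to the choice of the two placeholder cutoffs, which is exactly the benchmarking the reviewer requests.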
Reviewer #2: The manuscript by Gumna et al. presents a web-based platform for quality control of RNA structure probing data obtained by experiments that combine SHAPE chemistry with capillary electrophoresis (CE) quantification of the SHAPE reaction's cDNA products. The platform takes a SHAPE reactivity profile as input and performs automated data normalization, needed to bridge between different experimental conditions and different RNAs, and automated detection of unreliable data points. Users can also visualize the data and run a statistical test that aims to assess reproducibility and possibly also structural variation.

This work tackles a very important aspect of structure probing data analysis, in particular one that has not received sufficient attention to date. Normalization and other quality control steps remain a relatively unexplored area, and researchers often resort to one of a few popular strategies, which many find to be over-simplistic, too narrowly focused, and generally unsatisfactory. As such, this work has the potential to have real impact. However, more work is needed on the authors' side to bring this work to its full potential; in particular, more testing and a solid, convincing demonstration of the utility and validity of the proposed approach. Several things are missing.

First, there are many methods and tools for SHAPE data analysis that are currently disregarded by this work. Only two software platforms, ShapeFinder and QuShape, are mentioned here, but many other tools were developed over the last 5-6 years. It is true that newer tools were designed with Seq/MaP/MaPseq protocols in mind; however, this is irrelevant because the proposed platform accepts reactivities as input. It does not matter how reactivities were obtained: reactivities processed from next-gen or from CE platforms are still reactivities. It has also been shown multiple times (mainly by the Weeks lab) that next-gen-based reactivities have statistical properties very similar to those obtained by CE platforms. Accounting for other data processing platforms is important because they offer similar normalization routines, and in fact, most of them feature additional popular normalization routines. This, in turn, impacts the novelty of the proposed platform.

What I find to be unique to this work is the authors' approach to automating quality control, particularly the removal of potentially unreliable data points. I don't think other platforms offer something similar, but this leads me to my second point, which is that the manuscript lacks any demonstration of the performance of the approach (or of any other feature unique to this work, such as reproducibility assessment). I understand the authors have gained substantial experience analyzing structure probing data, but the fact that they believe their method "works well" is insufficient for publishing it. The only way to get users to try this out is by showing them, visually and also quantitatively, that the outputs are indeed robust and reliable. I have referred to this issue in more detail in my comments below.

Finally, I wonder why the authors limit consideration to SHAPE data and to CE-based platforms. I think the scope of their work could be extended relatively easily. As I mentioned above, any reactivity profile needs to be normalized and quality-controlled. Furthermore, the bigger a dataset is, the more critical automation is, so why not consider the plethora of datasets obtained by high-throughput sequencing-based experiments? I also don't see a reason to limit this work to SHAPE data. There are many popular alternatives today, including DMS, HRF, and several SHAPE variants, such as NAI.
I understand the authors may have tailored their automated QC routine to the special properties of SHAPE, but it is worth testing how well it does on DMS data. Additionally, as shown by several labs, SHAPE-CE, SHAPE-MaP, and SHAPE-Seq all generate very similar data, so why not extend the scope at least in the context of the SHAPE probe?

To summarize, this work needs to be revised to account for a large body of work from labs other than the Weeks lab and for recent advances in structure probing experimental and data analysis capabilities. At the same time, it could also leverage the recent expansion in the scope of structure probing to provide a tool of much broader applicability than its current designation, especially because it targets a step in data analysis which has not been adequately addressed to date. Detailed comments and suggestions are below.

There is some novelty in the automation of selecting reactivities for exclusion from analysis based on background signal. To the best of my knowledge, this type of quality control is normally done manually. However, the manuscript and the proposed tool target analysis of relatively short RNAs (no longer than ~300-500 nt) due to SHAPE/DMS and CE limitations. In such cases, manual/visual inspection of the signal and background traces is not so time-consuming.

The authors also claim that the proposed automation of selection "works well for SHAPE experiments performed in vitro or in vivo" (page 8). There are two key issues with this statement. First, it is not supported by evidence. Second, it is not clear how the authors determine that their method "works well". I understand that this statement is based on vast experience with SHAPE data analysis, but this is not convincing from a reader's perspective, and the readers are your potential users. I would like to see numerous real SHAPE data traces (in vitro and in vivo) from more than one lab, for which the automated method works as well as manual correction. This, in turn, requires a suitable quantitative assessment metric. Since currently there is no consensus metric for evaluating "goodness" of SHAPE data, the authors could come up with their own metric, as long as it is appropriate and convincing. Some investigators use SHAPE-directed structure prediction accuracy to benchmark the performance of data processing pipelines. However, I don't find that convincing (unless differences are truly dramatic) because the NNTM model introduces so much additional complexity and uncertainty to the output. Another way is to show that agreement between replicates improves after the automated routine is applied to the data. Microarray informatics method developers commonly use this approach to demonstrate that a proposed pre-processing step is effective. Existing measures of replicate agreement that are specialized to structure probing data are found in Choudhary et al., Bioinformatics, 2016 and Choudhary et al., Genome Biology, 2019. Alternatively, the authors could propose a novel quantitative measure that captures those data characteristics that make them think it "works well".
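One concrete way to realize the replicate-agreement check suggested above, as a minimal sketch: plain Pearson correlation over commonly quantified positions stands in for the specialized measures cited (Choudhary et al.), and excluded nucleotides are assumed to be marked with NaN.

```python
import numpy as np
from scipy.stats import pearsonr

def replicate_agreement(rep1, rep2):
    """Pearson correlation over positions quantified in both replicates
    (NaN marks excluded nucleotides). A simple stand-in for specialized
    agreement measures; an increase in this score after QC filtering
    would support the automated routine."""
    rep1 = np.asarray(rep1, dtype=float)
    rep2 = np.asarray(rep2, dtype=float)
    both = ~np.isnan(rep1) & ~np.isnan(rep2)
    r, _ = pearsonr(rep1[both], rep2[both])
    return r

# Hypothetical usage: compare agreement before and after QC filtering
# r_raw = replicate_agreement(raw_rep1, raw_rep2)
# r_qc = replicate_agreement(filtered_rep1, filtered_rep2)
```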
Data normalization: The authors implemented the popular box-plot method, but they caution the reader to avoid using it for RNAs shorter than 300 nt (page 9). So what should a user do if he or she is studying a relatively short RNA? How about providing an alternative strategy for relatively short RNAs? This lack of options severely limits the utility of the proposed platform. Please also see my other comment below regarding normalization strategies that other data analysis platforms offer.

Some statements need to be toned down and/or revised. For example, "Currently, the most common method of RNA chemical probing is SHAPE used in conjunction with capillary electrophoresis (SHAPE-CE)" (Abstract). While it is true that SHAPE used to be the most popular method for a decade or so, DMS appears to be as popular as SHAPE nowadays. I think it is more appropriate to say that one popular reagent choice is SHAPE. Note that there is a similar statement in the Introduction section, which also needs to be reworded. Additionally, I think that over the last 2-3 years, SHAPE/DMS in conjunction with next-gen sequencing (via Seq or MaPseq protocols and, very recently, also via direct RNA nanopore sequencing) has become a much more popular choice than traditional CE-based structure probing. A quick literature search would reveal both the widespread use of DMS for modification and the widespread reliance on NGS for cDNA sequencing.

Another statement I found to be somewhat outdated is "ShapeFinder is a popular computational tool for the extraction of quantitative SHAPE reactivity values from raw CE electropherograms." To the best of my knowledge, many labs use QuShape (a newer and more automated SHAPE-CE analysis software from the Weeks lab) and others use in-house scripts. While ShapeFinder used to be the platform of choice for SHAPE-CE analysis, a quick literature search will show this is no longer the case, especially over the last 2-3 years. Note, however, that I acknowledge there are major issues with QuShape's performance, as I know that many labs are unhappy with it and seek better alternatives.

Finally, I also find the statement "… manual normalization of these values to a uniform scale and exclusion of unreliable data are both required before their usage by RNA structure prediction software" to be somewhat misleading, because QuShape does feature reactivity normalization and might also allow users to exclude unreliable data (not sure about the latter, though). The way the abstract is worded, one might think that users currently have no software tools for normalizing the data and possibly also excluding unreliable measurements, and I don't think that is indeed the case.

In continuation of my previous point, there are multiple software platforms other than QuShape that allow users to normalize structure probing data. In contrast to ShapeFinder and QuShape, these platforms were designed for NGS-based probing data, hence the initial input to these tools must take the form of reads, FASTA files, or read counts. However, once reactivities have been calculated, these tools can be used to apply several different normalizations to them. In other words, reactivity normalization is independent of how reactivities are obtained. They may be obtained from CE or NGS data, using a variety of platforms, but once you have them, they can be normalized by numerous existing platforms. For example, SEQualyzer (Choudhary et al., Bioinformatics, 2016) features two normalization strategies (2-8% and box-plot), RNA Framework (Incarnato et al., NAR, 2018) features three normalization strategies (2-8%, box-plot, and 90% Winsorization), and StructureFold (Tang et al., Bioinformatics, 2015) features (via the Galaxy platform) 2-8% normalization and an option to cap reactivities. Some of these platforms also provide data visualization, for a single experiment and sometimes also for replicates. Some platforms are also open-source (e.g., SEQualyzer and RNA Framework), which, in turn, allows users to directly use their normalization modules. All this prior work is missing from this manuscript, which might create an impression that users currently have no automated way of normalizing SHAPE/DMS reactivities, and that is incorrect.

Moreover, the most popular normalization strategies are likely 2-8% and box-plot, and they are described in detail in Sloma and Mathews, Methods in Enzymology, 2015. In fact, one could easily implement them in a Matlab/R/Python script, and I disagree with the authors that they "require significant user training and are … time consuming and prone to errors".
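Indeed, both routines are short. A minimal sketch of the 2-8% and box-plot strategies as commonly described (e.g., in Sloma and Mathews, 2015), with the caveat that details such as outlier caps and percentage boundaries vary between implementations:

```python
import numpy as np

def normalize_2_8(reactivity):
    """2-8% rule: drop the top 2% of values as outliers, then divide
    all reactivities by the mean of the next 8%."""
    x = np.asarray(reactivity, dtype=float)
    vals = np.sort(x[~np.isnan(x)])[::-1]            # descending, NaN-free
    n = len(vals)
    top2 = int(np.ceil(0.02 * n))
    top10 = max(top2 + 1, int(np.ceil(0.10 * n)))
    return x / vals[top2:top10].mean()

def normalize_boxplot(reactivity):
    """Box-plot rule: discard outliers above Q3 + 1.5*IQR, then divide
    by the mean of the top 10% of the remaining values."""
    x = np.asarray(reactivity, dtype=float)
    vals = x[~np.isnan(x)]
    q1, q3 = np.percentile(vals, [25, 75])
    kept = np.sort(vals[vals <= q3 + 1.5 * (q3 - q1)])[::-1]
    return x / kept[:max(1, int(np.ceil(0.10 * len(kept))))].mean()
```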
Page 4, second sentence: the description of SHAPE is limited to traditional truncation-based SHAPE chemistry. However, since 2014, modifications are alternatively detected via the MaP approach, where the RT introduces mutations at modified sites. This strategy can only be used in conjunction with DNA sequencing, which is likely why it is not mentioned here. However, for the text to be more accurate and up-to-date, I think it should be mentioned.

Results, subsection "A brief overview of SHAPE-CE raw data analysis using ShapeFinder software": I think this should be omitted, or at least moved to the Supplementary Material, because knowing how ShapeFinder works is not necessary for understanding the authors' work. This is because both normalization and detection of unreliable data points (i.e., data QC) occur after the output from ShapeFinder has been obtained. Furthermore, ShapeFinder does not trigger the issues that normalization and data QC address. These issues are inherent to chemical modification experiments.

Results, subsection "RNAthor web application": I find this material to be suitable for a user manual, especially the description of the various buttons, Reference page, Contact page, and Terms and Conditions page. These are not necessary for understanding the work or getting an idea of how user-friendly it is. It should be possible to compress this subsection into the most essential 4-5 sentences that refer to the ease of use and web interface. I also strongly encourage the authors to compile a short user manual and make it available on the platform's webpage.

Results, subsection "Input Data": Similar comments as above. The description is too detailed and should be substantially shortened (e.g., no need to mention which button to press).

Results, subsection "Output Data": Similar comments as above.

Results, subsection "Output Data": Note that most or all existing structure probing data analysis pipelines feature data visualization, not only ShapeFinder and QuShape. Some pipelines also allow users to visualize several reactivity profiles together.

Results, subsection "Additional Analysis of Normalized SHAPE-CE Data": I agree with the authors that SHAPE/DMS data depart from a normal distribution. In fact, it has been rigorously established multiple times that this departure is quite significant (see, for example, Sukosd et al., NAR, 2014; Eddy, Ann. Reviews Biophysics, 2014; Deng et al., RNA, 2016). I therefore do not see why a test for normality is implemented. If the authors have SHAPE data that are nearly Gaussian, I would like to see them appended as Supplementary Material.
Results, subsection "Additional Analysis of Normalized SHAPE-CE Data": To facilitate differential reactivity analysis between two samples and/or to assess reproducibility across experiments, the authors implemented a Mann-Whitney U test, also known as the Wilcoxon rank-sum test. Note that a very similar test, namely the Wilcoxon signed-rank test, was recently used for differential reactivity analysis in Choudhary et al., Genome Biology, 2019, albeit applied to similarity scores, not reactivities. The Wilcoxon rank-sum test makes an independence assumption, whereas the signed-rank version of the test relaxes this assumption. Can the authors justify their assumption that the compared samples are independent, especially if these are replicates of the same experiment?

Additionally, note that the proposed test does not account for biological variation between samples, since one needs at least two replicates from each condition/experiment to assess biological variation per condition. What this means is that if one performs the experiment on two biological replicates (especially relevant to in vivo samples), the test might indicate that results are not reproducible because it picks up the biological variation in the RNA's structure and its reactivity to the SHAPE reagent. I think it is important to alert users to the limitation of the proposed test and to emphasize what it is capable of detecting. This limitation is inherent to any differential analysis test that compares reactivity profiles without pre-assessing biological variability between samples in a condition.
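The distinction drawn here maps directly onto two standard SciPy calls. A minimal sketch using synthetic stand-ins for two equal-length normalized reactivity profiles (the gamma draws below are illustrative, not real probing data):

```python
import numpy as np
from scipy.stats import mannwhitneyu, wilcoxon

rng = np.random.default_rng(0)
profile_a = rng.gamma(1.0, 0.5, size=200)  # synthetic stand-ins for two
profile_b = rng.gamma(1.1, 0.5, size=200)  # normalized reactivity profiles

# Mann-Whitney U (rank-sum): treats the two profiles as independent samples
u_stat, p_u = mannwhitneyu(profile_a, profile_b, alternative="two-sided")

# Wilcoxon signed-rank: pairs the profiles nucleotide by nucleotide,
# the variant suited to dependent samples such as replicate measurements
w_stat, p_w = wilcoxon(profile_a, profile_b)
```

The choice between the two calls encodes exactly the assumption the reviewer questions: whether the two profiles can be treated as independent samples or as paired per-nucleotide measurements.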
As with the automated QC routine, I am missing a demonstration of the utility and the validity of the proposed statistical test. This could be done via benchmarking against existing methods (see, for example, Choudhary et al., Genome Biology, 2019 for a benchmark of recent methods). Potential users need to see some concrete evidence that the proposed test has statistical power. Alternatively, at the least, the authors should show examples of reproducible and irreproducible traces and their corresponding test scores, as well as examples of how the test performs on biological replicates vs technical replicates. A comparison with existing reproducibility (QC) measures could also be helpful.

It is clear that the authors have gained tremendous experience analyzing CE-based structure probing data, and that the proposed platform is the result of years of empirical hands-on experience. This is invaluable, especially since there aren't many labs with that level of expertise. For this reason, this work has the potential to lead to real advances in data analysis. In particular, the authors say "RNAthor was extensively tested on ShapeFinder output files (IPF) from published and unpublished SHAPE-CE experiments performed in our laboratory. It was also tested on IPF files from hydroxyl radicals and DMS probing experiments resolved by capillary electrophoresis." I think what's really missing in this manuscript, other than an up-to-date review of existing work, is a demonstration of improved performance on many real datasets from multiple probes, multiple conditions, multiple labs, and multiple RNAs. Such a demonstration should also include comparisons to existing alternatives, and there are several besides ShapeFinder and QuShape.

**********

6. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files. If you choose "no", your identity will remain anonymous, but your review may still be made public. Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No
Reviewer #2: No

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.
Revision 1
PONE-D-20-09557R1
RNAthor – fast, accurate normalization, visualization and statistical analysis of RNA probing data resolved by capillary electrophoresis
PLOS ONE

Dear Dr. Szachniuk,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE's publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process. The revised manuscript should address all the critical points raised by all reviewers.

Please submit your revised manuscript by Oct 11 2020 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

Please include the following items when submitting your revised manuscript:
If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter.

Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: http://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols

We look forward to receiving your revised manuscript.

Kind regards,
Danny Barash
Academic Editor
PLOS ONE

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the "Comments to the Author" section, enter your conflict of interest statement in the "Confidential to Editor" section, and submit your "Accept" recommendation.

Reviewer #1: (No Response)
Reviewer #2: (No Response)

**********

2. Is the manuscript technically sound, and do the data support the conclusions? The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Yes
Reviewer #2: Partly

**********

3. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes
Reviewer #2: N/A

**********

4. Have the authors made all data underlying the findings in their manuscript fully available? The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: No
Reviewer #2: Yes

**********

5. Is the manuscript presented in an intelligible fashion and written in standard English? PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes
Reviewer #2: Yes

**********

6. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: I thank the authors for considering my suggestions. The revised manuscript presents a better case for the utility of RNAthor.
The authors have explained why it is not necessary to have a standalone version of RNAthor at this time, added relevant references, and addressed my other minor comments.

A good number of experimental biologists are indeed unskilled at even basic data analysis tasks. Such biologists could benefit from a web server where they can upload outputs of ShapeFinder or QuShape and save normalized data, relevant figures, and results from statistical tests. Hence, as a software tool, RNAthor seems to have utility.

The primary contribution of the manuscript is a web server that combines existing functionalities from other sources. The method for automatic identification of unreliable data is an implementation of the rules that the authors have utilized previously (e.g., in their articles in Nature Communications and RNA Biology). The statistical tests that they perform have been used by other researchers active in the field. So overall, there appears to be no novelty with regards to the methods.

In my previous review, my major concern was that the method to identify "unreliable probing data" has not been validated. In revision, the authors have shown that the results from RNAthor are comparable to those from their manual analysis. This serves as evidence that there are no bugs in the software and that the rules followed by the human analyst have been faithfully implemented. The differences that they observed can be explained by the subjectivity of manual analysis. However, objective validation of the method is still lacking. The authors claim that comparison of RNAthor with manual analysis is the only possible validation. I believe that, at the very least, the authors could test a range of cutoffs to identify and exclude unreliable data. The results from different cutoffs could be objectively compared by examining the accuracy in reproducing well-studied RNA structures, or other biological results that are widely believed in the field to be true.

In my assessment, stating that an implemented algorithm is based on experience is not enough to claim novelty of the method. To make such a claim, the authors must demonstrate that, out of a set of other plausible and reasonable methods, the method implemented in the web server is the one that performs the best. The authors can take a look at the following paper from the Laederach lab as an example. In this article, Woods and Laederach automated some of the rules used by humans for manual analysis. They tested a range of algorithms to identify the one that performs the best.

Woods, C. T., & Laederach, A. (2017). Classification of RNA structure change by 'gazing' at experimental data. Bioinformatics, 33(11), 1647-1655.
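The cutoff sweep proposed above could be set up roughly as follows. A minimal sketch in which `predict_and_score` is a hypothetical callable (e.g., wrapping data-directed folding plus comparison to an accepted reference structure), not part of RNAthor, and the cutoff grid is an arbitrary illustration:

```python
import numpy as np

def sweep_cutoffs(reactivity, bg_area, signal_area, predict_and_score,
                  cutoffs=np.arange(1.0, 3.25, 0.25)):
    """Score a range of background/signal cutoffs against a well-studied
    reference structure.

    predict_and_score is a hypothetical user-supplied callable: it should
    run data-directed folding on the filtered profile and return an
    accuracy score (e.g., F1 versus the accepted structure).
    """
    scores = {}
    for c in cutoffs:
        keep = (reactivity >= 0) & (bg_area <= c * signal_area)
        filtered = np.where(keep, reactivity, np.nan)   # NaN marks excluded positions
        scores[float(c)] = predict_and_score(filtered)
    best = max(scores, key=scores.get)
    return best, scores
```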
In summary:

1. I find the article acceptable if it is classified as reporting a software tool. I'd recommend that the authors tone down their claim of novelty with regards to "an algorithm for the automatic identification of unreliable probing data", as this claim requires objective validations that are not there. I understand that the authors may have added this claim in response to my earlier comment about lack of novelty. My comment was based on assessing the manuscript as a methods research article, which may not be the intention of the authors. So it is best to remove this claim. Also, the authors should make the raw data used for Figures 3-4 available as supplementary data, or provide links if they are available online.

2. To be accepted as a methods research article, major revisions are still needed to demonstrate that the method is indeed optimal and applicable for general use. The lack of gold standards to objectively evaluate methods for analysis of RNA structure-probing data is a challenge faced by all methods researchers active in the field. However, the community has also found acceptable ways for such evaluation. If the authors did indeed mean to publish RNAthor as a methods research article, I hope that they will borrow ideas from other manuscripts and consider validating their method.

Reviewer #2: The authors added several useful features to their web server, such as options to perform and visualize data-directed secondary structure prediction and to analyze DMS data. Hopefully, this will make the proposed tool more appealing to potential users, although overall, the main novelty in this work remains a fairly simple routine for automated detection of unreliable data points. A comparison between manual and automated data processing helps demonstrate that the automated detection is reliable/judicious, although I would expect to see more examples if I were a potential user, or at least manual analysis by additional experts (not the makers of the tool). Regarding the optional statistical tests, I think there are more statistically sound and powerful differential reactivity analysis methods out there, and I don't feel what's offered by this tool is very powerful. Finally, please note that the graphics are of poor quality and should be improved.

**********

7. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files. If you choose "no", your identity will remain anonymous, but your review may still be made public. Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No
Reviewer #2: No

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.
Revision 2
RNAthor – fast, accurate normalization, visualization and statistical analysis of RNA probing data resolved by capillary electrophoresis
PONE-D-20-09557R2

Dear Dr. Szachniuk,

We're pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements. Within one week, you'll receive an e-mail detailing the required amendments. When these have been addressed, you'll receive a formal acceptance letter, and your manuscript will be scheduled for publication. An invoice for payment will follow shortly after the formal acceptance.

To ensure an efficient process, please log into Editorial Manager at http://www.editorialmanager.com/pone/, click the 'Update My Information' link at the top of the page, and double-check that your user information is up-to-date. If you have any billing-related questions, please contact our Author Billing department directly at authorbilling@plos.org.

If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact. If they'll be preparing press materials, please inform our press team as soon as possible, and no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

Kind regards,
Danny Barash
Academic Editor
PLOS ONE

Additional Editor Comments (optional):

Reviewers' comments:
Formally Accepted
PONE-D-20-09557R2
RNAthor – fast, accurate normalization, visualization and statistical analysis of RNA probing data resolved by capillary electrophoresis

Dear Dr. Szachniuk:

I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now with our production department.

If your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

If we can help with anything else, please email us at plosone@plos.org.

Thank you for submitting your work to PLOS ONE and supporting open access.

Kind regards,
PLOS ONE Editorial Office Staff
on behalf of Dr. Danny Barash
Academic Editor
PLOS ONE
Open letter on the publication of peer review reports
PLOS recognizes the benefits of transparency in the peer review process. Therefore, we enable the publication of all of the content of peer review and author responses alongside final, published articles. Reviewers remain anonymous, unless they choose to reveal their names.
We encourage other journals to join us in this initiative. We hope that our action inspires the community, including researchers, research funders, and research institutions, to recognize the benefits of published peer review reports for all parts of the research system.
Learn more at ASAPbio.