Peer Review History
Original Submission: September 25, 2024
PONE-D-24-38474
Towards clinical applicability of fMRI via systematic filtering
PLOS ONE

Dear Dr. Koten,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

Please submit your revised manuscript by Feb 01 2025 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

Please include the following items when submitting your revised manuscript:
If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: https://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols.

We look forward to receiving your revised manuscript.

Kind regards,
Kendrick Kay
Academic Editor
PLOS ONE

Journal Requirements:

When submitting your revision, we need you to address these additional requirements.

1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf and

2. Please note that PLOS ONE has specific guidelines on code sharing for submissions in which author-generated code underpins the findings in the manuscript. In these cases, we expect all author-generated code to be made available without restrictions upon publication of the work. Please review our guidelines at https://journals.plos.org/plosone/s/materials-and-software-sharing#loc-sharing-code and ensure that your code is shared in a way that follows best practice and facilitates reproducibility and reuse.

3.
Thank you for stating in your Funding Statement: “This study was supported by FWF grant (P 22577-B18).” Please provide an amended statement that declares *all* the funding or sources of support (whether external or internal to your organization) received during this study, as detailed online in our guide for authors at http://journals.plos.org/plosone/s/submit-now. Please also include the statement “There was no additional external funding received for this study.” in your updated Funding Statement. Please include your amended Funding Statement within your cover letter. We will change the online submission form on your behalf.

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. Is the manuscript technically sound, and do the data support the conclusions? The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Partly
Reviewer #2: Yes

**********

2. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: No
Reviewer #2: Yes

**********

3. Have the authors made all data underlying the findings in their manuscript fully available? The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g.
participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes
Reviewer #2: Yes

**********

4. Is the manuscript presented in an intelligible fashion and written in standard English? PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes
Reviewer #2: Yes

**********

5. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: Koten and colleagues performed a well-thought-out and clearly explained set of experiments to test whether a new set of detrending and denoising methods could improve the reproducibility of task-based fMRI over conventional detrending and denoising methods. The authors also used a clever validation framework that allowed them to avoid bias in their methodology. However, the results are not presented very clearly, and I believe there are issues with the results and interpretations that need to be addressed before this article is ready for publication, as outlined below.

I am not convinced that the SG filters improve the reliability of fMRI signals or connectivity over the standard SPM methods based on these data. Although it is alluded to that SG filters are more reliable, the “SPM denoised, detrended, cleaned, HRF” pipeline seems to lead to just as good reliability as the SG pipelines. For instance, the time course reliability of the SPM pipeline is 0.41, while the time course reliabilities of the best two SG pipelines are 0.48 and 0.41.
If the claim is to be made that the SG pipelines have better reliability, there needs to be either a statistical comparison or a much more substantial difference in reliability.

Additionally, why is one pipeline highlighted in red in Tables 1 and 2? This is not obviously the best-performing pipeline in each table to me. It should be noted somewhere why this pipeline is highlighted, and if this pipeline is considered “optimal”, the reasoning should be noted somewhere. Furthermore, the reliability measures of time courses in Table 2 are grand means; how do these reliability measures vary across participants and regions? Could these measures be statistically compared between pipelines if the grand mean is not calculated?

Figure 9 shows that the SG method produces better connectivity reliability than the SPM method. However, based on Table 2, the two SPM methods that were tested, but not presented in Figure 9, look as if they have as high or nearly as high connectivity reliability as the SG methods. For a fair comparison of SG to SPM methods, either all methods or the methods with the highest reliabilities should be compared in Figure 9.

In the section on power spectra of signal time courses, why is it assumed that the SG filters' retention of high-frequency signal and removal of low-frequency signal is akin to removing unwanted noise? I would imagine that much of this high-frequency signal is also due to unwanted noise and is not solely task-related. The signals should be compared to the predictor time courses using ICCs to determine which detrending/cleaning method is optimal, as is done later in the manuscript.

All figure axes should be labeled with units. It is hard to evaluate the results in the figures without this. As an example, I cannot tell what the time units are for Figure 6, so I do not know how to convert these to frequency as mentioned in the text.

This study seems to assume that reproducibility of 1 is the gold standard.
However, we know that brain activity is constantly fluctuating and changing, even in the presence of repeating, external stimuli. Therefore, some variance between sessions should be expected in the fMRI signal, even in the presence of absolutely no noise. If the gold standard time course is the predictor time course, shouldn't this be plotted in Figures 6 and 7 so that we can visually evaluate which pipeline is the closest to what we would expect?

Figure 6: I think the top figure plots the “residual” time courses, correct? If so, this should be mentioned, since it is unclear as written.

It seems like the individual HRF and “event-related average” are used somewhat interchangeably. This seems correct to me, but it would be clearer if one term was defined and used.

Reviewer #2: Reproducibility assessment of fMRI is a common practice at the group level. Conventional signal post-processing of fMRI has shown poor reproducibility at the individual level. The study aims to introduce a data-driven signal filtering workflow that applies Savitzky-Golay filters to the fMRI signal to improve single-subject reproducibility in comparison to SPM's default signal filtering method. By improving subject-level reproducibility, the work aims to improve the potential value of fMRI scans in clinical assessments.

Overall, the experiments are documented in great detail and are comprehensive. However, the current workflow is built around SPM and described in some SPM-specific terms. To allow the research to have a wider reach in the general fMRI research community, I have some suggestions to allow this work to be adopted by FSL, fMRIPrep, and AFNI users. It was a really good read, and I recommend that the study be published in PLOS ONE after the several points listed below are addressed:

SPM filter: please elaborate on the nature of this filtering method. SPM uses a 128 s cutoff for its high-pass filter by default and creates cosine regressors.
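For readers outside the SPM ecosystem, a high-pass filter of this kind can be sketched as a discrete cosine transform (DCT) drift basis that is regressed out of (or included alongside) the GLM design. The sketch below is illustrative only; the exact regressor count and scaling differ slightly between SPM, fMRIPrep, and nilearn:

```python
import numpy as np

def cosine_drift_basis(n_scans, tr, cutoff=128.0):
    """DCT-II cosine regressors modeling slow drift, in the spirit of
    SPM's default 128 s high-pass filter. Illustrative sketch only."""
    # Keep cosines whose period is longer than the cutoff (slow drifts)
    order = int(np.floor(2.0 * n_scans * tr / cutoff))
    n = np.arange(n_scans)
    basis = np.empty((n_scans, order))
    for k in range(1, order + 1):
        basis[:, k - 1] = np.sqrt(2.0 / n_scans) * np.cos(
            np.pi * (2 * n + 1) * k / (2 * n_scans))
    return basis

# 300 volumes at TR = 2 s -> 9 drift regressors (periods > 128 s)
drift = cosine_drift_basis(300, 2.0)
```

Projecting these columns out of a voxel time series removes fluctuations slower than roughly 1/128 Hz, which is why the approach transfers directly to FSL-, fMRIPrep-, or AFNI-based workflows.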
This 128 s cutoff is identical to fMRIPrep's recommendation and can be implemented in other types of workflows.

GLM frameworks for deriving nuisance regressors: PCA was done on nuisance regressors of each category (white matter, CSF, motion). There is no detailed description of the regressors from which the principal components were derived. Can you please elaborate:

Motion: are you using just the 6 rigid-body motion parameters (3 translations and 3 rotations), or did you include temporal derivatives and quadratic terms (described in Satterthwaite 2013: 6 base motion parameters + 6 temporal derivatives of the six motion parameters + 12 quadratic terms of the six motion parameters and their six temporal derivatives = 24 regressors)?

White matter and CSF signals: the same issue with temporal derivatives and quadratic terms applies (1 base parameter + 1 temporal derivative of the base parameter + 2 quadratic terms of the base parameter and its temporal derivative = 4 regressors each for WM and CSF). Or are these derived by CompCor (Behzadi 2007), given that the number of WM and CSF regressors is 5? I would recommend consulting the fMRIPrep documentation (https://fmriprep.org/en/stable/outputs.html#confound-regressors-description) and the BIDS functional derivatives specification (https://bids-specification.readthedocs.io/en/bep012/derivatives/functional-derivatives.html) for detailed descriptions.

Data-driven denoising regressors (such as CompCor and ICA-AROMA) have shown inconsistency between software versions, which hinders reproducibility. The authors used a fixed number of PC regressors to ensure the same loss of temporal degrees of freedom across subjects, but this does not prevent the workflow from introducing noise through floating-point differences (see CompCor 6 and AROMA in Fig 6 vs Fig 11, https://doi.org/10.1371/journal.pcbi.1011942). Have the authors fixed the random seed to ensure the reproducibility of the workflow within the scope of the research project?
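The 24-regressor motion expansion the reviewer describes can be sketched as follows. This is a minimal illustration, not the authors' code; the derivative convention (backward differences, first row zero-padded) is an assumption and varies between packages:

```python
import numpy as np

def expand_motion_24(motion):
    """Expand 6 rigid-body realignment parameters (n_scans x 6) into
    the 24-regressor set of Satterthwaite et al. (2013):
    6 base + 6 temporal derivatives + squares of all 12.
    Sketch only; derivative convention is an assumption."""
    deriv = np.vstack([np.zeros((1, motion.shape[1])),
                       np.diff(motion, axis=0)])
    base_and_deriv = np.hstack([motion, deriv])              # 12 columns
    return np.hstack([base_and_deriv, base_and_deriv ** 2])  # 24 columns

rng = np.random.default_rng(0)
rp = rng.standard_normal((300, 6))   # simulated realignment parameters
confounds = expand_motion_24(rp)     # shape (300, 24)
```

Spelling out the expansion this way makes explicit how many temporal degrees of freedom the nuisance model consumes before any PCA step.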
I would like the authors to address this as a limitation of the current workflow.

Following point 3, I am curious to see the impact of the SG filter on other workflows established in the literature (see https://doi.org/10.1016/j.neuroimage.2017.03.020 and https://doi.org/10.1371/journal.pcbi.1011942). However, this is not crucial to the manuscript, since the comparison listed in the study is sufficient to highlight the effect of the SG filter.

**********

6. PLOS authors have the option to publish the peer review history of their article. If published, this will include your full peer review and any attached files. If you choose “no”, your identity will remain anonymous but your review may still be made public. Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No
Reviewer #2: No

**********

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.
Revision 1
Towards clinical applicability of fMRI via systematic filtering
PONE-D-24-38474R1

Dear Dr. Koten,

We’re pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements. Within one week, you’ll receive an e-mail detailing the required amendments. When these have been addressed, you’ll receive a formal acceptance letter and your manuscript will be scheduled for publication.

An invoice will be generated when your article is formally accepted. Please note, if your institution has a publishing partnership with PLOS and your article meets the relevant criteria, all or part of your publication costs will be covered. Please make sure your user information is up-to-date by logging into Editorial Manager® and clicking the ‘Update My Information' link at the top of the page. If you have any questions relating to publication charges, please contact our Author Billing department directly at authorbilling@plos.org.

If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact. If they’ll be preparing press materials, please inform our press team as soon as possible, no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

Kind regards,
Kendrick Kay
Academic Editor
PLOS ONE

Additional Editor Comments (optional):

Dear authors, we are in principle happy to accept your manuscript, provided you make the minor suggested revisions from R1. Kendrick Kay, PLOS ONE Academic Editor

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1.
If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the “Comments to the Author” section, enter your conflict of interest statement in the “Confidential to Editor” section, and submit your "Accept" recommendation.

Reviewer #1: (No Response)
Reviewer #2: All comments have been addressed

**********

2. Is the manuscript technically sound, and do the data support the conclusions? The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Yes
Reviewer #2: Yes

**********

3. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes
Reviewer #2: Yes

**********

4. Have the authors made all data underlying the findings in their manuscript fully available? The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data (e.g. participant privacy or use of data from a third party), those must be specified.

Reviewer #1: Yes
Reviewer #2: Yes

**********

5. Is the manuscript presented in an intelligible fashion and written in standard English?
PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes
Reviewer #2: (No Response)

**********

6. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: Overall, my comments have been largely addressed, and I only have a few minor comments left.

I would like to thank the authors for their detailed response to my first comment. I think that some brief text based on this response should be added to the manuscript, i.e. why the reliability of the SPM pipeline with low-pass filters may be inflated or have other shortcomings.

I see that Figure S10 was added to address my concern on statistical comparisons. However, there is not sufficient information to interpret the figure. If the dot is warm-colored, does this mean that the first or second pipeline had higher reliability? Based on the result, I am assuming that warm colors indicate that the first pipeline listed is higher and cool colors indicate the opposite. This information could just be added to the caption.

What are the units of the Figure 5-8 y-axis labels “Power” and “Bold Response”? Are they in percent signal change, Z-score, or some other normalized amplitude?

Reviewer #2: The authors have used “PCA component” and “PCA” to refer to principal component (PC) in the method section. Please correct these! I am happy with all the responses.

**********

7. PLOS authors have the option to publish the peer review history of their article. If published, this will include your full peer review and any attached files.
If you choose “no”, your identity will remain anonymous but your review may still be made public. Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No
Reviewer #2: No

**********
Formally Accepted
PONE-D-24-38474R1
PLOS ONE

Dear Dr. Koten,

I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now being handed over to our production team.

At this stage, our production department will prepare your paper for publication. This includes ensuring the following:

* All references, tables, and figures are properly cited
* All relevant supporting information is included in the manuscript submission
* There are no issues that prevent the paper from being properly typeset

If revisions are needed, the production department will contact you directly to resolve them. If no revisions are needed, you will receive an email when the publication date has been set. At this time, we do not offer pre-publication proofs to authors during production of the accepted work. Please keep in mind that we are working through a large volume of accepted articles, so please give us a few weeks to review your paper and let you know the next and final steps.

Lastly, if your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

If we can help with anything else, please email us at customercare@plos.org. Thank you for submitting your work to PLOS ONE and supporting open access.

Kind regards,
PLOS ONE Editorial Office Staff
on behalf of Dr. Kendrick Kay
Academic Editor
PLOS ONE
Open letter on the publication of peer review reports
PLOS recognizes the benefits of transparency in the peer review process. Therefore, we enable the publication of all of the content of peer review and author responses alongside final, published articles. Reviewers remain anonymous, unless they choose to reveal their names.
We encourage other journals to join us in this initiative. We hope that our action inspires the community, including researchers, research funders, and research institutions, to recognize the benefits of published peer review reports for all parts of the research system.
Learn more at ASAPbio.