Peer Review History
Original Submission: July 23, 2024
PCLM-D-24-00175
Comparison of multidecadal variability in climate reanalyses and global models
PLOS Climate

Dear Dr. Westgate,

Thank you for submitting your manuscript to PLOS Climate. After careful consideration, we feel that it has merit but does not fully meet PLOS Climate’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

In particular, I recommend that the authors clarify the methodology, as requested by Reviewer 1, and make the data available in a way that meets the journal’s standards. Furthermore, both reviewers have noted that the relevant literature should be referenced more thoroughly. In addition, some theoretical aspects should be discussed in more depth: 1. arguments should be provided to better explain the role of atmospheric variability, as requested by Reviewer 2; 2. it should be explained why models that cannot capture the reanalysis variability would not be reliable for future projections, as requested by Reviewer 2.

Please submit your revised manuscript by Oct 11 2024 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at climate@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pclm/ and select the 'Submissions Needing Revision' folder to locate your manuscript file. Please include the following items when submitting your revised manuscript:
Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter. We look forward to receiving your revised manuscript.

Kind regards,
Valerio Lembo
Academic Editor
PLOS Climate

Journal Requirements:

1. Please note that PLOS Climate has specific guidelines on code sharing for submissions in which author-generated code underpins the findings in the manuscript. In these cases, we expect all author-generated code to be made available without restrictions upon publication of the work. Please review our guidelines at https://journals.plos.org/climate/s/materials-and-software-sharing#loc-sharing-code and ensure that your code is shared in a way that follows best practice and facilitates reproducibility and reuse.

2. In the online submission form, you indicated that "The data that support the findings of this study are available from the corresponding author upon a reasonable request." All PLOS journals now require all data underlying the findings described in their manuscript to be freely available to other researchers, either a. in a public repository, b. within the manuscript itself, or c. uploaded as supplementary information. This policy applies to all data except where public deposition would breach compliance with the protocol approved by your research ethics board. If your data cannot be made publicly available for ethical or legal reasons (e.g., public availability would compromise patient privacy), please explain your reasons by return email and your exemption request will be escalated to the editor for approval. Your exemption request will be handled independently and will not hold up the peer review process, but will need to be resolved should your manuscript be accepted for publication. One of the Editorial team will then be in touch if there are any issues.

3.
Figure 9, 10 and 11: please (a) provide a direct link to the base layer of the map (i.e., the country or region border shape) and ensure this is also included in the figure legend; and (b) provide a link to the terms of use / license information for the base layer image or shapefile. We cannot publish proprietary or copyrighted maps (e.g. Google Maps, Mapquest), and the terms of use for your map base layer must be compatible with our CC-BY 4.0 license. Note: if you created the map in a software program like R or ArcGIS, please locate and indicate the source of the basemap shapefile onto which data has been plotted. If your map was obtained from a copyrighted source, please amend the figure so that the base map used is from an openly available source. Alternatively, please provide explicit written permission from the copyright holder granting you the right to publish the material under our CC-BY 4.0 license. Please note that the following CC BY licenses are compatible with the PLOS license: CC BY 4.0, CC BY 2.0 and CC BY 3.0, whereas licenses such as CC BY-ND 3.0 are not compatible due to additional restrictions. If you are unsure whether you can use a map or not, please do reach out and we will be able to help you. The following websites are good examples of where you can source open access or public domain maps:

* U.S. Geological Survey (USGS) - All maps are in the public domain. (http://www.usgs.gov)
* PlaniGlobe - All maps are published under a Creative Commons license, so please cite “PlaniGlobe, http://www.planiglobe.com, CC BY 2.0” in the image credit after the caption. (http://www.planiglobe.com/?lang=enl)
* Natural Earth - All maps are public domain. (http://www.naturalearthdata.com/about/terms-of-use/)

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1.
Does this manuscript meet PLOS Climate’s publication criteria?
Reviewer #1: Yes
Reviewer #2: Yes

2. Has the statistical analysis been performed appropriately and rigorously?
Reviewer #1: Yes
Reviewer #2: Yes

3. Have the authors made all data underlying the findings in their manuscript fully available (please refer to the Data Availability Statement at the start of the manuscript PDF file)?
Reviewer #1: No
Reviewer #2: No

4. Is the manuscript presented in an intelligible fashion and written in standard English?
Reviewer #1: Yes
Reviewer #2: Yes

Reviewer #1: Please see the attached review.

Reviewer #2: Westgate & Kravtsov present an updated model–data comparison of the magnitude of decadal surface air temperature and sea level pressure variability in the Northern Hemisphere. Comparing model output from two CMIP generations and two 20th century reanalysis datasets, they find that models generally underestimate decadal variability compared to the reanalyses, confirming the results of earlier work from the same group. Overall, this is an interesting analysis using rather sophisticated time series analysis methods and appears generally suitable for publication. However, the embedding in the existing literature should be improved before the paper can be considered for publication. I also detail a number of other comments below. In addition, from my understanding the current data availability statement does not comply with the journal's guidelines. Overall, I recommend major revisions.

General comments:

1. Introduction

Confusingly, the Introduction starts by cross-referencing some specific graphs or findings (“see below”, “the SAT record above”). I think the paper would be much improved by re-writing the first 1-2 paragraphs in a more traditional style, i.e., setting a general context for the work before narrowing the focus to the specific research question.

2.
Embedding in the existing literature

L74–90 embeds the current study in the existing literature on the mismatch of simulated and observed decadal variability. However, while it is shown how this study fits in the context of previous studies from the authors’ group, it is not discussed how their work is embedded in the wider literature, which is mostly subsumed into two large citation brackets (L74, L78). In the Discussion, comparison with the existing literature is again almost absent, with only a short comment noting similar results “to our previous studies”. Novelty is again only compared to studies from the same group (L594–599). Both the Introduction and Discussion should be expanded with a focus on comparison to other works.

3. Analysis at the model level

From my experience, papers evaluating the performance of CMIP models are often a popular reference for future single-model or model comparison studies. Therefore, I believe that making some results available on a model level would broaden the audience and impact of this study. In particular, the authors may consider adding:

a) A table listing some of the metrics shown in Figs. 5+6 (especially standard deviation and dominant period) for each model (with ranges where several ensemble members are available). Currently, it is unclear which models perform better or worse, since the dots in the Taylor diagram are not labeled (and the diagram would become too crammed if they were).

b) A supplementary plot showing the regression maps from Figs. 9+10 for each model. This would also help visualize the cancellation described in the text.

4. Context for the outlier models

It is striking that the outlier models are part of a group of models with very strong AMOC variability on centennial timescales compared to other CMIP6 models (Waldman et al. 2021, Meccia et al. 2023, Mehling et al. 2024). It would add depth to the paper if the authors could discuss this link.
Interestingly, there is another model (IPSL-CM6A-LR) in this study that was not classified as an outlier but has a very similar mode of AMOC variability (Jiang et al. 2021). Does its multi-decadal variability nevertheless have similar characteristics compared to the outlier models?

Jiang, W., Gastineau, G., Codron, F. (2021). Multicentennial Variability Driven by Salinity Exchanges Between the Atlantic and the Arctic Ocean in a Coupled Climate Model. Journal of Advances in Modeling Earth Systems 13, e2020MS002366. https://doi.org/10.1029/2020MS002366

Meccia, V. L., Fuentes-Franco, R., Davini, P., Bellomo, K., Fabiano, F., Yang, S., von Hardenberg, J. (2023). Internal multi-centennial variability of the Atlantic Meridional Overturning Circulation simulated by EC-Earth3. Climate Dynamics 60, 3695–3712. https://doi.org/10.1007/s00382-022-06534-4

Mehling, O., Bellomo, K., von Hardenberg, J. (2024). Centennial-scale variability of the Atlantic Meridional Circulation in CMIP6 models shaped by Arctic-North Atlantic interactions and sea ice biases. arXiv:2406.09919. http://arxiv.org/abs/2406.09919

Waldman, R., Hirschi, J., Voldoire, A., Cassou, C., Msadek, R. (2021). Clarifying the Relation between AMOC and Thermal Wind: Application to the Centennial Variability in a Coupled Climate Model. Journal of Physical Oceanography 51, 343–364. https://doi.org/10.1175/JPO-D-19-0284.1

5. Atmospheric vs oceanic origins of multidecadal variability

Based on the anti-correlation of reanalysis and multi-model mean SLP patterns related to the first PC of multidecadal variability, the authors conclude that “the modeled multidecadal SAT patterns are in fact driven by quasi-random atmospheric circulation (SLP) anomalies” but that observed patterns might be driven by oceanic processes. For several reasons, I am not convinced that the results support this conclusion:

• The models do not behave uniformly due to the cancellation of positive and negative patterns, as discussed in the manuscript.
This can also be seen in Fig. 10d, where there is very low model agreement on the sign (almost none of the extratropics are stippled). Hence, I see little motivation to compare to the multi-model mean; at least, this statement should be verified on a per-model basis.

• A significance test would be needed before drawing conclusions from the correlations. Currently, the speculation about a “random origin” (L520) vs a negative correlation (as claimed in the abstract) are somewhat contradictory.

• If the modelled variability–SLP regression/correlation is not significant compared to observations, this does not mean that the models’ variability must be forced by atmospheric variability; it is equally plausible that these models simply underestimate the effect of SST anomalies on SLP (e.g., Omrani et al. 2014).

• The manuscript analyzed annual mean SLP, but typically it is winter SLP (see the definition of the NAO index) that is assumed to be driving oceanic processes, as buoyancy loss is greatest and mixed layers are deepest in winter.

• Again, the embedding of this result in the wider literature is missing. For example, Peings et al. (2016) and Ting et al. (2014) present a similar analysis but reach different conclusions. Regarding atmospheric vs oceanic drivers of multidecadal variability, there has been a long debate concerning the AMV, for which the review of Zhang et al. (2019) contains plenty of references.

Omrani, N. E., Keenlyside, N. S., Bader, J., Manzini, E. (2014). Stratosphere key for wintertime atmospheric response to warm Atlantic decadal conditions. Climate Dynamics 42, 649–663. https://doi.org/10.1007/s00382-013-1860-3

Peings, Y., Simpkins, G., Magnusdottir, G. (2016). Multidecadal fluctuations of the North Atlantic Ocean and feedback on the winter climate in CMIP5 control simulations. J. Geophys. Res. Atmos. 121, 2571–2592. https://doi.org/10.1002/2015JD024107

Ting, M., Kushnir, Y., Li, C. (2014).
North Atlantic multidecadal SST oscillation: External forcing versus internal variability. Journal of Marine Systems 133, 27–38. https://doi.org/10.1016/j.jmarsys.2013.07.006

Specific comments:

L54–56: Since no pre-industrial control runs are analyzed in the paper, this sentence appears obsolete.

L66: There is a multitude of statistical techniques to estimate the forced signal from observations (or a single model realization), e.g., Sippel et al. (2019), Frankignoul et al. (2017) and references therein.

Frankignoul, C., Gastineau, G., Kwon, Y.-O. (2017). Estimation of the SST Response to Anthropogenic and External Forcing and Its Impact on the Atlantic Multidecadal Oscillation and the Pacific Decadal Oscillation. J. Clim. 30, 9871–9895. https://doi.org/10.1175/JCLI-D-17-0009.1

Sippel, S., et al. (2019). Uncovering the Forced Climate Response from a Single Ensemble Member Using Statistical Learning. J. Clim. 32, 5677–5699. https://doi.org/10.1175/JCLI-D-18-0882.1

L81: “Stadium wave” should be defined/explained in the context of climate variability.

L110: It would be very useful to briefly describe these reanalysis products, since they are used as reference afterwards. For example: Which variables are assimilated? Do they use the same SST and sea ice datasets as forcing? Are there any relevant known biases in your regions of interest, e.g., due to the low data availability in the first half of the 20th century?

L116: Is the 10-member limit also applied for forced signal estimation? If so, is the bias that this introduces for ensembles where more members (up to 50) are available negligible?

L120: How do you define “enormous” variability? Was there a specific numerical cutoff?

L125–126: The number of “parent models” looks quite small to me; how did you define these?

L139: A reference for the regression method of forced signal estimation is missing.
L190 and following: Provide some references here on how these methods are similar to or deviate from your previous papers cited in the Introduction. This would reinforce confidence in your methodological choices and help readers familiar with these papers progress more quickly through the present study.

L206: Kravtsov (2017, GRL) showed that there are some differences for M=20 vs M=40. Are your results also robust for M<40, and is there a physical argument for using M>=40?

L241: It sounds a bit awkward to start the Results section with “[…] corroborate all of the earlier results by Kravtsov and collaborators”. It undersells your own work and sounds more like a Discussion item. Consider at least moving it to the end of the paragraph. Also, please cite specific papers instead of the “2014–2020” range.

L283/Fig 2: By “standard spread”, do you mean the multi-model standard deviation? It would probably be more informative to show quantiles instead of mean and standard deviation, and maybe even small “jittered” dots for individual models, giving a more realistic picture of the distribution. If the distribution is skewed/heavy-tailed, symmetrical bars of +/- 1 standard deviation would give a misleading impression of the inter-model spread.

L299 and elsewhere: I find the use of “spatial” when referring to the interaction of (area-integrated) climate indices confusing; please consider using another term to avoid confusion with true spatial (i.e., 2D gridded) patterns.

L301: The claim that the two reanalysis datasets match well would be much more convincing if the 4 different reconstructions from each index were plotted in the same panel. A plot like this could be added to the Supplementary Material. The subtitles of Fig. 4 (and the Introduction) mention a “stadium wave” but it is never mentioned in the Results section. It would be interesting to know whether this “stadium wave” is robust or how it differs between the two reanalysis products.

Fig. 5: The model evaluation using Taylor diagrams is interesting, but the quantity that is being evaluated is not very intuitive. Could you convince the reader of the advantages of evaluating EOF loadings over, say, lagged correlations between the 5 climate indices, which would also provide a temporal dimension? Additionally, it would be helpful to show one or two examples (e.g., the reanalysis ensemble means) of what is being evaluated in the Taylor diagrams, maybe as insets into Fig. 5 or as a supplementary plot.

L329–338: Will someone who has never seen a Taylor diagram understand the plots after reading this paragraph? I recommend just guiding the reader (in one of the captions) on where to read off standard deviation, correlation and RMSE (I assume “measure of dissimilarity” = RMSE?), aided by labels in the plots themselves.

L346: The existence of two clusters is interesting, demonstrating that the period of “observed” variability might depend on which models are used for forced signal estimation. It would be very interesting to see some more analysis of what is going on here. A quick check would be to re-compute the forced-signal mean as in Fig. 1, but once for the underlying CMIP models from each cluster, and see where and when the most important differences arise. Also, I don’t see any obvious clustering by period for ERA20C. Could you elaborate on this?

L381–399: Since this section is dedicated (from my understanding) to model–data comparison, I don’t see which additional insight can be gained by another comparison to the multi-model mean, which is not related to the real world. Given that there are many figures in the manuscript already, removing this part could streamline the paper.

L423: The boxes are confusing since they are subjective and don’t match the definition regions of the Pacific/Atlantic indices as the text might suggest. I recommend removing them.
L433: Over land AND in the polar regions.

L470 and elsewhere: “Leading EOF” seems to be wrong terminology here. It is the regression of SAT onto the leading PC of the five climate indices, while the leading EOF would be associated with the leading PC of the entire (2D) SAT field.

L506: This makes me wonder whether it wouldn’t be more useful to limit the 2D pattern evaluation to the northern hemisphere and/or the oceans. As you show in Fig. 9/10, the patterns over land and over the polar regions are very model-dependent even for the reanalysis products. Does it really make sense to evaluate against two specific atmospheric models (in the reanalyses) when the goal is a comparison to something that should be observation-like?

Discussion: Could you comment briefly on the differences between CMIP5 and CMIP6 here (except for the presence of outlier models)? In particular, are there any notable improvements in CMIP6 compared to CMIP5?

L599–602: “inhibits our ability to not only faithfully simulate the historical climate evolution but also to accurately predict future climate trends” is, in my view, too strong a statement. While it is true that models have had difficulties reproducing some decadal-scale trends attributed to internal variability (like the 2000s’ warming “hiatus”), which is probably what the authors intend to say here, the long-term temperature trend is dominated by anthropogenic GHG forcing, and internal variability is an almost negligible uncertainty for the future temperature evolution beyond decadal timescales (e.g., Lehner et al. 2020). It is important to use precise language when it comes to this topic.

Lehner, F., et al. (2020). Partitioning climate projection uncertainty with multiple large ensembles and CMIP5/6. Earth Syst. Dynam. 11, 491–508. https://doi.org/10.5194/esd-11-491-2020

L603–607: This paragraph seems out of context with the rest of the paper, since a GSW (GSW = global stadium wave? but not even the acronym has been defined) has not been analyzed in the manuscript. See also the related comments above.

Figures:

Fig. 1: NMO index is missing.

Fig. 5 and following: axis labels are missing for all Taylor diagrams. Shouldn’t the black dots at (0,1) have a color for a dominant timescale as well?

Figs. 9–11: Units for the colorbars are missing.

Fig. 11: Use “outlier models” instead of “CMIP6” in the subtitles.

Fig. 13: Panels c and d are not discussed in the text.

Fig. 15a: Should “SLP ALPI” be “SLP Global”?

Technical corrections:

L258: "in the background of a greater amount of uncertainty contained within the CMIP6 simulations" sounds grammatically wrong.

L261: cross -> across

L307: Is “end effects” a typo?

L418: of THE M-SSA filtered ...

L452: Hobbs et al. (2020) is missing from the bibliography.

If published, this will include your full peer review and any attached files.

Do you want your identity to be public for this peer review? If you choose “no”, your identity will remain anonymous but your review may still be made public. For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No
Reviewer #2: No

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool.
If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.
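As an illustration of the Taylor-diagram point raised in the review above (comment on L329–338), the three quantities such a diagram encodes, and the identity that lets a reader read RMSE off as a distance from the reference point, can be sketched in a few lines. This is a generic sketch, not the authors' code; the function name and inputs are illustrative.

```python
import numpy as np

def taylor_stats(field, ref):
    """Standard deviations, pattern correlation, and centered RMSE of a
    simulated field evaluated against a reference field on the same grid."""
    f = np.asarray(field, dtype=float).ravel()
    r = np.asarray(ref, dtype=float).ravel()
    fa, ra = f - f.mean(), r - r.mean()             # centered anomalies
    sigma_f, sigma_r = fa.std(), ra.std()           # radial coordinates
    corr = (fa * ra).mean() / (sigma_f * sigma_r)   # azimuthal coordinate
    crmse = np.sqrt(((fa - ra) ** 2).mean())        # distance to reference point
    return sigma_f, sigma_r, corr, crmse
```

These satisfy the law-of-cosines identity crmse² = σf² + σr² − 2 σf σr corr, which is why a point's distance from the reference marker at (corr = 1, σ = σr) measures its dissimilarity, as the reviewer's "measure of dissimilarity" suggests.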
Revision 1
PCLM-D-24-00175R1
Comparison of multidecadal variability in climate reanalyses and global models
PLOS Climate

Dear Dr. Westgate,

Thank you for submitting your manuscript to PLOS Climate. I am now in receipt of the required number of reviews for the manuscript "Comparison of multidecadal variability in climate reanalyses and global models". After careful consideration, we feel that it has merit but does not fully meet PLOS Climate’s publication criteria as it currently stands.

You will notice that the two anonymous reviewers have submitted very different suggestions, with Reviewer 1 eventually recommending rejection. I have decided to recommend that a thorough and on-point revision be carried out by the authors. It is mandatory, in particular, that the inter-reanalysis evaluation initially requested by Reviewer 1 is performed, in order to quantitatively assess whether the claim that results are consistent across different versions of the reanalysis is robust. Furthermore, any new finding that would justify the adoption of such a complex method should be properly highlighted and compared to the available literature. Given that the key finding is relatively well known to the research community, as pointed out by the reviewer, it is requested that a documented explanation is provided of why this would be the case, and of how the described methodology helps make it clear and physically reasonable.

Please submit your revised manuscript by Feb 23 2025 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at climate@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pclm/ and select the 'Submissions Needing Revision' folder to locate your manuscript file. Please include the following items when submitting your revised manuscript:
Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter. We look forward to receiving your revised manuscript.

Kind regards,
Valerio Lembo
Academic Editor
PLOS Climate

Journal Requirements:

1. Please note that PLOS Climate has specific guidelines on code sharing for submissions in which author-generated code underpins the findings in the manuscript. In these cases, we expect all author-generated code to be made available without restrictions upon publication of the work. Please review our guidelines at https://journals.plos.org/climate/s/materials-and-software-sharing#loc-sharing-code and ensure that your code is shared in a way that follows best practice and facilitates reproducibility and reuse.

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author
Reviewer #1: (No Response)
Reviewer #2: (No Response)

2. Does this manuscript meet PLOS Climate’s publication criteria?
Reviewer #1: Yes
Reviewer #2: Yes

3. Has the statistical analysis been performed appropriately and rigorously?
Reviewer #1: Yes
Reviewer #2: Yes

4. Have the authors made all data underlying the findings in their manuscript fully available (please refer to the Data Availability Statement at the start of the manuscript PDF file)?
Reviewer #1: Yes
Reviewer #2: Yes

5. Is the manuscript presented in an intelligible fashion and written in standard English?
Reviewer #1: Yes
Reviewer #2: Yes

Reviewer #1: Please see my attached review.

Reviewer #2: I thank the authors for their thorough replies and careful revisions. The manuscript is now much improved. I only have some remaining major comments related to significance testing, as well as a few smaller issues which should be straightforward to address.

Major comments:

Significance testing: The author replies (p. 13) state that “the Taylor diagram in Fig. 15 can be interpreted as the significance test”.
I disagree with this approach of significance testing by eye. It should be feasible to construct a suitable null model for the multi-model mean pattern, and test whether the correlation of the actual multi-model mean pattern with the reanalyses falls outside the range of correlations from different realizations of the null model. This is especially important since a significant conclusion (even mentioned in the abstract) is drawn from the negative correlation between the MMM and the reanalysis SLP pattern.

Period within uncertainty ranges?: I thank the authors for providing model-level data in the Supplementary Information. Based on the periodicities reported in Table S1, I am not convinced that the “simulated signals (…) have shorter time scales” (Abstract, L27). In fact, the period estimated from the reanalyses appears to lie within one standard deviation of the central estimate for many (though not all) models. Keeping in mind that the observed signal does not need to match the ensemble mean but should be within the ensemble spread of a model, this makes me question whether the simulated timescales are actually significantly shorter. If this statement is kept, please also apply an adequate significance test.

Representativeness of the multi-model mean: I am still not convinced that the analysis' focus on the MMM pattern, which is characterized by large cancellations, accurately reflects the diversity of model behavior in terms of SLP. Fig. 14 shows that the NH SLP patterns of some models are in fact positively correlated with the reanalysis patterns, and many models have a correlation close to zero. First, it would be very useful to test the significance of these correlations and cluster the models into three groups (significantly positively correlated – not significantly correlated – significantly negatively correlated).
Second, if some of the positive correlations are found to be significant, it should be highlighted that some models indeed manage to capture a similar mechanism compared to the reanalyses.

Minor comments:

L51: “have been successful in replicating patterns and time scales”: this claim should be referenced.

L61: “presence of the DCV components in both”: this is unclear to me – do you mean that it is unclear whether the observed decadal climate variability is externally forced or internally driven? Isn’t it assumed throughout the manuscript that the DCV arises from internal variability?

L63: It would be good to give an “external” reference instead of [13] for this commonly used procedure, e.g., Deser et al. 2012 (https://link.springer.com/article/10.1007/s00382-010-0977-x) or the perspective by Deser et al. 2020 (https://www.nature.com/articles/s41558-020-0731-2).

L130–136: I appreciate the authors’ effort at providing a description of the reanalysis products, but as it is, it is unnecessarily confusing. For example, both reanalyses are based on AGCMs (or more specifically, weather forecasting models): GFS for 20CR and IFS for ERA-20C. But the text sounds like this is only the case for IFS. If I am not mistaken, both reanalyses also use surface forcing more or less based on HadISST2. Please re-structure this paragraph so that it first emphasizes common approaches and then the differences. From my understanding, except for the different underlying GCMs (with IFS having the higher resolution), an important difference is that ERA assimilates SLP and surface winds while 20CR only assimilates SLP.
A useful comparison can be found here (under “Expert Guidance”): https://climatedataguide.ucar.edu/climate-data/era-20c-ecmwfs-atmospheric-reanalysis-20th-century-and-comparisons-noaas-20cr

L167: It would be good if the smoothing was motivated here, similar to what you wrote in the author replies (especially regarding the elimination of the ENSO signal).

L180: Did you test this explicitly for this study? In this case, a “not shown” is probably in order; otherwise a reference would be useful at the end of the sentence.

L214 (and/or L75): it would be good to give the “original” reference for the regression/rescaling method here (e.g., Steinman et al., your ref. [7], but I am not sure if its origins go further back?).

L234 and following: I agree with the first reviewer that the reader may not see why this fairly complex M-SSA procedure is applied to such a low-dimensional multivariate time series. I now understand the motivation better after reading your reply (p. 27 of the author replies), but I would like to encourage you to also incorporate some of this argument into the manuscript.

L248: I recommend adding one sentence similar to what you wrote in the author replies (“Kravtsov (2017) found that an embedding dimension less than 40 did not isolate the lowest frequency mode as efficiently as 40 (and greater)“).

L270, L407, L410, L413: Even using “effective”, the term “spatial” is still misleading here. Could you find a different wording that avoids “spatial” altogether?

Fig. S1: Could different line signatures be used to distinguish between the four different datasets?

Technical comments:

Data availability: The "script link" leads to a folder with figures, please check.

L34 and L55: remove the “—"

L103: Should “observed counterpart” be “modeled counterpart”?

L111: the reference for version 3 is Slivinski et al.
2019 (https://doi.org/10.1002/qj.3598)

L131: Slivinski 2021 not in reference list

L157: By contrast -> in contrast

L164: parent models -> parent atmospheric models

L178: Weiner -> Wiener

L400: Is this reference the same as https://link.springer.com/article/10.1007/s00382-024-07451-4 ?

L811: Could be updated to the published version (https://onlinelibrary.wiley.com/doi/abs/10.1029/2024GL110791)

Please check the consistency of the inline references (numbered vs. author-year). The reference numbering seems off, as the last ref. in the text is [77] but there are 81 entries in the bibliography.

**********

If published, this will include your full peer review and any attached files.

Do you want your identity to be public for this peer review? If you choose “no”, your identity will remain anonymous but your review may still be made public. For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No Reviewer #2: No

**********

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.
|
| Revision 2 |
|
PCLM-D-24-00175R2 Comparison of multidecadal variability in climate reanalyses and global models PLOS Climate Dear Dr. Westgate, Thank you for submitting your manuscript to PLOS Climate. After careful consideration, we feel that it has merit but does not fully meet PLOS Climate’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process. Please submit your revised manuscript by Aug 06 2025 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at climate@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pclm/ and select the 'Submissions Needing Revision' folder to locate your manuscript file. Please include the following items when submitting your revised manuscript:
Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter. We look forward to receiving your revised manuscript. Kind regards, Valerio Lembo Academic Editor PLOS Climate

Journal Requirements:

1. Please note that PLOS Climate has specific guidelines on code sharing for submissions in which author-generated code underpins the findings in the manuscript. In these cases, we expect all author-generated code to be made available without restrictions upon publication of the work. Please review our guidelines at https://journals.plos.org/climate/s/materials-and-software-sharing#loc-sharing-code and ensure that your code is shared in a way that follows best practice and facilitates reproducibility and reuse.

Please review your reference list to ensure that it is complete and correct. If you have cited papers that have been retracted, please include the rationale for doing so in the manuscript text, or remove these references and replace them with relevant current references. Any changes to the reference list should be mentioned in the rebuttal letter that accompanies your revised manuscript. If you need to cite a retracted article, indicate the article’s retracted status in the References list and also include a citation and full reference for the retraction notice.

Additional Editor Comments (if provided):

Dear Dr. Westgate, as the required number of reviewer comments for the manuscript "Comparison of multidecadal variability in climate reanalyses and global models" has been reached, I am now in a position to get back to you with a decision about how to proceed. Unfortunately, despite generally positive reviews by reviewers 3 and 4, reviewer 2 agrees with the previous reviewer 1 in recommending rejection, owing to their concern regarding the novelty and usefulness of the approach. Reviewer 2 also states that they are not convinced by the author's reply to reviewer 1.
At this point, I tend to agree with the authors that the complexity of the methodology is appropriate for the problem addressed, and they stated it convincingly in the revised manuscript and in the author's reply. The remaining important issue is whether there is enough material to "make this work a stand-alone paper" (in the words of reviewer 3). Besides providing convincing arguments on where this manuscript is positioned with respect to the existing literature, I agree with reviewer 1 that a process-oriented interpretation of "why" models and observations differ from each other, beyond a description of "how" they differ (as justified by the authors), could be an advantage. The authors provide indications of how this could be carried out in their reply (e.g. they argue that "one possibility is that the differences stem from too weak and, perhaps, incorrect response of climate-model atmospheres to the ocean-induced heat flux anomalies (reflected in the vast dominance of SST-unrelated SLP noise in model simulations, and a wrong sign of very weak SLP response in the ensemble-mean sense)."). I encourage the authors to expand a bit on these arguments, perhaps even at a speculative level, providing a "Discussion" subsection or similar, without completely rearranging the nature of the manuscript. In addition to that, it would be helpful, as implicitly suggested by reviewer 3, to put more emphasis on the novel results, rather than on how these complement existing results. I decided to label these as "minor revision", both to be in line with the comments of other reviewers, and also because I think that, as mentioned, the proposed changes should not substantially change the structure of the paper.
Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

Reviewer #2: (No Response) Reviewer #3: (No Response) Reviewer #4: All comments have been addressed

**********

Reviewer #2: Yes Reviewer #3: Yes Reviewer #4: Yes

**********

3. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #2: Yes Reviewer #3: Yes Reviewer #4: Yes

**********

4. Have the authors made all data underlying the findings in their manuscript fully available (please refer to the Data Availability Statement at the start of the manuscript PDF file)?

Reviewer #2: Yes Reviewer #3: Yes Reviewer #4: Yes

**********

5. Is the manuscript presented in an intelligible fashion and written in standard English?

Reviewer #2: Yes Reviewer #3: Yes Reviewer #4: Yes

**********

Reviewer #2: I thank the authors for their thoughtful replies and revisions, especially regarding the new significance tests. I only have two small remaining concerns regarding the significance testing. After addressing these, I think that the revised version is publishable subject to some technical corrections.

Significance testing for Fig. 15: If I understand the procedure correctly, the 2.5th percentile is derived from correlations of an ensemble of LIM realizations (mimicking the SLP time series in individual model runs) with the reanalysis pattern. I don’t see why this should yield quantitatively equivalent results to first averaging a number of random LIM-based SLP patterns (mimicking the multi-model mean SLP pattern) and then correlating this with the reanalysis. I would encourage the authors to repeat the significance test using the latter procedure, and see if the correlation of the actual MMM and the reanalysis falls outside of the range given by correlation with the null model MMM.

Significance testing of periodicity: Thank you for also providing a visual comparison in Fig. S1.
From this figure, I would conclude that the modelled periods are not inconsistent with the periods derived from the reanalyses. The reason is that, for most models, the range spanned by different ensemble members intersects with the reanalysis uncertainty. A standard t-test to compare the means of the modelled vs. reanalysis distributions does not compare apples to apples: one distribution samples the observational uncertainty around a single realization (reality), while the other samples the uncertainty due to different realizations, so it is not meaningful to compare the means of the two distributions. Especially since there are barely two full periods present in the record, the first does not equal the second. Therefore, I would advise removing your statements about inconsistency in the modelled vs. reanalysis periodicity.

Minor/technical comments:

L135: is -> are

L135 and following: Sorry to insist on the topic of reanalysis limitations, but two additional important uncertainties that should be acknowledged here are:
- the spatially heterogeneous coverage of assimilated data (especially sparse during the early 20th century and/or in the Southern hemisphere) – this should help explain the poor match between the two reanalysis patterns in some data-sparse regions;
- that these reanalyses do not account for uncertainties in the SST/sea ice fields (which are also strongly extrapolated during early periods or in remote regions). Therefore, the uncertainties derived from an ensemble of reanalysis realizations provide only a lower bound on the true observational uncertainties.

L164: “While the origins of CNRM-CM6-1 centennial AMOC variability are also in the Arctic, these oscillations are largely temperature driven” is not entirely true. First, a connection to the Arctic was not discussed in the Waldman et al. paper. Second, the upper ocean density anomalies in the convection regions are salinity-, not temperature-driven (cf. their Fig. 9).
I would simply remove the quoted sentence, since this discussion is not too relevant for the manuscript at hand.

L603: I assume that many readers (including myself) are not very familiar with LIM in general, and with fitting a LIM to data more specifically. Therefore, it would be great if the new null model could be explained in a bit more detail in the Supplementary Material. If the fitting/null model procedure is part of the code provided with the paper, that should also be sufficient to ensure reproducibility (provided that the code is clear enough).

L883: Ref. [59] can be updated to its published version: https://agupubs.onlinelibrary.wiley.com/doi/full/10.1029/2024GL110791

Reviewer #3:

Major Comment 1: I unfortunately have to agree with reviewer 1. You addressed the comments well in the first cycles, but the underlying problem remains. For an unbiased review, I read the paper as is, ignoring the earlier cycles of revisions and comments. But I also struggled to see what is gained by this type of analysis in comparison to the approaches of the mentioned earlier works. After reading the earlier replies, I do understand the points the Authors raised and their responses, but they do not seem convincing enough to make this work a stand-alone paper. If the mentioned future work adds some clear findings and shows the advantages of the approach used here, this work will add to the ongoing efforts to disentangle the forced and unforced components.

Lines 27-29 and lines 703-708: I think these statements are to be expected after reading the mentioned studies and the current literature on the subject, and this, to me, shows that there is still something missing to complete the Authors' approach and the work that has been done so far.

Reviewer #4: I carefully read the revised version and the authors' reply to the referees, and I think the manuscript is suitable for publication.
The revised version includes further statistical assessment, including significance tests for the results, and the authors provided convincing arguments in the discussion of the results.

**********

Do you want your identity to be public for this peer review?

Reviewer #2: No Reviewer #3: No Reviewer #4: No

**********
|
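Reviewer #2's proposed null-model test for Fig. 15 (build a surrogate multi-model mean by averaging several random LIM-based SLP patterns first, then correlate that average with the reanalysis pattern) can be sketched as follows. This is an illustrative outline only, not the authors' code: random Gaussian fields stand in for realizations of the paper's fitted LIM, and all names and sizes are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def pattern_corr(a, b):
    """Centered pattern correlation between two flattened spatial fields."""
    a = a - a.mean()
    b = b - b.mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

n_grid, n_models, n_null = 500, 30, 1000

# Stand-in for the reanalysis DCV-related SLP pattern (hypothetical field).
reanalysis = rng.standard_normal(n_grid)

# Null distribution: average n_models random patterns (surrogates for the
# LIM-based SLP patterns of individual model runs) into a surrogate
# multi-model mean, then correlate that surrogate MMM with the reanalysis.
null_corrs = np.empty(n_null)
for i in range(n_null):
    surrogate_runs = rng.standard_normal((n_models, n_grid))
    surrogate_mmm = surrogate_runs.mean(axis=0)
    null_corrs[i] = pattern_corr(surrogate_mmm, reanalysis)

# Two-sided 5% significance bounds from the null distribution.
lo, hi = np.percentile(null_corrs, [2.5, 97.5])
```

Under this setup, the actual MMM-reanalysis pattern correlation would be judged significant at the 5% level only if it falls outside [lo, hi]; with a real LIM, the `surrogate_runs` draw would be replaced by integrations of the fitted model.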
| Revision 3 |
|
Comparison of multidecadal variability in climate reanalyses and global models PCLM-D-24-00175R3

Dear Dr. Westgate,

We're pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements. Within one week, you'll receive an e-mail detailing the required amendments. When these have been addressed, you'll receive a formal acceptance letter and your manuscript will be scheduled for publication. An invoice for payment will follow shortly after the formal acceptance.

To ensure an efficient process, please log into Editorial Manager at https://www.editorialmanager.com/pclm/, click the 'Update My Information' link at the top of the page, and double-check that your user information is up-to-date. For billing related questions, please contact billing support at https://plos.my.site.com/s/.

If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact. If they'll be preparing press materials, please inform our press team as soon as possible -- no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact climate@plos.org.

Kind regards, Valerio Lembo Academic Editor PLOS Climate

Additional Editor Comments (optional):

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

Reviewer #2: (No Response)

--------------------

Reviewer #2: Yes

--------------------

3. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #2: Yes

--------------------

4. Have the authors made all data underlying the findings in their manuscript fully available (please refer to the Data Availability Statement at the start of the manuscript PDF file)?

Reviewer #2: Yes

--------------------

5.
Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS Climate does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #2: Yes

--------------------

Reviewer #2: I thank the authors for once again thoroughly addressing the comments of the latest round of reviews, and believe that the manuscript is now suitable for publication.

--------------------

Do you want your identity to be public for this peer review?

Reviewer #2: No

--------------------
|
Open letter on the publication of peer review reports
PLOS recognizes the benefits of transparency in the peer review process. Therefore, we enable the publication of all of the content of peer review and author responses alongside final, published articles. Reviewers remain anonymous, unless they choose to reveal their names.
We encourage other journals to join us in this initiative. We hope that our action inspires the community, including researchers, research funders, and research institutions, to recognize the benefits of published peer review reports for all parts of the research system.
Learn more at ASAPbio.