Peer Review History
| Original Submission: December 30, 2020 |
|---|
|
PONE-D-20-40989

Comprehensive marine substrate classification of Canada’s Pacific shelf

PLOS ONE

Dear Dr. Gregr,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

The reviewers find a lack of clarity in many areas, in particular around the maps that may be derived from the model and how they would compare to present maps. Reviewer 2 offers a way to bring clarity to the presentation of the manuscript that would help readers.

Please submit your revised manuscript by May 21 2021 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

Please include the following items when submitting your revised manuscript:
If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter.

Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: http://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols.

We look forward to receiving your revised manuscript.

Kind regards,
Judi Hewitt
Academic Editor
PLOS ONE

Journal Requirements:

When submitting your revision, we need you to address these additional requirements.

1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf.

2. We note that you have indicated that data from this study are available upon request. PLOS only allows data to be available upon request if there are legal or ethical restrictions on sharing data publicly. For more information on unacceptable data access restrictions, please see http://journals.plos.org/plosone/s/data-availability#loc-unacceptable-data-access-restrictions.
In your revised cover letter, please address the following prompts:

a) If there are ethical or legal restrictions on sharing a de-identified data set, please explain them in detail (e.g., data contain potentially sensitive information, data are owned by a third-party organization, etc.) and who has imposed them (e.g., an ethics committee). Please also provide contact information for a data access committee, ethics committee, or other institutional body to which data requests may be sent.

b) If there are no restrictions, please upload the minimal anonymized data set necessary to replicate your study findings as either Supporting Information files or to a stable, public repository and provide us with the relevant URLs, DOIs, or accession numbers. For a list of acceptable repositories, please see http://journals.plos.org/plosone/s/data-availability#loc-recommended-repositories.

We will update your Data Availability statement on your behalf to reflect the information you provide.

3. Thank you for stating the following in the Competing Interests section: [The authors declare no competing interests.]. We note that one or more of the authors are employed by a commercial company: SciTech Environmental Consulting.
Please also include the following statement within your amended Funding Statement: “The funder provided support in the form of salaries for authors [insert relevant initials], but did not have any additional role in the study design, data collection and analysis, decision to publish, or preparation of the manuscript. The specific roles of these authors are articulated in the ‘author contributions’ section.” If your commercial affiliation did play a role in your study, please state and explain this role within your updated Funding Statement.

Please also provide an updated Competing Interests Statement declaring this commercial affiliation along with any other relevant declarations relating to employment, consultancy, patents, products in development, or marketed products, etc. Within your Competing Interests Statement, please confirm that this commercial affiliation does not alter your adherence to all PLOS ONE policies on sharing data and materials by including the following statement: “This does not alter our adherence to PLOS ONE policies on sharing data and materials.” (as detailed online in our guide for authors: http://journals.plos.org/plosone/s/competing-interests). If this adherence statement is not accurate and there are restrictions on sharing of data and/or materials, please state these.

Please note that we cannot proceed with consideration of your article until this information has been declared. Please include both an updated Funding Statement and Competing Interests Statement in your cover letter. We will change the online submission form on your behalf. Please know it is PLOS ONE policy for corresponding authors to declare, on behalf of all authors, all potential competing interests for the purposes of transparency.
PLOS defines a competing interest as anything that interferes with, or could reasonably be perceived as interfering with, the full and objective presentation, peer review, editorial decision-making, or publication of research or non-research articles submitted to one of the journals. Competing interests can be financial or non-financial, professional, or personal. Competing interests can arise in relationship to an organization or another person. Please follow this link to our website for more details on competing interests: http://journals.plos.org/plosone/s/competing-interests

4. We note that Figures 1, 4, 8, S3 and Striking Image in your submission contain map (Fig. 1) / satellite (Fig. 4, 8, S3 and Striking Image) images which may be copyrighted. All PLOS content is published under the Creative Commons Attribution License (CC BY 4.0), which means that the manuscript, images, and Supporting Information files will be freely available online, and any third party is permitted to access, download, copy, distribute, and use these materials in any way, even commercially, with proper attribution. For these reasons, we cannot publish previously copyrighted maps or satellite images created using proprietary data, such as Google software (Google Maps, Street View, and Earth). For more information, see our copyright guidelines: http://journals.plos.org/plosone/s/licenses-and-copyright.

We require you to either (1) present written permission from the copyright holder to publish these figures specifically under the CC BY 4.0 license, or (2) remove the figures from your submission:
We recommend that you contact the original copyright holder with the Content Permission Form (http://journals.plos.org/plosone/s/file?id=7c09/content-permission-form.pdf) and the following text: “I request permission for the open-access journal PLOS ONE to publish XXX under the Creative Commons Attribution License (CCAL) CC BY 4.0 (http://creativecommons.org/licenses/by/4.0/). Please be aware that this license allows unrestricted use and distribution, even commercially, by third parties. Please reply and provide explicit written permission to publish XXX under a CC BY license and complete the attached form.”

Please upload the completed Content Permission Form or other proof of granted permissions as an "Other" file with your submission. In the figure caption of the copyrighted figure, please include the following text: “Reprinted from [ref] under a CC BY license, with permission from [name of publisher], original copyright [original copyright year].”
The following resources for replacing copyrighted map figures may be helpful:

- USGS National Map Viewer (public domain): http://viewer.nationalmap.gov/viewer/
- The Gateway to Astronaut Photography of Earth (public domain): http://eol.jsc.nasa.gov/sseop/clickmap/
- Maps at the CIA (public domain): https://www.cia.gov/library/publications/the-world-factbook/index.html and https://www.cia.gov/library/publications/cia-maps-publications/index.html
- NASA Earth Observatory (public domain): http://earthobservatory.nasa.gov/
- Landsat: http://landsat.visibleearth.nasa.gov/
- USGS EROS (Earth Resources Observatory and Science (EROS) Center) (public domain): http://eros.usgs.gov/#
- Natural Earth (public domain): http://www.naturalearthdata.com/

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. Is the manuscript technically sound, and do the data support the conclusions? The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Partly

Reviewer #2: Partly

**********

2. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: I Don't Know

Reviewer #2: Yes

**********

3. Have the authors made all data underlying the findings in their manuscript fully available? The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository.
For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data (e.g. participant privacy or use of data from a third party) those must be specified.

Reviewer #1: Yes

Reviewer #2: No

**********

4. Is the manuscript presented in an intelligible fashion and written in standard English? PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes

Reviewer #2: Yes

**********

5. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters.)

Reviewer #1: This is a fairly well written manuscript that describes the construction and use of a statistical model based on random forest classification using bottom-type data to map substrate along Canada’s Pacific margin. While the authors extensively describe their model development and provide a convincing argument for the model’s validity, I found that a good comparison of the model with the detailed marine benthic habitat maps that exist for the region was missing. Unfortunately, I am not qualified to fully evaluate the statistical approaches described in the manuscript, so I focused on the practical aspects of constructing substrate maps. I think that the model could be very helpful in making a first approximation of substrate distribution when good multibeam bathymetric data are not available but sediment samples and other seafloor data are at hand; however, I am concerned that it could also be misleading.
Generally, I found the modeling concept sound, and the authors do a good job of testing the model’s applicability. I will leave it to those better qualified than me to evaluate the statistics.

I have a few comments to make about the text. While the text is generally clear, there are some areas that I found in need of further explanation or clarification. I have added these few comments in the pdf. However, a few points need to be addressed here.

While the manuscript is well referenced, and the references cited appear appropriate and comprehensive, the citation format is mixed (i.e., numbered as per the journal’s format in places while in other places not numbered). In addition, I have provided below some references that the authors may find useful, especially in regard to their discussion of “mixed” substrate types, and that might be useful in validating the model. The term “mixed substrate” needs to be better explained and defined when first used. The authors use backscatter data to identify substrate types but do not fully explain how some “soft” substrate clasts such as gravel, cobbles and pebbles can form a hard substrate type as a gravel-pebble-cobble pavement, which is quite common in the offshore BC region.

One confusing problem appears in the comparison of the 20 m and 100 m resolution models, where a submarine canyon is not identified in the 20 m model. I am not sure why this is the case. If a distinct change in depth occurs with the presence of a canyon, why would the model not detect this? Is it because the canyon is too large? If so, what about gullies and small features? Would they also not be identified in the 20 m model?

My recommendation is that the manuscript be published after minor modifications, once the points raised in this review are clarified and addressed.
I would be especially interested in further validation of the model using the available published Canadian marine benthic habitat maps for the southern Georgia Strait region (see Greene and Barrie, editors, 2011). This would show how well the authors’ model fits with comprehensive habitat maps based on MBES data interpretations.

Suggested pertinent references:

- Greene, H.G., Yoklavich, M.M., Starr, R., O’Connell, V.M., Wakefield, W.W., Sullivan, D.L., MacRea, J.E. and Cailliet, G.M., 1999. A classification scheme for deep-water seafloor habitats. Oceanologica Acta, v. 22, n. 6, p. 663-678.
- Greene, H.G., Yoklavich, M.M., O’Connell, V.M., Starr, R.M., Wakefield, W.W. and Cailliet, G.M., 2000. Mapping and classification of deep seafloor habitats. ICES paper CM 2000/T:08, 11 p.
- Greene, H.G., Bizzarro, J.J., Tilden, J.E., Lopez, H.L., and Erdey, M.D., 2005. The benefits and pitfalls of geographic information systems in marine benthic habitat mapping. In Wright, D.J. and Scholz, A.J. (Eds.), Place Matters. Oregon State University Press, Portland, OR, 34-46.
- Greene, H.G., Bizzarro, J.J., O’Connell, V.M., and Brylinsky, C.K., 2007. Construction of digital potential marine benthic habitat maps using a coded classification scheme and its application. In Todd, B.J., and Greene, H.G. (Eds.), Mapping the Seafloor for Habitat Characterization, Canadian Geological Association Special Paper 47, 141-155.

Reviewer #2: PONE-D-20-40989 review

Overall comments

This manuscript describes an interesting and potentially very useful initiative that represents a lot of work and will make a good paper. At present, however, I find its presentation unclear, with too much blurring between the conventional sections of introduction, methods, results, and discussion. I also have some questions about how the analyses were performed and the inferences made from the results.
The aim of this study is to generate reliable maps of seabed substrate type for the Pacific coast and continental shelf of Canada. The authors compile ca. 200,000 point records of seabed substrate type and use a tree-based machine-learning method, Random Forest (RF), to develop correlations between the observations and gridded layers for depth, depth derivatives, and seabed wave energy. These correlations are then used to predict substrate type as a continuous layer across the region. The authors then evaluate these predictions both by cross-validation and against a set of independent seabed observations.

The main components of interest in the study, therefore, are: (1) the provenance and spatial distribution of the point observations; (2) the provenance and accuracy of the predictor variables; (3) the provenance and spatial distribution of the independent test data; (4) the modelling methods used; and (5) the resulting maps, with the final mapped classifications being the most important. What we actually get in the ms is a lot of detail and talk about how well the modelling method has been applied, but not much detail, or clarity, on what, to me, are the main points of interest: (1), (2), (3), and (5).

I found the sequence of sections and content confusing and difficult to follow, with elements of introduction, methods, results, and discussion jumbled up together throughout. As a simple example, why not start the Methods with a description of the study area instead of a statement about the modelling method used and how good it is?

The important, and useful, thing about the study, by my reading at least, is that it brings together all available point sample data for the region and uses them to generate a classification of the entire Pacific continental shelf of Canada.
The methods are important, but there is already plenty of published information available about how different modelling methods and evaluation metrics compare, and at present in this ms over-emphasis of these details eclipses appreciation of the main achievement of the study: the compilation of the input data set and the new maps of substrate type generated from them.

The Introduction stretches to 8 pages, includes much material that would be more appropriate in the Discussion, and does not, to me at least, seem to follow a logical progression. The Introduction should provide a concise background to the study: why and where it was undertaken, background and issues associated with the area of research, methods available, and what the specific objectives of this study were.

The statements given at the start of the Results section are an example of one of my main issues with the way the study has been presented: it places all the emphasis on the modelling methods and very little on the input data compilation, particularly in terms of spatial distribution, the credibility of the predictor variables, or, most importantly, the utility of the final outputs. It reads more as a methods paper than an attempt to produce a useful resource for environmental management (which, of course, is what it actually is).

Specific comments

Abstract

The abstract is a concise summary of the study, summarising background, aim, study area, data and methods used, results, and conclusions. If the body of the manuscript followed this simple, logical, and interpretable structure, this would be a very nice paper. As it stands, I find the subsequent sections muddled, over-long, and confusing.

Lines 28-30: “Predictive power was lower … when models were evaluated with independent data sets, emphasising how this is different from model fit”.
This kind of statement seems a bit disingenuous to me, suggesting that ‘model fit’ by cross-validation using subsets of the training data and performance against fully independent survey data are of equal value in assessing the utility of a model. The real test of any model, particularly those designed to inform environmental management decisions, is how well its predictions match reality, in the form of independent observations. Perhaps there is no need to include “emphasizing how this is different from model fit” (it tells us nothing after all), but do add more explanation in the discussion about how the models performed against independent data.

Lines 37-38: “This understanding relies on models of habitat suitability …”. This seems a sweeping statement to me when “This” covers all aspects of marine ecosystem management.

Lines 38-39: “The credibility of which depends in large part on the accuracy of the underlying environmental predictors.” Yes, this is very true, but I would observe that the final layers you develop here are based on the same techniques (RF) and thus carry the same issues of uncertainty associated with the predictor variables.

Line 46: Need to reference Random Forest at first use.

Line 51: “mobilizing available observations …”: how do you ‘mobilize’ observations?

Lines 56-57: The meaning of this sentence is unclear to me: at this stage, the reader has no idea what is meant, in this context, by “weighting for prevalence”; presumably the intended meaning is ‘use of diverse evaluation metrics’; and what does ‘qualitative assessment’ refer to here?

Line 61: Prior to MBES, seabed characteristics were derived from empirical point observations, which were often accurate and, for older lead-line records, included physical samples of the seabed. The 'inference' element comes when continuous bathymetry layers are created. For most of the world's oceans and seas, this is still the case.
Line 69: Need to be more concise with language: as written here, the meaning is “comprehensive surveys are particularly expensive and time consuming for less developed countries …”. The time and expense are the same whatever the economic status of the country; it's just the affordability that differs. And the final clause of the sentence doesn’t match its subject (i.e. “comprehensive acoustic surveys … can take decades to completely map”).

Line 74: How much, approximately, of the shelf here has been mapped?

##: “… a diversity of metrics …”. Has no meaning; just tell us what you used.

Methods

Lines 221-223: Should be in the Introduction.

Line 226: “Nested” needs to be defined, i.e., nested within what? I assume within the ‘coastwide’ model, but this is not explicit in your sentence.

Line 227: “Paired models”, meaning what? Without clear explanation these terms are meaningless to the reader.

Lines 219-240: These first three paragraphs of the Methods are also a prime example of what I struggle with in the presentation of this study. They give a condensed summary, an abstract in effect, of what the study did but without any detail. This level of explanation would work well in the Introduction but here, in the Methods, it’s just confusing. For instance, the most fundamental aspect of the study is the input dataset of substrate type observations: this is the first thing the reader needs to be told about, in detail, to be able to understand what the subsequent models are working on and thus assess whether the resulting maps make sense. At present, the input data appear almost as an afterthought, with just a passing reference to Table 1 and no explanation of the data provenance, spatial distribution, or reliability.

Lines 259-260: But how was the weighting done: more weight to higher prevalence? Explicit methods descriptions are needed.

Lines 270-272: There is not enough detail on how this partition into training and test partitions was done.
For spatial data, the way in which test data are selected can have a strong influence on subsequent evaluation metrics. Were the test data selected at random, or in spatial bands, or by a more sophisticated spatially disaggregated method? Also, the wording here and later implies that only one iteration of each model was generated, all using the same partition of training and test data. If that is what was done, explanation is needed as to why k-fold cross-validation (multiple iterations of each model, each iteration using a different split of the input data between training and test) was not conducted.

Line 275 and onwards: “Addressing our objectives”. Why is this a subsection in the Methods? Too much of the text here should really be in the Introduction or Discussion, not here in the Methods.

Line 280: “We weighted classes according to their prevalence”. Again, how? Weighted up, or down, and by what proportion?

Lines 285-286: The input variables of this study, both response and predictor, are relatively very simple, representing primarily (entirely?) physical factors. I am not convinced by the argument that the physical process factors here should be expected to be non-stationary. I suspect differences in the density (‘prevalence’) and reliability of input response and predictor data will have a more important influence on outcomes than non-stationarity of processes.

Line 306 and onwards: “Model evaluation”. Again, by my reading of it, far too much wordage that should be (or already is) covered in the Introduction or Discussion. I might be jaded, but much of this reads like rehashed material from textbooks. The point is, however, that I am used to working with these kinds of data and this kind of modelling method, and the further I read here, the more I find myself confused as to what was done and why.

Line 347: “Model build data”. At last! But there is no detail given about the spatial distribution of these data.
For interpretation of the results, I would argue that it is essential to show the reader maps of how these input data are distributed in space.

Line 379 and onwards: “Independent evaluation data”. How did you decide which data to include in the ‘build’ set and which in the ‘independent’ set? Both sets include DFO Dive and ROV data, so how do these differ from the cross-validation test data withheld from the training dataset? If the two sets of data are actually just arbitrary subsets from the same sources, the independence of the ‘independent’ dataset would be questionable. Again, this needs clearer explanation of basic details.

Results

As with the Methods, I find the sequence of sections here unintuitive, and the content mixes results with discussion material.

Lines 404-407: This paragraph is discussion material.

Line 533: Ah ha. Here, at last, we have more detail about the independent data but still, I would say, not enough to assess their utility. For instance, N = nearly 2,500 ROV ‘mud’ observations for the coastwide model domain, but if each observation represents a record of substrate type at 20 m intervals along seabed transects, each of which might be one or more km long (at 50 records per km), these data are likely to be strongly clumped in space. If you have not taken measures to account for this spatial clumping of the data, the resulting metrics of performance are likely to be unreliable and probably inflated. We need to see how these records are distributed in space to be able to assess whether the results are useful or not.

Results in general: I would have found it much more useful and interpretable to have both cross-validation and independent test scores in the same table, simplified down to just one or two example metrics: all the rest could go into the Supplementary Material.

Also, a question: where can the final map outputs be found?
If the aim was to generate mapped predictions for use in environmental management, the outputs need to be accessible.

Discussion

I found this section to read better than the others. I have not made detailed notes, but I would make the same observation about the inferences around stationarity: given the imbalances in the spatial distribution and provenance of your sample and test data, can you really be sure that the differences in model performance you see among regions are attributable to non-stationarity in environmental processes rather than artefacts in your input data?

**********

6. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files. If you choose “no”, your identity will remain anonymous but your review may still be made public. Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: Yes: H. Gary Greene

Reviewer #2: Yes: David Bowden

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.
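For readers unfamiliar with the spatially blocked cross-validation and prevalence-based class weighting that Reviewer #2 asks about, the sketch below is a minimal, hypothetical illustration only; it uses synthetic coordinates, a toy two-class response, and scikit-learn's GroupKFold and RandomForestClassifier, and does not represent the authors' actual workflow or data.

```python
# Hypothetical sketch: spatially blocked k-fold cross-validation with
# prevalence-balanced class weights. All data here are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import balanced_accuracy_score
from sklearn.model_selection import GroupKFold

rng = np.random.default_rng(0)
n = 1000

# Synthetic point observations: x/y position (km) and depth (m).
xy = rng.uniform(0, 100, size=(n, 2))
depth = rng.uniform(5, 200, size=n)
X = np.column_stack([xy, depth])
y = (depth > 100).astype(int)  # toy binary substrate response

# Assign each point to a 20 x 20 km block; folds are built from whole
# blocks, so test points are spatially separated from training points
# rather than interleaved with them (the "clumping" concern).
blocks = (xy[:, 0] // 20).astype(int) * 10 + (xy[:, 1] // 20).astype(int)

scores = []
for train_idx, test_idx in GroupKFold(n_splits=5).split(X, y, groups=blocks):
    # class_weight="balanced" up-weights rarer classes by inverse prevalence.
    rf = RandomForestClassifier(
        n_estimators=50, class_weight="balanced", random_state=0
    )
    rf.fit(X[train_idx], y[train_idx])
    scores.append(balanced_accuracy_score(y[test_idx], rf.predict(X[test_idx])))

mean_score = float(np.mean(scores))
print(f"mean spatially blocked balanced accuracy: {mean_score:.2f}")
```

Comparing such block-wise scores against scores from a purely random split is one simple way to gauge how much spatial clumping inflates apparent performance.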
|
| Revision 1 |
|
PONE-D-20-40989R1

Comprehensive marine substrate classification applied to Canada’s Pacific shelf

PLOS ONE

Dear Dr. Gregr,

Thank you for submitting your manuscript to PLOS ONE. It is obvious that you have met most of the reviewers' suggestions. A final review does suggest some ways in which the manuscript could be improved, which we would like you to consider.

Please submit your revised manuscript by Sep 23 2021 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

Please include the following items when submitting your revised manuscript:
If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter.

Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: http://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols.

We look forward to receiving your revised manuscript.

Kind regards,
Judi Hewitt
Academic Editor
PLOS ONE

Journal Requirements:

Please review your reference list to ensure that it is complete and correct. If you have cited papers that have been retracted, please include the rationale for doing so in the manuscript text, or remove these references and replace them with relevant current references. Any changes to the reference list should be mentioned in the rebuttal letter that accompanies your revised manuscript. If you need to cite a retracted article, indicate the article’s retracted status in the References list and also include a citation and full reference for the retraction notice.

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1.
If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the “Comments to the Author” section, enter your conflict of interest statement in the “Confidential to Editor” section, and submit your "Accept" recommendation.

Reviewer #2: (No Response)

**********

2. Is the manuscript technically sound, and do the data support the conclusions? The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #2: Yes

**********

3. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #2: Yes

**********

4. Have the authors made all data underlying the findings in their manuscript fully available? The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data (e.g. participant privacy or use of data from a third party) those must be specified.

Reviewer #2: (No Response)

**********

5. Is the manuscript presented in an intelligible fashion and written in standard English? PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous.
Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #2: Yes

**********

6. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #2: Thank you for addressing my earlier review comments and questions. I find the revised manuscript to have a much clearer logical flow, which makes an interesting study more understandable. I still find the Introduction to be unnecessarily long, containing some material that I suggest could readily be condensed by referencing existing studies. The length of the Introduction is probably something for the editor to decide at this point, but there are also a few points I find rather condescending, as currently worded. For instance, in the 'Model performance' section, I find dismissive generalisations such as "... metrics have evolved little ... with most studies continuing to report Cohen's Kappa ...", "adoption of improved metrics has been glacial ...", and "There is a persistent misconception about how to interpret model performance..." to be overly generalised (I am not a geologist but Kappa is very rarely used in relation to predictive model performance in the literature I am familiar with), didactic, and unnecessary. I also still find the presentation of so many performance metrics to be more confusing than useful, for the most part. Indeed, while trying to interpret the results here I reflected that this illustrates one very good reason why a more refined set of metrics is "commonly provided" in most published studies: practicality of interpretation.
With the slightly revised focus in the title, however (i.e., the method taking priority over the application), there is an argument for inclusion of more metrics.

A few minor comments:

Lines 116-117: As worded here, I don't see why it would follow that "higher resolution models would perform better in shallow waters" (which I interpreted as meaning that a high resolution model would work better in shallow water than it would in deeper water). I would guess the intended meaning might be worded as "higher resolution models would perform better than coarser resolution models in shallower waters"?

Line 226: "class weights" is unexplained, as yet, and therefore uninterpretable here.

Table 2: First-column rows are not aligned with the others? And "DEMs" is not defined.

Line 263: "cross-walked" is a term I've not seen before and only makes sense once you go to Table S1.

Table 3: Caption does not include "imbalance".

Line 474: Why "not unexpectedly"? If we think our models perform well, why would we not 'expect' them to perform equally well against independent data? Suggest there is no need for this in the sentence and, if retained, the expectation should be supported by references (there are a few recent papers on this subject).

Stationarity section: Now that the input data are more fully described, particularly with the sample distribution map figure, this argument is better supported.

Lines 590 onwards: Yes, I strongly agree with the points made in this section.

**********

7. PLOS authors have the option to publish the peer review history of their article. If published, this will include your full peer review and any attached files. If you choose “no”, your identity will remain anonymous but your review may still be made public. Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.
Reviewer #2: No

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.
| Revision 2 |
Comprehensive marine substrate classification applied to Canada’s Pacific shelf

PONE-D-20-40989R2

Dear Dr. Gregr,

We’re pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements. Within one week, you’ll receive an e-mail detailing the required amendments. When these have been addressed, you’ll receive a formal acceptance letter and your manuscript will be scheduled for publication. An invoice for payment will follow shortly after the formal acceptance.

To ensure an efficient process, please log into Editorial Manager at http://www.editorialmanager.com/pone/, click the 'Update My Information' link at the top of the page, and double check that your user information is up-to-date. If you have any billing related questions, please contact our Author Billing department directly at authorbilling@plos.org.

If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact. If they’ll be preparing press materials, please inform our press team as soon as possible -- no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

Kind regards,
Judi Hewitt
Academic Editor
PLOS ONE

Additional Editor Comments (optional):

Reviewers' comments:
| Formally Accepted |
PONE-D-20-40989R2

Comprehensive marine substrate classification applied to Canada’s Pacific shelf

Dear Dr. Gregr:

I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now with our production department.

If your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information please contact onepress@plos.org.

If we can help with anything else, please email us at plosone@plos.org.

Thank you for submitting your work to PLOS ONE and supporting open access.

Kind regards,
PLOS ONE Editorial Office Staff
on behalf of Dr. Judi Hewitt
Academic Editor
PLOS ONE
Open letter on the publication of peer review reports
PLOS recognizes the benefits of transparency in the peer review process. Therefore, we enable the publication of all of the content of peer review and author responses alongside final, published articles. Reviewers remain anonymous, unless they choose to reveal their names.
We encourage other journals to join us in this initiative. We hope that our action inspires the community, including researchers, research funders, and research institutions, to recognize the benefits of published peer review reports for all parts of the research system.
Learn more at ASAPbio.