Peer Review History
| Original Submission: May 28, 2021 |
PONE-D-21-17737
Rise of the War Machines: Charting the Evolution of Military Technologies from the Neolithic to the Industrial Revolution
PLOS ONE

Dear Daniel Hoyer,

I have received two reviewers' reports on your manuscript and decided I would not wait for the third reviewer that I had invited. The two reviewers, I'm happy to say, concur that your manuscript is an important and valuable contribution to the cultural evolution literature. I share their point of view. Each reviewer offers specific recommendations that I would ask you to follow closely when revising the paper.

Reviewer 1 remarks that reducing the history of military technology to "hard" technologies (e.g. weapons or transportation) neglects important aspects of war tactics: "military doctrines, logistics, tactical skill, organizational structures, and ability to learn". In other words, your study captures the evolution of tactical "hardware", but not that of tactical "software", so to speak. They note that a more qualitative approach could capture this dimension. Please discuss this possible limitation of your study in the revision. Reviewer 1 also asks you to specify how your analysis can distinguish between technological change due to innovation vs. diffusion.

Reviewer 2 makes two specific and useful recommendations regarding your statistical analysis. The first is to make sure that the way you bundle military-technology related variables is justified. In other words, you should rule out the possibility that the effects you document are driven by one or a few outlier variables. Please implement the analysis they recommend, i.e., analysing random subsets of the 46 variables. Re-running your models with a subset of military technology variables is something you already do when you substitute "Core MilTech" for "MilTech". Reviewer 2 would like you to generalise this method to multiple, randomly selected subsamples.
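The random-subsample robustness check described here can be sketched in a few lines. The snippet below is a minimal illustration on synthetic data, not the authors' actual Seshat variables or models: the 46 binary indicators, the latent driver, and the one-predictor OLS fit are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in: 200 observations scored on 46 hypothetical binary
# military-technology indicators (illustrative, not the real Seshat data).
n_obs, n_vars = 200, 46
latent = rng.normal(size=n_obs)                        # latent "MilTech" level
miltech_vars = (rng.normal(size=(n_obs, n_vars))
                + latent[:, None] > 0).astype(float)   # 46 binary indicators
predictor = latent + rng.normal(scale=0.5, size=n_obs) # some model predictor

def subset_slopes(k, n_draws=500):
    """Rebuild the composite from random k-variable subsets and refit a
    simple OLS slope each time, returning the distribution of slopes."""
    slopes = []
    for _ in range(n_draws):
        cols = rng.choice(n_vars, size=k, replace=False)
        composite = miltech_vars[:, cols].mean(axis=1)  # subset "MilTech"
        slopes.append(np.polyfit(predictor, composite, 1)[0])
    return np.array(slopes)

slopes = subset_slopes(k=23)
# A tight, unimodal distribution of slopes suggests the effect is not
# driven by a few outlier variables; a wide or multimodal one would.
print(slopes.mean(), slopes.std())
```

If the real analysis behaves like this sketch, reporting the spread of effect sizes across many random subsets directly addresses the reviewer's concern.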
Reviewer 2 also suggests an alternative analysis for the impact of the existing stock of technologies on the progress of military technologies.

In addition to these remarks, I have a number of editorial recommendations of my own, detailed below. Provided the revision appears to fulfil the reviewers' requirements and mine, I may not be sending it back for a new round of reviews. My recommendations are either stylistic or related to the way statistical results are reported.

Concerning the reporting of results, my main concern is related to the repeated causality claims made in the manuscript. Given the data and methods, the results establish (at best) predictive causality in the sense of Granger. This type of causality cannot be equated with causality simpliciter, among other things because it does not rule out latent confounding effects. Please qualify all claims related to causality or causal inference.

I suspect there is a significant mistake in Table 1, which lists both MilTech and MilTech sq. as variables in the best-fitting model. This seems to contradict the description given on l. 301, but more importantly, including two versions of the same variable in the same model raises obvious issues (multicollinearity, etc.). I suspect that you in fact tested two versions of the model, one with MilTech and the other with MilTech sq.

In general, the manuscript in its current version assumes too much familiarity with the Seshat database and project, its organisation and its acronyms. Key constructs like "NGA" or "IN" are not explained or even glossed. Other constructs, like the "Scale" variable, will be familiar to readers who have already read Seshat-based studies, but are only cursorily explained here (the characterisation given on l. 279 is insufficiently clear). For yet other variables, like "SPC1", an explanation is promised but never provided (l. 349). There is also a tendency to use your own abbreviations for standard statistical constructs (e.g.
using "delAIC" for ∆AIC or delta AIC). What is missing is not only a basic explanation of the Seshat lingo, but a sense of why the constructs were designed in the way that they are — for instance, why the Scale variable is a better measure of a polity's size than other possible measures.

Please submit your revised manuscript by Aug 09 2021 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file. Please include the following items when submitting your revised manuscript:
If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter. If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: http://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols.

We look forward to receiving your revised manuscript.

Kind regards,
Olivier Morin
Academic Editor
PLOS ONE

Journal Requirements: When submitting your revision, we need you to address these additional requirements.

1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf and

2. Please review your reference list to ensure that it is complete and correct. If you have cited papers that have been retracted, please include the rationale for doing so in the manuscript text, or remove these references and replace them with relevant current references. Any changes to the reference list should be mentioned in the rebuttal letter that accompanies your revised manuscript. If you need to cite a retracted article, indicate the article's retracted status in the References list and also include a citation and full reference for the retraction notice.

3.
Thank you for stating the following in the Competing Interests section: "The authors have declared that no competing interests exist." We note that one or more of the authors are employed by a commercial company: "Complexity Science Hub."

a) Please provide an amended Funding Statement declaring this commercial affiliation, as well as a statement regarding the Role of Funders in your study. If the funding organization did not play a role in the study design, data collection and analysis, decision to publish, or preparation of the manuscript and only provided financial support in the form of authors' salaries and/or research materials, please review your statements relating to the author contributions, and ensure you have specifically and accurately indicated the role(s) that these authors had in your study. You can update author roles in the Author Contributions section of the online submission form.

Please also include the following statement within your amended Funding Statement. "The funder provided support in the form of salaries for authors [insert relevant initials], but did not have any additional role in the study design, data collection and analysis, decision to publish, or preparation of the manuscript. The specific roles of these authors are articulated in the 'author contributions' section." If your commercial affiliation did play a role in your study, please state and explain this role within your updated Funding Statement.

b) Please also provide an updated Competing Interests Statement declaring this commercial affiliation along with any other relevant declarations relating to employment, consultancy, patents, products in development, or marketed products, etc.
Within your Competing Interests Statement, please confirm that this commercial affiliation does not alter your adherence to all PLOS ONE policies on sharing data and materials by including the following statement: "This does not alter our adherence to PLOS ONE policies on sharing data and materials." (as detailed online in our guide for authors http://journals.plos.org/plosone/s/competing-interests). If this adherence statement is not accurate and there are restrictions on sharing of data and/or materials, please state these.

Please note that we cannot proceed with consideration of your article until this information has been declared. Please include both an updated Funding Statement and Competing Interests Statement in your cover letter. We will change the online submission form on your behalf.

Please know it is PLOS ONE policy for corresponding authors to declare, on behalf of all authors, all potential competing interests for the purposes of transparency. PLOS defines a competing interest as anything that interferes with, or could reasonably be perceived as interfering with, the full and objective presentation, peer review, editorial decision-making, or publication of research or non-research articles submitted to one of the journals. Competing interests can be financial or non-financial, professional, or personal. Competing interests can arise in relationship to an organization or another person. Please follow this link to our website for more details on competing interests: http://journals.plos.org/plosone/s/competing-interests

[Note: HTML markup is below. Please do not edit.]

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes.
The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Partly
Reviewer #2: Yes

**********

2. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes
Reviewer #2: Yes

**********

3. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes
Reviewer #2: Yes

**********

4. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes
Reviewer #2: Yes

**********

5. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: This is a very valuable paper providing the most systematic large-n analysis to date of the evolution of military technology.
It should acknowledge more clearly that large-n studies are just one of the tools needed for a proper explanation, but should definitely be published.

(Editor's note: I am here pasting Reviewer 1's attached comments since they may not be automatically added to this action letter for word limit reasons. OM)

This is a valuable paper, making a real contribution to a significant debate. My one reservation is that I think this kind of large-n survey usually has to be combined with more detailed work to explain something as complicated as military technologies. Military historians typically find that technology is only one part of effectiveness, and not always the most important part.

The most interesting result in this paper is the lack of fit between SocSoph and technological gains (lines 455-56, 532-36), but I wonder how much of this is because technologies are of little use without appropriate doctrine (the central point in Stephen Biddle's book Military Power [2004]). Fig. 5 illustrates this—looking at the scores for Latium around 2kya, they're not very different from other societies, because Roman military technology wasn't actually that different from other Mediterranean, Middle Eastern, and European societies (as measured in the categories at lines 287-98 in the supporting material); however, the Roman army's ability to apply these technologies transformed what they meant in practical terms, and that was probably determined primarily by Roman SocSoph. I also immediately think of the classical Greeks, whose military technologies were very ordinary but whose application of them was extraordinary.

One difficulty for large-n surveys is that while technological categories are relatively easy to identify in the archaeological record, military doctrines, logistics, tactical skill, organizational structures, and ability to learn aren't; and even when ancient and medieval writers describe such doctrines, there's no obvious way to score them.
This paper should absolutely be published, because there've been few (if any) large-n surveys of this level of sophistication, but the main point it illustrates is perhaps that large-n surveys are only the beginning of the study of warfare, and we have to follow up on their results with analytic narratives and case studies.

I also thought that the paper should distinguish more between technological innovation and diffusion. I'm currently reading two case studies of the Comanches, who probably had the best light cavalry in the world in the 18th century. They didn't invent light cavalry, because there were no horses in North America to invent it with before the Spaniards brought them, but they did adopt and then adapt horse-herding and perfect styles of mounted fighting much better than the Apaches or even the Spaniards and Mexicans. They didn't adopt or adapt firearms because their light-cavalry tactics made muzzle-loading muskets irrelevant. The Comanches were eventually defeated because Texans adopted and adapted Comanche tactics in the 1830s and combined them with revolvers and, later, breech-loading rifles that had been invented on the US East Coast. Neither the Comanche nor the Texans could have invented revolvers and rifles themselves, but the US Army failed to figure out how to use the new technologies until Texans put them together with tactics in new ways; the final Comanche defeat in the 1870s depended on the US Army learning from the Texans and then exploiting the scale of US federal infrastructure.

These points aren't just details that can be subsumed within a large-n model; we're missing what really happened if we only see the coarse-grained technological/geographical narrative. The paper should be clearer that while a large-n survey is a necessary condition for a good explanation, it's not sufficient, and should be treated as a starting point for other kinds of analysis.
One final detail: the throwaway comment on naval warfare on line 69 isn't adequate.

Reviewer #2: This paper examines the long-term evolution of military technologies using the Seshat: Global History Databank. Creating a series of composite variables, which use simpler variables as a proxy for more complex ones such as Information Complexity or Military Technologies, the authors then test several theoretical claims in the literature. Several factors seem to significantly predict the advancement of Military Technology (MilTech), e.g., global population size, cultural similarity, and the spread of iron and cavalry. They also find variables that do not appear to dramatically influence the level of military technology. Specifically, with the notable exception of phylogeny, characteristics such as the scale and sophistication of a polity appear to be non-significant predictors. This leads them to conclude that military technology evolves, for the most part, as an exogenous variable.

Overall, I applaud the authors for doing a very thorough and ambitious investigation into the evolution of military technology. I believe the paper will provide a catalyst for many follow-up studies and serve a vital role in stimulating scientific debate. I do, however, have two general points that I'd like to make.

For my first point, I want to highlight a potential limitation of this approach, in that aggregation might mask a simpler explanation for the results of your model. That is, your predictors might be predicting only a subset of your 46 binary variables used to create your MilTech measure. One way to deal with this is to create alternative MilTech measures by randomly sampling from your set of 46 variables. What you would do here is create multiple MilTech measures composed from different, randomly sampled subsets of your 46 variables. How much does this change your overall results in this study?
It might be the case that what your model is predicting is a specific subset of technologies that disproportionately influence the overall result. If the result is not robust to these random subsets, then I believe it diminishes your claim that the variables are predicting military technology per se. Instead, it might be the case that your models are predicting a specific subset of the technologies, as opposed to an aggregate measure of overall military technology.

My second point relates to the variable for the existing stock of technologies. Here, you use the existing stock of technology as a proxy for the influence of the current technological level on military technological evolution. This is done in two ways: as a temporal autoregressive process and by focusing on horse riding and iron smelting. For the temporal autoregressive measure, you could also look at a related approach such as transfer entropy. The advantage of this is that it does not rely on using a single time series to predict its future state. Instead, you can use separate time series (X and Y) as predictors of one another. This tells you how much uncertainty is reduced in the future values of Y by knowing the past values of X, given the past values of Y. It is a way of measuring the influence of one time series process (X) on another time series process (Y). I feel that this would help you disentangle the directionality of the relationship between some of your variables (as the measure is non-symmetric and X|Y does not equal Y|X). So, for instance, you could have the existing stock of technology as one time series and the MilTech variable as another time series. You would predict that the information flow goes from the existing stock to MilTech, but not necessarily the other way around.

Finally, a minor point: for the results in Table 1, you should probably use scientific notation for extremely small p-values (especially for the MilTech and MilTech.sq p-values, where you just have 0).
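The transfer-entropy idea the reviewer describes can be illustrated with a minimal discrete estimator. The sketch below uses synthetic binary series with a history length of 1; the coupling, the discretization, and the variable roles ("existing stock" driving "MilTech") are assumptions made for the example, not the paper's actual data or analysis.

```python
import numpy as np
from collections import Counter

def transfer_entropy(x, y):
    """Discrete transfer entropy TE(X -> Y) with history length 1 (in bits):
    how much knowing x_t reduces uncertainty about y_{t+1} beyond y_t alone."""
    triples = Counter(zip(y[1:], y[:-1], x[:-1]))   # (y_{t+1}, y_t, x_t)
    n = sum(triples.values())
    pairs_yy = Counter(zip(y[1:], y[:-1]))          # (y_{t+1}, y_t)
    pairs_yx = Counter(zip(y[:-1], x[:-1]))         # (y_t, x_t)
    singles_y = Counter(y[:-1])                     # y_t
    te = 0.0
    for (y1, y0, x0), c in triples.items():
        p_joint = c / n
        p_full = c / pairs_yx[(y0, x0)]             # p(y_{t+1} | y_t, x_t)
        p_self = pairs_yy[(y1, y0)] / singles_y[y0] # p(y_{t+1} | y_t)
        te += p_joint * np.log2(p_full / p_self)
    return te

rng = np.random.default_rng(1)
n = 5000
x = rng.integers(0, 2, size=n)   # e.g. a discretized "existing stock" series
y = np.empty(n, dtype=int)       # e.g. a discretized "MilTech" series
y[0] = 0
noise = rng.random(n)
for t in range(1, n):
    # Y copies lagged X 80% of the time, so information flows X -> Y only
    y[t] = x[t - 1] if noise[t] < 0.8 else 1 - x[t - 1]

print(transfer_entropy(x, y), transfer_entropy(y, x))
```

Because the measure is non-symmetric, TE(X → Y) comes out well above TE(Y → X) here, which is exactly the directional signature the reviewer suggests looking for between the existing stock and MilTech series.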
In fact, after writing this comment, I noticed you already did this in your supplementary materials (so it should be easy to address).

**********

6. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files. If you choose "no", your identity will remain anonymous but your review may still be made public. Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No
Reviewer #2: No

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.
| Revision 1 |
Rise of the War Machines: Charting the Evolution of Military Technologies from the Neolithic to the Industrial Revolution
PONE-D-21-17737R1

Dear Dr. Hoyer,

We're pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements. Within one week, you'll receive an e-mail detailing the required amendments. When these have been addressed, you'll receive a formal acceptance letter and your manuscript will be scheduled for publication. An invoice for payment will follow shortly after the formal acceptance.

To ensure an efficient process, please log into Editorial Manager at http://www.editorialmanager.com/pone/, click the 'Update My Information' link at the top of the page, and double check that your user information is up-to-date. If you have any billing related questions, please contact our Author Billing department directly at authorbilling@plos.org.

If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact. If they'll be preparing press materials, please inform our press team as soon as possible -- no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

Kind regards,
Olivier Morin
Academic Editor
PLOS ONE

Additional Editor Comments (optional):

Reviewers' comments:
| Formally Accepted |
PONE-D-21-17737R1
Rise of the War Machines: Charting the Evolution of Military Technologies from the Neolithic to the Industrial Revolution

Dear Dr. Hoyer:

I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now with our production department.

If your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information please contact onepress@plos.org.

If we can help with anything else, please email us at plosone@plos.org.

Thank you for submitting your work to PLOS ONE and supporting open access.

Kind regards,
PLOS ONE Editorial Office Staff
on behalf of Dr. Olivier Morin
Academic Editor
PLOS ONE
Open letter on the publication of peer review reports
PLOS recognizes the benefits of transparency in the peer review process. Therefore, we enable the publication of all of the content of peer review and author responses alongside final, published articles. Reviewers remain anonymous, unless they choose to reveal their names.
We encourage other journals to join us in this initiative. We hope that our action inspires the community, including researchers, research funders, and research institutions, to recognize the benefits of published peer review reports for all parts of the research system.
Learn more at ASAPbio.