Peer Review History
Original Submission: February 26, 2020
PONE-D-20-05599
Use of a controlled experiment and computational models to measure the impact of sequential peer exposures on decision making
PLOS ONE

Dear Mr. Sarkar,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE's publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process. We recommend that it be revised taking into account the changes requested by the reviewers. To speed the review process, the revised manuscript will only be reviewed by the Academic Editor in the next round.

We would appreciate receiving your revised manuscript by May 18, 2020, 11:59 PM. When you are ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter.

To enhance the reproducibility of your results, we recommend that, if applicable, you deposit your laboratory protocols in protocols.io, where a protocol can be assigned its own identifier (DOI) such that it can be cited independently in the future. For instructions see: http://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols

Please include the following items when submitting your revised manuscript:
Please note while forming your response that, if your article is accepted, you may have the opportunity to make the peer review history publicly available. The record will include editor decision letters (with reviews) and your responses to reviewer comments. If eligible, we will contact you to opt in or out.

We look forward to receiving your revised manuscript.

Kind regards,
Baogui Xin, Ph.D.
Academic Editor
PLOS ONE

Journal Requirements:

When submitting your revision, we need you to address these additional requirements. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf

1. Please provide additional details regarding participant consent. In the ethics statement in the Methods and online submission information, please ensure that you have specified (1) whether consent was informed and (2) what type you obtained (for instance, written or verbal, and if verbal, how it was documented and witnessed). If your study included minors, state whether you obtained consent from parents or guardians. If the need for consent was waived by the ethics committee, please include this information.

2. Please provide additional information about the participant recruitment method and the demographic details of your participants. Please consider adding such information as a) the recruitment date range (month and year), b) a description of any inclusion/exclusion criteria that were applied to participant recruitment, c) a table of relevant demographic details, d) a statement as to whether your sample can be considered representative of a larger population, e) a description of how participants were recruited, and f) descriptions of where participants were recruited and where the research took place.

3.
Thank you for including your competing interests statement: "The authors have declared that no competing interests exist." We note that one or more of the authors are employed by a commercial company: Sandia National Laboratories.
Please also include the following statement within your amended Funding Statement:

"The funder provided support in the form of salaries for authors [insert relevant initials], but did not have any additional role in the study design, data collection and analysis, decision to publish, or preparation of the manuscript. The specific roles of these authors are articulated in the 'author contributions' section."

If your commercial affiliation did play a role in your study, please state and explain this role within your updated Funding Statement.

2. Please also provide an updated Competing Interests Statement declaring this commercial affiliation along with any other relevant declarations relating to employment, consultancy, patents, products in development, or marketed products, etc.

Within your Competing Interests Statement, please confirm that this commercial affiliation does not alter your adherence to all PLOS ONE policies on sharing data and materials by including the following statement: "This does not alter our adherence to PLOS ONE policies on sharing data and materials." (as detailed online in our guide for authors: http://journals.plos.org/plosone/s/competing-interests). If this adherence statement is not accurate and there are restrictions on sharing of data and/or materials, please state these.

Please note that we cannot proceed with consideration of your article until this information has been declared. Please include both an updated Funding Statement and Competing Interests Statement in your cover letter. We will change the online submission form on your behalf. Please know it is PLOS ONE policy for corresponding authors to declare, on behalf of all authors, all potential competing interests for the purposes of transparency.
PLOS defines a competing interest as anything that interferes with, or could reasonably be perceived as interfering with, the full and objective presentation, peer review, editorial decision-making, or publication of research or non-research articles submitted to one of the journals. Competing interests can be financial or non-financial, professional, or personal. Competing interests can arise in relationship to an organization or another person. Please follow this link to our website for more details on competing interests: http://journals.plos.org/plosone/s/competing-interests

3. Thank you for providing the following data availability statement:

"All data from our online controlled experiments with human participants cannot be released due to ethical limitations. The data for our simulation of diffusion has been obtained from a publicly available dataset in https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0150989"

Please note, PLOS journals require authors to make all data necessary to replicate their study's findings publicly available without restriction at the time of publication. When specific legal or ethical restrictions prohibit public sharing of a data set, authors must indicate how others may obtain access to the data. For more see: https://journals.plos.org/plosone/s/data-availability

Before we proceed, please address the following points:

a. Please clarify if you are able to share anonymized data for all figures and tables in your manuscript. If there are no restrictions, please upload the minimal (anonymized) data set necessary to replicate your study findings as Supporting Information files or to a stable, public repository and provide us with the relevant URLs, DOIs, or accession numbers. For a list of recommended repositories, please see our Data Availability page (http://journals.plos.org/plosone/s/data-availability#loc-recommended-repositories).
You also have the option of uploading the data as Supporting Information files, but we would recommend depositing data directly to a data repository if possible.

b. If there are acceptable restrictions in place on the public sharing of the data underlying your study, please kindly clarify in detail the reasons for data restriction (e.g., data contain potentially sensitive information, data are owned by a third-party organization, etc.) and who has imposed them (e.g., an ethics committee).

c. Please also provide a non-author point of contact (e.g., data access committee, ethics committee, or other institutional body) where data requests may be made. Note that it is not acceptable for the authors to be the sole named individuals responsible for ensuring data access. If data are owned by a third party, please indicate how others may request data access.

Thank you for your assistance. With the information you provide we can update your data availability statement on your behalf.

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. Is the manuscript technically sound, and do the data support the conclusions? The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Partly
Reviewer #2: Partly
Reviewer #3: Yes

**********

2. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: I Don't Know
Reviewer #2: Yes
Reviewer #3: No

**********

3. Have the authors made all data underlying the findings in their manuscript fully available?
The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians, and variance measures should be available. If there are restrictions on publicly sharing data (e.g., participant privacy or use of data from a third party), those must be specified.

Reviewer #1: No
Reviewer #2: No
Reviewer #3: No

**********

4. Is the manuscript presented in an intelligible fashion and written in standard English? PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes
Reviewer #2: Yes
Reviewer #3: Yes

**********

5. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters.)

Reviewer #1: The paper is trying to make the point that the pattern of sequential exposure to information has an important impact on human decisions. My major concern is that the justification for selecting this specific controlled experiment, and the whole procedure of the experiment, is not clear. Would the conclusion still hold if applied to other games? The finding therefore does not seem very convincing.
Reviewer #2:

Use of a controlled experiment and computational models to measure the impact of sequential peer exposures on decision making (PONE-D-20-05599)

This is an interesting study with a well-designed experiment. The paper focuses on the social influence of peers on human decision-making. Especially valuable is the effort to test the sequential impact of others on the decision-making of participants, and how this sequence evolves over time (uniform; linear cascade; delayed cascade; and early cascade). However, I have a number of concerns that are described below.

Regarding the literature that the authors reviewed, I miss some important previous efforts that are closely connected to this research. It is highly recommendable that the authors establish the relation between this literature (see below) and their study, especially to delimit the specific contribution of the paper:

- Literature about dual-process theory and heuristics in how signals from the environment influence decision-making (e.g., Kahneman, 2003; Sanjari, Jahn, & Boztug, 2017). Signals from others could play a heuristic role, limiting deliberative or optimal decision-making.

- Literature related to the influence of the group on the decision-making of the individual (conformity). The classical experiments conducted by Asch (1956) demonstrated the influence of the group on bad decision-making. In addition, decision-making was analyzed sequentially, as it is in the present paper. Other examples exist for the use of technology (e.g., Hertz & Wiese, 2018).

I agree with the idea that the main contribution of this study is the consideration of the sequence in decision-making. The manuscript will improve very much if the introduction and the rationale of the hypotheses focus much more on the evolution of decision-making based on the signals that come from others.
The authors have different types of possible temporal evolution (uniform; linear cascade; delayed cascade; and early cascade) that could be anticipated and described in more detail before the methodology section. The authors have the opportunity to propose and test mechanisms of temporal evolution that depend on how other actors in the environment behave. Because this could be the relevant contribution of the paper, it is necessary that "time" (connected with the actions of others) has a more prominent role. Accordingly, I also strongly suggest rewording the hypotheses.

I think the experiment will improve very much if, congruent with the importance of "time" I stressed above, the authors add a computation of latent growth curves (Bliese & Ployhart, 2002; Pitariu & Ployhart, 2010), or another similar analysis, to explore the form of the decision-making evolution (linear, quadratic, cubic). This also permits checking the change in decision-making depending on the change in the behavior of others.

I would appreciate more information about participants. Could you please provide more information about the characteristics of participants, demographic information, etc.? Were there people who declined to participate? I know there is a test for biases in the Appendix, but if there were people who declined to participate, has their influence been controlled in any way? Why is there a difference in the size of the groups? More clarification is needed.

I would also like much more elaboration in the "Conclusions" section (maybe "Discussion"). I would appreciate it if the authors could extend it considering the importance of time and the evolution of decision-making according to the behavior of others.

Please review typos (e.g., "the6mber" on page 7).

References:

Asch, S. (1956). Studies of independence and conformity: A minority of one against a unanimous majority. Psychological Monographs, 70.

Bliese, P. D., & Ployhart, R. E. (2002).
Growth modeling using random coefficient models: Model building, testing, and illustrations. Organizational Research Methods, 5, 362-387.

Hertz, N., & Wiese, E. (2018). Under pressure: Examining social conformity with computer and robot groups. Human Factors, 60, 1207-1218.

Kahneman, D. (2003). A perspective on judgment and choice: Mapping bounded rationality. American Psychologist, 58(9), 697-720.

Pitariu, A. H., & Ployhart, R. E. (2010). Explaining change: Theorizing and testing dynamic mediated longitudinal relationships. Journal of Management, 36, 405-429.

Sanjari, S. S., Jahn, S., & Boztug, Y. (2017). Dual-process theory and consumer response to front-of-package nutrition label formats. Nutrition Reviews, 75, 871-882.

Reviewer #3:

The key contributions and research questions are to understand how patterns of influence impact adoption decisions and how peer influence impacts suboptimal decision-making. This is a very important and timely topic of study. The RCT and ABM both provide unique insight into how information signals influence decision making.

The two types of decisions, one an adoption decision and the other a decision to share information, seem like distinctly different types of decision-making. It would be helpful if the authors spent more time explaining the link between the two experiments. This almost feels like it could be two different papers, one on the RCT and one on the ABM.

Please carefully proofread the document. There are many complex sentences with grammatical or typo errors that make it difficult to read, including in the abstract.

My review focuses mainly on the RCT, as this is my expertise. I am only familiar with ABMs, not an expert, and will note that I don't think the ABM portion is very accessible as written. More clarity and definition of terms would be helpful. Additionally, the authors should address limitations of both studies, as well as the limitations in comparing results across experiments.
The paper would benefit from a stronger conclusion with a more succinct overarching takeaway, in layman's terms, rather than a nuanced finding. In general, the conclusion falls flat compared to the discussion and analysis sections. Using layman's terms, and including clear statements of the highlighted key findings and the real-world implications of those findings, would make the study more accessible and relevant to non-experts.

General:

Abstract. The last sentence is unclear.

End of intro, 4 bullet points on findings. The sentences are long and a bit hard to follow. This would be more impactful, especially to non-expert readers, if they were more succinct summary statements followed by more in-depth discussion. Generally, the introduction doesn't clearly lay out the study. The paragraph from lines 68-95 begins to address this, but gets into more detail than is needed for the intro without addressing the main structure and reasoning behind the different pieces of the work; e.g., a framework and AMT game are mentioned, but the ABM doesn't come up until bullet three of the findings.

The number of groups, variables, and timing create a fairly complex set of experimental conditions. A schematic or flow chart of the experiment would be very helpful for understanding the conditions and timing. Additionally, section 4.1 has some redundancy, and circling back around to the same points makes it seem more complex and harder to follow. A more linear explanation of the parts would improve clarity.

Section 4.1. In lines 311-312, are the signaler and peers referred to as bots? In line 314, peers are signified as (bots). If these are the same peers from 312, please indicate so at first reference. Line 316, please explain "do not share any topology". Please state explicitly whether incentives were provided in all 18 stages. How was the incentive level selected? $0.02 seems low to motivate optimality.
Section 5.1 states that EC and LC "prevent more attacks." However, the table column labeled "Average number of attacks prevented" has the lowest numbers for the EC and LC rows. The Table 1 description should clarify "The lower attacks" as "lower number of attacks prevented". Please include in Table 1 whether the difference in attacks prevented is statistically significant.

Section 5.2: Line 379 states suboptimal technology C; how does the letter relate to provider or decision numbers? The section starts using the terminology "received decision"; what does that mean? In line 382, what "ratio"? Not sure what "highest" indicates; highest what? Why are the different groups "sent" a different decision? The meaning of figure 5 is unclear; an example of how to read it would be helpful. Line 396 states a 2-sample t-test. Why wasn't an ANOVA performed first, followed by pair-wise tests when the ANOVA result was significant? ANOVA is standard practice for multiple treatment groups. Line 417: please show statistical results for both NM and UM consistently. Also, a reminder of what "decision 5" means and why it matters would help with following the results. Please use consistent language across all three results, e.g., decision 5, optimal choice, technology C, etc. These results are difficult to follow and compare. A table showing the results of the tests for each group and their relationship to the hypotheses would be helpful.

Section 5.3. This is hard to follow. Section 5.2 indicates that the EC pattern has a different impact than the other PoIs. The paragraph starting at line 461 also indicates differences, but 5.3 contradicts this. Line 476 indicates a different experimental setup for influencing the peer social signal versus one suboptimal choice. The experimental description did not make it clear that there were two setups of this nature.

Section 5.4. Line 489: the "e.g." example doesn't really clarify the statement, since it doesn't compare two groups.

Figure 8.
Is this the number of influencers or the number of influence signals? Consistent terminology is key for clarity given the complexity of the experiment. s is defined as signal quantity in the definition on line 493.

Section 5.5. A comparison of how the outcome between EC and DC supports, confirms, or relates to the EC v. DC comparison in section 5.4 would be helpful, especially for those of us with limited time to work through the implications.

Section 6. It would be useful to link the ABM to the RCT more tightly. As it stands, the tie between the two experiments isn't strong enough. This seems like two different studies, tangentially related, crammed into one paper. Since the target behavior has changed, i.e., selection of tech with clear benefit/performance v. sharing info with no clear benefit, it's not obvious that the results of the RCT are applicable to the premise of the ABM. The use of "optimal" decision for sharing information with no clear utility seems like a stretch. Here "optimal" seems like a stand-in for "factual."

Section 6.1. As stated, lines 620-623 indicate that the same data was used for training and validating, which is self-reinforcing. Please clarify. Lines 623-6 are unclear.

Section 6. I'm familiar with ABMs, but not an expert. This section was hard to follow. The results are stated as being two results, but there seem to be a number of results in each bullet point. The significance/implications of the results are not obvious for the layperson and require more development and explanation.

Appendix. Including the survey and definitions of variables would be helpful. Were the significance tests in Appendix A corrected for multiple comparisons? There seem to be a lot of variables tested. Appendix B is self-referential.

Minor:

More definitions would be helpful for the reader, e.g., the opacity problem on line 95.

Line 134. The paper organization skips section 2.

Line 154: consider referencing Beck, Lakkaraju & Rai (2017) on info frequency.
Line 346 refers to its own section.

Section 6.3.1. Line 685 has a "2)" but there isn't a "1)".

Section 6.3.3. Equation 4 introduces a sub-u (t), but doesn't define it.

**********

6. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files. If you choose "no", your identity will remain anonymous but your review may still be made public. Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No
Reviewer #2: No
Reviewer #3: No

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files to be viewed.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, log in and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email us at figures@plos.org. Please note that Supporting Information files do not need this step.
Revision 1
Use of a controlled experiment and computational models to measure the impact of sequential peer exposures on decision making (PONE-D-20-05599R1)

Dear Dr. Sarkar,

We're pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements. Within one week, you'll receive an e-mail detailing the required amendments. When these have been addressed, you'll receive a formal acceptance letter and your manuscript will be scheduled for publication.

An invoice for payment will follow shortly after the formal acceptance. To ensure an efficient process, please log into Editorial Manager at http://www.editorialmanager.com/pone/, click the 'Update My Information' link at the top of the page, and double check that your user information is up-to-date. If you have any billing related questions, please contact our Author Billing department directly at authorbilling@plos.org.

If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact. If they'll be preparing press materials, please inform our press team as soon as possible, and no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

Kind regards,
Baogui Xin, Ph.D.
Academic Editor
PLOS ONE

Additional Editor Comments (optional):

Reviewers' comments:
Formally Accepted
PONE-D-20-05599R1
Use of a controlled experiment and computational models to measure the impact of sequential peer exposures on decision making

Dear Dr. Sarkar:

I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now with our production department.

If your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information please contact onepress@plos.org.

If we can help with anything else, please email us at plosone@plos.org.

Thank you for submitting your work to PLOS ONE and supporting open access.

Kind regards,
PLOS ONE Editorial Office Staff
on behalf of Professor Baogui Xin
Academic Editor
PLOS ONE
Open letter on the publication of peer review reports
PLOS recognizes the benefits of transparency in the peer review process. Therefore, we enable the publication of all of the content of peer review and author responses alongside final, published articles. Reviewers remain anonymous, unless they choose to reveal their names.
We encourage other journals to join us in this initiative. We hope that our action inspires the community, including researchers, research funders, and research institutions, to recognize the benefits of published peer review reports for all parts of the research system.
Learn more at ASAPbio.