Peer Review History
Original Submission: November 9, 2021
PONE-D-21-35635
Sure-Thing vs. Probabilistic Charitable Giving: Experimental Evidence On the Role of Individual Differences in Risky and Ambiguous Charitable Decision-Making
PLOS ONE

Dear Dr. Schoenegger,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

Please submit your revised manuscript by Apr 03 2022 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file. Please include the following items when submitting your revised manuscript:
If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: https://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols.

We look forward to receiving your revised manuscript.

Kind regards,
Junhuan Zhang, PhD
Academic Editor
PLOS ONE

Journal Requirements:

When submitting your revision, we need you to address these additional requirements.

1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf and

2. Please provide additional details regarding participant consent. In the ethics statement in the Methods and online submission information, please ensure that you have specified (1) whether consent was informed and (2) what type you obtained (for instance, written or verbal, and if verbal, how it was documented and witnessed). If your study included minors, state whether you obtained consent from parents or guardians. If the need for consent was waived by the ethics committee, please include this information.
If you are reporting a retrospective study of medical records or archived samples, please ensure that you have discussed whether all data were fully anonymized before you accessed them and/or whether the IRB or ethics committee waived the requirement for informed consent. If patients provided informed written consent to have data from their medical records used in research, please include this information.

[Note: HTML markup is below. Please do not edit.]

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. Is the manuscript technically sound, and do the data support the conclusions? The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Partly
Reviewer #2: Partly

**********

2. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes
Reviewer #2: Yes

**********

3. Have the authors made all data underlying the findings in their manuscript fully available? The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: No
Reviewer #2: Yes

**********

4. Is the manuscript presented in an intelligible fashion and written in standard English?
PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes
Reviewer #2: Yes

**********

5. Review Comments to the Author. Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: Summary and Overall Evaluation

In this article, the authors examined the effect of several individual difference measures, among them risk and ambiguity preferences, numeracy, empathy, optimism, and donation motivation. The main dependent variable is whether, and if so how much, participants were willing to donate from their experimental endowment to one of two different types of charities, which the authors call sure-thing and probabilistic charities. Another dependent variable was the donation to an individual charity (either sure-thing or probabilistic) at a later stage of the experiment. In addition, the authors examined whether an intervention, namely advertising the EV-maximizing principle as a decision criterion, has an effect on donation choices, and whether presenting anonymous and context-free donations rather than real-world charities with concrete descriptions of their cause leads to different behavior than in the main condition. In terms of results, none of the individual difference measures significantly affected the choice of donating to a sure-thing or probabilistic charity, and only empathy and donation type affected the overall donation in the choice to donate to an individual charity in some models. The paper is clearly written and reports all methods, hypotheses, and analyses in an understandable fashion.
I appreciate that the study was pre-registered, and I think the general research question is important and interesting. Moreover, the individual difference measures are all taken from previous research and seem to be reasonable choices for measuring the respective constructs. However, I am not fully convinced of the classification into sure-thing and probabilistic charities, of the theoretical underpinning of the hypotheses, or of the power of the study. I will discuss each of these issues in more detail in the next section.

Major Comments

1. The main dependent variable is the choice between so-called sure-thing and probabilistic charities. In total there were six charities, three classified as sure-thing and three classified as probabilistic. This classification is rather ad hoc and neither rooted in empirical evidence nor based on principled theoretical arguments. While I might agree from reading the descriptions that this classification could be justified, the current study did not establish that all or even the majority of participants perceived the charities as assumed in the classification. Instead, there is a lot of information in the descriptions of the charities, and participants might focus on different information than impact probability. Second, participants could perceive other information about the charity as varying in riskiness. For example, the direct money transfer could be perceived as risky with respect to the ultimate use of the donated money. Finally, the charities in the different conditions could also differ on dimensions other than riskiness. For example, the sure-outcome charities all have a direct impact on individual people, whereas the probabilistic charities have a more indirect impact on a large number of people. To be fair, the authors mention concerns about the classification of charities into sure-thing and probabilistic in the discussion.
However, the evidence provided in favor of their classification is not convincing, as it does not directly speak to a difference in perceived riskiness between the two types of charity. Instead, a straightforward way to examine this question empirically would be to ask participants, either from the old participant pool if available or from a new one, how they rate the respective charities on several dimensions, including the riskiness of the project.

2. Since different charities were administered in the stimulus material, it would make sense to estimate regressions with stimulus random effects instead of plain OLS. Other models that cluster errors at the stimulus level might also be possible. As with the logistic regression, this could be done as a robustness test.

3. More effort should be spent on the theory of why the examined individual differences should be correlated with the choice between the sure-thing and the probabilistic charity. In particular, why should risk preference be related to this choice, given that, as mentioned by the authors, previous literature (Vives & FeldmanHall, 2018) did not find a connection between risk preference and prosocial behavior, that the exact probabilities of success of a charity are most likely ambiguous, and that risk and ambiguity preferences are not strongly correlated? Similarly, the connection between optimism or numeracy and the main DV is built neither on theoretical considerations nor on previous findings that suggest such a connection. A stronger foundation for the hypotheses would considerably strengthen the importance of the article.

4. There is some uncertainty about the power of the regression results, as the pre-registered power analyses were based on the assumption that 2/3 of participants would donate, but the true donation rate was only 1/3. I appreciate that this is mentioned in the discussion, but I think this aspect deserves a bit more attention.
In particular, the EV-max intervention and the context-free manipulation, which have much smaller sample sizes than the analysis of the main dependent variable, might not have enough power to support definite conclusions. Instead, it might make sense to put these analyses in an appendix and label them as inconclusive. For the main DV, I appreciate that the authors calculate equivalence tests to examine the measurement uncertainty around the true effect. In my view, a better or at least complementary approach would be to calculate Bayes factors to evaluate the evidence for the null (see Jarosz & Wiley, 2014; Rouder & Morey, 2012). Bayes factors could clearly state whether enough evidence has been collected to conclude that there is no effect of an individual difference measure on the choice between the sure-thing and the probabilistic charity. If the evidence is inconclusive, it might make sense to collect more data.

Minor Comments

I would suggest describing the hypotheses in the introduction in terms of the alternative hypothesis. Presenting hypotheses as nulls makes the text wordier and more complex than necessary.

On page 5 it is stated that: “Our research builds on this literature but is importantly different, primarily because of our focus on donations to actual charities and not on pro-social behaviour in abstract games.” And on page 6: “Specifically, we focus on donation behaviour between real world charities that are made with an earned endowment, in contrast to abstract laboratory game pro-social decisions and hypothetical choices.” These comments suggest that there is basically no research about charitable giving and that all research about social preferences is done only in abstract experimental paradigms. I think this gives a wrong impression, and the literature on charitable giving should be properly cited in these passages.
Possible literature and review articles, of which some are also cited in the text, are Bekkers and Wiepking (2011), Karlan and List (2007), and Vesterlund and Sonnevi (2006).

References

Bekkers, R., & Wiepking, P. (2011). A literature review of empirical studies of philanthropy: Eight mechanisms that drive charitable giving. Nonprofit and Voluntary Sector Quarterly, 40(5), 924-973.

Jarosz, A. F., & Wiley, J. (2014). What are the odds? A practical guide to computing and reporting Bayes factors. The Journal of Problem Solving, 7(1), 2.

Karlan, D., & List, J. A. (2007). Does price matter in charitable giving? Evidence from a large-scale natural field experiment. American Economic Review, 97(5), 1774-1793.

Rouder, J. N., & Morey, R. D. (2012). Default Bayes factors for model selection in regression. Multivariate Behavioral Research, 47(6), 877-903.

Vesterlund, L., & Sonnevi, G. (2006). Why do people give? In The nonprofit sector (pp. 568-588). Yale University Press.

Vives, M. L., & FeldmanHall, O. (2018). Tolerance to ambiguous uncertainty predicts prosocial behavior. Nature Communications, 9(1), 1-9.

Reviewer #2: While I was reading the paper, I found the research questions interesting and important. I liked it a lot. However, when I read through the experiment design, I found, unfortunately, that the design of the Main Choice and context-free choice is not appropriate for answering the research questions. The concern is that the current design asks the participants to make two choices, which type of charity to donate to and how much to donate, and these two decisions are correlated with each other. Consequently, the experiment is not well controlled. To make this clearer, I will first describe what an ideal design would be and then point out the issues with the current design.
To answer the research questions or test the hypotheses, we can either test whether participants prefer to donate to a sure-thing or a probabilistic charity, or test how much they would like to donate given a sure-thing or a probabilistic charity. In the former case, we need to control for the donation level, i.e., if they donate, they donate the same amount of money. In the latter case, we randomly assign the charity type. The latter case is of course the Final Choice design in the current paper, which I fully agree is the correct approach.

The issue with the current design is that, theoretically, there should be a point where one is indifferent between donating a certain amount of money to the sure-thing charity and another amount to the probabilistic charity. So what we observe in the current experiment is just one of the two choices between which a participant is indifferent. Therefore, it is hard to infer their preference for the charity type.

To extend the above point a bit more, and perhaps make it clearer, suppose some people would donate zero no matter what type of charity they are assigned or chose in the Main Choice (this almost certainly happens, because only 35.8% of participants made a donation). For these people, economically, their choice of the charity type is invalid, as the cost is zero (they will not donate anyway). This is an extreme case.

Now, returning to the paper: to save it, first, I think the Final Choice design is good, and the authors may want to rely more on the data generated from the Final Choice. But at the same time, keep in mind that this is the last task, which may suffer from order or experimenter demand effects. Second, if the authors really want to use the data from the Main Choice, they should at least control for the donation amount in the regression, though I still think this is not valid enough to test the hypotheses.
In Table 4, to test the hypothesis using the Final Choice data, the authors should add an interaction term between the treatment variable (sure-thing or probabilistic) and the major explanatory variables to test the difference in differences, that is, whether the major explanatory variables can explain the donation difference between the sure-thing and probabilistic charities (the major question this paper intends to answer).

Minor points

The paper, especially the abstract, can be shortened to make it more readable.

It would be helpful to report R² in the regressions to show how much can be explained by the factors measured in this study.

Typos: “by conducting an a priori power analysis”

**********

6. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files. If you choose “no”, your identity will remain anonymous but your review may still be made public. Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No
Reviewer #2: No

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org.
Please note that Supporting Information files do not need this step.
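As an illustration of the interaction test Reviewer #2 requests above (charity type × explanatory variable as a difference-in-differences test), here is a minimal sketch in Python. The data are simulated and all variable names (empathy, sure_thing, donation) and effect sizes are hypothetical, not taken from the manuscript; in a real analysis one would substitute the study's own measures.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 600

# Hypothetical data: an individual-difference score, a randomly assigned
# charity type (1 = sure-thing, 0 = probabilistic), and a donation amount.
empathy = rng.normal(size=n)
sure_thing = rng.integers(0, 2, size=n)
# Build in an interaction: empathy raises donations only for sure-thing charities.
donation = 3 + 0.5 * sure_thing + 0.8 * empathy * sure_thing + rng.normal(size=n)

# Design matrix: intercept, treatment, predictor, and their interaction.
X = np.column_stack([np.ones(n), sure_thing, empathy, empathy * sure_thing])
beta, *_ = np.linalg.lstsq(X, donation, rcond=None)

# Classical OLS standard errors for a quick t-test on the interaction term.
resid = donation - X @ beta
sigma2 = resid @ resid / (n - X.shape[1])
se = np.sqrt(np.diag(sigma2 * np.linalg.inv(X.T @ X)))
t_interaction = beta[3] / se[3]

print(f"interaction coefficient = {beta[3]:.2f}, t = {t_interaction:.1f}")
```

A significant interaction coefficient would indicate that the explanatory variable predicts donations differently under the two charity types, which is exactly the between-condition difference the reviewer wants tested.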
Revision 1
PONE-D-21-35635R1
Sure-Thing vs. Probabilistic Charitable Giving: Experimental Evidence On the Role of Individual Differences in Risky and Ambiguous Charitable Decision-Making
PLOS ONE

Dear Dr. Schoenegger,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

Please submit your revised manuscript by Jun 30 2022 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file. Please include the following items when submitting your revised manuscript:
We look forward to receiving your revised manuscript.

Kind regards,
Junhuan Zhang, PhD
Academic Editor
PLOS ONE

Journal Requirements:

[Note: HTML markup is below. Please do not edit.]

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the “Comments to the Author” section, enter your conflict of interest statement in the “Confidential to Editor” section, and submit your "Accept" recommendation.

Reviewer #1: All comments have been addressed
Reviewer #2: (No Response)

**********

2. Is the manuscript technically sound, and do the data support the conclusions? The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.
Reviewer #1: Yes
Reviewer #2: No

**********

3. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes
Reviewer #2: No

**********

4. Have the authors made all data underlying the findings in their manuscript fully available? The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes
Reviewer #2: Yes

**********

5. Is the manuscript presented in an intelligible fashion and written in standard English? PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes
Reviewer #2: Yes

**********

6. Review Comments to the Author. Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: In general, the authors incorporated all my comments and improved the paper considerably. I would like to thank and congratulate them for their work and their constructive responses to the raised comments. I think the paper can be published with only minor revisions:

1.
I very much like the additional study and the demonstration that participants indeed perceived one class of charities as riskier and more ambiguous than the other. Given the importance of this result for judging the overall validity of the study with respect to the research question, I feel this study deserves much more space in the main manuscript. I think it should get a small methods and results section (maybe called “study to validate the stimuli”) and should also be mentioned already in the Methods of the main study when the different charities are introduced as stimuli. Importantly, while the additional study helps to support the claim that the charities differ in perceived riskiness, it does not exclude the possibility that the groups of charities also differed on other dimensions (e.g., individual vs. group as recipients, etc.). I think this is an important limitation that should be mentioned in the discussion.

2. It is great that the authors conducted a Bayes factor analysis. However, it was not exactly clear to me which models the authors compared. If it was the full model with all predictors vs. the null model with only an intercept, this should be clearly reported and interpreted accordingly. Given the hypotheses, it would make sense to compare each model with a single predictor from the five individual difference measures against the null model with just an intercept. That way, the evidence for the individual null hypotheses could be unambiguously assessed.

3. Again, I think the authors did a good job in adding more literature and theoretical arguments to justify why the selected measures should correlate with donation behavior in their study. However, I also noticed that most of the studies they cite are based on different experimental designs. For example, Kleber et al. (2013) found an effect of numeracy on the effect of the number and share of helped people. This feature is not central to the manipulation of interest here.
Similarly, in Cettolin et al. (2017) risk preferences are measured with certainty equivalents in decisions from description, whereas the current study uses an experience-based risk elicitation task with a different dependent variable. We already know that behavioral elicitation methods for risk preference correlate little with each other (see Frey et al., 2017). Thus, it is not straightforward to assume the same underlying relation across different elicitation methods. Ultimately, I think this is not a weakness of the current study, but rather of the field as a whole. I would suggest the authors mention this problem in the discussion and call for more specific cognitive models and theories that build stronger connections between elicitation methods, constructs, and the cognitive processes influencing behavior.

References:

Frey, R., Pedroni, A., Mata, R., Rieskamp, J., & Hertwig, R. (2017). Risk preference shares the psychometric structure of major psychological traits. Science Advances, 3(10), e1701381.

Reviewer #2: I appreciate the authors’ effort to give more weight to the Final Choice. The paper has certainly been improved a lot. However, I am not sure the authors have fully understood my major concern. The point is that you have two moving parts in your Main Choice design. This design flaw makes it impossible to test your hypotheses and hence renders the Main Choice analysis invalid. The reasoning is simple: when one can choose both the type of charity and the amount to donate, one can choose between two different combinations between which one is indifferent. For example, if I am indifferent between $1 to the sure-thing charity and $0.5 to the probabilistic charity, I can choose randomly between the two. Overall, you would then find roughly 50% choosing the sure-thing and 50% choosing the probabilistic charity. This means you cannot test the preference for the charity type under such a design. This is point one.
Point two: suppose people are more likely to donate more under the sure-thing charity. Then, when you excluded zero donations as you preregistered, you excluded more people who chose the probabilistic charity (you can test this with your data). The reason is that there must be some people who are indifferent between donating a positive amount to the sure-thing charity and $0 to the probabilistic charity, and these people chose randomly between the two. In the observed results, you excluded those among them who chose the probabilistic charity, but not those who chose the other type.

Point three: because of the above issue, if you regress charity type preference on other variables, you have to control for the amount donated, as an ex-post control, because you did not control for the donation amount in the experiment. I am not sure what endogeneity this may cause, but if there is any, I do not believe the data from the Main Choice can prove anything.

There is no problem with reporting the analyses as you preregistered. But preregistration does not mean that the Main Choice design and analysis are unproblematic.

**********

7. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files. If you choose “no”, your identity will remain anonymous but your review may still be made public. Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: Yes: Sebastian Olschewski
Reviewer #2: No
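Reviewer #1's second-round request, comparing each single-predictor model against the intercept-only null, can be sketched with the BIC approximation to the Bayes factor (Wagenmakers, 2007), BF01 ≈ exp((BIC1 - BIC0)/2). The sketch below uses simulated data in which the null is true by construction; the measure names and the OLS framing are illustrative assumptions, not the paper's actual models.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 500

# Hypothetical individual-difference measures, none actually related
# to the simulated outcome (so the null is true by construction).
measures = {name: rng.normal(size=n)
            for name in ["risk", "ambiguity", "numeracy", "empathy", "optimism"]}
outcome = rng.normal(size=n)

def bic_ols(X, y):
    """BIC of an OLS fit: n*log(RSS/n) + k*log(n)."""
    beta, _, _, _ = np.linalg.lstsq(X, y, rcond=None)
    rss = np.sum((y - X @ beta) ** 2)
    n_obs, k = X.shape
    return n_obs * np.log(rss / n_obs) + k * np.log(n_obs)

bic_null = bic_ols(np.ones((n, 1)), outcome)
for name, x in measures.items():
    X1 = np.column_stack([np.ones(n), x])
    # BIC approximation to the Bayes factor in favor of the null.
    bf01 = np.exp((bic_ols(X1, outcome) - bic_null) / 2)
    print(f"{name}: BF01 = {bf01:.1f}")  # BF01 > 3 is usually read as evidence for the null
```

Running one such comparison per individual difference measure, rather than one full-model comparison, yields the unambiguous per-hypothesis evidence the reviewer asks for.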
Revision 2
PONE-D-21-35635R2
Sure-Thing vs. Probabilistic Charitable Giving: Experimental Evidence On the Role of Individual Differences in Risky and Ambiguous Charitable Decision-Making
PLOS ONE

Dear Dr. Schoenegger,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process. Please explain the limitations of this kind of study, which uses a web-based sample.

Please submit your revised manuscript by Sep 30 2022 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file. Please include the following items when submitting your revised manuscript:
We look forward to receiving your revised manuscript.

Kind regards,
Junhuan Zhang, PhD
Academic Editor
PLOS ONE

Journal Requirements:

Please review your reference list to ensure that it is complete and correct. If you have cited papers that have been retracted, please include the rationale for doing so in the manuscript text, or remove these references and replace them with relevant current references. Any changes to the reference list should be mentioned in the rebuttal letter that accompanies your revised manuscript. If you need to cite a retracted article, indicate the article’s retracted status in the References list and also include a citation and full reference for the retraction notice.

[Note: HTML markup is below. Please do not edit.]

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the “Comments to the Author” section, enter your conflict of interest statement in the “Confidential to Editor” section, and submit your "Accept" recommendation.

Reviewer #2: All comments have been addressed

**********

2. Is the manuscript technically sound, and do the data support the conclusions?
The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented. Reviewer #2: (No Response) ********** 3. Has the statistical analysis been performed appropriately and rigorously? Reviewer #2: (No Response) ********** 4. Have the authors made all data underlying the findings in their manuscript fully available? The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified. Reviewer #2: (No Response) ********** 5. Is the manuscript presented in an intelligible fashion and written in standard English? PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here. Reviewer #2: (No Response) ********** 6. Review Comments to the Author Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. 
(Please upload your review as an attachment if it exceeds 20,000 characters) Reviewer #2: I thank the authors for their excellent second-round revisions, which I believe make the paper strong and publishable. It is smart to connect the data from the Main and Final treatments. The paper now clearly and appropriately points out the flaws in the Main treatment and puts more weight on the Final treatment. Good work! ********** 7. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files. If you choose “no”, your identity will remain anonymous but your review may still be made public. Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy. Reviewer #2: No ********** [NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.] While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step. |
| Revision 3 |
|
Sure-Thing vs. Probabilistic Charitable Giving: Experimental Evidence On the Role of Individual Differences in Risky and Ambiguous Charitable Decision-Making PONE-D-21-35635R3 Dear Dr. Schoenegger, We’re pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements. Within one week, you’ll receive an e-mail detailing the required amendments. When these have been addressed, you’ll receive a formal acceptance letter and your manuscript will be scheduled for publication. An invoice for payment will follow shortly after the formal acceptance. To ensure an efficient process, please log into Editorial Manager at http://www.editorialmanager.com/pone/, click the 'Update My Information' link at the top of the page, and double check that your user information is up-to-date. If you have any billing related questions, please contact our Author Billing department directly at authorbilling@plos.org. If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact. If they’ll be preparing press materials, please inform our press team as soon as possible -- no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org. Kind regards, Junhuan Zhang, PhD Academic Editor PLOS ONE Additional Editor Comments (optional): Reviewers' comments: |
| Formally Accepted |
|
PONE-D-21-35635R3 Sure-Thing vs. Probabilistic Charitable Giving: Experimental Evidence on the Role of Individual Differences in Risky and Ambiguous Charitable Decision-Making Dear Dr. Schoenegger: I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now with our production department. If your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information please contact onepress@plos.org. If we can help with anything else, please email us at plosone@plos.org. Thank you for submitting your work to PLOS ONE and supporting open access. Kind regards, PLOS ONE Editorial Office Staff on behalf of Dr. Junhuan Zhang Academic Editor PLOS ONE |
Open letter on the publication of peer review reports
PLOS recognizes the benefits of transparency in the peer review process. Therefore, we enable the publication of all of the content of peer review and author responses alongside final, published articles. Reviewers remain anonymous, unless they choose to reveal their names.
We encourage other journals to join us in this initiative. We hope that our action inspires the community, including researchers, research funders, and research institutions, to recognize the benefits of published peer review reports for all parts of the research system.
Learn more at ASAPbio.