Peer Review History
Original Submission: October 25, 2022
PONE-D-22-29409
Individual differences in self-reported lie detection abilities.
PLOS ONE

Dear Dr. Fernandes,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

Please submit your revised manuscript by Jan 05 2023 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

Please include the following items when submitting your revised manuscript:
If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: https://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols.

We look forward to receiving your revised manuscript.

Kind regards,
Peter Karl Jonason
Academic Editor
PLOS ONE

Journal requirements: When submitting your revision, we need you to address these additional requirements.

1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf.

2. Please include your full ethics statement in the ‘Methods’ section of your manuscript file. In your statement, please include the full name of the IRB or ethics committee who approved or waived your study, as well as whether or not you obtained informed written or verbal consent. If consent was waived for your study, please include this information in your statement as well.

[Note: HTML markup is below. Please do not edit.]

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. Is the manuscript technically sound, and do the data support the conclusions? The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Yes
Reviewer #2: Yes
Reviewer #3: Partly

**********

2. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: No
Reviewer #2: Yes
Reviewer #3: Yes

**********

3. Have the authors made all data underlying the findings in their manuscript fully available? The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes
Reviewer #2: Yes
Reviewer #3: Yes

**********

4. Is the manuscript presented in an intelligible fashion and written in standard English? PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous.
Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes
Reviewer #2: Yes
Reviewer #3: No

**********

5. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: The present research used a correlational design to search for variables that may explain the gap between above-average self-assessed lie-detection ability and average actual lie-detection performance. The authors used a large sample in two studies to again show that the bias exists and that people overestimate their lie-detecting ability. The perceived lie-detection ability was compared with different self-report scales, and in most cases, negative results were obtained. The outcomes are essential to enrich our knowledge and guide the exploration of the bias in additional directions. Furthermore, perceived lie-detection ability is vital in face-to-face communication and may guide behavior. Undoubtedly, the topic deserves more empirical attention, and any addition to our limited knowledge in this domain is welcome. Nevertheless, I have some comments that can and should be addressed in a revision.

Any ability is defined on a continuous scale. The dichotomous yes/no response is meaningless and redundant. People may succeed more or less in lie detection. No one is perfect, and no one is 100% inaccurate. Further, the dichotomization triggers odd results, such as 9 participants who answered that they were not able to detect lies and at the same time indicated that they succeeded in about 50-75% of their lie detection attempts. Furthermore, 4 participants who answered yes to the yes/no lie detection ability question indicated success in only 0-25% of their attempts (Table 2). The total frequency line (bottom) demonstrates the overestimated lie-detection bias, and the dichotomization adds nothing in this respect.

The 0 to 100% scale used in Study 2 is more sensitive than the scale used in Study 1. Therefore, why not benefit from the advantage and use parametric statistics? Instead, the authors dichotomized the scale to use a non-parametric chi-square analysis (line 362, on). Note that the sample consisted of 386 participants, which calls for parametric statistics. Further, the contradiction between answering no to the absolute lie detection ability question and receiving an above-chance level score when reporting the lie detection ability persists. Therefore, the manipulation check in Table 5 is meaningless. In addition, only the bottom line in Table 6, which shows the lie-detection bias, is relevant. In sum, I suggest removing the absolute yes/no lie-detection ability test.

Line 476: In Study 2, we also asked them (the participants) to report their ability in comparison to others. Unfortunately, those results were not reported here. The authors indicated that these results would be reported elsewhere. My question is, why? This is an essential addition to the present study, and the comparison between answering with and without reference to others is interesting and important (to my knowledge, the comparison to other people results in lower absolute scores).
Minor points:

Line 55: The current view about frequent lying is that not many people lie frequently, and most people report not lying in the previous 24 h (Daiku et al., 2021; Halevy et al., 2014; Serota & Levine, 2015).

Daiku, Y., Serota, K. B., & Levine, T. R. (2021). A few prolific liars in Japan: Replication and the effects of Dark Triad personality traits. PLoS ONE, 16(4), e0249815. https://doi.org/10.1371/journal.pone.0249815

Halevy, R., Shalvi, S., & Verschuere, B. (2014). Being honest about dishonesty: Correlating self-reports and actual lying. Human Communication Research, 40, 54-72.

Serota, K. B., & Levine, T. R. (2015). A few prolific liars: Variation in the prevalence of lying. Journal of Language and Social Psychology, 34(2), 138-157.

Line 91: "… self-reported confidence in own lie detection abilities was unrelated to individuals' actual lie detection abilities …" Add a reference (for example, Elaad and Gonen-Gal, 2022).

Elaad, E., & Gonen-Gal, Y. (2022). Face-to-face lying: Effects of gender and motivation to deceive. Frontiers in Psychology: Forensic and Legal Psychology, 13, Article 820923. doi: 10.3389/fpsyg.2022.820923

Line 214: "Relative self-reported lie detection ability." Relative to what? To others? To the average person? The authors refer to the extent or degree of the perceived lie detection ability, not to relativeness. This should be corrected.

Lines 230-231, 297: How were the questionnaires completed? Individually or in groups? If in groups, how big were the groups? Please be specific.

Line 236: Separate "detect" and "changes".

Line 282: Replace "Is" with "It".

Line 361: Why is the power set at 0.80?

Line 371: Change "relative" to "relatively" sensitive (the 0 to 100 scale is more sensitive than the categorical scale used in Study 1).

Line 377: The scale ranges from 0 to 100. Remove the percent sign (100%).

Line 488: Elaad and Reizer (2015) used the Big Five, which is a different scale than the Big Six that was used in the present study.

Line 424: Not adding actual lie-detection performance is indeed a limitation. The current trend is to compare lie-detection ability scale scores with actual behavior (see Elaad and Gonen-Gal, 2022).

Reviewer #2: This paper examined individual differences (primarily of personality) in self-reported lie detection ability. The Introduction makes a solid argument for the need for the study. Individual differences mostly had few significant or strong relationships with self-reported lie detection ability. This is interesting, as some have argued that they should. I do not consider the lack of strong or significant effects to be a problem; they are what they are. The paper reports 2 studies that appear to have been properly executed, with samples of a good size and well-selected measures. Data are reported in a way that addresses the questions of the studies, although I'd like to see tables showing the intercorrelations of all predictor variables in the supplementary material to aid in the interpretation of the regressions. The introduction is clear and readable. I have no suggested changes.

Study 1, Method: Participant numbers don't add up, or there may be an error. It is stated in the Participants section that 525 were recruited. In the Data preparation section, it is stated that 239 were excluded, leaving 487; these figures don't match.
Additionally, it is stated in the Participants section that there were 487 participants (95 men) from one university, and this is given as the final sample size in the Data preparation section. Is this a coincidence or an error in copying numbers?

Study 1, Results: I can't find how gender was coded, so the mean and any direction of relationships cannot be interpreted by a reader.

Study 2, Method: The participants (700) minus exclusions (234) do not add up to the total final sample (386). There was an age exclusion, but the numbers are not stated.

Study 2, Results: Why is gender omitted in the Study 2 regression analysis?

Discussion: One issue to be aware of is that a recent meta-analysis of social desirability scales suggests they are of no value (Lanz et al., 2022). Although social desirability is one of only 2 significant predictors in Study 2, it is worth asking the extent to which this is truly meaningful given the new analysis of the relevant measures.

Lanz, L., Thielmann, I., & Gerpott, F. H. (2022). Are social desirability scales desirable? A meta-analytic test of the validity of social desirability scales in the context of prosocial behavior. Journal of Personality, 90(2), 203-221.

Reviewer #3:

Line 122: This research information should be in the Method section, not in the theoretical introduction.

Line 150: Why is the sample of respondents (although numerous) exclusively students? After all, it is known that this is a very specific group when it comes to self-report research.

From line 157: Why such a long description of the individual scales and tools when all the most important information could be included in a table? A lot of information is duplicated.

Table 1: Cronbach's alpha for the psychopathy scale is too low for analysis. Similarly, Cronbach's alpha for the in-group trust scale is too low.

From line 218: Why is there so much information about the procedure and participants in the text itself? There is a diagram illustrating the procedure at the end, and the necessary information can be included in a table. This takes up a lot of space.

From line 243: Maybe I'm repeating myself, but why such a long description again when it could be given in a few sentences or a clear table?

Table 3: Should scales with such low reliability (psychopathy and in-group trust) be included in the analysis?

From line 292: Another student sample, and psychology students at that. Is this group representative for self-report research?

From line 299: Again, why introduce so many new research tools in addition to the ones used in the previous study? Do the authors want to measure everything in this article?

Table 4: Again, we have quite poor reliability on several scales: in-group trust, empathic concern, perspective taking, and the general social desirability score.

From line 338: Do you really need to re-describe in detail what is contained in the table?

Line 406: Aren't 21 self-report variables too many for one article?

Line 415: The authors included so many variables and only 13 percent of the variance is explained? That is a very small amount. Maybe a weak theory?

Line 442: Of the 21 variables, only 2 are significant predictors.

Line 447: Since such a small number of significant variables was unexpected, perhaps it would have been better to start from theory earlier instead of coming to such conclusions only now, after a huge amount of work.

Lines 483-485: I don't understand this line of reasoning. On what theoretical basis is such a conclusion drawn?
Line 529: Instead of a larger sample of respondents, maybe it would be better not to study only psychology students?

Overall, there is a lot of chaos in this article and it lacks a cohesive structure. The authors tackle an interesting problem, but they get lost in the large number of studies and variables introduced, which do not really explain the phenomenon of detecting lies in other people. After reading this article, it is basically unknown what the individual differences mentioned in the title would consist of. The article is too long-winded; it seems like it could be half as long. It contains numerous repetitions and duplications. In the theoretical part, the authors briefly cite a lot of variables and studies related to lie detection, the description of which, instead of explaining this phenomenon, only complicates its understanding. In addition, in the theoretical part, they mix theory with method content that should be in the next section. Again, there is a problem of text systematization and structure.

In the methodological part, the selection of samples is puzzling. Maybe students (including psychology students) are not a representative group for such a common phenomenon as lying and its detection. Then there is the issue of the large number of questionnaires and variables for one article. The article contains numerous mental shortcuts that are not always understandable to the reader. The sentence structure, which is sometimes too complicated, does not help the reader understand the article.

In conclusion, it seems that the authors put a lot of work into this article, but it is too vague for the reader. The manuscript deals with an interesting topic, but it too often uses the term "lie detection" for something the authors have not actually studied. "Lie detection" cannot be examined by self-report. It can be assumed that when the authors wrote about "lie detection", they did not mean it literally. It should be made clear in every part of the manuscript that the theory and the study itself are not about "lie detection", because this variable cannot be evaluated using the methods presented in the manuscript. It would also be advisable to reduce the number of variables to only those that have theoretical justification. The revised text will certainly be very interesting.

**********

6. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files. If you choose “no”, your identity will remain anonymous but your review may still be made public. Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No
Reviewer #2: Yes: Guy J. Curtis
Reviewer #3: No

**********

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free.
Then, log in and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.
Revision 1
Individual differences in self-reported lie detection abilities.
PONE-D-22-29409R1

Dear Dr. Fernandes,

We’re pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements. Within one week, you’ll receive an e-mail detailing the required amendments. When these have been addressed, you’ll receive a formal acceptance letter and your manuscript will be scheduled for publication.

An invoice for payment will follow shortly after the formal acceptance. To ensure an efficient process, please log into Editorial Manager at http://www.editorialmanager.com/pone/, click the 'Update My Information' link at the top of the page, and double check that your user information is up-to-date. If you have any billing related questions, please contact our Author Billing department directly at authorbilling@plos.org.

If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact. If they’ll be preparing press materials, please inform our press team as soon as possible -- no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

Kind regards,
Peter Karl Jonason
Academic Editor
PLOS ONE

Additional Editor Comments (optional):

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the “Comments to the Author” section, enter your conflict of interest statement in the “Confidential to Editor” section, and submit your "Accept" recommendation.

Reviewer #2: All comments have been addressed

**********

2. Is the manuscript technically sound, and do the data support the conclusions? The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #2: Yes

**********

3. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #2: I Don't Know

**********

4. Have the authors made all data underlying the findings in their manuscript fully available? The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #2: No

**********

5. Is the manuscript presented in an intelligible fashion and written in standard English? PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous.
Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #2: Yes

**********

6. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #2: The authors have addressed my comments on the original paper and done a good job of addressing the comments of the other reviewers. My only hesitation is that the Supporting/supplementary materials were not available to me as a reviewer, and I was interested to see the tables because, without them, the results of Study 1 are very sparse.

**********

7. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files. If you choose “no”, your identity will remain anonymous but your review may still be made public. Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #2: Yes: Guy J. Curtis

**********
Formally Accepted
PONE-D-22-29409R1
Individual differences in self-reported lie detection abilities.

Dear Dr. Fernandes:

I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now with our production department.

If your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information please contact onepress@plos.org.

If we can help with anything else, please email us at plosone@plos.org.

Thank you for submitting your work to PLOS ONE and supporting open access.

Kind regards,
PLOS ONE Editorial Office Staff
on behalf of
Dr. Peter Karl Jonason
Academic Editor
PLOS ONE
Open letter on the publication of peer review reports
PLOS recognizes the benefits of transparency in the peer review process. Therefore, we enable the publication of all of the content of peer review and author responses alongside final, published articles. Reviewers remain anonymous, unless they choose to reveal their names.
We encourage other journals to join us in this initiative. We hope that our action inspires the community, including researchers, research funders, and research institutions, to recognize the benefits of published peer review reports for all parts of the research system.
Learn more at ASAPbio.