Peer Review History
Original Submission: June 1, 2020
PONE-D-20-16507

UNSW Face Test: A screening tool for super-recognizers

PLOS ONE

Dear Dr. Dunn,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE's publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

As you can see, there are wide-ranging opinions from the reviewers; however, broadly speaking there is a positive disposition towards this paper. The most important areas to work on are improving the precision of the language throughout the manuscript and ensuring that every claim is based on the data collected. I will not reiterate the reviewers' comments, as they are quite detailed.

Please submit your revised manuscript by Sep 11 2020 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file. Please include the following items when submitting your revised manuscript:
If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: http://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols

We look forward to receiving your revised manuscript.

Kind regards,

Peter James Hills, PhD
Academic Editor
PLOS ONE

Journal Requirements:

When submitting your revision, we need you to address these additional requirements.

1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf and

2. Please provide additional details regarding participant consent, and whether this was obtained from participants for all parts of your study. In the Methods section, please ensure that you have specified (1) whether consent was informed and (2) what type you obtained (for instance, written or verbal). If your study included minors, state whether you obtained consent from parents or guardians. If the need for consent was waived by the ethics committee, please include this information.

3. We note that Figure 1 includes an image of a participant in the study. As per the PLOS ONE policy (http://journals.plos.org/plosone/s/submission-guidelines#loc-human-subjects-research) on papers that include identifying, or potentially identifying, information, the individual(s) or parent(s)/guardian(s) must be informed of the terms of the PLOS open-access (CC-BY) license and provide specific permission for publication of these details under the terms of this license. Please download the Consent Form for Publication in a PLOS Journal (http://journals.plos.org/plosone/s/file?id=8ce6/plos-consent-form-english.pdf). The signed consent form should not be submitted with the manuscript, but should be securely filed in the individual's case notes. Please amend the methods section and ethics statement of the manuscript to explicitly state that the patient/participant has provided consent for publication: "The individual in this manuscript has given written informed consent (as outlined in PLOS consent form) to publish these case details". If you are unable to obtain consent from the subject of the photograph, you will need to remove the figure and any other textual identifying information or case descriptions for this individual.

4. Please note that according to our submission guidelines (http://journals.plos.org/plosone/s/submission-guidelines), outmoded terms and potentially stigmatizing labels should be changed to more current, acceptable terminology. For example: "Caucasian" should be changed to "white" or "of [Western] European descent" (as appropriate).

5. Please include captions for your Supporting Information files at the end of your manuscript, and update any in-text citations to match accordingly. Please see our Supporting Information guidelines for more information: http://journals.plos.org/plosone/s/supporting-information.

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author
1. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Yes
Reviewer #2: Yes
Reviewer #3: Yes
Reviewer #4: Yes
Reviewer #5: Yes
Reviewer #6: No

**********

2. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes
Reviewer #2: Yes
Reviewer #3: Yes
Reviewer #4: Yes
Reviewer #5: Yes
Reviewer #6: No

**********

3. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes
Reviewer #2: Yes
Reviewer #3: Yes
Reviewer #4: Yes
Reviewer #5: Yes
Reviewer #6: No

**********

4. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes
Reviewer #2: Yes
Reviewer #3: Yes
Reviewer #4: Yes
Reviewer #5: Yes
Reviewer #6: Yes

**********

5. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: This manuscript introduces a new test, the UNSW Face Test, that can be used for online screening of super-recognizers – people with extraordinarily good face recognition ability. The test has good psychometric properties for this purpose, is correlated with other measures of face recognition, and not correlated with a car recognition test. The UNSW-FT looks to be a useful addition to the existing set of standardized face tests. I have only minor comments, which I have listed below.

The authors didn't report many separate analyses of the old-new vs sorting task data, but it seems there might be some additional interesting differences between the two sub-tests. If there are, maybe those analyses could be placed into an appendix or the supporting materials.

Line 27: "that" should be "than".

L64: The authors mention that existing tests are unsuitable for online testing, but the CFMT at least has been used successfully in a number of online studies (Wilmer et al., 2010; Shakeshaft et al., 2015). Are there other reasons that make the new test more suitable than existing tests?
L241: Do the authors have RT data to assess whether individual participants' RTs were so long it suggests they were using an unusual strategy (e.g., taking photos of the faces)? RTs might provide a means to jettison questionable participants.

L313: The authors suggest the UNSW-FT is more strongly correlated with face memory measures, and that is plausible, but differences in the sensitivity and reliability of the CFMT and the GFMT may also contribute to the stronger correlations with CFMT. Do the authors have evidence, or does data exist (e.g., pre-existing data on reliability of CFMT vs GFMT), that speaks to the influence of these different factors on the correlations?

L440: Since the M-Turk participants were all from the US, it doesn't sound right to call them "Africans" or "East Asian". I see the analogy to "Caucasians", but that is commonly used to refer to an ethnic group, whereas "Africans" sounds to my ears like someone from Africa regardless of their ancestral history (e.g., a white person from Kenya).

Reviewer #2: The manuscript introduces the UNSW Face Test, an online test of face processing ability intended as an initial screening tool for SRs. The analyses presented aimed to show that the test 1) is of sufficient difficulty to minimise ceiling effects, 2) shows consistency with current tests used to identify SRs, 3) captures performance specific to face processing and not other general cognitive ability, and 4) has retest reliability. The manuscript was written in a clear and concise manner and I feel that the aims have been achieved. I only have the following minor comments, which the authors may find useful.

Minor comments:

1) Line 64: It is unclear why existing tests are unsuitable for online testing.

2) Line 88 onwards: I get the point about general ability, but it seems that the two components of the test were designed to capture different aspects of face processing. Might there be any utility/interest in looking at performance across the two components separately, given that different applications require an individual to be good at specific types of processing?

3) Line 127: What were the criteria used to select the foils?

4) Line 133: I'm not 100% certain of the technical details of image processing, but I think the specification of image resolution is only useful if all original images were resampled from a higher resolution. It might be good to make it clear that the image quality was generally similar across all images.

5) Effects of participant demographics: Since the stimuli were images of undergraduate students, most are likely to fall within the young adult to early middle age range. Maybe this could have influenced the age effect.

6) I assume the same exclusion criteria as for the online sample were used for the other samples?

7) I might have missed this, but I can only see ethics details for the use of the stimuli. Of particular concern is that data from the adolescent age group are reported for the online samples. I'm not familiar with the data protection laws of the different countries, but wanted to highlight this in case it might be an issue.

Reviewer #3: The authors here provide a new test of face recognition ability (UNSW Face Test) that can be used to help detect super-recognizers. The test is freely available for researchers to use. Normative data and demographic norms are provided. The authors report significant correlations between the UNSW test and the CFMT+ and GFMT, and argue that the UNSW Face Test can be used to detect general face recognition ability.
A new, challenging test of face recognition ability is an important development to progress the field of face recognition. I believe that the test will be a useful resource to advance the study and recruitment of super-recognizers.

Comments

1. Line 171, description of participant groups. The wording 'targeting high performers on the GFMT and CFMT+' suggested to me that all online participant groups had either already completed or would complete the CFMT and GFMT. After reading the methods section at the end of the manuscript, my understanding is that this is not the case. Please refine the explanation here.

2. Line 193 – normative data. Was there a reason why the normative data were based on the data from Mechanical Turk rather than from the other participant groups? The norm is lower than in the online samples. There are potential issues here if a lower norm is used in SR recruitment and Mechanical Turk is not the best-matched normative group. This could be linked to motivation, but arguably applied SRs would be more similar to the most motivated control group.

3. Line 308. Is there a reason why convergent reliability is based only on the lab sample when online sample 2 also completed the CFMT and GFMT?

4. In-text report of Fig 6. I suggest adding a reference to variation in performance at the individual level – these differences are apparent in the scatterplots, but only group-level correlations are addressed in the text. Acknowledgement of variability in individual-level results is important, especially for applied use of the screening test. It would also be relevant to acknowledge individual spread if this was also found for the results reported in Table 2 (through an in-text comment or the addition of a scatterplot).

5. Line 312 – UNSW Face Test more strongly correlated with CFMT. It would be interesting to see the breakdown of correlations for Part 1 and Part 2 of the UNSW Face Test with the GFMT and CFMT+. Did Part 1 correlate more strongly with the CFMT, as these tests are more similar?

6. Line 338 – explained why the sample was used. I found this sentence really helpful and would suggest that, where sensible, similar sentences are added for the above results sections, as it was sometimes confusing to keep track of which group was used and why others were not included.

Minor

- Line 27, typo. 'that' -> 'than'
- Line 385. It would be helpful to add context for why criteria can vary (different operational contexts etc.). Otherwise the reader may be unsure about the level of performance that is required to be defined as an SR. A sentence on the lack of clarity over the definition of SR may suffice.

Reviewer #4: This is a good paper that makes a valuable contribution to the field. I expect that the UNSW Face Test will be widely adopted and that the paper will attract many citations. I recommend publication. Two major strengths of the new test, given its purpose, are the unusually high level of difficulty and the use of ambient images. The inclusion of test-retest reliability and relation to other cognitive tests make the scores much easier to interpret.

The abstract would be improved by the following minor changes:

- change "available for free for scientific use" to "available free for scientific use"
- change "an important tool for their initial recruitment" to "an important step in initial recruitment"
- change "before completing confirmatory testing" to "before confirmatory testing"
- change "normative data on the test" to "normative data on our screening test"
- change "stricter selection criteria that" to "stricter selection criteria than"

Other comments:
Page 3. Line 55. "identifying super-recognizers based on self-report alone is unreliable". Zhou & Jenkins (2020) found that high performers consistently underestimate their performance, both in absolute terms (estimated accuracy) and in relation to others (estimated rank). Zhou, X., & Jenkins, R. (2020). Dunning–Kruger effects in face perception. Cognition, 203, 104345.

Page 3. Line 56. Change "This coupled with" to "This constraint, coupled with".

Page 4. Lines 64–66. "However, existing standardised tests of face identification ability are unsuitable for online testing. For example, the CFMT and GFMT are carefully calibrated psychometric tests intended to be reliable measures of a person's ability." It would be helpful to briefly explain why these properties make the tests unsuitable for online testing. They sound like desirable properties on the face of it.

Page 4. Lines 69–75. This argument is not well developed. In particular, the sentence beginning "Further" seems to run against the preceding sentence. The paragraph would benefit from a thoughtful rewrite.

Page 7. Line 136. Change "Participants' scores" to "A participant's score".

Page 8. Lines 155–162. Avoid excessive repetition of "participants", e.g. "Next, they sort", "resulting in a maximum score of 80".

Page 10. Lines 214–217. Briefly explain what the percentage values in the main text refer to, and their relation to Table 1.

Page 14. Lines 296–298. One possible explanation is that the high task difficulty forced participants to guess on a high proportion of trials. There is no reason that 'pure' guesses should be correlated across attempts.

Page 17. Lines 353–355. This is an interesting result. It could be taken to mean that face recognition ability peaks around age 30. However, it could also be taken to mean that it is easier to recognise faces that are (roughly) your own age (i.e. the 'other-age effect'). I don't think the current data, on their own, can distinguish these two possibilities. It might be worth briefly commenting on the ambiguity, if only for the sake of future systematic reviewers and meta-analysts.

Reviewer #5: In this very interesting study the authors present an online test for screening large samples in order to identify persons with extremely high face identity processing skills ("super-recognizers"). Data were collected from a total of five samples, two of which were tested in the lab. Overall, the entire data set includes ~24,000 participants. Some samples additionally performed other face tests (Cambridge Face Memory Test; Glasgow Face Matching Test), as well as tests tapping into other, i.e. non-face-related, tasks (Cambridge Car Memory Test; Matching Familiar Figures Test). The authors conclude from the data that their "UNSW Face Test" is difficult, not prone to ceiling effects, and largely specific to face processing, and is therefore well suited to pre-select potential super-recognizers online, who can then be further tested in the lab to confirm or rule out face super-recognition skills. This tool would be useful both for basic research and for applied purposes.

Overall, I think the authors present and offer (the test is freely available to other researchers and institutions) an extremely useful resource from which many labs can benefit. The manuscript is clearly and concisely written, and the design and the results are straightforward.
Nevertheless, I have some questions and comments, which are outlined below:

Major (note that page numbers refer to manuscript pages, not the complete pdf file):

Page 8, line 161: It would be nice if there were not only results for overall accuracy, but also for hits and false alarms.

Page 10, line 10: Please add information on sample size to the legend of Fig. 2.

Page 10, line 214: It is pretty obvious, but it would still make reading easier if it was made clear that the percentages refer to the differences.

Page 14, line 291: Test-retest reliability is not that great, and might be distorted by using the same version within one week. Maybe the authors could avoid potential repetition effects by calculating test-retest reliability for (post-hoc) constructed parallel versions?

Page 15, Table 2: Isn't it surprising that the correlation between UNSW FT Time 2 and CFMT is (numerically) larger than the correlation between UNSW FT Time 1 and UNSW FT Time 2? At the same time, I was surprised that the correlation between the UNSW FT and the CFMT+, which is an established tool for identifying super-recognizers, is rather low, and lower than for the "normal" CFMT. Do the authors have any explanations for these findings?

Page 20, line 435: Why was it decided to take the normative sample from US residents, when all other samples were from Australia?

Page 20, line 440: I think the categorization of participants is not precise and in some cases probably wrong: Is a person whose ancestors were born and have lived for generations outside of Africa automatically "African"? What is meant here is probably the looks. Along these lines, the term "ethnicity" may also be wrong in this context, because it includes a cultural component. Of course, in real life there is often an association between physical facial appearance and cultural background, but there is no causal relationship. Using the terms race (which is per se a problematic and imprecise term) and ethnicity interchangeably, when actually talking about the physical appearance of a person, means equating culture and biology, which seems very problematic to me.

Page 21, line 452: I would also pass a similar criticism (i.e. lack of precision) on the term "mixed-race". How much and what kind of mix has to be there to count as "mixed-race"? In most regions of the world, most people are more or less mixed, aren't they?

Minor:

Page 5, line 102: delete "and" (or add "were" after "and")
Page 6, line 119: add "group" or "sample" after "demographic"
Page 6, line 119: "for" example (rather than "or")

Reviewer #6: See review comments in the attached document.

**********

6. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files. If you choose "no", your identity will remain anonymous but your review may still be made public. Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No
Reviewer #2: Yes: Nabil Hasshim
Reviewer #3: No
Reviewer #4: No
Reviewer #5: No
Reviewer #6: Yes: Meike Ramon & Jeff Nador

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]
While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, log in and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.
Revision 1
UNSW Face Test: A screening tool for super-recognizers

PONE-D-20-16507R1

Dear Dr. Dunn,

We're pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements. Within one week, you'll receive an e-mail detailing the required amendments. When these have been addressed, you'll receive a formal acceptance letter and your manuscript will be scheduled for publication. An invoice for payment will follow shortly after the formal acceptance.

To ensure an efficient process, please log into Editorial Manager at http://www.editorialmanager.com/pone/, click the 'Update My Information' link at the top of the page, and double check that your user information is up-to-date. If you have any billing related questions, please contact our Author Billing department directly at authorbilling@plos.org.

If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact. If they'll be preparing press materials, please inform our press team as soon as possible -- no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

Kind regards,

Peter James Hills, PhD
Academic Editor
PLOS ONE

Additional Editor Comments (optional):

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the "Comments to the Author" section, enter your conflict of interest statement in the "Confidential to Editor" section, and submit your "Accept" recommendation.

Reviewer #1: All comments have been addressed
Reviewer #3: All comments have been addressed
Reviewer #4: All comments have been addressed
Reviewer #5: All comments have been addressed

**********

2. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Yes
Reviewer #3: Yes
Reviewer #4: (No Response)
Reviewer #5: Yes

**********

3. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes
Reviewer #3: Yes
Reviewer #4: (No Response)
Reviewer #5: Yes

**********

4. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.
Reviewer #1: Yes
Reviewer #3: Yes
Reviewer #4: (No Response)
Reviewer #5: Yes

**********

5. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes
Reviewer #3: Yes
Reviewer #4: (No Response)
Reviewer #5: Yes

**********

6. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: I'm happy with the revisions the authors have made. My only suggestion is that they consider adding information to the abstract that provides some information about how the test works (e.g., unfamiliar faces, match-to-sample sorting, recognition memory).

Reviewer #3: (No Response)

Reviewer #4: (No Response)

Reviewer #5: The authors have addressed all my comments satisfactorily. The only remaining detail is that the wording with respect to the description of the participants should also be adjusted in Figure 1 of the appendix.

**********

7. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files. If you choose "no", your identity will remain anonymous but your review may still be made public. Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: Yes: Brad Duchaine
Reviewer #3: No
Reviewer #4: No
Reviewer #5: No
Formally Accepted
PONE-D-20-16507R1

UNSW Face Test: A screening tool for super-recognizers

Dear Dr. Dunn:

I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now with our production department.

If your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information please contact onepress@plos.org.

If we can help with anything else, please email us at plosone@plos.org.

Thank you for submitting your work to PLOS ONE and supporting open access.

Kind regards,

PLOS ONE Editorial Office Staff
on behalf of
Dr Peter James Hills
Academic Editor
PLOS ONE
Open letter on the publication of peer review reports
PLOS recognizes the benefits of transparency in the peer review process. Therefore, we enable the publication of all of the content of peer review and author responses alongside final, published articles. Reviewers remain anonymous, unless they choose to reveal their names.
We encourage other journals to join us in this initiative. We hope that our action inspires the community, including researchers, research funders, and research institutions, to recognize the benefits of published peer review reports for all parts of the research system.
Learn more at ASAPbio.