
The Jena Eyewitness Research Stimuli (JERS): A database of mock theft videos involving two perpetrators, presented in 2D and VR formats with corresponding 2D and 3D lineup images

Abstract

Empirical investigations into eyewitness identification accuracy typically necessitate the creation of novel stimulus materials, which can be a challenging and time-consuming task. To facilitate this process and promote further research in this domain, we introduce the new Jena Eyewitness Research Stimuli (JERS). They comprise six video sequences depicting a mock theft committed by two different perpetrators, available in both two-dimensional (2D) and 360° format, combined with the corresponding lineup images presented in 2D or three-dimensional (3D) format. Images of one suspect and eight fillers are available for each lineup. We evaluated lineup fairness using the mock eyewitness paradigm and observed a Tredoux's E of 4.687 for Perpetrator 1 and 5.406 for Perpetrator 2. Moreover, no bias towards the perpetrators was observed in the lineups. We incorporated 360° videos and 3D lineup images to encourage the adoption of innovative data formats in experimental investigations of eyewitness accuracy. In particular, compatibility with Virtual Reality (VR) makes JERS a promising tool for advancing eyewitness research by enabling researchers to construct controlled environments that offer observers an immersive experience. JERS is freely accessible for academic purposes via the Open Science Framework (OSF).

1 Introduction

Eyewitness identifications of suspected criminals are often crucial evidence in criminal proceedings. However, eyewitnesses frequently exhibit suboptimal accuracy in identification tasks [1–5], and misidentifications by eyewitnesses represent a major factor contributing to wrongful convictions [6]. Since the 1970s, international experimental psychology research has been dedicated to investigating factors that impact person identification, with the aim of mitigating the occurrence of erroneous identifications [2–4, 7].

The development of appropriate stimulus materials for such experiments typically demands a substantial investment of effort: actors have to be recruited, crime scenes have to be recorded, matching fillers have to be found, and lineup images have to be taken and pilot tested. To encourage and facilitate eyewitness research, we present here the Jena Eyewitness Research Stimuli (JERS). This database offers six video sequences of a mock theft involving two different perpetrators in 2D and 360° format, as well as corresponding lineup images in 2D or 3D format. Decades of eyewitness and memory research have produced contrasting results on the effects of acute stress on memory [8–11], and the impact of eyewitness stress on lineup accuracy remains unclear [12]. For a more realistic, controlled, and feasible manipulation of stress, JERS includes two 360° videos for use in VR. To our current knowledge, JERS represents the first publicly available database of stimuli usable for research on eyewitnesses. An overview of all included materials can be found in Table 1.

2 Stimulus development

At the onset of database development, we recruited two dissimilar-looking young adult males to portray the perpetrators in the mock theft scenarios. Both males engage in amateur theatrical performances during their leisure time. Perpetrator 1 was 21 years old at the time of the initial shooting and lineup image production; Perpetrator 2 was 31 years old. The project commenced with the recording of Video 1 (in Sept 2020) and the capturing of images of the perpetrators and filler individuals for the lineup (Oct 2020 to Jan 2023). Two years later, we recorded Video 2 (in Sept 2022) featuring the same perpetrators. Subsequent paragraphs provide further elaboration on the development of the stimulus materials.

2.1 Videos

All videos were recorded in 360° format using a GoPro™ MAX and subsequently edited with Adobe™ Premiere Pro 2020. This format enables users to look around 360° within the video via the head-mounted display of VR glasses (no further interaction within the video through movement is possible). In this way, VR technology creates a simulated 3D experience of reality and is thus becoming an increasingly important tool for the experimental study of person perception and social interaction [13]. A head-mounted display, placed in front of the user's eyes, constitutes a dual-screen display accompanied by headphones. The two small screens are positioned individually in front of each eye to allow the presentation of separately computed images that emulate a perception of spatial depth. In addition, head-tracking sensors align the displayed view with the user's current perspective by recording head movements [14]. Consequently, VR is associated with high immersion and strong perceptions of authenticity regarding the experience [14, 15]. Compared to other media forms, VR glasses can induce a strong illusion of a fictional situation that previous experimental psychology setups could not achieve [15]. Accordingly, VR also enables the simulation of risky or ethically problematic situations (e.g. eyewitness experiences) in a safe and controllable laboratory environment [13, 16, 17].

2.1.1 Video 1.

Video 1 shows a mock theft scene in a park, captured from the perspective of a bystander who witnessed the crime. The video features the observer on a lawn with seven other individuals (4 females, 3 males; average age = 21.43, SD = 3.46, min = 16, max = 25). All recorded conversations are in German. Perpetrator 1 and Perpetrator 2 appear during the course of the video. Perpetrator 2 initiates a social interaction by approaching a seated individual and requesting a light for his cigarette. The seated individual searches for a lighter in their pocket but is unable to find one and offers an apology. Meanwhile, Perpetrator 1 kneels and covertly picks up an invisible object from the seated individual's blanket, subsequently stashing the object in his pants pocket. Perpetrator 1 informs Perpetrator 2 that he possesses a lighter at his residence, and the two depart. Subsequently, the target person realizes the theft of their wallet, rises from the ground, shouts "Hey, my wallet!" and starts to pursue the two perpetrators, who are now fleeing the scene. The victim's loud shouting in the 360° format of the video is intended to direct the participant's attention towards the crime and the perpetrators. This sequence was captured twice, once with a male and once with a female victim, to facilitate potential exploration of any influences of victim gender. All actors and actresses involved received a financial allowance of € 20 for the shoot. The two videos have a length of 1:45 min and 1:47 min respectively. The perpetrators are shown in the various sequences and formats from a greater distance (> 3 m) and can be seen for between 45 and 49 s. Their faces can be seen for between 12 and 35 s. For detailed information on the viewing time per actor see S1 Table in S1 File. We considered this time long enough to recognize the perpetrators, as experimental research has shown that even shorter observation times (5 s or more) can be sufficient for correct face recognition [18].

2.1.2 Video 2.

To facilitate international implementation, to remove language-specific dialogue, and to offer videos showing only one perpetrator, four new video sequences were recorded two years later. These videos depict a mock theft occurring in a waiting room. The actors were the same perpetrators as in Video 1. Items of interior decoration, including a sofa, multiple chairs, and assorted plants, are visible in the recorded footage. At the onset of the video, depending on the video sequence, either Perpetrator 1, Perpetrator 2, or both (aged 22 and 33, respectively, at this point), enter the room. The perpetrators proceed to survey the room before selecting a book from a nearby side table and occupying the sofa to peruse its contents. Upon noticing a backpack leaning against the sofa, their focus shifts from the book to the backpack. The specific sequence of events captured in the video determines which perpetrator is responsible for opening the backpack and retrieving a wallet. The perpetrators then leave the room. All videos have a length ranging between 1:15–2:29 min. The perpetrators are shown in the various sequences and formats from a closer distance (< 3 m) and can be seen for between 72 and 143 s. Their faces can be seen for between 53 and 96 s. For detailed information on the viewing time per actor see S1 Table in S1 File. Both actors received a financial allowance of € 20 for this second shoot.

2.2 Lineup sample

Lineup images were produced for Perpetrator 1 and Perpetrator 2 in 2D and 3D formats. We then photographed eight different German fillers for each lineup, in order to enable an eight-person lineup in which the perpetrator could be present or absent. All images (2 perpetrators, 16 fillers) were shot at the Institute of General Psychology and Cognitive Neuroscience at the Friedrich Schiller University Jena in a standardized form. The fillers were recruited via announcements, flyers and through social networks. All participants received a financial allowance of € 15.

2.2.1 2D images.

The images were captured using a Canon™ 80D camera, resulting in a 2D format. All participants were seated in front of a green background, wearing a plain black round-neck shirt. Head-and-shoulders-images were obtained of each person from the following perspectives: profile shot left, half profile shot left (45° view), frontal shot, half profile shot right (45° view), profile shot right. The camera and exposure setup are shown in Fig 1. We customized and edited the images using GIMP™ 2.10.32 and Adobe™ Photoshop 23.4.2. Fig 2 shows examples of the 2D lineup images.

Fig 2. Examples of 2D lineup images.

The depicted face shows a filler for Perpetrator 2.

https://doi.org/10.1371/journal.pone.0295033.g002

2.2.2 3D images.

One 3D object file was generated for each of the two perpetrators and all fillers (n = 16). Using DI3Dcapture™ (Dimensional Imaging, Glasgow, UK, version 5.2.1), each face was captured by four cameras simultaneously, producing four images for each face that were then interpolated to create a 3D object (.obj). Fig 3 shows the setup of the DI3Dcapture™ system. Example 3D images can be seen in Fig 4. The editing of the 3D objects was performed utilizing the software program Blender 3.2.1.

Fig 3. Setup of DI3Dcapture™ system for 3D image creation.

https://doi.org/10.1371/journal.pone.0295033.g003

Fig 4. Examples of 3D images (filler for Perpetrator 2; the .obj files can be rotated in 3D, unlike this static figure).

https://doi.org/10.1371/journal.pone.0295033.g004

2.3 Testing lineup fairness

The nominal size pertains to the number of persons presented in the lineup, while the functional size refers to the number of individuals in the lineup who bear a significant resemblance to the actual suspect [19].

2.3.1 Nominal size.

In terms of nominal size, a higher number of fillers lowers the risk of a suspect being identified by chance [20]. According to [21], and reflecting common practice in the field, it is advisable to include only one suspect per lineup, with a minimum of five appropriate fillers (see also [22]). Meanwhile, a meta-analysis conducted by Juncu and Fitzgerald [23] has revealed that the inclusion of a greater number of lineup members can enhance the discriminability of suspects; based on their findings, these authors also recommend a minimum of five fillers. Given the significant variations in the recommended minimum number of lineup members across different countries [24], we included lineups with one suspect and eight fillers in the present stimuli. As such, the utilization of these stimuli may be considered appropriate in various countries.

2.3.2 Functional size and lineup bias.

Research has shown that distinctive faces are more easily remembered and recognized than non-distinctive, typical faces [25, 26]. To promote fairness and enhance the functional size of a lineup, it is recommended to avoid face images that are highly distinctive [18]. Moreover, it is important to ensure that all fillers in the lineup match any characteristics described by the witness regarding the perpetrator [21, 27]. As such, fairness is a crucial factor to consider in the design of a lineup. In a fair lineup, all faces should have similar a priori probabilities of being selected, with none of the faces, including the suspect or any fillers, standing out as being more likely to be chosen, particularly when the target is absent [28–30].

A test of fairness is only available for Video 1. Here we used a mock eyewitness paradigm [31, 32]. Accordingly, six independent raters (3 male, 3 female; mean age = 27.5, SD = 3.3, min = 24, max = 32; all born and raised in Germany) were instructed via an anonymous online survey (in Oct 2020) to describe the faces of both perpetrators in their own words on the basis of a sequence of Video 1. They were requested to imagine being called to the police station as witnesses and asked to provide a description of the perpetrators. The attributes hair and beard style and color, skin color, glasses, age, gender, or special unchangeable distinctive features were given as examples to help verbalize the descriptions. The stimulus used was a 27-second 2D video sequence showing both perpetrators. Raters were allowed to watch the sequence as many times as necessary. All raters' descriptions were summarized and combined into one modal description for each perpetrator, in which only descriptions mentioned by multiple raters were selected. However, as the majority of raters incorrectly reported an absence of beards for both perpetrators, all raters' descriptions were included; this approach was taken to avoid a lineup bias caused by the erroneous absent-beard description. An overview of the summarized and modal descriptions can be found in S2–S5 Tables in S1 File.

In accordance with the match-to-description strategy, fillers were recruited based on the provided modal descriptions. Subsequently, 130 mock witnesses (not part of the participants described in Section 2.4.1) were instructed to evaluate the respective sets of images (simultaneous format) via an anonymous online survey (Feb to Mar 2023) by choosing the face that best matched the modal description given for each perpetrator. Note that this sample considerably exceeds the minimum sample size of 30–50 that is often recommended in this context, which we considered advantageous because choice distributions should become more representative with larger numbers of witnesses (for a similar approach, cf. [33]). The sample was 72.3% female (n = 94) and 27.7% male (n = 36), with a mean age of 30.6 years (SD = 9.4, min = 18, max = 63). The majority of witnesses were born and raised in Germany (93.8%, n = 122), with 3.8% in Austria (n = 5) and 2.3% in Switzerland (n = 3). To determine the effective size of the lineup, Tredoux's E was calculated [27, 34]. This value can vary between 1 and 9 for the present lineups, where a value of 1 represents a very unfair and a value of 9 a very fair lineup (ideal performance equivalent to the nominal lineup size; cf. [35]). Tredoux's E was 4.687 for the lineup of Perpetrator 1 and 5.406 for the lineup of Perpetrator 2. Detailed data concerning the choice frequencies of the mock witnesses are shown in Table 2; some fillers were selected more often than others. Note that in many real lineup situations, certain fillers are also falsely identified more often than others (e.g., [36]). This situation seems inevitable to some degree, especially as long as no commonly accepted metric of facial similarity exists and actual lineup fillers are selected from a limited subset of real people.
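For readers who wish to recompute fairness indices for their own lineup configurations, Tredoux's E is commonly calculated as the reciprocal of the sum of squared choice proportions across all lineup members. A minimal sketch follows; note that the choice counts used here are hypothetical placeholders, not the actual frequencies from Table 2:

```python
def tredoux_e(choice_counts):
    """Tredoux's E (effective lineup size): the reciprocal of the sum of
    squared choice proportions across all lineup members."""
    n = sum(choice_counts)
    return 1.0 / sum((c / n) ** 2 for c in choice_counts)

# HYPOTHETICAL choice counts for a nine-member lineup (1 suspect + 8 fillers)
# evaluated by 130 mock witnesses; not the actual Table 2 data.
counts = [9, 25, 30, 20, 15, 10, 9, 7, 5]
print(round(tredoux_e(counts), 3))  # between 1 (very unfair) and 9 (very fair)
```

A perfectly fair nine-member lineup, with choices spread evenly across all members, would yield E = 9; values near 1 indicate that a single member attracts almost all choices.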

Table 2. Distribution of mock witness choices in the lineups (n = 130).

https://doi.org/10.1371/journal.pone.0295033.t002

We examined a possible bias against the perpetrators in each lineup using two measures. First, the proportion of mock witnesses who chose the original perpetrator as matching the modal description was tested against the probability of the respective perpetrator being selected by chance, using a binomial distribution. In the online survey, 9 witnesses chose Perpetrator 1 and 17 chose Perpetrator 2. Given a sample size of n = 130, a significance level of 0.05, and a chance probability of 0.11, the acceptance range of the binomial distribution falls between the values 8 and 22. Accordingly, the proportion of mock witness choices does not differ significantly from chance in either lineup.
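The binomial check above can be reproduced with the Python standard library alone. The sketch below uses an exact two-tailed binomial test (doubling the smaller tail), which is one common convention and an assumption on our part, since the derivation of the acceptance range is not spelled out in the text:

```python
from math import comb

def binom_cdf(k, n, p):
    """Exact Binomial(n, p) CDF at k, via math.comb (Python >= 3.8)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

def two_tailed_p(x, n, p):
    """Two-tailed p-value obtained by doubling the smaller exact tail
    (one common convention; assumed here)."""
    lower = binom_cdf(x, n, p)
    upper = 1.0 - binom_cdf(x - 1, n, p)
    return min(1.0, 2.0 * min(lower, upper))

n, p = 130, 0.11          # 130 mock witnesses, chance probability of 0.11
for x in (9, 17):         # observed choices of Perpetrator 1 and Perpetrator 2
    print(x, round(two_tailed_p(x, n, p), 3))
# Both p-values exceed .05, consistent with the observed counts lying
# inside the reported acceptance range of 8 to 22.
```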

The second measure compares the percentage of mock witness choices with the chance probability. Ideal performance corresponds to chance, meaning 11% for the present nine-member lineup. The percentages of mock witness choices were 6.9% for Perpetrator 1 (95% CI = 2.6%, 11.3%) and 13.1% for Perpetrator 2 (95% CI = 7.3%, 18.9%). Therefore, there seems to be no bias towards the perpetrators in the lineups. The S1 File contains further details on the characteristics of the mock witnesses, as well as additional data on age and the allocation of lineup members from the mock witness paradigm (Table 2) to the available database (see S6 and S7 Tables in S1 File).
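The reported intervals are consistent with a standard Wald 95% confidence interval for a proportion; a small sketch to reproduce them:

```python
from math import sqrt

def wald_ci_percent(x, n, z=1.96):
    """Standard (Wald) 95% confidence interval for a proportion,
    returned in percent."""
    p = x / n
    half = z * sqrt(p * (1 - p) / n)
    return 100 * (p - half), 100 * (p + half)

# 9 of 130 mock witnesses chose Perpetrator 1, 17 of 130 chose Perpetrator 2.
for x in (9, 17):
    lo, hi = wald_ci_percent(x, 130)
    print(f"{100 * x / 130:.1f}% (95% CI = {lo:.1f}%, {hi:.1f}%)")
# Prints 6.9% (95% CI = 2.6%, 11.3%) and 13.1% (95% CI = 7.3%, 18.9%),
# matching the values reported above.
```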

Given the substantial differences between the task of mock witnesses and the task of actual eyewitnesses (cf. [37, 38]), as well as the potential impact of lineup composition variables such as nominal size and target presence or absence, which can be flexibly manipulated using JERS, we advise future users of the JERS stimuli to conduct a comprehensive fairness evaluation of any specific lineup version they use, based on the lineup data collected (e.g. resultant lineup size, see [37]).

2.4 Additional material: Induction of stress

The utilization of VR technology offers new opportunities for experimental psychological research owing to its ability to engender a heightened sense of authenticity and realism in user experiences [14, 15]. To investigate the impact of stress on the accuracy of eyewitness testimony using a more realistic yet straightforward approach, we recorded two additional videos enabling the manipulation of participants' perceived stress level. These include one "high stress" video with two scary characters designed to increase stress levels. It depicts an old, sparsely illuminated basement of an apartment building, accompanied by dark, ominous background music to create a foreboding atmosphere. The video features intermittent periods of darkness. At some point, two individuals wearing white masks appear at various distances from the observer. The final scene of the video shows one of these masked characters running towards the observer. The second, "low stress" video shows three forest scenes with the sound of birds in the background, which is intended to have a calming effect on participants. Both videos are in 360° format and viewable with VR glasses. The scary video has a length of 2:48 min and the forest video 2:49 min. Fig 5 presents various screenshots illustrating the two films.

Fig 5. Screenshots of additional material from the “high stress” video with two scenes from the basement with (top image, 1st) and without (2nd) the scary character, and from the “low stress” video (3rd, and bottom image, 4th) with two different forest scenes.

https://doi.org/10.1371/journal.pone.0295033.g005

2.4.1 Validation of stress induction.

To validate the manipulation of stress level induced by the two videos, we used the Valence and Arousal scales of the Self-Assessment Manikin [39]. For this purpose, 49 participants (10 males, 39 females; mean age = 24.88, SD = 10.28, min = 18, max = 56; additional descriptive data can be found in S8 Table in S1 File) were recruited to watch the "low stress" video followed by the "high stress" video (Nov 2022 to Jan 2023). Oculus™ Rift VR glasses and a desktop computer with an NVIDIA GeForce GTX 1060 6GB graphics card were used to present these stimuli. Valence and Arousal measures were assessed at three different time points: prior to viewing the videos (t1), after watching the "low stress" video (t2), and after watching the "high stress" video (t3). After data collection, the authors did not have access to information that could have identified individual participants. The results displayed in Fig 6 indicate that significant differences in Valence and Arousal were observed between t1 and t2, t2 and t3, and t1 and t3. Detailed results can be found in S9 Table in S1 File. Furthermore, Fig 7 illustrates that watching the different forest scenes seemed to lead to higher pleasure and calmness, whereas the scary video seemed to induce higher displeasure and arousal. Note that to validate the present stress induction, we only assessed participants' subjective perception of their emotions. Given the multifaceted nature of stress (in terms of subjective, behavioral, and physiological responses), further validation of stress induction by these videos might be desirable and could be collected when using these videos in the future. This could include assessing physiological stress indicators (e.g., heart rate, blood pressure, skin conductance or cortisol levels), or behavioral measures (e.g., quantification of avoidance behavior via 3D motion tracking).

Fig 6. Group differences in Valence and Arousal prior to viewing the videos (t1), after watching the “low stress” video (t2), and after watching the “high stress” video (t3); n = 49, two-tailed paired tests.

https://doi.org/10.1371/journal.pone.0295033.g006

Fig 7. Data distribution (n = 49) prior to viewing the videos (t1), after watching the “low stress” video (t2), and after watching the “high stress” video (t3).

https://doi.org/10.1371/journal.pone.0295033.g007

2.5 Ethics statement and research on human participants

The research reported in this article has been approved by the Ethics Committee of the Faculty for Social and Behavioral Sciences of Friedrich Schiller University (Reg.-Nr. FSV 20/035). All participants provided written informed consent to their participation; moreover, all participants appearing in the images or videos of the stimulus database provided written informed consent to the non-commercial use of these stimuli in scientific research, and to their inclusion and publication in the context of this database. The individual in this manuscript has given written informed consent (as outlined in PLOS consent form) to publish these case details.

3 Potential use of the stimuli

There is high and increasing demand for, and use of, lineup stimuli in psychological eyewitness research. For instance, a Web of Science search of scientific articles using the keyword 'lineup' (conducted on 22 May 2023) returned more than 1,200 papers published since January 1st, 2000, as compared with 360 papers published in the years 1975–1999. However, as the creation and validation of appropriate stimuli take substantial time and effort, we hope that by providing JERS to the research community, we will facilitate further eyewitness research and encourage the use of new data formats in eyewitness accuracy studies. The utilization of VR videos and 3D lineup images in particular appears to be underrepresented despite its potential, which may be attributed to the perceived high levels of expertise, equipment, and resources required for generating such stimuli. JERS is freely accessible for academic purposes via the Open Science Framework (OSF; https://osf.io/c29h6/; DOI: 10.17605/OSF.IO/C29H6).

The importance of conducting eyewitness research stems from the recognition that eyewitness reports may be vulnerable to inaccuracies, despite their considerable influence in criminal justice proceedings [24]. Consequently, it is crucial to comprehend the precision of eyewitness reports and facilitate their interpretation to guarantee equitable verdicts. JERS provides a diverse range of possibilities for conducting eyewitness accuracy research, encompassing both laboratory and online settings. The availability of different video variants allows the investigation of various factors in addition to lineup perpetrator identification, such as the impact of the victim's gender, the presence of single or multiple perpetrators (e.g., see [40]), or the distinction between 2D and 360° videos shown under conventional versus VR conditions (e.g., [41, 42]). Video 1, with long-distance shots of the perpetrators, represents a realistic scenario of observing a theft from the perspective of an unrelated witness sitting in a park. Also, to allow for more specific investigations that require a longer time for facial encoding (e.g., recognition of facial expressions, emotions, or specific facial features), Video 2 shows extended illustrations of the perpetrators from a short distance. Please note when using the videos that these theft scenes are not necessarily representative of other crimes, especially those involving more violence or weapons. Also noteworthy is the circumstance that Video 2 was recorded two years after the lineup images were captured, so differences in the appearance of the perpetrators are visible. Accordingly, this enables the investigation of eyewitness accuracy despite changes in appearance, for instance due to different hair lengths. A real-life example could be that an individual who has already committed a crime, and of whom the police already have photographs, becomes a new suspect, a situation in which witnesses may be shown the older images.

Furthermore, while we appreciate that some researchers who wish to use the JERS stimuli to investigate effects of stress or emotions on eyewitness memory might use their own or customized material (e.g., for a large stimulus database, see [43]), the integration of additional stress-induction materials into JERS has potential advantages of compatibility, including seamless integration into VR presentation settings.

In fact, VR represents a promising tool for eyewitness research, as it enables researchers to create controlled environments that provide observers with an immersive experience [13–15]. The use of VR facilitates the creation of realistic situations, allowing participants to feel like eyewitnesses to a crime or event.

In addition, we emphasize that our present inclusion of 3D lineup images opens a promising avenue for research, and we hope that researchers using the JERS will find this a useful resource which warrants further investigation. Law enforcement agencies are increasingly turning to various forms of 3D image usage, in addition to the commonly employed 2D photographs of suspects [44]. For instance, in Tokyo, the Japanese police began using 3D images when photographing arrested individuals as early as 2016 [45], while in Germany, researchers have worked on the GES-3D project (2012–2015) to develop a multi-biometric system that utilizes 3D facial images for the identification of crime suspects, even from low-quality photo or video data [46]. Consequently, more widespread use of 3D image recordings as lineup images appears to be only a matter of time. From an experimental perspective, 3D images of faces permit researchers to display lineup faces across a large range of viewpoints as required by their specific research question. From an applied perspective, the potential for additional information available in 3D images [47] to enhance eyewitness identification accuracy (compared with 2D images) will be of primary interest.

Supporting information

S1 Checklist. STROBE statement—checklist of items that should be included in reports of observational studies.

https://doi.org/10.1371/journal.pone.0295033.s001

(PDF)

Acknowledgments

The authors express their gratitude to Rebecca Wientzek for her valuable support in capturing the photographs, and to Anne Fischer for her diligent editing of the images. U.K. has been supported by a PhD fellowship “Thuringian Landesgraduiertenstipendium”.

References

  1. Brewer N., & Wells G. L. (2011). Eyewitness Identification. Current Directions in Psychological Science, 20(1), 24–27. https://doi.org/10.1177/0963721410389169.
  2. Bruce V., Burton M., & Hancock P. (2012). Remembering faces. In Lindsay R. C. L., Ross D. F., Read J. D., & Toglia M. P. (Eds.), The Handbook of Eyewitness Psychology. Volume 2, Memory for People (pp. 87–100). Routledge: New York, Abingdon.
  3. Lindsay R. C. L., Ross D. F., Read J. D., & Toglia M. P. (2012). The Handbook of Eyewitness Psychology. Volume 2, Memory for People. Routledge: New York, Abingdon.
  4. O'Donohue W. T., & Levensky E. R. (2004). Handbook of Forensic Psychology. Elsevier Academic Press: San Diego, California, USA.
  5. Wixted J. T., Mickes L., & Fisher R. P. (2018). Rethinking the Reliability of Eyewitness Memory. Perspectives on Psychological Science, 13(3), 324–335. pmid:29716454
  6. Innocence Project (2023, March 24). Eyewitness Identification Reform. https://innocenceproject.org/eyewitness-identification-reform/.
  7. Wells G. L., & Olson E. A. (2003). Eyewitness Testimony. Annual Review of Psychology, 54(1), 277–295. pmid:12209024
  8. Deffenbacher K. A., Bornstein B. H., Penrod S. D., & McGorty E. K. (2004). A meta-analytic review of the effects of high stress on eyewitness memory. Law and Human Behavior, 28(6), 687–706. pmid:15732653
  9. Marr C., Sauerland M., Otgaar H., Quaedflieg C. W. E. M., & Hope L. (2021a). Mitigating the negative effects of retrieval stress on memory: an arousal reappraisal intervention. Memory, 29(3), 330–344. pmid:33686922
  10. Price H. L., Tottenham L. S., Hatin B., Fitzgerald R. J., & Rubínová E. (2022). Effects of stress on eyewitness identification in the laboratory. Applied Cognitive Psychology, 36(1), 191–202. https://doi.org/10.1002/acp.3910.
  11. Sauerland M., Raymaekers L. H. C., Otgaar H., Memon A., Waltjen T. T., Nivo M., et al. (2016). Stress, stress‐induced cortisol responses, and eyewitness identification performance. Behavioral Sciences & the Law, 34(4), 580–594. pmid:27417874
  12. Marr C., Sauerland M., Otgaar H., Quaedflieg C. W. E. M., & Hope L. (2021b). The effects of acute stress on eyewitness memory: an integrative review for eyewitness researchers. Memory, 29(8), 1091–1100. pmid:34309476
  13. Pan X. N., & Hamilton A. F. D. (2018). Why and how to use virtual reality to study human social interaction: The challenges of exploring a new research landscape. British Journal of Psychology, 109(3), 395–417. pmid:29504117
  14. Engelmann N. (2018). Virtual Reality Gaming: Potential der Technologie für die Welt der digitalen Spiele [Virtual Reality Gaming: Potential of the technology for the world of digital games]. Tectum Wissenschaftsverlag. https://dx.doi.org/10.5771/9783828870611.
  15. Madary M., & Metzinger T. K. (2016). Recommendations for Good Scientific Practice and the Consumers of VR-Technology. Frontiers in Robotics and AI, 3(Suppl. 3), 235. https://doi.org/10.3389/frobt.2016.00003.
  16. Fromberger P., Jordan K., & Müller J. L. (2014). Anwendung virtueller Realitäten in der forensischen Psychiatrie. Ein neues Paradigma? [Use of virtual reality in forensic psychiatry. A new paradigm?]. Der Nervenarzt, 85(3), 298–303. https://doi.org/10.1007/s00115-013-3904-7.
  17. Głomb K. (2022). How to improve eyewitness testimony research: theoretical and methodological concerns about experiments on the impact of emotions on memory performance. Psychological Research, 86(1), 1–11. pmid:33604724
  18. Sporer S. L., Sauerland M., & Kocab K. (2014). Personenidentifizierung [Person identification]. In Bliesner T., Lösel F., & Köhnken G. (Eds.), Lehrbuch Rechtspsychologie (pp. 156–182). Verlag Hans Huber: Bern.
  19. Wells G. L., Leippe M. R., & Ostrom T. M. (1979). Guidelines for empirically assessing the fairness of a lineup. Law and Human Behavior, 3(4), 285–293.
  20. Köhnken G. (1990). Fehlerquellen im Gegenüberstellungsverfahren [Error sources in the lineup procedure]. In Köhnken G., & Sporer S. L. (Eds.), Identifizierung von Tatverdächtigen durch Augenzeugen [Identification of suspects by eyewitnesses] (pp. 157–177). Verlag für Angewandte Psychologie.
  21. Wells G. L., Kovera M. B., Douglass A. B., Brewer N., Meissner C. A., & Wixted J. T. (2020). Policy and procedure recommendations for the collection and preservation of eyewitness identification evidence. Law and Human Behavior, 44(1), 3–36. pmid:32027160
  22. Wells G. L., Small M., Penrod S., Malpass R. S., Fulero S. M., & Brimacombe C. A. E. (1998). Eyewitness identification procedures: Recommendations for lineups and photospreads. Law and Human Behavior, 22(6), 603–647. https://doi.org/10.1023/A:1025750605807.
  23. Juncu S., & Fitzgerald R. J. (2021). A meta-analysis of lineup size effects on eyewitness identification. Psychology, Public Policy, and Law, 27(3), 295–315. https://doi.org/10.1037/law0000311.
  24. Fitzgerald R. J., Rubínová E., & Juncu S. (2021). Eyewitness identification around the world. In Methods, measures, and theories in eyewitness identification tasks (pp. 294–322). Routledge.
  25. Shepherd J. W., Gibling F., & Ellis H. D. (1991). The effects of distinctiveness, presentation time and delay on face recognition. European Journal of Cognitive Psychology, 3(1), 137–145. https://doi.org/10.1080/09541449108406223.
  26. 26. Dewhurst S. A., Hay D. C. & Wickham L. H. V. (2005). Distinctiveness, typicality, and recollective experience in face recognition: A principal components analysis, Psychonomic Bulletin & Review, 12(6), 1032–1037. pmid:16615324
  27. 27. Sporer S. L. & Sauerland M. (2008). Personenidentifizierung [Person identification]. Forensische Psychiatrie, Psychologie, Kriminologie, 2(1), 28–36. https://doi.org/10.1007/s11757-008-0057-9.
  28. 28. Carlson C. A., Gronlund S. D., & Clark S. E. (2008). Lineup composition, suspect position, and the sequential lineup advantage. Journal of Experimental Psychology: Applied, 14, 118–128. pmid:18590368
  29. 29. Gronlund S. D., Carlson C. A., Dailey S. B., & Goodsell C. A. (2009). Robustness of the sequential lineup advantage. Journal of Experimental Psychology: Applied, 15, 140–152. pmid:19586253
  30. 30. Malpass R. S., Tredoux C. G., & McQuiston-Surrett D. (2007). Lineup Construction and Lineup Fairness. In Lindsay R. C. L., Ross D. F., Read J. D. & Toglia M. P(Eds.), The Handbook of Eyewitness Psychology, Volume 2, Memory for People (pp. 155–178). Lawrence Erlbaum & Associates.
  31. 31. Doob A. N., & Kirshenbaum H. M. (1973). Bias in police lineups—Partial remembering. Journal of Police Science and Administration, 1, 287–293.
  32. 32. Malpass R. S., & Lindsay R. C. L. (1999). Measuring lineup fairness. Applied Cognitive Psychology, 13, 1–7. https://doi.org/10.1002/(SICI)1099-0720(199911)13:1+<S1::AID-ACP678>3.0.CO;2-9.
  33. 33. Dobolyi D. G., & Dodson C. S. (2013). Eyewitness confidence in simultaneous and sequential lineups: A criterion shift account for sequential mistaken identification overconfidence. Journal of Experimental Psychology: Applied, 19(4), 345–357. pmid:24188335
  34. Tredoux C. G. (1998). Statistical inference on measures of lineup fairness. Law and Human Behavior, 22(2), 217–237.
  35. Carlson C. A., Jones A. R., Whittington J. E., Lockamyeir R. F., Carlson M. A., & Wooten A. R. (2019). Lineup fairness: propitious heterogeneity and the diagnostic feature-detection hypothesis. Cognitive Research: Principles and Implications, 4(1), 20. pmid:31197501
  36. Bruce V., Henderson Z., Greenwood K., Hancock P. J., Burton A. M., & Miller P. (1999). Verification of face identities from images captured on video. Journal of Experimental Psychology: Applied, 5(4), 339–360. https://doi.org/10.1037/1076-898X.5.4.339.
  37. Quigley-McBride A., & Wells G. L. (2021). Methodological considerations in eyewitness identification experiments. In Methods, measures, and theories in eyewitness identification tasks (pp. 85–112). Routledge.
  38. Menne N. M., Winter K., Bell R., & Buchner A. (2023). Measuring lineup fairness from eyewitness identification data using a multinomial processing tree model. Scientific Reports, 13, 6290. pmid:37072473
  39. Bradley M. M., & Lang P. J. (1994). Measuring emotion: the self-assessment manikin and the semantic differential. Journal of Behavior Therapy and Experimental Psychiatry, 25(1), 49–59. pmid:7962581
  40. Megreya A. M., & Bindemann M. (2012). Identification accuracy for single- and double-perpetrator crimes: Does accomplice gender matter? British Journal of Psychology, 103, 439–453. pmid:23034106
  41. Bindemann M., Fysh M. C., Trifonova I. V., Allen J., McCall C., & Burton A. M. (2022). Face identification in the laboratory and in virtual worlds. Journal of Applied Research in Memory and Cognition, 11(1), 120–134. https://doi.org/10.1016/j.jarmac.2021.07.010.
  42. Nyman T. J., Antfolk J., Lampinen J. M., Korkman J., & Santtila P. (2020). Eyewitness identifications after witnessing threatening and non-threatening scenes in 360-degree virtual reality (or 2D) from first and third person perspectives. PLOS ONE, 15(9), Article e0238292. pmid:32877440
  43. Schaefer A., Nils F., Sanchez X., & Philippot P. (2010). Assessing the effectiveness of a large database of emotion-eliciting films: A new tool for emotion researchers. Cognition & Emotion, 24(7), 1153. https://doi.org/10.1080/02699930903274322.
  44. Weinert S. (2015). Die Verletzung von Qualitätssicherungsvorschriften im Strafverfahren. Eine Lücke in der Lehre von den Beweisverwertungsverboten [The violation of quality assurance rules in criminal proceedings: A gap in the theory of the prohibition of the use of evidence]. Nomos.
  45. BBC (2016, January 25). Tokyo police roll out 3D mug shots. https://www.bbc.com/news/blogs-news-from-elsewhere-35401052.
  46. Bundesministerium für Bildung und Forschung (BMBF, 2023, March 24). GES-3D: Multi-Biometrische Gesichtserkennung [GES-3D: Multi-biometric facial recognition]. https://www.sifo.de/sifo/de/projekte/schutz-vor-kriminalitaet-und-terrorismus/biometrie/ges-3d/ges-3d-multi-biometrische-gesichtserkennung.html?nn=263698.
  47. Zeng J., Qiu X., Shi S., Bian X., & Zhu H. (2020). 3D imaging techniques for forensic identification of human images. In Proceedings of the 4th International Conference on Computer Science and Application Engineering (pp. 1–5). Association for Computing Machinery. https://doi.org/10.1145/3424978.3425104.