Abstract
To foster academic integrity in students and future scholars, it is essential to understand how their integrity behaviours evolve throughout their educational trajectory and across various academic integrity topics. While much research has examined students’ perception of and engagement in plagiarism and other forms of clear-cut cheating, grey-zone practices have largely been neglected, and comparisons across educational levels are rare. This paper presents a comprehensive overview of European students’ conceptions of and engagement with less clear-cut aspects of academic integrity, and the potential effects of academic integrity training. The study draws on a large-scale survey of 3,297 students from Denmark, Ireland, Portugal, and Switzerland, covering three educational levels (upper secondary, Bachelor, and PhD). The survey examined perceptions of and engagement in likely grey-zone and non-compliant practices across three dimensions of academic integrity: i) Plagiarism and citation practice, ii) Collaborative practices, and iii) Data collection and analysis. Responses were analysed using descriptive statistics and regression analyses. Results showed that participants at higher educational levels were better at identifying likely non-compliant practices related to plagiarism and citation, and they were less likely to have engaged in such practices during their current studies. Progress along the educational trajectory was less pronounced regarding collaborative practices and practices related to data collection and analysis. In particular, 14% of the PhD level participants admitted having deleted deviating data “based on a gut feeling that they were inaccurate” and 20% admitted to keeping inaccurate records. All participants had a low level of competence in identifying grey-zone practices, and strikingly, their competences did not improve along their educational trajectory.
Academic integrity training was not consistently correlated with any group of participants’ competences regarding likely grey-zone practices, although it was positively correlated with upper secondary and PhD participants’ competences concerning certain likely non-compliant practices. These results call for a different approach to academic integrity training. In particular, they call for more comprehensive approaches that include grey-zone as well as non-compliant practices, and address a broad range of questionable behaviours, not only plagiarism.
Citation: Johansen MW, Goddiksen MP, Clavien C, Hogan L, Olsson IAS, Santos JB, et al. (2026) Academic integrity across educational levels: Exploring students’ engagement with grey-zone and non-compliant practices in four European countries. PLoS One 21(3): e0342227. https://doi.org/10.1371/journal.pone.0342227
Editor: Syed Hamid Hussain Madni, University of Southampton, MALAYSIA
Received: February 17, 2025; Accepted: January 19, 2026; Published: March 4, 2026
Copyright: © 2026 Johansen et al. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Data Availability: This paper analyses data from a large-scale survey. The full dataset from the survey is available at: https://doi.org/10.17894/ucph.66293057-46c5-4d02-a116-dfd597ce5a78. All data in the survey are purely quantitative and anonymous by design. No qualitative or interview data are analysed in this paper.
Funding: The study was funded by the European Union’s Horizon 2020 research and innovation programme under grant agreement No 824586. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.
Competing interests: The authors have declared that no competing interests exist.
1. Introduction
Over the past decades, a number of key areas of focus have emerged in the study of academic integrity among students in higher education. For upper secondary and undergraduate university students, different types of cheating – especially plagiarism – have been a major area of focus [e.g., 1–9]. These studies show that cheating – and plagiarism in particular – is common among upper secondary students. In an early study, Schab [1] found that 76% of a population of American upper secondary students admitted to having plagiarised, and 68% admitted to using cheat sheets. Roughly comparable levels of transgressions were also found in later studies and studies from other countries [e.g., 6, 10–12], with Taiwanese students as the exception, reporting markedly lower levels (<3%) of plagiarism [7]. The level of exam-based cheating among undergraduate university students is reported to be somewhat lower. Based on a survey of more than 64,000 American and Canadian undergraduate college students, McCabe [3] reported that around 21% admitted to committing some form of severe exam-related cheating, such as copying from another student, using crib notes, or helping others to cheat. Hopp & Speil [9] found that more than 20% of a population of Austrian college students admitted to plagiarising. As an outlier, Curtis and Tremayne [8] observed that about 64% of a population of undergraduate students from Western Sydney University admitted to having engaged in some form of plagiarism at least once. The study did, however, also find that levels of cheating were on the decline compared with earlier studies of the same population.
In addition to investigations of the prevalence of cheating, students’ motivations for, perceptions of, and attitudes towards cheating have also been areas of focus [e.g., 3, 12–23]. As an example, McCabe [3] investigated students’ and faculty members’ perception of the seriousness of a selection of different forms of cheating, such as copying from other students or using crib notes. Results showed that fewer students considered these behaviours to be moderate or serious cheating compared to faculty members. Roig [13] took a slightly different approach: participants were given different paraphrases of a short text and, for each paraphrase, asked to determine whether it complied with regulations. Results showed that 40–50% of the participants believed some of the non-compliant paraphrases were acceptable.
Other types of problematic behaviour, such as outsourcing assessments to third parties (known as ‘contract cheating’), fraud, and corruption (e.g., bribing teachers) have also been given some attention [e.g., 24–26].
Questionable authorship practices have been an important additional area of focus for PhD level students. To mention two recent studies, Helgesson and colleagues [27] reported that 46% of a population of medical PhD students said that the standard ICMJE authorship criteria had not been fully respected in at least one of the papers included in their dissertation. Similarly, Goddiksen and colleagues [28] found that 34% of a cross-faculty population of PhD students in Europe believed that they had granted at least one undeserved authorship to a person in power during their PhD studies.
Questions about falsification and fabrication of data have received relatively little attention in the literature on student integrity (see [29] for a meta-analysis). One exception is Johansen & Christiansen [30], who found that 64% of a population of Bachelor students in chemistry programmes at Danish universities had deleted outliers or discarded experiments only because the results seemed wrong or because they felt that there was a fault in the measurement. Furthermore, the study showed that large groups of students had been encouraged by a teacher to delete anomalous data.
As a final theme, various approaches to academic integrity training have been reported. McCabe and Treviño [31] found the introduction of honour codes to be an effective tool in reducing cheating among university and college students. Later research, however, revealed that honour codes are only effective in reducing cheating if they are part of a larger culture supporting integrity [5:106, 32]. This result resonates well with a large body of studies showing contextual factors such as culture and the perception of peer behaviour to be major explanatory factors in students’ integrity behaviour and attitudes [e.g., 33, 34].
Another line of research has sought to develop and evaluate specific educational interventions, but with mixed results. Some studies report a clear effect of simple interventions such as online courses [e.g., 35], while others show a limited effect [e.g., 36]. In one of the few studies with a control-group design, Goddiksen and colleagues [37] compared a traditional text-based training module with training utilising an interactive online platform. Both interventions significantly improved participants’ understanding of academic integrity, but contrary to expectations, no difference between the effects of the two modes of training could be observed.
In a meta-analysis of 30 studies reporting the effect of training interventions, Katsarov and colleagues [38] demonstrated that interventions are most effective if they allow for a constructive approach where learners are taught to apply guidelines to complex cases. As an important limitation to this line of research, a meta-analysis by Stoesz & Yudintseva [39] found that few of the effect studies investigated whether the evaluated intervention led to long-term improvement in student behaviour.
As the short review above shows, the literature on academic integrity in higher education is comprehensive and addresses several key issues, primarily: prevalence of cheating, attitudes towards it, integrity training approaches, and questionable authorship practices among PhD students. However, certain important areas remain underexplored. For example, the vast majority of the studies focus only on a single educational level. This makes it difficult to systematically map and compare students’ behaviour and understanding of academic integrity across levels. If we want to improve the academic integrity of students and future academics, it is imperative to know how their integrity behaviour changes throughout their educational trajectory, especially during the formative years from upper secondary to graduate school. Such an overview can provide valuable background knowledge when planning interventions at specific levels. It may also help to identify gaps and topics for which students are likely to lack the training they will need when progressing to the next educational level.
Furthermore, as discussed by Goddiksen and colleagues [23], most of the current academic integrity literature focuses on practices that are (at least from the teachers’ point of view) clearly non-compliant. While it is important that students and academics know the rules and regulations well enough to handle such clear-cut situations correctly, these do not exhaust the landscape of academic integrity. Many of the integrity issues students encounter in their daily lives fall within a ‘grey zone’, where simply knowing the rules is not enough to handle the issues ethically. In these cases, it is also necessary to consider the context and to be aware that there may not always be a single correct answer to an ethical dilemma [cf. 23, 30, 40].
In this study, we will address these gaps in the literature by reporting from a large-scale survey of upper secondary, Bachelor, and PhD students in Europe. The main aims of the study are: 1) to provide a systematic and extensive overview of European students’ conceptions of and engagement with academic integrity and 2) to analyse the relationship between students’ integrity-related behaviour and academic integrity training as well as other demographic background variables (gender, age, country of study).
The study aims to be comprehensive in terms of both educational levels and types of integrity issues covered. Regarding educational levels, the study includes upper secondary, Bachelor, and PhD students, thus enabling systematic comparisons across these three levels. Regarding types of integrity issues, the study addresses likely grey-zone as well as likely non-compliant actions, and it covers three dimensions of academic integrity in roughly equal measures: i) plagiarism and citation practice, ii) collaborative practices, and iii) data collection and analysis.
The comprehensive overview and analysis of students’ perception and behaviour and the relationship with academic integrity training will allow us to identify possible blind spots and areas of potential improvement in the current approaches to academic integrity for upper secondary, university, and PhD students in Europe.
2. Materials and methods
The study is based on a subset of data from a questionnaire-based survey undertaken as part of the project INTEGRITY (https://h2020integrity.eu/). The survey aimed to map academic integrity within the European Economic Area (EEA) across educational levels from upper secondary to PhD, and across different fields of study. Only data from questions concerning demographic background, academic integrity training, competences and conceptions of academic practices, and own questionable practices are included in this study.
Four studies based on parts of the survey data have previously been published [23, 28, 40, 41]. These studies explore Bachelor students’ conceptions of academic integrity [23], upper secondary students’ conceptions of academic integrity and their integrity behaviour [40], PhD students’ experiences with authorship attribution [28], and upper secondary and Bachelor students’ experiences with text-matching software [41]. This study adds to these by presenting and analysing data on own behaviour from Bachelor and PhD students, and on rule conceptions from PhD students, that have not previously been analysed. Furthermore, in this study, results and analyses are compared across educational levels to address the first research objective. No further manuscripts based on the survey data are planned, in press, or in review.
2.1. Ethics
The study was reviewed and approved by the Research Ethics Committee for Science and Health at the University of Copenhagen prior to the pilot tests (ref. no. 504–0043/18–5000). Participation in the study was voluntary and anonymous. Participants were not compensated for participating. For participants above the age of 18, written informed consent to participate was obtained via the first question in the questionnaire (see S1 File). For participants below the age of 18, parental consent was also collected and retained by the participating institutions. Additional information regarding the ethical, cultural, and scientific considerations specific to inclusivity in global research is included in the Supporting Information S5 File.
2.2. Participants and data collection
Data were collected in nine EEA countries or areas, using an anonymous online survey from January 2020 to December 2020. The overall project aimed to have at least 200 participants from each of the study levels. To achieve this, different recruitment strategies were used for the different study levels.
For the upper secondary level, we used an institution-level sampling design. A complete list of upper secondary institutions was compiled for each country, and institutions were randomly drawn from the list and invited to participate until a target of approximately 200 participants was met (see Johansen and colleagues [40] for further details).
For Bachelor level, we used a programme-level sampling design. For each country, a complete list of Bachelor programmes was compiled, and programmes were randomly drawn from the list and invited to participate. In addition to the target of approximately 200 participants, we also aimed to include at least 45 participants from each of the three scholarly fields: humanities, social sciences, and STEMM (Science, Technology, Engineering, Mathematics, and Medicine). Only students who had completed at least one year of their studies (60 ECTS) were invited to complete the survey (see Goddiksen and colleagues [23] for further details).
For PhD level, the recruitment strategy differed from country to country due to large variation in institutional organisation, as well as the number of PhD students. In addition to the target of approximately 200 participants, we aimed to include at least 45 participants from each field (humanities, social sciences, and STEMM). For countries and faculties with few potential participants, total population recruitment was carried out, while a programme-level sampling design was used in countries and faculties with many potential participants (see [28] for further details).
For the analysis in this paper, we included countries with at least 200 participants from each study level. This target was reached for Denmark, Ireland, Portugal, and the French-speaking part of Switzerland, but not the German-speaking part of Switzerland, Germany, Hungary, Lithuania, the Netherlands, or Slovenia (Table 1). Consequently, the final dataset for this study consists of 3,297 participants from the four EEA countries or areas shown as non-shaded in Table 1. Note that we only reached 199 Bachelor level participants in the French-speaking part of Switzerland, but we considered this number to be sufficiently close to the target to include the area in the study. See [23,28,40] for detailed information about each of the three populations.
2.3. Materials and measures
The questionnaire was developed on the basis of a qualitative interview study (see [23,42] for details). The questionnaire was pilot tested, translated into the dominant language of each of the nine participating countries, and implemented as an anonymous online survey using the platform SurveyXact ver. 12.9 (https://www.surveyxact.com/). The pilot testing involved both qualitative and quantitative tests of the questionnaire. The translation was carried out following a ‘translation back-translation’ protocol (cf. [40]).
The questionnaire was adaptive, such that the questions presented to a particular participant would depend on their educational level and background. For Bachelor and PhD students, only the participants who stated that they worked with data were presented with all the questions relating to data, and the wording of the questions was designed to fit the type of data they primarily used (qualitative, quantitative, historical sources, or works of art). Similarly, the wording of the questions was designed to fit the educational level of the participants. See S1 File for details. Here, we focus on the parts of the survey reported in this study. These can be divided into four main categories.
- 1. Demographics. These questions included standard demographic details (age, gender identity, country of study) and study-specific details (name of educational institution, educational level, and for Bachelor and PhD students, their general area of study and the type of data they primarily used).
- 2. Competences and conceptions of academic practices. Participants’ competence in relation to academic integrity was explored through two different approaches: a) a direct test of competences in relation to the use of others’ texts and b) a survey of participants’ conceptions of rules of academic practice and how these rules apply to specific situations.
In the direct test of competences, we used a scenario-based approach inspired by Roig [13]. Participants were presented with a short text (41 words) and were told that a friend wanted to use it in an assignment/paper they were currently writing. Participants were then shown four different paraphrases of the text, and for each one they were asked whether their friend had acted in an acceptable way by using the paraphrase in question. The four paraphrases can be described as follows (see S1 File for details):
- Paraphrase 1: A direct copy with no citation marks and no reference to the original.
- Paraphrase 2: Some insignificant words had been changed to synonyms. There was no reference to the original.
- Paraphrase 3: Same as paraphrase 2 but with a reference to the original.
- Paraphrase 4: A more substantial rewriting with a reference to the original.
To assess the participants’ conceptions of rules, they were presented with descriptions of 11 academic practices: four relating to citation and plagiarism, four to collaboration, and three to data collection (Table 2). For each academic practice, participants were asked if they believed the practice was “against the rules and regulations that apply to you”. Participants could answer by choosing one of the options: “Yes, it is a serious violation”, “Yes, but it is not a serious violation”, “No, it is not against the rules”, “The rules are unclear”, “It depends on the situation”, and “I don’t know”. The first three of these options indicate a belief that the rules and regulations apply to the practice. The fourth and fifth options, on the other hand, indicate the belief that the rules do not readily apply to the practice.
The academic practices were constructed such that half were likely non-compliant, while the other half were likely ‘grey-zone’ practices in the sense that an evaluation of the practice would depend on contextual information. For instance, paying someone to write an assignment for you is likely non-compliant, whereas it is likely to depend on the context and precise instruction whether or not it is against the rules to let one member of a group do all the writing on a group project while the other members contribute to the analysis and literature search. Table 2 presents the full list of academic practices and our categorisation of these (see also [23] for elaboration).
- 3. Academic integrity training. The questionnaire included two questions to assess the amount and type of academic integrity training participants had received.
Firstly, participants were asked if they had taken courses dedicated to rules and/or ethically correct behaviour in relation to the themes introduced in the questionnaire during their current or previous studies. The answer options were: “Yes, one or more dedicated courses”, “Yes, one or more lectures”, “Yes, one or more dedicated e-learning sessions”, and “No”. Multiple answers were possible. After the data collection was complete, we identified an ambiguity in the French translation, making it difficult to distinguish between the first two options in this version of the questionnaire. As a consequence, we merged the two options in all analyses.
Secondly, participants were asked if they had learned about rules and/or ethically correct behaviour in relation to the themes introduced in the questionnaire through other means closer to daily practice. Answer options here were: “Yes, through supervisors/teachers in other courses that commented on my written work or assignments”, “Yes, through courses not dedicated exclusively to such issues”, “Yes, through discussions with fellow students”, “Yes, through discussions with senior staff outside regular courses” (for upper secondary students: “Yes, through discussions with teachers outside regular classes”), “Yes, through self-study”, “Yes, by following the procedures that are common in my field of study” (option not presented to upper secondary students), “Yes, through discussions with friends and family outside my institution”, “Yes, other”, “No”, and “I don’t know”.
- 4. Questionable academic practice. In this study, we understand questionable academic practices (QAPs) as practices that are non-compliant but not classified as academic dishonesty under the FFP definition (for a discussion of the ambiguity related to the term, see [30]). To assess the participants’ own behaviour, we presented them with a number of QAPs and asked if they had engaged in any of them during their current course of education. For each practice, the answer options were: “Yes, many times”, “Yes, a few times”, “Yes, once”, “No”, “I prefer not to answer”, “Not applicable”, and “I don’t know”. The wording and type of QAPs presented to each participant depended on their educational level and use of data. Table 6 shows the QAPs covered in this study. Two of the practices were directly inspired by [43]. For further details, see S1 File.
2.4. Data analysis
Descriptive statistics are presented for the measures of dedicated and practical training. For the measures focusing on direct tests of competence, rule conception, and self-reported questionable behaviour, descriptive statistics are presented across the three study levels.
Multivariable logistic regression was carried out to identify factors associated with the 11 measures of rule conception. These measures were treated as dependent variables and were collapsed into binary variables. For the 5 measures of rule conception that were treated as likely non-compliant (see Table 2), we assigned the response options “Yes, it is a serious violation” and “Yes, but it is not a serious violation” the value 1, while all other response options (“The rules are unclear”, “It depends on the situation”, “No, it is not against the rules”, and “I don’t know”) received the value 0. For the 6 measures of rule conception that were treated as likely grey-zone academic practices (see Table 2), we assigned the response options “The rules are unclear” and “It depends on the situation” the value 1, while all other response options (“Yes, it is a serious violation”, “Yes, but it is not a serious violation”, “No, it is not against the rules”, and “I don’t know”) received the value 0. Even though the rule conception measures have six response options, we decided that collapsing them into binary dependent variables was the best approach, because the response options are categorical in nature and no ordering can be discerned. Two variables were inserted as categorical predictors: country of study and gender. Three variables were inserted as continuous predictors: age, dedicated training, and practical training. The variable dedicated training had three levels, ranging from 0 (no dedicated training) to 2 (both types of dedicated training). The variable practical training had four levels, ranging from 0 (no practical training received) to 3 (all three types of training received). Regressions were run separately for each study level.
Multivariable ordered logistic regression was carried out to identify factors associated with the participants’ own (self-reported) questionable behaviour. The six measures of self-reported questionable behaviour were treated as dependent variables. The response options of these variables received the following values in the regressions: “No” = 0, “Yes, once” = 1, “Yes, a few times” = 2, and “Yes, many times” = 3. Participants giving other responses (“I prefer not to answer”, “Not applicable”, and “I don’t know”) were excluded from the analysis. The same variables as in the multivariable analysis of rule conception were inserted as predictors. These regressions were also run separately for each study level.
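As an illustration, the two coding schemes described above can be sketched in Python with pandas. This is only a minimal sketch of the recoding logic; the analyses themselves were run in Stata, and the example data and function names here are hypothetical.

```python
import pandas as pd

# Collapsing a rule-conception item into a binary dependent variable.
YES = ["Yes, it is a serious violation", "Yes, but it is not a serious violation"]
GREY = ["The rules are unclear", "It depends on the situation"]

def collapse_conception(responses: pd.Series, likely: str) -> pd.Series:
    """Score 1 for a form of 'Yes' on likely non-compliant items, or for
    'unclear'/'it depends' on likely grey-zone items; 0 otherwise."""
    positive = YES if likely == "non-compliant" else GREY
    return responses.isin(positive).astype(int)

# Ordered coding of a self-reported behaviour item; non-substantive answers
# map to NaN and are dropped before the ordered logistic regression.
ORDER = {"No": 0, "Yes, once": 1, "Yes, a few times": 2, "Yes, many times": 3}

def code_behaviour(responses: pd.Series) -> pd.Series:
    return responses.map(ORDER)

conception = pd.Series(["Yes, it is a serious violation",
                        "It depends on the situation",
                        "I don't know"])
print(collapse_conception(conception, "non-compliant").tolist())  # [1, 0, 0]

behaviour = pd.Series(["No", "Yes, many times", "I prefer not to answer"])
print(code_behaviour(behaviour).dropna().astype(int).tolist())    # [0, 3]
```

Note that under this scheme the same response (e.g., “It depends on the situation”) scores 1 or 0 depending on whether the item is classified as likely grey-zone or likely non-compliant, which mirrors the item-specific coding in Table 2.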
In the supporting information (S4 File), model results are reported for all regressions carried out. We report the likelihood ratio chi2 test statistic for the full model, goodness-of-fit tests, and the odds ratio (OR) and p-value for all parameters. For the goodness-of-fit assessment, we use the Hosmer-Lemeshow goodness-of-fit test [44] to evaluate model fit in the logistic regressions. In the ordered logistic regressions, we use three diagnostics to assess goodness of fit: the ordinal Hosmer-Lemeshow test (Ordinal HL), the Pulkstenis–Robinson chi-squared tests (PR), and the Lipsitz likelihood-ratio test (Lipsitz) [45]. For all goodness-of-fit tests, the null hypothesis is that the model fits the data, so a p-value < 0.05 was taken to indicate a poor fit. This was the case in 8 of the models. In those cases, to achieve an acceptable fit, we either removed the control variables gender, age, or both, or recoded the questionable-behaviour outcome variable with a poor model fit from an ordered to a binary variable (0 = “no”; 1 = “yes”). These procedures are described in the relevant regression models in supporting information S4 File.
We were particularly interested in identifying possible links between practical and dedicated training, respectively, and the students’ rule conceptions and questionable behaviours. Therefore, for all regressions where practical or dedicated training predicted conceptions and behaviours at a statistically significant level (p < 0.05), we calculated marginal effects to evaluate the direction of the association. Stata’s margins (dy/dx) command was used to calculate marginal effects, with the other variables in the regressions set at their mean values (Stata’s atmeans sub-command). The marginal effects are presented in supporting information S4 File. In addition, we report the magnitude of the effect of these two variables on the dependent variables, following Sullivan and Feinn [46] in treating an OR of 1.5 as a small effect, an OR of 2.0 as a medium effect, and an OR of 3 as a large effect.
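The Sullivan and Feinn thresholds above can be expressed as a simple classification rule. The sketch below is illustrative only; in particular, the inversion of odds ratios below 1 (so that a protective OR of 0.5 is graded like an OR of 2.0) is a common convention but an assumption on our part, not something stated in the text.

```python
def effect_size(odds_ratio: float) -> str:
    """Classify an odds ratio using the thresholds of Sullivan & Feinn:
    OR >= 3.0 is large, >= 2.0 medium, >= 1.5 small, otherwise negligible.
    ASSUMPTION: ratios below 1 are inverted so that effects in either
    direction are graded on the same scale."""
    or_ = max(odds_ratio, 1.0 / odds_ratio)
    if or_ >= 3.0:
        return "large"
    if or_ >= 2.0:
        return "medium"
    if or_ >= 1.5:
        return "small"
    return "negligible"

print(effect_size(2.4))  # medium
print(effect_size(0.5))  # 1/0.5 = 2.0 -> medium
print(effect_size(1.1))  # negligible
```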
3. Results
3.1. Training
The data on academic integrity training show a clear gradient indicating that participants at more advanced study levels were more likely to report that they had received dedicated training somewhere along their educational trajectory (Table 3). While only 39% of upper secondary level participants said they had attended a course or lecture on academic integrity, this number was 60% for Bachelor level participants and 68% for PhD level participants. Despite this gradient following the educational trajectory, it is worth noting that many students had not received any training; the majority of upper secondary and almost one third of PhD students who participated in the study reported that they had not received any form of dedicated academic integrity training during their current or previous studies. It is also worth noting that the numbers are based on the participants’ own perception, and they may have actually received a different amount of training than reported.
The data on practical training show a similar gradient, indicating that participants at more advanced study levels were more likely to have received at least one type of practical training compared to participants at less advanced levels (Table 4). Among the upper secondary level participants, 59% had received practical training, whereas 72% of participants at Bachelor level and 80% of participants at PhD level reported that they had received such training. Furthermore, comparison with Table 3 shows that, at all study levels, more participants reported having received practical training than dedicated training.
3.2. Conceptions and competences
3.2.1. Direct test of competences.
As a direct test of their competences in paraphrasing and citation practice, participants were shown four different paraphrases of a short text and asked to evaluate whether each was acceptable (see Sec. 2.3 for details).
Once again, we see a clear gradient in the data, indicating that participants at more advanced study levels correctly classified the specific examples better than those at less advanced levels (Fig 1, see S2 File for further details). The first paraphrase (direct copy with no reference) is clearly unacceptable, yet 41% of the upper secondary level participants considered it to be either acceptable or completely acceptable, whereas only 22% and 13% of the Bachelor and PhD level participants, respectively, considered it to be so. Conversely, the fourth paraphrase is (arguably) acceptable, but only 54% of the upper secondary students were able to identify it as such, while 69% of Bachelor and 77% of PhD level participants identified it correctly.
The two answer options “Unacceptable” and “Completely unacceptable” are represented collectively as “Form of unacceptable”. Similarly, the answer options “Acceptable” and “Completely acceptable” are represented as “Form of acceptable”.
3.2.2. Rule conception.
The participants’ conceptions of the rules were tested by presenting them with 11 different academic practices (see Sec. 2.3 for details).
For the practices relating to citation and the use of others’ texts, the first two practices were likely non-compliant, whereas the last two were likely grey-zone practices (Fig 2, see S2 File for further details). Consequently, to demonstrate a good understanding of the rules, participants should choose a form of “Yes” when asked about the first two practices, and “The rules are unclear” or “It depends on the situation” when asked about the last two practices.
“Please indicate whether you believe the following actions go against the official rules and regulations that apply to you in relation to plagiarism”. “Yes, it is a violation” covers the two answer options “Yes, it is a serious violation” and “Yes, but it is not a serious violation”. “Undecided/it depends” covers the two options “The rules are unclear” and “It depends on the situation”.
We see a clear trend in the data for the first two practices, such that participants at more advanced levels were more frequently able to identify them correctly as non-compliant; 79% of upper secondary level participants identified copying an entire page as non-compliant, compared to 90% of Bachelor and 95% of PhD level participants. In fact, only 54% of upper secondary level participants considered it a serious violation of the rules. Similarly, 64% of upper secondary level participants were able to identify copying a short paragraph as non-compliant, compared to 83% of Bachelor and 91% of PhD level participants.
A different picture emerges for the two grey-zone practices. Concerning the practice of changing 10% of the words in a paragraph, 17% of upper secondary level participants answered, “The rules are unclear” or “It depends on the situation”, while 12% and 14% of Bachelor and PhD level participants, respectively, chose one of these two options. Similarly, 20% of upper secondary level participants viewed copying a central point formulated in half a sentence as a grey-zone practice, compared to 12% of Bachelor and 14% of PhD level participants.
Only one of the collaborative practices included in the survey was likely non-compliant, while the other three were likely grey-zone practices (see Table 2). As PhD students would not, in our judgement, identify with these situations and understand them as part of their studies, only upper secondary and Bachelor level participants were asked these questions.
Bachelor level participants were more likely to correctly classify the non-compliant practice of paying someone to write an assignment by choosing one of the two ‘yes’ options (95%) than upper secondary level participants (81%; see Fig 3, see S2 File for further details). We observed little difference between the two study levels for the three grey-zone practices, with 18%, 22%, and 25% of upper secondary level participants choosing “The rules are unclear” or “It depends on the situation”, compared to 23%, 20%, and 23% of Bachelor level participants choosing these options.
Two of the practices relating to data collection were likely non-compliant, while “Not mentioning that you removed a number of deviating data points from a dataset when the cause of the deviation was known” was likely a grey-zone practice.
“Please indicate whether you believe the following actions go against the official rules and regulations that apply to you in relation to working with others and assigning authorship”. “Yes, it is a violation” covers the two answer options “Yes, it is a serious violation” and “Yes, but it is not a serious violation”, while “Undecided/it depends” covers the two options: “The rules are unclear” and “It depends on the situation”.
It is worth noting that the distribution of answers given by the upper secondary level participants was roughly the same for all three practices (Fig 4, see S2 File for further details). Furthermore, for each practice around a quarter (25% to 27%) of the participants from this group answered, “I don’t know”. This indicates that the upper secondary level participants had little or no comprehension of the integrity issues related to data handling practices. The PhD level participants were more competent in identifying the two non-compliant practices (with 90% and 81% answering a version of “yes”, respectively) than Bachelor level participants (84% and 65%). However, the two populations had roughly the same level of competence for the grey-zone practice, with 9% of PhD and 8% of Bachelor level participants answering, “The rules are unclear” or “It depends on the situation”. Therefore, the participants were once again more competent in identifying likely non-compliant than likely grey-zone practices, and there was no (noticeable) increase in grey-zone competence along the educational trajectory.
“Please indicate whether you believe the following actions go against the official rules and regulations that apply to you in relation to the collection and analysis of data”. Only Bachelor and PhD level participants who had indicated that they worked with data were asked this question. “Yes, it is a violation” covers the two answer options “Yes, it is a serious violation” and “Yes, but it is not a serious violation”. “Undecided/it depends” covers the two options: “The rules are unclear” and “It depends on the situation”. *Not the exact formulation of the question. See Table 5.
Regarding the association between the participants’ academic integrity training and their conceptions of rules, five general trends are worth highlighting in the results of the regression analysis (Table 5). 1) For upper secondary level participants, receiving practical training was consistently correlated with a correct understanding of rules concerning likely non-compliant actions at the small to medium effect size level (OR range: 1.36–1.95). In the marginal effects analyses (S4 File 1.1–1.3 and 1.5–1.6), more practical training was consistently associated with better performance. 2) No form of training was consistently associated with Bachelor level participants’ competence. 3) There is a weak trend linking dedicated training to PhD level participants’ handling of likely non-compliant actions regarding data at the small effect size level (OR range: 1.68–1.73); more training is associated with better performance (see marginal effects analyses: S4 File 3.1–3.2). 4) As a dominant trend, the country of study was significantly linked with participants’ understanding of rules related to likely non-compliant actions, in particular at the Bachelor level. 5) None of the independent variables included in the analysis were (consistently) correlated with an understanding of rules related to likely grey-zone practices.
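The effect sizes above are reported as odds ratios. As an intuition aid only — the paper’s actual analyses are regression models with covariates, and all counts below are invented for illustration — the following sketch computes the unadjusted version of this quantity from a hypothetical 2×2 table of trained versus untrained participants answering correctly:

```python
def odds_ratio(trained_correct, trained_total, untrained_correct, untrained_total):
    """Unadjusted odds ratio for a correct answer, trained vs. untrained.

    The regression models in the paper estimate adjusted odds ratios;
    this raw 2x2 version only illustrates the quantity being reported.
    """
    odds_trained = trained_correct / (trained_total - trained_correct)
    odds_untrained = untrained_correct / (untrained_total - untrained_correct)
    return odds_trained / odds_untrained

# Hypothetical counts (not survey data): 140 of 200 trained participants
# answer correctly, versus 110 of 200 untrained participants.
print(round(odds_ratio(140, 200, 110, 200), 2))  # 1.91
```

On the scale used in the paper, an odds ratio of this magnitude would fall in the small-to-medium range reported for practical training at the upper secondary level.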
3.3. Own practice
For the participants’ own engagement in QAPs, we see a clear trend in the data, such that participants at more advanced study levels were less likely to report that they had engaged in the practices during their current studies (see Table 6). The trend can be observed for all six QAPs included in the questionnaire.
It is worth noting that plagiarism (“copied shorter passages without marking them as quotes”) was a widespread practice among upper secondary level participants (58%), whereas it was significantly reduced at Bachelor level (23%) and almost eradicated at PhD level (8%). On the other hand, the two questionable collaborative practices (adding students as undeserved co-authors and receiving unauthorised assistance) were relatively common among both upper secondary (61% and 80%) and Bachelor (45% and 56%) level participants. Similarly, although the two questionable data practices (deleting anomalous data and keeping inaccurate records) were less common among the upper secondary (48%) and Bachelor level participants (29% and 26%), the decline in frequency along the educational trajectory was small, and these two practices were the most common questionable behaviours among PhD level participants (14% and 20%).
In the analysis of possible connections between academic integrity training and engagement in QAPs (Table 7), two patterns are worth highlighting. Firstly, training is not consistently associated with engagement in the QAPs included in the survey, and where it is, the strength of the association is modest. At the upper secondary level, training has a mixed link with engagement for 2 of the 4 QAPs this group was asked about. Specifically, practical training decreases the likelihood that upper secondary participants delete or ignore unusual or deviating data, while dedicated training unexpectedly increases the likelihood that they receive help from others. Both associations are at the small effect size level (ORs: 0.73 and 1.24). For the other two study levels, in the few cases where training is associated with a QAP, it consistently decreases the likelihood of engagement at the small effect size level (OR range: 0.65–0.80) (see further details in the marginal effects analysis, S4 File). Secondly, for upper secondary and Bachelor level participants, country of study is significantly correlated with the likelihood of engaging in QAPs.
4. Differences between countries
The regression analyses (Tables 5 and 7) show that country of study was statistically significantly associated with conception of the rules and own engagement in questionable practices, particularly for the upper secondary and Bachelor level participants. The association was not clear for PhD level participants. It is well known that there are major differences between the educational systems in the European countries included in the survey [see, e.g., 47]. From the descriptive statistics (S2 File), we can identify clear differences in the amount of dedicated training participants from the different countries had received. The level of dedicated training was relatively high in Denmark, where 57% of upper secondary, 72% of Bachelor and 82% of PhD level participants reported having received some form of dedicated training. In Portugal, the corresponding shares for the same three groups were 30%, 29% and 45%.
The differences were much smaller for practical training. The highest level of practical training was reported in Switzerland, where 69% of upper secondary, 80% of Bachelor and 81% of PhD level participants reported having received some form of practical training. The country with the lowest level of practical training was Ireland, where the corresponding shares were 49%, 74% and 82%. Interestingly, there was little difference between the countries concerning PhD level participants’ practical training (the reported shares ranged from 79% to 82%).
As a weak trend, upper secondary and Bachelor level participants from Ireland and Portugal tended to be more in doubt about how to handle likely non-compliant actions. As an example, 25% of the Irish and 20% of the Portuguese upper secondary level participants chose one of the options “The rules are unclear”, “It depends on the situation”, or “I don’t know” when asked if it was against the rules to copy an entire page from an external source without a reference. Only 8% and 10% respectively of the Danish and Swiss upper secondary level participants chose one of these three options for this question. For Bachelor level participants, the shares were 7% and 12% for Ireland and Portugal respectively, and 4% for both Denmark and Switzerland. As the training level is generally higher in Denmark and Switzerland for these two study levels, this trend corresponds well with the results concerning likely non-compliant actions reported above (Table 5). There are, however, also examples where the trend does not hold; Swiss upper secondary level participants are, for instance, more in doubt than Irish upper secondary participants about how to handle “Paying someone to write an assignment for you”.
Turning to self-reported questionable practices, there are often clear differences between the answers from the four countries involved, but no strong trends. As a weak trend, upper secondary and Bachelor participants from Ireland and Portugal tended to use the “I don’t know” option more than participants from the two other countries, and Bachelor level participants from Portugal generally reported having engaged more often in the surveyed questionable practices (for four of the six practices, participants from Portugal had the highest share of engagement among the four countries).
5. Discussion and conclusion
In this study we set out to provide a systematic and extensive overview of European students’ conceptions of and engagement with academic integrity and to analyse the relationship between students’ integrity-related behaviour and academic integrity training as well as demographic background variables (gender, age, country of study). Regarding the participants’ conceptions of rules, we saw a clear trend indicating that participants at more advanced study levels were more competent in identifying non-compliant practices than participants at less advanced levels (see Figs 2–4). The study was not a longitudinal study, but we designed it such that three independent groups of students were surveyed, one for each study level. As a result, we cannot determine whether individual students became more competent along their educational trajectory or if the average increase in competence at more advanced levels was due to a selection process; it may be that only the most competent students from each level progressed to the next.
Analysis of the association between academic integrity training and participants’ competence in handling non-compliant actions suggests that practical training was significantly associated with better performance for the population of upper secondary students and that dedicated training was associated with better performance for PhD students for practices related to data. No form of training was consistently associated with Bachelor level participants’ competence. When evaluating the relationship between performance and training, it is important to note that country of study was generally correlated with the participants’ competence regarding likely non-compliant practices. This result can be interpreted in several ways. For example, it may confirm the effect of contextual factors as recognised in the literature. On the other hand, we cannot rule out that the result reflects a ‘hidden’ effect of training: students in some countries may do better because they have learned about the rules during their education, even though they do not recall having done so.
The situation is markedly different for grey-zone practices, where there is no observable trend that advanced students are more competent. On the contrary, participants at all levels struggled to identify grey-zone practices, and their competences did not increase along their educational trajectory. In fact, for the two citation practices, the opposite was true: 17% and 20% of the upper secondary level participants were able to identify the examples as grey-zone practices, whereas for both practices, only 12% of the Bachelor and 14% of the PhD level participants were able to do so (Fig 2). Furthermore, none of the background variables – including training – were consistently correlated with competence concerning grey-zone practices. In short, the participants in our study had a low level of competence regarding grey-zone practices, their competences did not improve along their educational trajectory, and the academic integrity training they had received was not associated with their competences in handling these practices. Grey-zone practices simply seem to sit in a blind spot on the academic integrity map for students in higher education. We believe this is a worrying and marked result that calls for further dedicated investigation. If correct, these results will have clear implications when designing future academic integrity training. To cover the academic integrity issues that students face, the training should not only include non-compliant practices, but it should also address grey-zone practices.
If we turn to self-reported questionable practices, the positive message in the data presented above is that efforts to reduce plagiarism seem to have paid off. Participants at more advanced study levels were not only more competent in identifying non-compliant practices relating to plagiarism than participants from less advanced study levels, but they were also more competent in handling specific scenarios related to citation practice (Fig 1). More importantly, the data also showed a marked decrease in participants’ self-reported engagement in questionable practices related to plagiarism along their educational trajectory. Although 58% of upper secondary students reported that they had plagiarised shorter passages, only 8% of the PhD level participants admitted to having done so (Table 6). In other words, participants at more advanced levels had a better knowledge of the rules, were more competent in handling specific situations, and engaged less in questionable citation behaviour than students at less advanced levels.
Regarding the other two dimensions of academic integrity covered in this study, the message is less positive. Questionable collaborative practices were widespread among upper secondary and Bachelor level participants, with many admitting to granting (undeserved) ‘authorship’ credit to other students and to receiving unauthorised help (Table 6). The PhD students were not asked these questions in this form, as they would not be meaningful to this population; instead, PhD level participants were asked a slightly different set of questions concerning authorship, reported and analysed in [28]. The results showed that more than 28% of PhD level participants said that they had “allowed research group leaders, supervisors, or others in power to become co-authors of papers, even though they did not make a significant contribution to them” at least once during their PhD studies. Although the power dynamic is radically different for the PhD students, the results in this study indicate that upper secondary and Bachelor students have few competences in handling the challenges and dilemmas related to questionable collaborative practices and that they are accustomed to breaking the rules governing this integrity dimension. If this is indeed the situation, one might expect PhD students, based on their previous education, to be unprepared for the challenges of authorship attribution we see in academia (for further discussion of this aspect, see [28]). This result highlights the need for a reform of integrity training, with more attention paid to issues related to collaborative practices.
Where plagiarism and questionable collaborative practices are mainly problematic because they undermine the meritocratic nature of science, questionable practices connected to data collection and analysis can be seen as attacks on the integrity of the scientific record and thus on the trustworthiness of science as an institution. In other words, questionable data practices constitute a serious threat to science, arguably more severe than the other types of questionable practices discussed here. Seen in this light, it is discouraging that upper secondary level participants seemed to have very limited competences in identifying non-compliant data collection practices, and although we see a trend in our data suggesting that questionable data collection practices were less frequent for participants at more advanced study levels, the numbers were still relatively high. Among the PhD level participants, 14% admitted to deleting deviating data “based on a gut feeling that they were inaccurate” and 20% admitted to keeping inaccurate records (Table 6). These two items in our questionnaire were inspired by similar items used in a survey of 3,247 working scientists in which comparable proportions (15.5% and 27.5%) of the participants admitted to engaging in the practices within the last 3 years [43]. Such a high prevalence of questionable data practice is clearly worrying and, based on responses from the PhD level participants in particular, our data do not suggest that the problem has been solved or that it is even in decline. The level of training was negatively associated with PhD level participants’ engagement in the QAP related to data collection, but not with the QAPs related to data analysis and record keeping (Table 7). This result is clearly concerning and again calls for a more comprehensive approach to integrity training.
The data presented in this study have clear implications for the design of integrity training. Although it is important to address plagiarism and other clear-cut, non-compliant practices, other aspects of academic integrity deserve more attention. In particular, we saw no improvement in participants’ understanding of grey-zone practices along their educational trajectory, and although the frequency of questionable practices related to collaborative and data collection practices declined along the educational trajectory, they were still worryingly high, even among PhD level participants. It is especially concerning that dedicated training, with a few exceptions, was not associated with either better rule understanding or less engagement in QAPs. This suggests that current methods of teaching academic integrity to the groups of students surveyed in this study may not be sufficiently effective.
These results call for a revision of academic integrity training at all educational levels. We suggest that training is approached in a more encompassing way, including grey-zone situations as well as clear non-compliant actions, and a broad range of behaviours, including questions and dilemmas related to authorship, collaboration, and data collection and analysis. How these competences should be taught is an open question that clearly calls for further investigation.
As the question of educational reform falls outside the scope of this paper, we will limit ourselves to mentioning two suggestions based on the literature. First, a comprehensive meta-analysis by Katsarov and colleagues [38] suggests that engaging and experiential learning approaches tend to be more effective. This aligns well with the results reported in this study showing that practical training was generally more effective than dedicated training. As we have defined it, practical training is closer to students’ everyday practice and will thus tend to be more engaging and experiential than mere theoretical training. It is, however, an open question how such engaging practices can be implemented in dedicated training. Second, to make students better suited to handle grey-zone situations, we suggest that teachers supplement training in the basic rules with training that addresses the underlying values of scientific integrity. This could, for instance, be done using a practice-near “explicit-reflective framework” as suggested by Johansen & Christiansen [30] or by applying a phronesis framework as discussed in [48]. These reforms are well aligned with theories of moral development, such as Kohlberg’s stage theory [e.g., 49], which associates moral development with a shift from rule- and authority-based reasoning toward a more principled understanding of underlying moral values. Further analysis of academic integrity based on these theories and later developments, such as models of moral behaviour [e.g., 50], may be of value when designing educational reform.
Finally, it should be noted that this study has its limitations. Our study population included groups of students from four European countries, and we cannot know if or to what extent the results are applicable to other countries. In fact, the analysis shows that country of study was a significant factor for several of the issues we addressed. For this reason, caution should be used if the results are applied to countries not included in the study. Furthermore, the study was designed to offer a comprehensive overview, and we therefore included only a small number of behaviours and situations for each of the different types. For instance, only one grey-zone situation relating to data management was included in the survey (Table 2), and important questions connected to statistical analysis such as cherry-picking and p-hacking [51] are not directly covered in the survey. This limitation was the result of a necessary trade-off, and we recommend that dedicated and more detailed studies are carried out, not least concerning p-hacking and similar questionable practices connected to the analysis of quantitative data. In addition, the effect of training was measured using the participants’ perception of the amount of training they had received, which may differ from the actual amount. It should also be noted that the regression analysis presented in Tables 5 and 7 is the result of an explorative examination involving a large number of possible connections. This kind of exploratory analysis can be misleading as there is a high risk of false positive findings. We addressed this limitation by considering only general trends rather than singular correlations in the analysis. Lastly, our study is based on self-reported data. As engagement in questionable academic practices is a socially unacceptable behaviour, we would expect the results in section 3.3 to be underreported by the participants due to social desirability bias. There are ways to mitigate this bias [e.g., 52]. 
However, we are not too concerned about this problem, since in this study we are mainly interested in the differences between educational levels and between various types of questionable behaviour. As long as all absolute results are affected by the bias to roughly the same extent, the differences reported in our analysis will only be modestly affected by social desirability bias. The reader should, however, keep in mind that the shares reported in Table 6 are likely underestimates: the questionable practices in question may be more common than reported.
Supporting information
S1 File. Questionnaire overview: Contains an overview of the questionnaire.
https://doi.org/10.1371/journal.pone.0342227.s001
(PDF)
S2 File. Descriptive statistics: Contains the descriptive statistics relevant for this paper.
https://doi.org/10.1371/journal.pone.0342227.s002
(PDF)
S3 File. Results from regression tables: Contains raw outputs tables from the regression analyses.
https://doi.org/10.1371/journal.pone.0342227.s003
(PDF)
S4 File. Marginal effects from regression analysis: Contains marginal effects from the two training variables (‘dedicated training’ and ‘practical training’) where these were statistically significant (p < 0.05).
https://doi.org/10.1371/journal.pone.0342227.s004
(PDF)
S5 File. Inclusivity in global research checklist.
https://doi.org/10.1371/journal.pone.0342227.s005
(PDF)
Acknowledgments
The authors would like to thank all the participants in the study and everyone who helped with the recruitment of participants. We also thank our partners in INTEGRITY for their help and sparring, and Céline Schöpfer, Una Quinn and Marcus Tang Merit who played important roles in data collection and design of the study. Finally, we thank Sarah Layhe for language editing.
References
- 1. Schab F. Schooling without learning: thirty years of cheating in high school. Adolescence. 1991;26(104):839–47. pmid:1789171
- 2. McCabe DL. Academic dishonesty among high school students. Adolescence. 1999;34(136):681–7. pmid:10730693
- 3. McCabe DL. Cheating among college and university students: A North American perspective. IJEI. 2005;1(1).
- 4. McCabe DL, Trevino LK, Butterfield KD. Cheating in Academic Institutions: A Decade of Research. Ethics & Behavior. 2001;11(3):219–32.
- 5. McCabe DL, Treviño LK, Butterfield KD. Cheating in college: Why students do it and what educators can do about it. Baltimore: Johns Hopkins University Press; 2012.
- 6. Galloway MK. Cheating in Advantaged High Schools: Prevalence, Justifications, and Possibilities for Change. Ethics & Behavior. 2012;22(5):378–99.
- 7. Chang C-M, Chen Y-L, Huang Y, Chou C. Why do they become potential cyber-plagiarizers? Exploring the alternative thinking of copy-and-paste youth in Taiwan. Computers & Education. 2015;87:357–67.
- 8. Curtis GJ, Tremayne K. Is plagiarism really on the rise? Results from four 5-yearly surveys. Studies in Higher Education. 2019;46(9):1816–26.
- 9. Hopp C, Speil A. How prevalent is plagiarism among college students? Anonymity preserving evidence from Austrian undergraduates. Account Res. 2021;28(3):133–48. pmid:32744060
- 10. Sureda-Negre J, Comas-Forgas R, Oliver-Trobat MF. Academic Plagiarism among Secondary and High School Students: Differences in Gender and Procrastination. Comunicar: Revista Científica de Comunicación y Educación. 2015;22(44):103–11.
- 11. Pramadi A, Pali M, Hanurawan F, Atmoko A. Academic Cheating in School: A Process of Dissonance Between Knowledge and Conduct. Mediterranean Journal of Social Sciences. 2017;8(6):155–62.
- 12. Šorgo A, Vavdi M, Cigler U, Kralj M. Opportunity Makes the Cheater: High School Students and Academic Dishonesty. CEPSj. 2015;5(4):67–87.
- 13. Roig M. Can undergraduate students determine whether text has been plagiarized?. Psychol Rec. 1997;47(1):113–22.
- 14. Bacha NN, Bahous R, Nabhani M. High schoolers’ views on academic integrity. Research Papers in Education. 2012;27(3):365–81.
- 15. Bertram Gallant T, Van Den Einde L, Ouellette S, Lee S. A systemic analysis of cheating in an undergraduate engineering mechanics course. Sci Eng Ethics. 2014;20(1):277–98. pmid:23494143
- 16. Childers D, Bruton S. “Should It Be Considered Plagiarism?” Student Perceptions of Complex Citation Issues. J Acad Ethics. 2015;14(1):1–17.
- 17. Griebeler M de C. “But everybody’s doing it!”: a model of peer effects on student cheating. Theory Decis. 2018;86(2):259–81.
- 18. Stephens JM. Bridging the Divide: The Role of Motivation and Self-Regulation in Explaining the Judgment-Action Gap Related to Academic Dishonesty. Front Psychol. 2018;9:246. pmid:29545762
- 19. Chu SKW, Hu X, Ng J. Exploring secondary school students’ self-perception and actual understanding of plagiarism. Journal of Librarianship and Information Science. 2019;52(3):806–17.
- 20. Yu H, Glanzer PL, Johnson BR. Examining the relationship between student attitude and academic cheating. Ethics & Behavior. 2020;31(7):475–87.
- 21. Waltzer T, Dahl A. Students’ perceptions and evaluations of plagiarism: Effects of text and context. Journal of Moral Education. 2020;50(4):436–51.
- 22. Waltzer T, Dahl A. Why do students cheat? Perceptions, evaluations, and motivations. Ethics & Behavior. 2022;33(2):130–50.
- 23. Goddiksen MP, Willum Johansen M, Armond AC, Centa M, Clavien C, Gefenas E, et al. Grey zones and good practice: A European survey of academic integrity among undergraduate students. Ethics & Behavior. 2023;34(3):199–217.
- 24. Bretag T, Harper R, Burton M, Ellis C, Newton P, Rozenberg P, et al. Contract cheating: a survey of Australian university students. Studies in Higher Education. 2018;44(11):1837–56.
- 25. Newton PM. How Common Is Commercial Contract Cheating in Higher Education and Is It Increasing? A Systematic Review. Front Educ. 2018;3.
- 26. Julián M, Bonavia T. Understanding unethical behaviors at the university level: a multiple regression analysis. Ethics & Behavior. 2020;31(4):257–69.
- 27. Helgesson G, Holm S, Bredahl L, Hofmann B, Juth N. Misuse of co-authorship in Medical PhD Theses in Scandinavia: A Questionnaire Survey. J Acad Ethics. 2022;21(3):393–406.
- 28. Goddiksen MP, Johansen MW, Armond AC, Clavien C, Hogan L, Kovács N, et al. “The person in power told me to”-European PhD students’ perspectives on guest authorship and good authorship practice. PLoS One. 2023;18(1):e0280018. pmid:36634045
- 29. Mahmud S, Ali I. Evolution of research on honesty and dishonesty in academic work: a bibliometric analysis of two decades. Ethics & Behavior. 2021;33(1):55–69.
- 30. Johansen MW, Christiansen FV. Handling Anomalous Data in the Lab: Students’ Perspectives on Deleting and Discarding. Sci Eng Ethics. 2020;26(2):1107–28. pmid:32166525
- 31. McCabe DL, Trevino LK. Academic Dishonesty: Honor Codes and Other Contextual Influences. The Journal of Higher Education. 1993;64(5):522.
- 32. McCabe DL, Treviño LK, Butterfield KD. Honor Codes and Other Contextual Influences on Academic Integrity: A Replication and Extension to Modified Honor Code Settings. Research in Higher Education. 2002;43(3):357–78.
- 33. O’Rourke J, Barnes J, Deaton A, Fulks K, Ryan K, Rettinger DA. Imitation Is the Sincerest Form of Cheating: The Influence of Direct Knowledge and Attitudes on Academic Dishonesty. Ethics & Behavior. 2010;20(1):47–64.
- 34. Malesky A, Grist C, Poovey K, Dennis N. The Effects of Peer Influence, Honor Codes, and Personality Traits on Cheating Behavior in a University Setting. Ethics & Behavior. 2021;32(1):12–21.
- 35. Cronan TP, McHaney R, Douglas DE, Mullins JK. Changing the Academic Integrity Climate on Campus Using a Technology-Based Intervention. Ethics & Behavior. 2016;27(2):89–105.
- 36. Stephens JM, Watson PWSJ, Alansari M, Lee G, Turnbull SM. Can Online Academic Integrity Instruction Affect University Students’ Perceptions of and Engagement in Academic Dishonesty? Results From a Natural Experiment in New Zealand. Front Psychol. 2021;12:569133. pmid:33679506
- 37. Goddiksen MP, Allard A, Armond ACV, Clavien C, Loor H, Schöpfer C, et al. Integrity games: an online teaching tool on academic integrity for undergraduate students. Int J Educ Integr. 2024;20(1).
- 38. Katsarov J, Andorno R, Krom A, van den Hoven M. Effective Strategies for Research Integrity Training—a Meta-analysis. Educ Psychol Rev. 2021;34(2):935–55.
- 39. Stoesz BM, Yudintseva A. Effectiveness of tutorials for promoting educational integrity: a synthesis paper. Int J Educ Integr. 2018;14(1).
- 40. Johansen MW, Goddiksen MP, Centa M, Clavien C, Gefenas E, Globokar R, et al. Lack of ethics or lack of knowledge? European upper secondary students’ doubts and misconceptions about integrity issues. Int J Educ Integr. 2022;18(1).
- 41. Goddiksen MP, Johansen MW, Armond ACV, Centa M, Clavien C, Gefenas E, et al. The dark side of text-matching software: worries and counterproductive behaviour among European upper secondary school and bachelor students. Int J Educ Integr. 2024;20(1).
- 42. Goddiksen MP, Quinn U, Kovács N, Lund TB, Sandøe P, Varga O, et al. Good friend or good student? An interview study of perceived conflicts between personal and academic integrity among students in three European countries. Account Res. 2021;28(4):247–64. pmid:33003951
- 43. Martinson BC, Anderson MS, de Vries R. Scientists behaving badly. Nature. 2005;435(7043):737–8. pmid:15944677
- 44. Hosmer DW, Lemesbow S. Goodness of fit tests for the multiple logistic regression model. Comm in Stats - Theory & Methods. 1980;9(10):1043–69.
- 45. Fagerland MW, Hosmer DW. Tests for goodness of fit in ordinal logistic regression models. Journal of Statistical Computation and Simulation. 2016;86(17):3398–418.
- 46. Sullivan GM, Feinn R. Using Effect Size-or Why the P Value Is Not Enough. J Grad Med Educ. 2012;4(3):279–82. pmid:23997866
- 47. Hörner W, Döbert H, Reuter LR, Kopp B. The Education Systems of Europe. 2nd ed. Cham: Springer International Publishing. 2015.
- 48. Goddiksen MP, Gjerris M. Teaching phronesis in a research integrity course. FACETS. 2022;7:139–52.
- 49. Kohlberg L. The Philosophy of Moral Development: Moral Stages and the Idea of Justice. Harper & Row. 1981.
- 50. Rest JR, Barnett R. Moral development: Advances in research and theory. Praeger. 1986.
- 51. Simonsohn U, Nelson LD, Simmons JP. P-curve: a key to the file-drawer. J Exp Psychol Gen. 2014;143(2):534–47. pmid:23855496
- 52. Prelec D. A Bayesian truth serum for subjective data. Science. 2004;306(5695):462–6. pmid:15486294