
Does increasing social presence enhance the effectiveness of writing explanations?

  • Leonie Jacob ,

    Roles Conceptualization, Data curation, Formal analysis, Investigation, Methodology, Project administration, Writing – original draft

    l.jacob@iwm-tuebingen.de

    Affiliation Leibniz-Institut für Wissensmedien, Tübingen, Germany

  • Andreas Lachner,

    Roles Conceptualization, Formal analysis, Methodology, Supervision, Writing – review & editing

    Affiliation University of Tübingen, Tübingen, Germany

  • Katharina Scheiter

    Roles Conceptualization, Funding acquisition, Supervision, Writing – review & editing

    Affiliations Leibniz-Institut für Wissensmedien, Tübingen, Germany, University of Tübingen, Tübingen, Germany

Abstract

Writing explanations has been demonstrated to be less effective than providing oral explanations, as writing triggers lower levels of perceived social presence during explaining. In this study, we investigated whether increasing social presence during writing explanations would aid learning. University students (N = 137) read an instructional text about immunology; their subsequent task depended on experimental condition. Students either explained the contents to a fictitious peer orally, wrote their explanations in a text editor, or wrote them in a messenger chat, which was assumed to induce higher levels of social presence. A control group retrieved the material. Surprisingly, we did not obtain any differences in learning outcomes between experimental conditions. Interestingly, explaining was more effortful, enjoyable, and interesting than retrieving. This study shows that solely inducing social presence does not improve learning from writing explanations. More importantly, the findings underscore the importance of cognitive and motivational conditions during learning activities.

Introduction

Generating explanations is regarded as a successful strategy to enhance students’ understanding, as it triggers generative processes associated with deep learning [1–8]. Seminal studies on learning by explaining started to investigate the role of explaining in interactive settings, such as during collaborative learning or tutoring, in which the explainer received feedback from the recipient, for instance, in the form of direct questions [6,8]. Results showed that students who explained learned material engaged in deeper learning processes and showed higher learning outcomes compared to restudying [5–8]. Recent research replicated the beneficial effects of explaining in non-interactive settings, in which no recipient was present; thus, students explained learned material to a fictitious peer, which also resulted in higher learning outcomes [1,9,10]. Moreover, learning by explaining was more often shown to be effective when students were required to generate oral explanations instead of written ones [11–13]. A possible reason for the benefit of oral explaining might be the difference in perceived social presence during explaining. In this context, Jacob, Lachner, and Scheiter [13] provided first evidence that writing explanations induces lower levels of social presence during explaining than providing oral explanations and reduces the quality of the generated explanations [11–13]. Social presence is a central concept in discourse theory and is commonly defined as the extent to which a person feels that a communication partner is present during a mediated conversation, such as in online learning environments [14–18]. In this context, social presence not only holds true for real persons, but also for virtual or even fictitious communication partners [19–21]. Given that writing explanations is a learning activity that can easily be implemented in learning contexts, the question arises whether and how writing explanations can be made more effective. As previous studies highlighted the role of social presence, we investigated whether inducing social presence would increase the effectiveness of writing explanations regarding students’ comprehension. Additionally, as recent studies showed that learning by explaining affects (meta-)cognitive and motivational factors [22,23], we explored whether perceived mental effort and subjective difficulty as central facets of perceived cognitive load [24,25], and students’ monitoring accuracy as a metacognitive factor [3], differed across experimental conditions. Further, we investigated whether students’ enjoyment and interest as crucial facets of students’ valence-related motivational orientations [26] varied among conditions.

Learning by explaining to fictitious peers

Learning by explaining is a generative learning activity which aims at enhancing students’ meaningful learning [27]. In line with generative learning theory, the process of explaining as a generative act may elicit cognitive (e.g., mental effort) and metacognitive (e.g., monitoring) processes, which should contribute to students’ comprehension: First, and in line with Mayer’s SOI model [28], when explaining, students need to select the most relevant information of the provided materials and to organize the information in a coherent way. Then, they need to connect the new contents with their already existing knowledge to integrate them into their long-term memory [27,29,30]. Through this connection, students are able to provide explanations that include further details and information that go beyond the given materials, which results in new knowledge and meaningful learning [27–34]. This process additionally triggers students to monitor whether they understood all relevant contents correctly or whether they need to restudy specific information. As a consequence, students’ metacognitive monitoring may become more accurate when explaining, which has also been observed for other generative learning activities such as keyword generation or gap filling [3,4,35,36]. Learning by explaining is commonly implemented in interactive learning settings in which students explain learned contents to present and interactive peers; this setting allows students to exchange ideas and thoughts, which additionally enhances their understanding [6,7,31,37–47]. Interestingly, recent research started to investigate the effectiveness of explaining to a fictitious peer and reported promising results [1,2,4,12,13,23]. For instance, Fiorella and Mayer [1] conducted an experiment in which university students first studied a text with the expectation to either take a test (test expectancy) or to explain the content to a fictitious peer (explaining expectancy) after a learning phase. Then, they were randomly assigned to the experimental conditions: They either restudied the material (no explaining condition) or explained the contents to a fictitious peer by generating a video (explaining condition). Results demonstrated that explaining expectancy had an effect only on students’ short-term retention (d = 0.55, medium effect), whereas the actual act of explaining learned material resulted in better performance regarding long-term retention (d = 0.56, medium effect; see also [22] for replications).

In another study, Hoogerheide, Visee, Lachner, and van Gog [23] demonstrated that learning by explaining not only resulted in higher learning outcomes but also affected students’ motivation during learning. The authors conducted a study with three conditions: Students either explained learned material to a fictitious peer (explaining condition) or wrote a summary of the contents (summarizing condition). A control group restudied the materials (restudy condition). Results showed that students in the explaining condition outperformed students in the restudy condition. Interestingly, students’ enjoyment mediated the explaining effect, as students enjoyed explaining more than summarizing, which, in turn, yielded better comprehension. Additionally, students who explained the materials reported higher levels of invested mental effort during the learning task compared to restudying, which, however, was not linked to higher learning outcomes [23].

Furthermore, several studies indicated that generating explanations additionally supports students’ monitoring accuracy, which is a crucial metacognitive facet of successful learning [3,4]. For instance, while reorganizing and connecting learned contents to generate an explanation, students might detect unsolved problems, misunderstandings, or missing information that prevent them from understanding the contents deeply [27]. This detection helps students to judge their current understanding more accurately and supports them in restudying information which they did not understand well, so that they can reach their learning aims [48]. In a study by Fukaya [3], for instance, students first read five different texts with the instruction either to explain the contents or to write keywords about the texts; a control group only had the intention to explain the content but did not actually generate an explanation. Results showed a difference among conditions: Students who actually generated an explanation judged their comprehension more accurately than students who only had the intention to explain or who wrote keywords (d = 0.91, large effect). Thus, the actual act of explaining seems to increase students’ monitoring accuracy.

Even though several studies indicated beneficial effects of learning by explaining to fictitious others on students’ comprehension and monitoring skills, little is yet known about the underlying mechanisms of why explaining is effective. Recently, researchers emphasized the role of social presence during explaining [11–13,49], as higher levels of social presence (which may also arise from virtual or even fictitious characters) are linked to central components of learning, such as cognition or motivation [14,18–21,50,51]. On the one hand, the social presence of a fictitious communication partner may prompt students to adapt their knowledge to the audience’s needs [52,53], for instance, by providing further details and elaborations that go beyond the contents of the learning material. Such audience adjustments may result in deeper elaborative processes and contribute to meaningful learning [54]. On the other hand, from a motivational perspective [23], the social presence of a fictitious person may also increase the feeling of relatedness [55]. Higher levels of relatedness may yield higher levels of enjoyment and investment while providing an explanation and, in turn, contribute to comprehension [23,55,56].

Learning by writing explanations

Although prior research documented beneficial effects of explaining compared to retrieving, recent studies demonstrated that generating oral explanations is more beneficial than writing an explanation. In an experiment by Hoogerheide, Deijkers, Loyens, Heijltjes, and van Gog [12], university students first read an instructional text about syllogistic reasoning and then either wrote an explanation to a fictitious peer or restudied the learning material. In contrast to prior research on oral explaining, results indicated that writing an explanation did not result in higher learning outcomes than restudying. Therefore, the authors directly compared the influence of the explanatory modality in a second experiment. Results showed that only students who explained orally (d = 0.43, medium effect), but not in written form (d = 0.19, small effect), outperformed students who restudied the learning material. However, there was no significant difference between oral and written explanations regarding students’ learning outcomes. Additionally, similar to Hoogerheide, Visee, Lachner, and van Gog [23], results indicated that students who generated an explanation in oral (d = 1.96, large effect) or written form (d = 0.93, large effect) invested higher levels of mental effort during the learning task compared to students who restudied the material. Relatedly, Lachner, Ly, and Nückles [11] provided students with a Wikipedia article about combustion engines; after studying it, students were asked to explain the contents to a fictitious peer in either oral or written form. The findings revealed that students who generated an oral explanation reached higher scores in the comprehension posttest compared to students who wrote an explanation (d = 0.67, medium effect), which could be explained by the more elaborated explanations given in the oral condition. The authors attributed the findings to the fact that they used more difficult learning materials than Hoogerheide, Deijkers, Loyens, Heijltjes, and van Gog [12]. Against this background, Jacob, Lachner, and Scheiter [13] conducted a further experiment to resolve the conflicting findings regarding the explanatory modality. Results revealed an interaction effect between learning activity and text difficulty (d = 0.31): The effect of explaining was significant in the high-difficulty condition, but not in the low-difficulty condition. Thus, the explaining effect only held true when students learned from difficult but not from less difficult material. More interestingly, students who explained the contents to a fictitious peer orally outperformed students who wrote an explanation. Again, this effect was only significant when the learning material was difficult, but not when it was less complex. Additionally, perceived social presence and the richness of explanations mediated the modality effect: Students who explained orally perceived a stronger presence of the fictitious peer (measured by the number of personal references) compared to students who wrote an explanation. Higher levels of social presence, in turn, were associated with richer explanations (measured by the number of mentioned concepts), which resulted in better learning outcomes, at least for the difficult text. Apparently, the social presence during explaining accounted for the superiority of oral explaining.
Furthermore, students who explained orally judged their current understanding more precisely than students who wrote an explanation and invested more mental effort than students who retrieved the materials [13,23].

These findings are in line with the literature in applied linguistics, in which it is generally argued that writing is a rather solitary process, since the writer is normally separated from the audience in time and place [57,58]. Due to this lack of social presence during writing, the writer tends to adopt a knowledge-telling perspective and, as such, only retrieves the content without adapting it to a particular audience’s needs [59]. These detrimental effects may be amplified because writing often induces higher levels of cognitive load than speaking, which is a more automated and less demanding process. Overall, the lower levels of audience adjustment may be less conducive to learning [52], as, for instance, students provide fewer examples to elaborate the content.

The present study: Inducing social presence to enhance writing explanations

We conducted an experiment to investigate whether learning by writing explanations could be supported by inducing social presence during explaining. We used validated experimental materials [13,60], and provided university students with a text about immunology during the study phase. Afterwards, they were randomly assigned to one of four conditions: They explained the contents in oral form (oral condition), wrote their explanations in a text editor (standard written condition), or wrote them in a chat messenger program (chat condition, see Fig 1). In this messenger program, students could see a profile picture and a message from the fictitious peer, which we assumed would induce higher levels of social presence [18]. Students in the control condition were asked to retrieve the contents (retrieval condition). We stated the following hypotheses, which were preregistered on AsPredicted.org (https://aspredicted.org/3nu5m.pdf).

Fig 1. Simulated mockup messenger chat for chat condition with induced social presence.

Mockup messenger chat with a profile picture and message from the fictitious student Lisa. Students in the chat condition could send text messages which appeared in the chat afterwards. We received written approval to publish the picture of the corresponding individual who completed the consent form for publication in a PLOS journal.

https://doi.org/10.1371/journal.pone.0250406.g001

In line with previous evidence, we hypothesized that students who generate an explanation (in oral or written form) outperform students who retrieve the material on the comprehension posttests (Hypothesis 1). Additionally, based on previous findings, we hypothesized that students who explain orally outperform students who write an explanation (standard written condition) to a fictitious peer (Hypothesis 2). Furthermore, we hypothesized that students who write an explanation with increased social presence (chat condition) outperform students in the standard written condition (Hypothesis 3). Additionally, we explored potential differences between the oral condition and the chat condition and assumed comparable comprehension outcomes, since the social presence induced in the chat condition was intended to reach a level similar to that of explaining orally.

Based on previous research, we additionally investigated potential differences regarding (meta-)cognitive (i.e., monitoring accuracy, mental effort, and subjective difficulty [3,13]) and motivational factors, such as enjoyment and interest [22,23]. To account for the quality of the generated explanations, we measured three characteristics of the generated explanations (i.e., personal references, concepts, elaborations), which are commonly measured in research on learning by explaining [11–13].

Materials and methods

Participants and design

The current study was approved in written form by the ethics committee of the Leibniz-Institut für Wissensmedien in Tübingen (approval number: LEK2019/009). We recruited university students (N = 137) from study programs that were not related to the study topic (i.e., biology). This sample exceeded the required sample size of 126 participants, as determined by an a priori power analysis. Power was set to .80 and the α-error to .05, and we assumed a medium-to-large effect, as recent studies documented differences of medium to large effect size for explaining [11,12] and for social presence in learning settings [14].
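
Such an analysis can be reproduced with standard power software; the following is a minimal sketch in R using the pwr package. The exact preregistered effect size is not recoverable from the text, so Cohen's f = .30 (between a medium and a large effect) is an assumption, chosen because it approximately reproduces the reported target of 126 participants across four groups.

```r
# A minimal sketch of an a priori power analysis for a four-group design.
# Assumption: Cohen's f = .30 (the exact preregistered value is not stated here).
library(pwr)

pwr.anova.test(k = 4,           # four experimental conditions
               f = 0.30,        # assumed effect size (hypothetical)
               sig.level = .05, # alpha error
               power = .80)     # desired power
# Yields n of roughly 31 to 32 per group, i.e., about 126 participants in total.
```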

The mean age of the students was 23.23 years (SD = 2.65), and 73% of them were female. The students stated either that they were German native speakers (85%) or that they grew up bilingually with German (15%). The students were advanced students, on average in the 5th semester (SD = 3.19) of their current study program, and were mostly enrolled in humanities programs (71%). On average, they had taken biology classes in school for 8 years (SD = 2.13), had received 10 points (on a scale from 0 to 15, corresponding to a B−) on their last report card in biology (SD = 3.25), and showed low to medium prior knowledge in the prior knowledge test (M = 2.15; SD = 1.13; on a scale from 0 to 5).

Students were randomly assigned to one of four experimental conditions (i.e., retrieval condition: n = 34; standard written condition: n = 34; chat condition: n = 35; oral condition: n = 34), which served as the independent variable. The dependent variable was students’ text comprehension, measured by two knowledge subtests comprising text-based questions and inference questions. To investigate potential (meta-)cognitive differences among conditions, we additionally collected data regarding students’ perceived mental effort, subjective difficulty, and monitoring accuracy during the study phase (i.e., reading the learning material) as additional control variables, and during the learning activity (i.e., explaining vs. retrieving) as potential mediators. Moreover, to investigate potential differences regarding students’ motivation during the learning activity, we asked them to rate their enjoyment and interest during the learning activity. As potential underlying processes, we additionally analyzed three characteristics of students’ generated explanations: personal references as an indicator of social presence, the number of elaborations, and the number of concepts. As control variables, we measured students’ prior knowledge, self-efficacy in explaining, their interest in biology at the beginning of the study, and their perceived social presence during explaining.

Study text

We used a validated text from the domain of biology from Golke and Wittwer [60]. The text was about immunology and dealt with immune research based on laboratory mice. The text was previously used in a study on learning by explaining [13]. Overall, the text had a length of 397 words and constituted a relatively complex text regarding common measures of text difficulty [13].

Prior knowledge test as control variable

We used the prior knowledge test from Golke and Wittwer [60], which contained five questions with an open-ended answer format and represented a multidimensional construct, measuring different subcomponents of immunology (e.g., “Why can viruses be dangerous for humans?”; McDonald’s ωt = .51). For each correct answer, students received one point, yielding a maximum score of five points. Two independent raters coded 20% of the tests. As interrater reliability was excellent (ICC(2,1) = .91), one rater coded the remaining answers [61].
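
The reported reliability coefficient, ICC(2,1), corresponds to a two-way random-effects, absolute-agreement, single-rater intraclass correlation. A minimal sketch of how such a check could be run in R with the irr package, on hypothetical rating data:

```r
# Interrater reliability for double-coded open answers (hypothetical scores).
library(irr)

ratings <- data.frame(
  rater1 = c(3, 1, 4, 2, 5, 0),  # rater 1's scores per student (hypothetical)
  rater2 = c(3, 2, 4, 2, 5, 1)   # rater 2's scores per student (hypothetical)
)

# model = "twoway", type = "agreement", unit = "single" corresponds to ICC(2,1)
icc(ratings, model = "twoway", type = "agreement", unit = "single")
```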

Knowledge posttests as dependent variables

We measured students’ text-based knowledge and inference knowledge with two different posttests from Golke and Wittwer [60]. The tests consisted of questions different from those in the prior knowledge test and represented multidimensional constructs.

Text-based questions.

The text-based questions comprised six open-ended questions, which aimed at measuring students’ basic knowledge of the text (e.g., “What is the main concern against research with laboratory mice?”; McDonald’s ωt = .44). For each correct answer, students received one point, resulting in a maximum score of six points. Two independent raters coded 20% of the tests. As interrater reliability was excellent (ICC(2,1) = .95), one rater coded the remaining answers.

Inference questions.

The inference questions measured students’ advanced understanding (e.g., “How can the results in the text help to clarify the basic problems of immune research with mice?”; McDonald’s ωt = .45), as students had to combine different pieces of information across the text and draw conclusions. Again, students received one point per correct answer, yielding a maximum score of six points. Two independent raters coded 20% of the tests. As interrater reliability was excellent (ICC(2,1) = .92), one rater coded the remaining answers.

(Meta-)Cognitive processes during the learning activity

Monitoring accuracy.

To measure students’ monitoring accuracy, we asked them to make prospective judgments about their expected performance on the posttest (i.e., text-based and inference questions combined) by rating the following item: “How confident are you that you can answer questions to the text correctly?” [62,63]. Monitoring accuracy is commonly measured by the correspondence between students’ judgments of their own current understanding and their actual performance on a comprehension test [60,64,65]. Therefore, students rated their comprehension after the study phase and after the learning activity on a scale from 0% (no confidence) to 100% (absolute confidence) [13,66]. We operationalized students’ monitoring accuracy in terms of bias [4,13,60,67–69]. Bias refers to the signed difference between students’ estimated performance and their actual performance (i.e., XJudgment − XPerformance). Hence, this estimate indicates whether students over- or underestimated their own performance: Positive values indicate an overestimation, negative values indicate an underestimation, and a value of zero indicates an accurate judgment.
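
As a concrete illustration, the bias score can be computed directly from the two measures. The following R sketch uses hypothetical vectors; in particular, the rescaling of the 0–100% judgments onto the 12-point posttest metric is an assumption about how the two scores were brought onto a common scale:

```r
# Bias = judged performance minus actual performance (hypothetical data).
judgment    <- c(80, 60, 90, 40) / 100 * 12  # 0-100% judgments mapped onto the
                                             # 12 posttest points (assumption)
performance <- c(7, 6, 8, 6)                 # actual posttest scores (0-12)

bias <- judgment - performance  # > 0: overestimation; < 0: underestimation

# One-sample t-test against zero (an accurate judgment), as in the Results:
t.test(bias, mu = 0)
```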

Cognitive load.

We asked students to rate their perceived mental effort (i.e., “How much effort did you invest in explaining the material?”) as a subjective proxy for perceived cognitive load, after the study phase and after the learning activity, on a 9-point Likert scale from 1 “very little” to 9 “very much” [24].

Additionally, students rated their subjective difficulty (i.e., “How easy was it for you to explain the material?”) as a second subjective proxy for perceived cognitive load, after the study phase and after the learning activity, on a 9-point Likert scale from 1 “very easy” to 9 “very difficult” [25].

Motivation during the learning activity

Enjoyment during the learning activity.

We measured students’ enjoyment during the learning activity by using two self-generated items (e.g., “I enjoyed doing the task”). The students rated their enjoyment on a 4-point Likert scale from 1 “not at all” to 4 “absolutely”. Reliability was excellent (McDonald’s ωt = .96).

Interest in the learning activity.

We measured students’ interest in the learning activity by using two self-generated items. The students rated their interest on a 4-point Likert scale from 1 “not at all” to 4 “absolutely”. Reliability was good (McDonald’s ωt = .81).

Characteristics of explanations

Based on prior research, we analyzed three characteristics of the generated explanations as indicators of underlying processes during explaining: personal references, concepts, and elaborations [11–13].

Personal references.

Based on the discourse literature, we used personal references (i.e., “I”, “you”, etc.) as an indicator of perceived social presence [57,58,70,71], which is also frequently used in the learning-by-explaining literature [11–13]. We automatically counted the number of personal references with RStudio [72].
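
A minimal sketch of such an automatic count in R is shown below. The exact pronoun list is an assumption (the explanations were in German, so first- and second-person forms such as “ich” and “du” would be matched):

```r
# Count first- and second-person pronouns as a proxy for personal references
# (the pronoun list is a hypothetical illustration, not the study's exact set).
library(stringr)

count_personal_references <- function(text) {
  pronouns <- "\\b(ich|mir|mich|du|dir|dich|wir|uns)\\b"
  str_count(str_to_lower(text), pronouns)
}

count_personal_references("Ich erkläre dir das: Du musst nur ...")  # returns 3
```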

Concepts.

As an indicator of the level of comprehensiveness, we counted the number of concepts per explanation [73]. Concepts are the constructs mentioned within an explanation. For instance, the sentence “Laboratory mice grow up in sterile environments and, therefore, the laboratory mice do not have any contact with diseases” contains four concepts. Redundant concepts (e.g., the second mention of “laboratory mice”) were ignored [73]. Two independent raters coded 20% of the explanations. As interrater reliability was excellent (ICC(2,1) = .95), one rater coded the remaining explanations.

Elaborations.

As a third characteristic, we counted the number of elaborations within the explanations. An elaboration was operationalized as an idea unit, such as an example, analogy, or own experience [2,11]. For instance, the sentence “Laboratory mice are too clean because they have been cultivated over generations” is an elaboration, as this information was not mentioned in the study text; the student, therefore, combined information from the text with his or her prior knowledge to generate the example. Two independent raters counted the number of elaborations for 20% of all explanations. As interrater reliability was excellent (ICC(2,1) = .95), one rater coded the remaining explanations.

Additional control measures

Perceived social presence.

We asked students to rate the social presence they felt from the fictitious peer during explaining, using three self-generated items (i.e., “How strongly did you imagine that Lisa was real?”; “How important was it for you that Lisa understands the contents?”; “How strongly did you perceive being in a communicative situation?”; McDonald’s ωt = .84). Based on related approaches to subjective assessments of task characteristics, we used a 9-point Likert scale from 1 “not at all” to 9 “completely” [24,25]. Since students in the retrieval condition were not asked to explain to a fictitious peer, only students in the explaining conditions (i.e., standard written condition, chat condition, oral condition) rated their perceived social presence.

Self-efficacy in explaining.

We assessed students’ self-efficacy in explaining as a further control variable. We used three adapted items (e.g., “I always find ways to explain even difficult contents”; McDonald’s ωt = .74) from Jerusalem and Schwarzer [74]. Students rated their explaining skills on a 4-point Likert scale from 1 “I completely disagree” to 4 “I completely agree”.

Biological interest.

We measured students’ interest in biology as an additional control variable by using three items (e.g., “I am fascinated by biological topics”; McDonald’s ωt = .91), based on Kunter and colleagues [75]. Students rated the items on a 4-point Likert scale from 1 “I completely disagree” to 4 “I completely agree”.

Procedure

The instructor welcomed the students and informed them about the study procedure. After providing written consent, all students were seated individually in front of a laptop in noise-cancelling cubicles so that they could neither see nor hear each other. A maximum of six students could participate in one session. The entire study was self-paced, and all instructions were provided in an online learning environment created by Klemke [76].

First, students rated their self-efficacy in explaining and their interest in biology. Then, they answered the prior knowledge test. Afterwards, they read the study text in a self-paced manner, without knowing that they would have to explain or retrieve the material afterwards. After this study phase, they rated their mental effort and difficulty during reading and provided a monitoring judgment. Then, the students were randomly assigned to one of four experimental conditions (i.e., retrieval condition, standard written condition, chat condition, oral condition). All students had 10 minutes to accomplish the learning activity. During the learning activity, they could take notes on a separate sheet but were no longer able to see the learning materials. Students in the oral and standard written conditions were given the following instruction:

Imagine the following scenario: One person could not participate in the study. However, she is highly interested in the topic of the study. She has not read anything about the topic yet. Therefore, she asks you to explain the central contents of the text. Please provide her a clear and detailed explanation, so that she can understand the content without additional information. You may take notes on a separate sheet.

Students in the oral condition recorded their explanation with a microphone connected to the laptop. Students in the standard written condition wrote their explanation into a text editor box. In contrast, students in the chat condition received a more personalized instruction:

Here you can see a chat with Lisa. Lisa is highly interested in the topic of the study. She has not read anything about the topic yet. Therefore, she asks you to explain the central contents of the text. Please provide her a clear and detailed explanation, so that she can understand the content without additional information. You may take notes on a separate sheet.

Students in the chat condition provided their explanation in a messenger mockup (Fig 1), which was an imitation of WhatsApp Messenger (freeware created by WhatsApp Inc. in 2009 and acquired by Facebook in 2014). We provided different social cues to induce higher levels of social presence (see Weidlich and Bastiaens [18] for a similar approach). First, students could see a profile picture of Lisa. Second, they received a short message from Lisa, who directly asked the students for an explanation (Fig 1). We automatically adapted the time stamp of the received message to increase the synchrony of the communication. Students could send Lisa a message to share their explanation within the chat. Students in the retrieval condition retrieved the material via an open recall task [4,13] with the following instruction:

Please retrieve the information of the text. You may take notes on a separate sheet.

After the learning activity, students again rated their effort and difficulty during explaining or retrieving and provided a monitoring judgment. Additionally, they rated their enjoyment and interest in the learning activity, and students who had generated an explanation further stated their perceived social presence during explaining. Finally, all students answered the posttests. The study took approximately 1 hour and was rewarded with 8 €.

Results

We used partial η² and Cohen’s d as effect size measures, qualifying values of ηp² = .01, .06, and .14 and d = .20, .50, and .80 as small, medium, and large effects, respectively [77]. Additionally, we applied an alpha level of α = .05.

Preliminary analyses

Preliminary analyses showed no significant differences in gender distribution across conditions, χ²(6, N = 137) = 5.50, p = .538. Results of a MANOVA showed no significant differences in age, number of semesters, grade on the last report card in biology, prior knowledge, self-efficacy in explaining, and biological interest across conditions, F(3, 133) = 1.08, p = .370. Additionally, students rated the social presence as comparable across the three explaining conditions, F(2, 100) = 2.23, p = .113.

Learning outcome

The descriptive statistics (see Table 1) suggested that students showed comparable learning outcomes across conditions. Therefore, although our preregistered analysis plan was to conduct two separate contrast analyses for each dependent variable (which would have resulted in four tests in total), we decided to use MANCOVAs instead to reduce the number of statistical tests and to use the most parsimonious statistical approach [78,79]. The text-based and the inference questions were the dependent variables, experimental condition was the independent variable, and prior knowledge was the control variable. Results showed a main effect of prior knowledge, F(1, 132) = 14.59, p < .001, but no differences between conditions, F(3, 132) = 2.03, p = .062 (text-based questions: F(3, 132) = 2.56, p = .058; inference questions: F(3, 132) = 0.69, p = .562). As recommended by Biel and Friedrich [80], to investigate whether the non-significant finding can be attributed to the null hypothesis being true (i.e., no differences among conditions), we additionally computed Bayes factors with the bain package [81]. Values between 0 and 1 can be interpreted as evidence in favor of the alternative hypothesis, values greater than 1 as evidence in favor of the null hypothesis, and a value of 1 represents no preference for either hypothesis. Results showed Bayes factors higher than 1 for both posttests (text-based questions: BF01 = 9.43; inference questions: BF01 = 128.04), suggesting that the null hypothesis can be assumed to be true and that all conditions resulted in comparable learning outcomes.
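
A minimal sketch of this analysis pipeline in R is given below, using a hypothetical data frame d with columns textbased, inference, prior, and condition (all names are assumptions); the equality-constrained hypothesis syntax follows the bain package's documented ANOVA usage:

```r
# MANCOVA over both posttests with prior knowledge as covariate, followed by a
# Bayes factor for the null hypothesis of equal condition means (hypothetical data).
library(bain)

fit_manova <- manova(cbind(textbased, inference) ~ prior + condition, data = d)
summary(fit_manova)

# Fit without intercept so the coefficients are the four group means, then test
# their equality; bain reports the support for this informative hypothesis.
fit_text <- lm(textbased ~ condition - 1, data = d)
bain(fit_text,
     "conditionretrieval = conditionwritten = conditionchat = conditionoral")
```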

Table 1. Summary of means and standard deviations (in parentheses) for all measurements.

https://doi.org/10.1371/journal.pone.0250406.t001

Influence of learning activity on (meta-)cognitive processes

Monitoring accuracy.

In a first step, we conducted one-sample t-tests to investigate whether students significantly over- or underestimated their comprehension after the learning activity (i.e., bias). We contrasted students’ actual judgements of their current understanding relative to their performance with an accurate judgment, which was represented by a value of zero. Results revealed that students in all conditions significantly overestimated their current understanding (retrieval condition: t(33) = 3.43, p = .002, d = .58; written condition: t(33) = 3.74, p < .001, d = .64; chat condition: t(33) = 5.09, p < .001, d = .85; oral condition: t(33) = 2.81, p = .008, d = .50).

In a second step, we analyzed potential differences among conditions. We conducted an ANCOVA with experimental condition as independent variable, students’ judgments of their current understanding after the learning activity as dependent variable, and students’ judgments of their understanding after the study phase as covariate, to control for potential intra-individual differences [4,82]. Results showed a main effect of students’ judgments after the study phase, F(1, 132) = 136.47, p < .001, but not of experimental condition, F(3, 132) = 2.41, p = .070. Again, we conducted a Bayesian analysis. Results showed a Bayes factor of BF01 = 51.68, indicating that all conditions resulted in comparable (biased) judgments.

Cognitive load.

First, to analyze whether the experimental conditions differed regarding students’ mental effort during the learning activity, we performed an ANOVA with mental effort as dependent variable and experimental condition as independent variable. Results showed a significant difference among conditions, F(3, 133) = 5.63, p = .001. Post-hoc comparisons (Scheffé) revealed that students who wrote an explanation invested more mental effort than students who retrieved the material (standard written condition: p = .006; chat condition: p = .008). The remaining group comparisons were not significant. See Table 2 for correlations with students’ learning outcomes.
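
The following R sketch illustrates this omnibus test and the Scheffé post-hoc comparisons, again on a hypothetical data frame d with columns effort and condition:

```r
# One-way ANOVA on mental effort with Scheffe-adjusted pairwise comparisons
# (hypothetical data frame `d`).
library(DescTools)

fit <- aov(effort ~ condition, data = d)
summary(fit)       # omnibus F-test across the four conditions

ScheffeTest(fit)   # all pairwise comparisons with Scheffe adjustment
```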

Table 2. Correlations with confidence intervals regarding all variables.

https://doi.org/10.1371/journal.pone.0250406.t002

Second, to investigate potential differences among conditions regarding students’ subjective difficulty, we conducted an ANOVA with subjective difficulty as dependent variable and experimental condition as independent variable. Results indicated no differences among conditions, F(3, 133) = 1.29, p = .282. Bayesian analyses showed a Bayes factor of BF01 = 44.79, indicating that there were no differences between conditions regarding students’ subjective difficulty.

Motivation during the learning activity

Enjoyment.

We explored differences regarding students’ enjoyment during the learning activity by conducting an ANOVA with experimental condition as independent variable. Results revealed significant differences among conditions, F(3, 133) = 7.91, p < .001. Post-hoc comparisons showed that students in all explaining conditions (i.e., standard written condition: p = .012; chat condition: p < .001; oral condition: p = .012) enjoyed the learning activity more than students in the retrieval condition.

Interest.

Similarly, we investigated potential differences among conditions regarding the interestingness of the learning activity. Results of an ANOVA with experimental condition as independent variable showed a significant effect of condition, F(3, 133) = 13.11, p < .001. Post-hoc comparisons revealed that students rated all three explaining conditions (i.e., standard written condition: p = .006; chat condition: p < .001; oral condition: p < .001) as more interesting than the retrieval condition.

Characteristics of the explanations

As expected, the generated explanations differed among the explaining conditions: personal references, F(2, 100) = 12.19, p < .001; concepts, F(2, 100) = 6.38, p = .002; and elaborations, F(2, 100) = 10.35, p = .034. Post-hoc comparisons (Scheffé) showed that students mentioned more personal references in the chat condition compared to the standard written condition, p < .001, and the oral condition, p < .001, which can be regarded as an indicator that the chat condition indeed induced higher levels of social presence. Additionally, students who explained orally mentioned more concepts, p = .003, and provided more elaborations, p = .048, than students in the chat condition. None of the other comparisons were significant.

Discussion

The aim of the current study was to investigate whether increasing social presence during writing explanations aids learning. Contrary to our preregistered hypotheses, we did not obtain a significant effect of induced social presence during the learning activity compared to a less social learning environment. Apparently, merely raising social presence did not contribute to learning. Additionally, we did not find significant differences between the explaining and the retrieval conditions. Thus, we did not replicate prior findings on the effectiveness of explaining [1,9,22,23,83]. Relatedly, we were also not able to replicate the beneficial effects of oral explaining compared to writing explanations [11–13]. The null findings regarding students’ comprehension might be attributed to low levels of prior knowledge, as little prior knowledge limits students in learning new contents adequately [30,84]. However, prior research revealed that generative learning activities, such as explaining, are particularly beneficial for low prior knowledge students [9]. Additionally, students showed comparable monitoring accuracy ratings across conditions. Interestingly, all students overestimated their current understanding. This, however, is in line with prior research that highlighted that students generally tend to overestimate their current comprehension [85–87]. Nevertheless, we need to reject our hypotheses regarding effects on students’ comprehension and their monitoring accuracy. We want to note, however, that the obtained null findings are in line with a growing body of empirical research that was also not able to document an effect of explaining [3,4,8,88].

In this context, our study is of particular interest, as we used previously tested materials and knowledge tests in the context of explaining [13] and had sufficient test power to test our hypotheses. As an additional safeguard, we also computed Bayes factors, which similarly suggested that our findings are in favor of the null hypothesis. Thus, we are confident that this study represents reliable evidence against the explaining and modality hypotheses, at least in the current study context.

Furthermore, the analyses of the characteristics of the explanations demonstrated that the quality of the generated explanations was rather low, suggesting that the students were less capable of generating effective explanations. This finding may explain why we did not find any effects on learning. Our explorative analyses regarding the (meta-)cognitive and motivational conditions of explaining revealed distinct differences between explaining and retrieval, as students perceived more effort and reported higher motivation (i.e., enjoyment, interest) during the learning activity, which is in line with previous research [13,23]. Cognitive conditions, such as mental effort, are strongly linked to students’ learning [89] and should therefore be investigated in more detail in combination with additional learning activities, such as learning by explaining, in future research. Moreover, learning enjoyment and, relatedly, interest are regarded as important value-related facets of intrinsic motivation [55], and intrinsic motivation is particularly key to whether and how students persist in using explaining as a learning strategy beyond the experimental context [90]. To explore the differences regarding students’ enjoyment and mental effort in more detail, we additionally conducted explorative mediation analyses by applying a bootstrapping approach with 1,000 simulations based on Hayes [91]. Results revealed that enjoyment, but not mental effort, mediated the effect of explaining on students’ text-based comprehension, as the indirect effect was significant (ACME = 0.24, 95% CI [0.03, 0.50], p = .024). These findings are in line with the results of Hoogerheide and colleagues, who demonstrated that explaining led to higher levels of enjoyment, which, in turn, enhanced students’ learning [23].
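
Such a bootstrapped mediation analysis can be sketched in R with the mediation package, which reports the indirect effect as the ACME, the term used above. All variable names in the following sketch (d, explaining, enjoyment, textbased) are assumptions:

```r
# Explorative mediation: does enjoyment carry the effect of explaining on
# text-based comprehension? (hypothetical data; explaining coded
# 0 = retrieval, 1 = explaining)
library(mediation)

m_model <- lm(enjoyment ~ explaining, data = d)              # mediator model
y_model <- lm(textbased ~ explaining + enjoyment, data = d)  # outcome model

med <- mediate(m_model, y_model,
               treat = "explaining", mediator = "enjoyment",
               boot = TRUE, sims = 1000)  # bootstrapping, 1,000 simulations
summary(med)  # ACME = average causal mediation (indirect) effect
```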

Nevertheless, against the background of the relatively low quality of students’ explanations, as a theoretical consequence of our study, we suggest that future research should focus on how students could be supported to generate high-quality explanations. Given that students are generally not familiar with explaining challenging contents [92], they might depend on further support to be able to apply this learning activity successfully. As a first attempt, Lachner and Neuburg [93] investigated the role of formative feedback during explaining (in written form). They found that formative feedback helped students generate more cohesive explanations, which finally contributed to their comprehension. Additionally, the self-explaining literature focusses on two main instructional approaches [94], supporting self-explaining directly by means of pre-trainings [41] or indirectly by the use of prompts [39,95], which should work as strategy activators to enact deep-level explaining strategies. Whether and under which conditions such direct and indirect support procedures also hold true for generating explanations to fictitious others has to be investigated by further research.

Study limitations and future research

With our study, we provide the first empirical approach to systematically investigate the influence of perceived social presence on the effectiveness of explaining to fictitious others. In this context, however, we would like to point out some limitations of our study. First, an unexpected finding was that our intervention affected the number of personal references, but not the social presence ratings. This finding might seem contradictory at first glance. However, we want to note that such contradictory findings frequently occur in the context of inducing discourse situations [96,97]. As such, these findings are often interpreted as suggesting that inducing a discourse situation may not affect subsequent communication processes directly, but rather subconsciously. This may explain the discrepancy between the number of personal references and the explicit presence ratings.

Another surprising finding was that students in the explaining conditions did not outperform students in the retrieval practice condition. This not only contradicts our assumptions but also prior studies that showed beneficial effects of explaining in contrast to restudying or retrieving the learned material [1,2,4,12,13,23]. In this context, we would like to point out that we used the same learning materials as in a previous study by Jacob, Lachner, and Scheiter, who obtained effects of explaining with this difficult learning material [13]. In contrast to Jacob and colleagues, students in our study showed slightly lower learning outcomes. Additionally, our results indicated a higher correlation between prior knowledge and students’ learning outcomes, suggesting that the provided learning material was too difficult, which may explain the null findings. Seemingly, students have problems applying this learning strategy successfully when the learning material is overly difficult [30], which highlights the need for additional support during learning, such as pre-trainings or prompts [39,43,95].

Another caveat refers to the measurement of the generated explanations. Based on prior research, we measured the number of personal references, concepts, and elaborations in students’ generated explanations [11–13,58,70,71,98]. We used these characteristics as a coarse proxy for the underlying processes during the learning activity [2,11,13]. Further research is needed to explore the generated explanations in more detail, which could provide deeper insight into the explaining effect and could disentangle which detrimental effects occurred during explaining. Further research should, therefore, implement additional online measures, for instance, think-aloud protocols or log-file data [99].

Another limitation refers to the relatively low homogeneity indices. However, we want to note that we measured a broad understanding of the topic by asking different and independent questions with a restricted set of 5 to 6 items per test. This decision likely resulted in a trade-off regarding the internal consistency of the knowledge measures, as they likely covered diverse sub-components of students’ comprehension [100]. Further research with advanced statistical procedures is needed which explicitly takes the multidimensionality into account, for instance, by using bifactor-(S-1) measurement models which, however, require larger sample sizes [101].

Finally, we would like to point out restrictions regarding the measurement of enjoyment, as we only measured enjoyment quantitatively by means of two items. Our results revealed, in line with the study by Hoogerheide and colleagues [23], that explaining resulted in higher levels of enjoyment compared to retrieving the materials, which is in part indicative of the prognostic validity of our instrument. However, as we only used a quantitative measure, we are not able to conclude whether and how enjoyment affected learning outcomes in a qualitative manner. Therefore, further research is needed that investigates the impact of enjoyment on the explaining effect in more detail by implementing qualitative methods, such as interviews [102,103].

Conclusion

The findings of our study show that explaining does not necessarily foster learning, even though it increases students’ investment of mental effort and their motivation. Furthermore, our findings indicate that simply inducing social presence does not increase the effectiveness of explaining. It is, therefore, up to further research to find ways to trigger deep-learning processes during explaining. In this way, explaining can contribute to students’ mental effort and motivation, which are also linked to higher learning outcomes.

Acknowledgments

We would like to thank Eleonora Dolderer, Lisa Holzschuh, Gamze Coecen, and Louisa Döderlein for transcribing and rating the qualitative data. Leonie Jacob is a doctoral student at the LEAD Graduate School & Research Network [GSC1028], which was funded within the framework of the Excellence Initiative of the German federal and state governments.

References

  1. 1. Fiorella L, Mayer RE. Role of expectations and explanations in learning by teaching. Contemporary Educational Psychology 2014; 39(2):75–85.
  2. 2. Fiorella L, Kuhlmann S. Creating drawings enhances learning by teaching. Journal of Educational Psychology 2020; 112(4):811–22.
  3. 3. Fukaya T. Explanation generation, not explanation expectancy, improves metacomprehension accuracy. Metacognition Learning 2013; 8(1):1–18.
  4. 4. Lachner A, Backfisch I, Hoogerheide V, van Gog T, Renkl A. Timing matters! Explaining between study phases enhances students’ learning. Journal of Educational Psychology 2020; 112(4):841–53.
  5. 5. Palinscar AS, Brown AL. Reciprocal teaching of comprehension-fostering and comprehension-monitoring activities. Cognition and Instruction 1984; 1(2):117–75.
  6. 6. Plötzner R, Dillenbourg P, Praier M, Traum D. Learning by explaining to oneself and to others. In: Dillenbourg P, editor. Collaborative-learning: Cognitive and Computational Approaches. Oxford: Elsevier; 1999. p. 103–21.
  7. 7. Roscoe RD. Self-monitoring and knowledge-building in learning by teaching. Instructional Science 2014; 42(3):327–51.
  8. 8. Roscoe RD, Chi MTH. Tutor learning: The role of explaining and responding to questions. Instructional Science 2008; 36(4):321–50.
  9. 9. Hoogerheide V, Renkl A, Fiorella L, Paas F, van Gog T. Enhancing example-based learning: Teaching on video increases arousal and improves problem-solving performance. Journal of Educational Psychology 2019; 111(1):45–56.
  10. 10. Kobayashi K. Learning by preparing-to-teach and teaching: A meta-analysis. Japanese Psychological Research 2019; 61(3):1–12.
  11. 11. Lachner A, Ly K-T, Nückles M. Providing written or oral explanations? Differential effects of the modality of explaining on students’ conceptual learning and transfer. Journal of Experimental Education 2018; 86(3):344–61.
  12. 12. Hoogerheide V, Deijkers L, Loyens SM, Heijltjes A, van Gog T. Gaining from explaining: Learning improves from explaining to fictitious others on video, not from writing to them. Contemporary Educational Psychology 2016; 44–45:95–106.
  13. 13. Jacob L, Lachner A, Scheiter K. Learning by explaining orally or in written form? Text complexity matters. Learning and Instruction 2020; 68:101344.
  14. 14. Weidlich J, Bastiaens TJ. Explaining social presence and the quality of online learning with the SIPS model. Computers in Human Behavior 2017; 72:479–87.
  15. 15. Oh CS, Bailenson JN, Welch GF. A systematic review of social presence: Definition, antecedents, and implications. Frontiers in Robotics and AI 2018; 5:114. pmid:33500993
  16. 16. Gunawardena CN, Zittle FJ. Social presence as a predictor of satisfaction within a computer-mediated conferencing environment. American Journal of Distance Education 1997; 11(3):8–26.
  17. 17. Short J, Williams E, Christie B. The social psychology of telecommunications. London: Wiley; 1976.
  18. 18. Weidlich J, Bastiaens TJ. Designing sociable online learning environments and enhancing social presence: An affordance enrichment approach. Computers & Education 2019; 142:103622.
  19. 19. Kim Y. Digital peers to help children’s text comprehension and perceptions. Educational Technology & Society 2013; 16(4):59–70.
  20. 20. Wang J, Antonenko PD. Instructor presence in instructional video: Effects on visual attention, recall, and perceived learning. Computers in Human Behavior 2017; 71:79–89.
  21. 21. Atkinson RK. Optimizing learning from examples using animated pedagogical agents. Journal of Educational Psychology 2002; 94(2):416–27.
  22. 22. Hoogerheide V, Loyens SM, van Gog T. Effects of creating video-based modeling examples on learning and transfer. Learning and Instruction 2014; 33:108–19.
  23. 23. Hoogerheide V, Visee J, Lachner A, van Gog T. Generating an instructional video as homework activity is both effective and enjoyable. Learning and Instruction 2019; 64:101226.
  24. 24. Paas F. Training strategies for attaining transfer of problem-solving skill in statistics: A cognitive-load approach. Journal of Educational Psychology 1992; 84(4):429–34.
  25. 25. DeLeeuw KE, Mayer RE. A comparison of three measures of cognitive load: Evidence for separable measures of intrinsic, extraneous, and germane load. Journal of Educational Psychology 2008; 100(1):223–34.
  26. 26. Eccles JS, Wigfield A. Motivational beliefs, values, and goals. Annual Review of Psychology 2002; 53:109–32. pmid:11752481
  27. 27. Fiorella L, Mayer RE. Eight ways to promote generative learning. Educational Psychology Review 2016; 28(4):717–41.
  28. 28. Mayer RE. Learning strategies for making sense out of expository text: The SOI model for guiding three cognitive processes in knowledge construction. Educational Psychology Review 1996; 8(4):357–71.
  29. 29. Mayer RE. Multimedia learning. Cambridge: Cambridge University Press; 2009.
  30. 30. Brod G. Generative learning: Which strategies for what age? Educational Psychology Review 2020.
  31. 31. Chi MTH. Active-constructive-interactive: A conceptual framework for differentiating learning activities. Topics in Cognitive Science 2009; 1(1):73–105. pmid:25164801
  32. 32. Chi MTH, Wylie R. The ICAP framework: Linking cognitive engagement to active learning outcomes. Educational Psychologist 2014; 49(4):219–43.
  33. 33. Wittrock MC. Learning as a generative process. Educational Psychologist 2010; 45(1):40–5.
  34. 34. Watson GR. What is. concept mapping? Medical Teacher 1989; 11(3–4):265–9.
  35. 35. de Bruin ABH, Dunlosky J, Cavalcanti RB. Monitoring and regulation of learning in medical education: The need for predictive cues. Medical Education 2017; 51(6):575–84. pmid:28332224
  36. 36. van Loon MH, de Bruin ABH, van Gog T, van Merriënboer JJG, Dunlosky J. Can students evaluate their understanding of cause-and-effect relations? The effects of diagram completion on monitoring accuracy. Acta Psychologica 2014; 151:143–54. pmid:24977937
  37. 37. Chi MTH, Bassok M, Lewis MW, Reimann P, Glaser R. Self-explanations: How students study and use examples in learning to solve problems. Cognitive Science 1989; 13:145–82.
38. Chi MTH, VanLehn KA. The content of physics self-explanations. The Journal of the Learning Sciences 1991; 1(1):69–105.
39. Chi MTH, de Leeuw N, Chiu M-H, LaVancher C. Eliciting self-explanations improves understanding. Cognitive Science 1994; 18(3):439–77.
40. Dillenbourg P, editor. Collaborative learning: Cognitive and computational approaches. Oxford: Elsevier; 1999.
41. McNamara DS. Self-explanation and reading strategy training (SERT) improves low-knowledge students’ science course performance. Discourse Processes 2017; 54(7):479–92.
42. Duran D. Learning-by-teaching: Evidence and implications as a pedagogical mechanism. Innovations in Education and Teaching International 2016; 54(5):476–84.
43. McNamara DS. SERT: Self-explanation reading training. Discourse Processes 2004; 38(1):1–30.
44. McNamara DS, Scott JL. Training self-explanation and reading strategies. Proceedings of the Human Factors and Ergonomics Society 43rd Annual Meeting 1999:1156–60.
45. Topping KJ. Trends in peer learning. Educational Psychology 2005; 25(6):631–45.
46. Webb NM, Franke ML, De T, Chan AG, Freund D, Shein P et al. ‘Explain to your partner’: Teachers’ instructional practices and students’ dialogue in small groups. Cambridge Journal of Education 2009; 39(1):49–70.
47. Webb NM, Troper JD, Fall R. Constructive activity and learning in collaborative small groups. Journal of Educational Psychology 1995; 87(3):406–23.
48. Thiede KW, Anderson MCM, Therriault D. Accuracy of metacognitive monitoring affects learning of texts. Journal of Educational Psychology 2003; 95(1):66–73.
49. Lachner A, Jacob L, Hoogerheide V. Learning by writing explanations: Is explaining to a fictitious student more effective than self-explaining? Learning and Instruction 2021.
50. Richardson JC, Maeda Y, Lv J, Caskurlu S. Social presence in relation to students’ satisfaction and learning in the online environment: A meta-analysis. Computers in Human Behavior 2017; 71:402–17.
51. Russo T, Benson S. Learning with invisible others: Perceptions of online presence and their relationship to cognitive and affective learning. International Forum of Educational Technology & Society 2005; 8(1):54–62.
52. Clark HH, Brennan SE. Grounding in communication. In: Resnick LB, Levine JM, Teasley SD, editors. Perspectives on socially shared cognition. American Psychological Association; 1991. p. 127–49.
53. Nickerson RS. How we know—and sometimes misjudge—what others know: Imputing one’s own knowledge to others. Psychological Bulletin 1999; 125(6):737–59.
54. Wittwer J, Nückles M, Landmann N, Renkl A. Can tutors be supported in giving effective explanations? Journal of Educational Psychology 2010; 102(1):74–89.
55. Deci EL, Ryan RM. The "what" and "why" of goal pursuits: Human needs and the self-determination of behavior. Psychological Inquiry 2000; 11(4):227–68.
56. Lachner A, Hoogerheide V, van Gog T, Renkl A. When and why does learning by non-interactive teaching work? In preparation.
57. Sindoni MG. Spoken and written discourse in online interactions: A multimodal approach. New York, NY: Routledge; 2013.
58. Chafe W. Integration and involvement in speaking, writing, and oral literature. In: Tannen D, editor. Spoken and written language: Exploring orality and literacy. Norwood, NJ: Ablex; 1982. p. 35–54.
59. Scardamalia M, Bereiter C. Knowledge telling and knowledge transforming in written composition. In: Rosenberg S, editor. Advances in applied psycholinguistics: Reading, writing, and language learning. Cambridge: Cambridge University Press; 1987. p. 142–75 (Cambridge monographs and texts in applied psycholinguistics).
60. Golke S, Wittwer J. High-performing readers underestimate their text comprehension: Artifact or psychological reality? Proceedings of the 39th Annual Conference of the Cognitive Science Society 2018.
61. Koo TK, Li MY. A guideline of selecting and reporting intraclass correlation coefficients for reliability research. Journal of Chiropractic Medicine 2016; 15(2):155–63. pmid:27330520
62. Baars M, van Gog T, de Bruin A, Paas F. Effects of problem solving after worked example study on secondary school children’s monitoring accuracy. Educational Psychology 2017; 37(7):810–34.
63. Prinz A, Golke S, Wittwer J. The double curse of misconceptions: Misconceptions impair not only text comprehension but also metacomprehension in the domain of statistics. Instructional Science 2018; 46(5):723–65.
64. Maki RH, McGuire MJ. Metacognition for text: Findings and implications for education. In: Perfect TJ, Schwartz BL, editors. Applied metacognition. Cambridge University Press; 2009. p. 39–67.
65. Wiley J, Griffin TD, Thiede KW. Putting the comprehension in metacomprehension. The Journal of General Psychology 2005; 132(4):408–28.
66. Schleinschok K, Eitel A, Scheiter K. Do drawing tasks improve monitoring and control during learning from text? Learning and Instruction 2017; 51:10–25.
67. Griffin TD, Jee BD, Wiley J. The effects of domain knowledge on metacomprehension accuracy. Memory & Cognition 2009; 37(7):1001–13. pmid:19744939
68. Schraw G. A conceptual analysis of five measures of metacognitive monitoring. Metacognition and Learning 2009; 4(1):33–45.
69. Wiley J, Griffin TD, Jaeger AJ, Jarosz AF, Cushen PJ, Thiede KW. Improving metacomprehension accuracy in an undergraduate course context. Journal of Experimental Psychology: Applied 2016; 22(4):393–405. pmid:27936853
70. Chafe W, Tannen D. The relation between written and spoken language. Annual Review of Anthropology 1987; 16(1):383–407.
71. Akinnaso FN. On the similarities between spoken and written language. Language and Speech 1985; 28:323–59.
72. R Core Team. R: A language and environment for statistical computing. R Foundation for Statistical Computing, Vienna, Austria. URL https://www.R-project.org/; 2018.
73. Boshuizen HPA, Schmidt HG. On the role of biomedical knowledge in clinical reasoning by experts, intermediates and novices. Cognitive Science 1992; 16(2):153–84.
74. Jerusalem M, Schwarzer R. Allgemeine Selbstwirksamkeitserwartung. In: Schwarzer R, Jerusalem M, editors. Skalen zur Erfassung von Lehrer- und Schülermerkmalen: Dokumentation der psychometrischen Verfahren im Rahmen der Wissenschaftlichen Begleitung des Modellversuchs Selbstwirksame Schulen. Berlin; 1999. p. 13–4.
75. Kunter M, Baumert J, Blum W, Klusmann U, Krauss S, Neubrand M, editors. Cognitive activation in the mathematics classroom and professional competence of teachers: Results from the COACTIV project. New York: Springer; 2013. (Mathematics teacher education; vol 8).
76. Klemke A. IWM-Study 2.0: A generic learning environment for online studies. Leibniz-Institut für Wissensmedien 2017.
77. Cohen J. Statistical power analysis for the behavioral sciences. Rev. ed. Lawrence Erlbaum Associates, Inc.; 1977.
78. Ranganathan P, Pramesh CS, Buyse M. Common pitfalls in statistical analysis: The perils of multiple testing. Perspectives in Clinical Research 2016; 7(2):106–7. pmid:27141478
79. Streiner DL. Best (but oft-forgotten) practices: The multiple problems of multiplicity-whether and how to correct for many statistical tests. The American Journal of Clinical Nutrition 2015; 102(4):721–8. pmid:26245806
80. Biel AL, Friedrich EVC. Why you should report Bayes factors in your transcranial brain stimulation studies. Frontiers in Psychology 2018; 9:1125. pmid:30013501
81. Hoijtink H, Mulder J, van Lissa C, Gu X. A tutorial on testing hypotheses using the Bayes factor. Psychological Methods 2019; 24(5):539–56. pmid:30742472
82. Hertzog C, Hines JC, Touron DR. Judgments of learning are influenced by multiple cues in addition to memory for past test accuracy. Archives of Scientific Psychology 2013; 1(1):23–32. pmid:25914865
83. Fiorella L, Mayer RE. The relative benefits of learning by teaching and teaching expectancy. Contemporary Educational Psychology 2013; 38(4):281–8.
84. Renkl A. Lehren und Lernen. In: Tippelt R, Schmidt-Hertha B, editors. Handbuch Bildungsforschung. Wiesbaden: Verlag für Sozialwissenschaften; 2009. p. 737–51.
85. Eitel A. How repeated studying and testing affects multimedia learning: Evidence for adaptation to task demands. Learning and Instruction 2016; 41:70–84.
86. Mayer RE, Stull AT, Campbell J, Almeroth K, Bimber B, Chun D et al. Overestimation bias in self-reported SAT scores. Educational Psychology Review 2007; 19(4):443–54.
87. Prinz A, Golke S, Wittwer J. To what extent do situation-model-approach interventions improve relative metacomprehension accuracy? Meta-analytic insights. Educational Psychology Review 2020; 32(4):917–49.
88. Rhoads JA, Daou M, Lohse KR, Miller MW. The effects of expecting to teach and actually teaching on motor learning. Journal of Motor Learning and Development 2019; 7(1):84–105.
89. van Gog T, Hoogerheide V, van Harsel M. The role of mental effort in fostering self-regulated learning with problem-solving tasks. Educational Psychology Review 2020.
90. Yi MY, Hwang Y. Predicting the use of web-based information systems: Self-efficacy, enjoyment, learning goal orientation, and the technology acceptance model. International Journal of Human-Computer Studies 2003; 59(4):431–49.
91. Hayes AF. Introduction to mediation, moderation, and conditional process analysis: A regression-based approach. New York, NY: Guilford Press; 2013.
92. Rozenblit L, Keil F. The misunderstood limits of folk science: An illusion of explanatory depth. Cognitive Science 2002; 26(5):521–62. pmid:21442007
93. Lachner A, Neuburg C. Learning by writing explanations: Computer-based feedback about the explanatory cohesion enhances students’ transfer. Instructional Science 2019; 47(1):19–37.
94. Fonseca BA, Chi MTH. Instruction based on self-explanation. In: Mayer RE, Alexander PA, editors. Handbook of research on learning and instruction. New York: Routledge; 2011 (Educational psychology handbook series).
95. Schworm S, Renkl A. Learning argumentation skills through the use of prompts for self-explaining examples. Journal of Educational Psychology 2007; 99(2):285–96.
96. Weinhuber M, Lachner A, Leuders T, Nückles M. Mathematics is practice or argumentation: Mindset priming impacts principle- and procedure-orientation of teachers’ explanations. Journal of Experimental Psychology: Applied 2019; 25(4):618–46. pmid:31070391
97. Savary J, Kleiman T, Hassin RR, Dhar R. Positive consequences of conflict on decision making: When a conflict mindset facilitates choice. Journal of Experimental Psychology: General 2015; 144(1):1–6. pmid:25494549
98. Einhorn L. Oral and written style: An examination of differences. Southern Speech Communication Journal 1978; 43(3):302–11.
99. Ericsson KA, Simon HA. Protocol analysis: Verbal reports as data. Rev. ed., 3rd printing. Cambridge, MA: MIT Press; 1999. (A Bradford book).
100. Dunn TJ, Baguley T, Brunsden V. From alpha to omega: A practical solution to the pervasive problem of internal consistency estimation. British Journal of Psychology 2014; 105(3):399–412. pmid:24844115
101. Eid M, Geiser C, Koch T, Heene M. Anomalous results in G-factor models: Explanations and alternatives. Psychological Methods 2017; 22(3):541–62. pmid:27732052
102. Wienke B, Jekauc D. A qualitative analysis of emotional facilitators in exercise. Frontiers in Psychology 2016; 7:1296. pmid:27621718
103. Csikszentmihalyi M. Beyond boredom and anxiety: Experiencing flow in work and play. San Francisco: Jossey-Bass Publishers; 2000.