Abstract
The vast majority of research on student learning is based on assessments of student knowledge given during or at the end of an academic term. Until now, we have known very little about what knowledge students retain after a course is over or what determines how much they retain. In this paper, we analyze data collected from students who took one of six courses in introductory or intermediate microeconomics. All students in these courses took a low-stakes standard assessment of their learning at the end of the term. At follow-ups, one to 2.5 years later, these students were surveyed about their academic and job-related activities, and given the same assessment they took at the end of the course. We find that some demographic characteristics and prior preparation for the course are strong predictors of how much students retain while initial attitudes toward economics are not. We also find evidence that for some students, application of economic skills in subsequent jobs and courses helps students retain course skills.
Citation: McKee D, Orlov G (2025) As time goes by: Long-term retention of economics skills. PLoS One 20(12): e0333305. https://doi.org/10.1371/journal.pone.0333305
Editor: Ann L. Owen, Hamilton College, UNITED STATES OF AMERICA
Received: February 5, 2025; Accepted: September 11, 2025; Published: December 12, 2025
Copyright: © 2025 McKee, Orlov. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Data Availability: All data used in this study are available at: https://doi.org/10.5281/zenodo.17517274.
Funding: This paper is based upon work supported by the National Science Foundation under Grant 2021094, and any opinions, findings, and conclusions or recommendations expressed in this paper are those of the authors and do not necessarily reflect the views of the National Science Foundation.
Competing interests: The authors have declared that no competing interests exist.
Introduction
As exemplified by the meta-analysis of [1], there is ample evidence supporting the influence of pedagogical methods on student achievement. However, the overwhelming majority of research on student learning focuses on the immediate gains rather than the long-term effects on student outcomes. While there is a sizable literature analyzing students’ retention (i.e., continuation of study) in a subject or major (e.g., [2]), very few studies have looked at how well students retain the knowledge and skills that they have been taught as time passes. Our project’s goal is to fill this gap in the literature.
Although psychology research on short-term memory considers retention intervals of seconds, and research on long-term memory focuses on retention measured in minutes to a few weeks, educators hope their students will remember what they have learned for months or years. In this paper, we refer to this very long-term memory as learning retention. The literature on learning retention is extremely limited, because it requires longitudinal data on students, which in the context of higher education can be difficult and expensive to collect. It is also important for samples to be large enough for statistical inference and representative of the student population of interest.
[3] used a sample of 92 students who were given a set of questions originally asked on their consumer behavior course final exam as an extra credit test in a marketing course (usually taken after consumer behavior). The data were collected over three years. For this sample, retention intervals ranged from 8 to 101 weeks (with 12 unique intervals). The authors argued that students performed better on items that tested recognition of concepts in a scenario or identification of scenarios associated with a concept when compared to items that required recall of definitions. Furthermore, concepts on which the students were tested more than once had better retention rates.
[4] measured the changes in the Force Concept Inventory (FCI) to assess student learning retention in physics courses. They observed little decline in the FCI scores over several years; however, the study did not demonstrate whether the sample of 127 students who responded for the retention study was representative of the target student population.
[5] examined the longitudinal effect of pedagogical transformations in introductory physics courses at the University of Colorado using the Brief Electricity & Magnetism Assessment (BEMA). While learning retention is not the focus of the paper, there is a brief section on tracked students who took BEMA at the end of the freshman physics course and again at the start of the junior Electricity & Magnetism course. The author found that these students scored, on average, only approximately 5% lower on the post-test (administered junior year) compared to their BEMA scores in freshman physics. The author highlights that the tracked students were all physics majors and represent a highly selective sample.
[6] used the Quantum Mechanics Conceptual Survey (QMCS) to study learning retention. Comparing two similar cohorts of students, they demonstrated that while the decline in the percentage of correct answers for a post-test administered 6 months after the pre-test is small to begin with, the retention of material was higher in the cohort taught using interactive engagement methods.
[7] measured retention using a test that consists of three open-ended questions testing one concept from Newtonian Physics and is administered, without a baseline test, one and two years after the students complete their freshman Principles of Physics course. The authors claimed that understanding of the material improved over time, potentially due to coursework performed in the time between the testing, but there is no quantitative evidence of this effect.
Finally, [8] study students’ learning and knowledge retention in two introductory economics courses taught over multiple academic terms using various teaching methods. Students are given the same set of 10 multiple choice questions on the first day of class (before the material is taught), on the ninth day of class (after the material is taught), and on the last day of class. They compare the second test to the first test to measure initial learning, and use the difference between the third test and the second test as a measure of knowledge retention. They find that on average, student knowledge declines between the second test and the end of the term, but they say nothing about retention beyond this point.
One limitation shared by all these studies is that none empirically examined the potential mediating role of relevant coursework completed or work experience acquired within the retention interval, and only [8] considers the role of student characteristics. These factors might hold a great deal of information that could be useful for educators in designing their course plans. That is, if some groups systematically retain less of what we teach, instructors could pay particular attention to those groups to instill a deeper level of understanding. Similarly, instructors should consider encouraging post-course behavior associated with higher retention of course skills. Our paper addresses this important limitation.
Another key innovation of our study is that we collect data from a large sample of students in introductory and intermediate-level courses spread across three semesters and follow them over several years. These students were given standard assessments that evaluated their skill in the course material at the end of each of these courses. One to 2.5 years later, we followed up with these students, re-administered the assessment of economics skills for the appropriate course, and asked them about their professional and academic behavior in the interim period.
Methods
In this study, we collect demographics and performance on assessments for two courses (Introductory Microeconomics and Intermediate Microeconomics) offered at Cornell University in three semesters each between Spring 2019 and Fall 2022. We follow up with students who took these courses over the following one to 2.5 years and give them performance assessments and surveys on their academic and professional activities during the interim period. We use regression models to investigate how initial performance, demographic characteristics, attitudes toward economics, and interim behavior predict changes in the performance measures over time.
Note that the change in assessment score is not a perfect measure of knowledge retention. For example, a student’s score could remain the same at the follow-up if they forget some course concepts they knew at the end of the course but also learn course concepts they had not learned during the course. In this case, the change in score would over-estimate the amount of knowledge that is actually retained by the student.
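A small numerical sketch (hypothetical numbers, not study data) makes the over-estimation concrete:

```python
# Hypothetical illustration: an unchanged score can mask forgetting.
known_at_end = 20          # concepts known at the end of the course (of 30 tested)
forgotten = 4              # end-of-course concepts forgotten by follow-up
newly_learned = 4          # concepts first learned after the course ended

score_at_followup = known_at_end - forgotten + newly_learned
true_retention = (known_at_end - forgotten) / known_at_end

print(score_at_followup)   # 20 -- identical to the end-of-course score
print(true_retention)      # 0.8 -- only 80% of original knowledge was retained
```

Here the score change is zero, which would suggest full retention, even though a fifth of the original course knowledge was lost.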
While the instructor and content of all three cohorts of each course were the same, there were differences in how these courses were taught. Specifically, some courses were taught online during the COVID-19 pandemic, while others were taught in-person before or after the pandemic. Similarly, the instructors spent different fractions of class time lecturing and having students engage in problem solving activities in the different academic terms. We estimate our regression models using pooled data from all the cohorts for a given course, allowing us to maximize our sample size while still including cohort fixed effects to account for unobserved differences between cohorts. That is, our models allow the average change in assessment scores to differ by cohort, holding student characteristics constant.
Methods: Introductory microeconomics
Fig 1 shows an overview of the data collection process that started in Fall 2019 and continued until Fall 2024. The course we follow was taught in Fall 2019, Fall 2020, and Fall 2022. As noted above, the learning goals in all three academic terms were the same, and in all three terms the course was taught by the same instructor. The course follows a traditional textbook (Krugman and Wells, 5th edition) and uses graphs and equations to define core microeconomic concepts including supply and demand, firm behavior, market structure, equilibrium, and externalities, and to use them to make predictions about real-world outcomes. Students are assigned 20 online problem sets during the term, and exams are closed book with a mix of multiple choice and short answer questions. The amount of active learning used in the classroom varied across the semesters as did the course modality (in-person versus online). We explore the impact of these differences in a separate paper.
Fig 1. The data collection process for students who take introductory microeconomics.
Students in these three courses were invited to complete online surveys and assessments for the study on the first day of the term (August 29, 2019; September 2, 2020; and August 22, 2022). Participants were presented with an information statement describing the study’s purpose, procedures, and their rights, including the voluntary nature of participation. Informed consent was obtained by having participants click a button indicating their consent before beginning the survey. Non-consenting students and students under age 18 were not included in the study. All data was deidentified before it was analyzed.
At the end of each term when the course was taught, students were given the Principles of Economics Skills Assessment for Microeconomics (PESA-Micro). This test is a part of the Cornell Suite of Economic Skills Assessments that is publicly available to economics instructors at econ-assessments.org [9], and it is composed of 30 multiple choice questions that evaluate student knowledge of the learning goals in a typical introductory microeconomics course. While no assessment can be a complete measure of learning in a course, economics and other STEM disciplines have a strong tradition of using well-designed multiple choice tests as proxies for course learning. These include the Test of Understanding College Economics [10], the Force Concept Inventory [11], the Genetics Concept Assessment [12], and the Chemical Concept Inventory [13]. PESA-Micro was developed using the best practices outlined in [14]. Each assessment question in PESA-Micro maps to specific learning goals of the relevant course to ensure broad coverage, and the questions were developed using an iterative process that included student think-aloud interviews, faculty feedback, and pilot data. PESA-Micro scores are strongly correlated with final exam scores of students in each of the three cohort samples of introductory microeconomics with correlation coefficients of 0.53, 0.45, and 0.20.
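The correlations reported above are standard Pearson coefficients. A minimal sketch of the computation, using hypothetical scores for five students (not study data; `pearson_r` is our own helper):

```python
import math

def pearson_r(x, y):
    """Pearson correlation, e.g. between assessment and final exam scores."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical raw scores for five students (illustration only)
pesa = [18, 22, 25, 15, 20]
final = [72, 80, 88, 65, 78]
r = pearson_r(pesa, final)
```

A coefficient near 1 indicates the assessment ranks students much like the final exam does; the moderate values reported in the paper reflect the lower stakes of the assessment.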
PESA-Micro was administered online, and students could earn a small amount of extra credit for taking the test. Their score on the test did not affect their grade in the course, but students were advised that putting in a good effort would help them understand where they needed to focus their studying for the final exam and would help the instructor improve the teaching of the course in the future. These students were also given the Math for Economics Skills Assessment, Foundations (MESA-Foundations) at the beginning of the term to measure their mathematical preparation for the course.
The MESA-Foundations score is a critical control in the models we present below, and the primary goal of our study is to explain changes in PESA-Micro scores between the end of the term and future follow-ups. For these reasons, we define our target sample for each cohort to be every student who took MESA-Foundations at the beginning of the term and PESA-Micro at the end of the term and consented to be part of the research study. While these target samples are subsets of the broader groups of students enrolled in each course, Table 1 shows that these samples are quite representative of the broader groups. Average scores on final exams were not significantly different for the Fall 2019 cohort and were 1.4-2.2 points higher for test-takers relative to the overall average in the other two cohorts. Freshmen were significantly more likely to participate in Fall 2020 and Fall 2022, and sophomores in these two cohorts were significantly less likely to participate.
We use a two-stage follow-up strategy to reach out to each student in our target samples one year and two years later. In the first stage, we identified several economics courses offered in the follow-up period that contained at least three target students and where the instructor was willing to give their students extra credit in their course for participating in the study. All students in these courses were offered extra credit for participation, though we only use data from our target students for this project. Over 80% of the students in these classes chose to participate, allowing us to inexpensively reach about 10% of our target students. In the second stage, we offered the rest of our target students gift cards for participation. Participating students who were still on campus received $25 gift cards, and students who had left campus received $50 gift cards since most of them worked full-time and were likely to be less responsive to cash incentives. Total response rates (shown in Tables 2–4) ranged from 30% to 46% at the one-year follow-up and from 25% to 34% at the two-year follow-up. We describe our follow-up samples in more detail and assess their representativeness of the target samples in the Data Description section below.
Students who provided informed consent and chose to participate in a follow-up survey took the online assessment (PESA-Micro) again and answered several questions about their behavior during the year since they completed the introductory microeconomics course. Specifically, we asked about courses they had taken that used their economics knowledge and jobs they held where they applied their economics knowledge. We use time spent on the online assessment as a proxy for student effort in order to check the validity of submitted answers, and we drop students who spent 10 minutes or less from our analysis sample. We followed up with students who were still enrolled in courses at Cornell as well as students who had graduated from Cornell or left for other reasons.
We model student scores on the follow-up assessment using the following regression model:

$$\text{PESA}^{FU}_{ic} = \beta_0 + \beta_1\,\text{PESA}^{EOT}_{ic} + \beta_2\,\text{MESA}_{ic} + X_{ic}'\gamma + A_{ic}'\delta + B_{ic}'\theta + \mu_c + \varepsilon_{ic} \tag{1}$$

where $i$ indexes students, $c$ indexes cohorts, $\mu_c$ is a cohort fixed effect, and $\varepsilon_{ic}$ is an error term.
This specification allows a student’s follow-up score to depend on the student’s characteristics measured at the beginning of the course (demographics, prerequisite skills), attitudes toward economics at the end of the course, academic and professional activities undertaken between the end of the course and the follow-up, and the student’s score on the original end-of-term assessment of what was learned in the course. [15] and [16] recommend this specification when analyzing changes in student scores over time as well as an alternative model where the dependent variable is the difference in the scores and the earlier score is not included as an explanatory variable. We have estimated both models, and the results are very similar. Results for the alternative model are available upon request.
$X_{ic}$ is a vector of binary student demographic variables including college year, gender, under-represented minority (URM) status, first-generation college-goer status, and whether the student has declared (or plans to declare) an economics major.
All scores that enter our models are standardized using the distribution from the original cohort’s baseline distribution to simplify interpretation and comparison of coefficients. That is, we subtract the mean score in the original cohort and divide by its standard deviation yielding baseline scores that are mean zero and unit variance.
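This standardization can be sketched as follows (hypothetical scores; `standardize` is our own helper, not part of the study's materials). The key point is that every wave of scores is scaled by the baseline cohort's moments, not its own:

```python
import statistics

def standardize(scores, baseline_scores):
    """Z-score `scores` using the mean and SD of the original cohort's
    baseline distribution (not the scores' own moments)."""
    mu = statistics.mean(baseline_scores)
    sigma = statistics.pstdev(baseline_scores)  # population SD; sample SD also defensible
    return [(s - mu) / sigma for s in scores]

baseline = [12, 18, 20, 25, 15]             # hypothetical raw end-of-term scores
followup = [10, 17, 21, 22, 14]             # hypothetical follow-up scores

z_base = standardize(baseline, baseline)    # mean 0, unit variance by construction
z_follow = standardize(followup, baseline)  # same scale, so changes are in baseline SDs
```

Because both waves share one scale, a coefficient of, say, 0.3 can be read directly as three-tenths of a baseline standard deviation.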
$\text{MESA}_{ic}$ is the standardized score on the math test given at the beginning of the term,
$\text{PESA}^{EOT}_{ic}$ is the standardized score on the assessment of course skills given at the end of the term, and
$\text{PESA}^{FU}_{ic}$ is the standardized score on the follow-up assessment.
$A_{ic}$ is a vector of two binary measures of student attitudes toward economics. These variables reflect whether the student strongly agreed with the following statements:
- “I think economics is interesting and applicable for people like me.”
- “I think about economic events I experience and witness in day to day life (e.g., in your own life and decisions, on the news, internet articles, etc.)”
$B_{ic}$ is a vector of variables representing work and course enrollment since the end of the course or the previous follow-up. These variables are constructed from answers to the following three yes/no questions given to participants during the follow-up survey:
- Have you completed any courses where you extended or applied your knowledge of microeconomics since [DATE OF END OF COURSE IF FIRST FOLLOW-UP or DATE OF FIRST FOLLOW-UP IF SECOND FOLLOW-UP]?
- Are you currently enrolled in any courses where you are extending or applying your knowledge of microeconomics?
- Since you took introductory microeconomics, have you worked in any job where you used your microeconomics knowledge?
In some models, we use binary indicators based on answers to each of the above questions, and in some specifications we combine the two course measures into a single indicator of whether the student was currently enrolled in a course that used their economics knowledge or had completed at least one such course since the end of the course.
While all respondents who answer these questions affirmatively believe they are either applying or extending their knowledge, there is some inherent ambiguity as different respondents will interpret the questions differently. Some will consider only cases where they make calculations similar to those made in their classes, and others will report more advanced courses or jobs where they encountered concepts from their introductory course.
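The pooled estimation with cohort fixed effects described above can be sketched as an ordinary least squares fit on synthetic data (all variable names, coefficients, and data here are invented for illustration; this is not the authors' code):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 300
cohort = rng.integers(0, 3, n)               # three cohorts, e.g. F19/F20/F22
pesa_eot = rng.normal(0, 1, n)               # standardized end-of-term score
mesa = rng.normal(0, 1, n)                   # standardized math pre-test
female = rng.integers(0, 2, n).astype(float)
enrolled_econ = rng.integers(0, 2, n).astype(float)

# Synthetic follow-up score with cohort-specific intercepts (made-up effects)
y = (-0.2 + 0.6 * pesa_eot + 0.15 * mesa - 0.3 * female
     + 0.3 * enrolled_econ + 0.1 * (cohort == 1) - 0.1 * (cohort == 2)
     + rng.normal(0, 0.5, n))

# Design matrix: intercept, regressors, cohort fixed effects (base = cohort 0)
X = np.column_stack([
    np.ones(n), pesa_eot, mesa, female, enrolled_econ,
    (cohort == 1).astype(float), (cohort == 2).astype(float),
])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)  # pooled OLS estimates
```

The cohort dummies absorb average between-cohort differences (modality, pedagogy), so the remaining coefficients are identified from within-cohort variation in student characteristics and behavior.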
Methods: Intermediate microeconomics
The study design for the intermediate microeconomics students is very similar to the design described above. Fig 2 shows an overview of the data collection process which started in Spring 2019 with the first cohort and ended in Spring 2023 with the 2.5 year follow-up of the Fall 2020 cohort. As was the case for the introductory course, the learning goals and the instructor of the intermediate course remained the same throughout the project. This is a mathematically intensive course that uses the tools of calculus, constrained optimization, and game theory to put a formal foundation under the economic concepts taught in an introductory microeconomics course. Students complete six challenging pencil-and-paper problem sets during the term, and the three major exams are composed of multipart problems that require significant mathematical calculation and deep conceptual understanding. As was the case for the introductory microeconomics course we study here, there was significant variation in the amount of active learning students did in the classroom and in the course modality.
Fig 2. The data collection process for students who take intermediate microeconomics.
In each of the three academic terms, students were invited to complete online surveys and assessments for the study on the first day of the term (January 22, 2019; August 29, 2019; and September 2, 2020). These students, like those in the Introductory Microeconomics cohorts, explicitly consented to participating in the research study before completing the survey or assessment. Non-consenting students and students under age 18 were not included in the study. All data was deidentified before it was analyzed.
Students were given the Intermediate Economics Skills Assessment for Microeconomics (IESA-Micro) at the end of each of the three study courses. This test is made up of 31 multiple-choice questions that evaluate the learning goals of a typical calculus-based course in intermediate microeconomic theory. Like PESA-Micro, IESA-Micro was developed using the process described in [14], and scores are highly correlated with final exam scores in all three cohorts with correlation coefficients of 0.45, 0.56, 0.33.
It was administered in-person in Spring 2019 and Fall 2019, and online in Fall 2020. To measure their level of preparation for the course at the beginning of the term, students were given the Math for Economics Skills Assessment, Intermediate (MESA-Intermediate) and PESA-Micro, the same test that the introductory students took at the end of the term.
Table 1 shows that between 70% and 81% of students enrolled in these courses completed both pre-assessments and the IESA-Micro assessment at the end of the term. These students (i.e., our target samples) were very similar to those students who did not take the assessments. We observe a small (1.8 percentage point) significant difference in the final exam scores between the target sample and the other students only in Fall 2019.
We used the same two-stage follow-up strategy described above to reach out to our target samples for Intermediate Micro. This resulted in surveying and assessing 38-48% of students in each cohort at the 1.5-year follow-up and 25-31% of the target students at the 2.5-year follow-up.
Students that were successfully followed were given the same IESA-Micro assessment again and surveyed about economics-related courses they may have taken since the end of the course and any jobs they had where they applied the economic ideas they had learned in their courses. We also drop students from the analysis sample who spent 10 minutes or less on the assessment, and we followed both students that were still taking courses at Cornell and those that had left campus.
The regression models we use to study the determinants of skills retention are very similar to those described above:

$$\text{IESA}^{FU}_{ic} = \beta_0 + \beta_1\,\text{IESA}^{EOT}_{ic} + \beta_2\,\text{MESA}_{ic} + \beta_3\,\text{PESA}_{ic} + X_{ic}'\gamma + A_{ic}'\delta + B_{ic}'\theta + \mu_c + \varepsilon_{ic} \tag{2}$$
The demographic, attitudinal, and behavioral measures that enter the model are identical to those used in our models for introductory students. $\text{IESA}^{EOT}_{ic}$ and $\text{IESA}^{FU}_{ic}$ represent the standardized scores on the intermediate microeconomics assessment taken at the end of the course and at follow-up, respectively.
$\text{MESA}_{ic}$ and $\text{PESA}_{ic}$ are the standardized scores on the two prerequisite skill assessments (MESA-Intermediate and PESA-Micro) taken at the beginning of the term.
Data description
Tables 2, 3, and 4 describe our analysis variables for our three cohorts of introductory microeconomics students. The first column of each table provides means of the variables that were collected at baseline, while the second and third columns provide means for these variables for the subsets of students that were successfully followed as well as for the variables that were collected at follow-up.
Our follow-up success rates ranged between 25% and 46%, but it is important to consider how our data collection strategy may have biased our samples. We successfully followed up with about 10% of our target samples by surveying students in other popular economics courses, resulting in samples that likely overrepresent students who took additional economics courses. In our second stage of follow-up, we encouraged participants by offering gift cards and explaining the value of educational research. This suggests our samples may be disproportionately composed of students who are motivated either by financial incentives or by an intrinsic interest in contributing to research. Despite these potential biases, we find that all of our follow-up samples have similar distributions of observable characteristics when compared to students in the target sample that did not participate.
We can also see in Tables 2, 3, and 4 that PESA scores on average declined between the end of the course and both follow-ups for all three cohorts, though students retained most of what they knew, and declines were modest. In the Fall 2022 cohort there was only a 1-point decline by the end of the first year, while scores declined by 6 points at the one-year follow-up for the Fall 2019 cohort and by 3 points for the Fall 2020 cohort. At the two-year follow-up, declines from baseline were almost the same as those observed at the one-year follow-up.
While only 14-16% of students reported having a job where they used economics in the year after the course, this increases to 37-42% in the two years after the course ended. Shares of students who either completed an economics course in the interim period or were currently enrolled are much higher, ranging from 65% to 82% depending on the cohort and year of follow-up.
Tables 5, 6, and 7 describe the target sample and the follow-up analysis samples for the intermediate microeconomics courses taught in Spring 2019, Fall 2019, and Fall 2020. The 1.5-year follow-up sample for the Spring 2019 cohort does not substantially differ on any of the observed characteristics from the students who were not followed, but the 2.5-year follow-up sample is somewhat more positively selected as their average baseline PESA score is significantly higher, and these students are more likely to find economics interesting and think about it daily than students who were not successfully followed. We see a similar pattern in the Fall 2019 cohort where the follow-ups are more likely to major in economics (significant for the 1.5-year follow-up), have higher average baseline test scores, and have more positive attitudes toward economics. The 1.5-year and 2.5-year follow-ups of the Fall 2020 cohort are also somewhat more positively selected as they have significantly higher baseline PESA and MESA scores. The 2.5-year follow-up somewhat over-represents students who were sophomores when they took the course relative to juniors.
We observe declines in scores on end-of-term tests during the follow-up period that were somewhat larger than we observed for the introductory courses, though there is still substantial retention. The average decline for the Spring 2019 cohort was 7 points at 1.5 years and 11 points at 2.5 years, and the declines observed for the Fall 2019 cohort were 11 points at 1.5 years and 9 points at 2.5 years. Scores of students in the Fall 2020 cohort declined by 7 points and 8 points at the 1.5-year and 2.5-year follow-ups respectively.
Intermediate-level students were much more likely to report that they held jobs where they applied their economic skills than the introductory-level students (e.g., 24-26% at first follow-up for intermediate microeconomics versus the aforementioned 14-16% for introductory microeconomics students). This is in part due to the longer follow-up periods, but there are also likely to be more employers willing to hire more advanced students.
Results
The determinants of introductory microeconomics skills retention
Tables 8 and 9 contain our estimates of the regression model shown in Eq 1 that explain follow-up scores (1-year and 2-year respectively) for introductory microeconomics students.
In column (1) of Table 8, the model includes as regressors student baseline PESA scores and dummy variables for college year, omitting the freshman reference category. We find that higher scores at baseline yield significantly higher scores at follow-up, but the return is less than one-to-one. In other words, we see some regression to the mean: if a student performs well on the baseline test, but part of this performance is driven by successful "educated guesses" or choices based on knowledge that was not well-consolidated, these gains will not be preserved. The estimated intercept (-0.23) is the expected standardized score at follow-up, holding the initial assessment at its zero mean. As expected, the decline is negative and significant. Seniors seem to retain substantially more skills than first-year students.
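The regression-to-the-mean logic can be illustrated with a short simulation of our own (not the paper's data): when two sittings of a test are noisy measurements of the same latent skill, the OLS slope of the second on the first is attenuated below one.

```python
import random

random.seed(1)
n = 5000
# Latent skill plus independent test noise at each sitting
latent = [random.gauss(0, 1) for _ in range(n)]
test1 = [s + random.gauss(0, 0.5) for s in latent]
test2 = [s + random.gauss(0, 0.5) for s in latent]

# OLS slope of test2 on test1
m1 = sum(test1) / n
m2 = sum(test2) / n
slope = (sum((x - m1) * (y - m2) for x, y in zip(test1, test2))
         / sum((x - m1) ** 2 for x in test1))
# In expectation, slope = var(latent) / (var(latent) + var(noise))
#                       = 1 / 1.25 = 0.8 < 1, even with no forgetting at all
```

So a baseline coefficient below one is consistent with measurement noise alone; the negative intercept, not the slope, carries the evidence of average skill loss.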
The second column adds several demographic measures, and we find that performance of female students declines by almost half a standard deviation relative to males, a statistically significant difference. We see no significant differences due to any other demographic characteristics. We add a measure of students' math skills at the beginning of the course in the third column and find it has a moderate positive effect on how much knowledge is retained. Incorporating this control induces no real change in the magnitude or significance of the other coefficients. In the fourth column we add controls for student attitudes toward economics, and neither is a statistically significant predictor of skills at the follow-up.
In the final two columns, we include measures of self-reported interim behavior, and find that neither holding a job that applies economics nor completing a class in economics during the interim have any significant effect on retention of economic knowledge. Being enrolled in an economics course at the time of follow-up, however, has a strongly significant positive (0.30 SD) effect on scores at follow-up.
The story changes somewhat in our analysis of the 2-year follow-up scores shown in Table 9. We now see that the positive relationship between initial math skills and knowledge retention has become more statistically significant. Students who are either enrolled in another economics course at the time of follow-up or completed one before the follow-up retain significantly more information than those who are not, but we can no longer determine which type of class has a stronger effect.
The determinants of intermediate microeconomics skills retention
Our findings for the intermediate-level course are in many ways similar to what we find in our analysis of introductory students. Focusing first on the 1.5-year follow-up analysis shown in Table 10, we see that female students score lower at the 1.5-year follow-up than male students, though the effect is not statistically significant. On the other hand, we see that URM students retain marginally significantly less than non-URM students.
Students with higher prerequisite skills (math and economics) retain significantly more than students who entered the course less prepared, while attitudes toward economics continue to have little measurable effect. Students that held jobs during the interim period scored significantly higher than those who did not, but none of the course behavior variables are significant at the 1.5 year follow-up.
Our sample size at the 2.5-year follow-up is quite small (89 observations), so it is not surprising that many of the findings from the earlier follow-up are no longer statistically significant, though the signs and magnitudes of the effects are similar, as can be seen in Table 11.
Discussion
We have found several results that warrant consideration of potential underlying mechanisms. Although our current evidence does not conclusively identify these mechanisms, we propose several possibilities in the hope that future research will clarify the processes at work.
The first result is the most straightforward: performance on the assessment declined significantly between the initial assessment and the follow-ups in most of our specifications. While the first follow-up for the introductory students came sooner (1 year) than for the intermediate students (1.5 years), timing cannot explain the difference in declines: the intermediate students lost more skills in 1.5 years than the introductory students lost in 2 years. We believe this difference is likely due to differences in the learning goals of the two courses. The intermediate-level material was more technical and thus potentially more difficult to retain, and the introductory material was reinforced more after the fact than the intermediate material was. In future work, we plan to investigate whether retention differed by type of question, categorizing questions by sophistication and difficulty.
Second, we find that higher scores on the course prerequisite skill assessments are strongly positively associated with increased retention of economics skills in both the introductory and intermediate-level courses. We believe the likely mechanism is that these students were able to understand the course concepts at a deeper level than those who did not have a strong economic or mathematical background. This deeper level of understanding may have been more persistent than the more surface-level understanding that other students had at the end of the term.
Third, we see that the roles of subsequent economics courses and of jobs that apply economics are quite different for students in the introductory and intermediate courses. Later economics courses appear to help students retain information from the introductory course in both follow-ups, consistent with subsequent courses reinforcing what was learned in the original course; holding a job that applies economic concepts, by contrast, is not significantly associated with retention for these students. Conversely, later courses do not help retention of intermediate-level skills, but we believe few “advanced” undergraduate economics courses substantially apply or build on the technical skills evaluated in the assessment. Intermediate students who held jobs during the interim period do in fact retain more skills, but this may in part reflect selection: students with strong skills may be more successful at finding jobs where they can apply their economics knowledge. On the other hand, it is possible that working on economics-related topics on the job solidified student understanding.
Conclusion
In this paper, we have examined the determinants of learning retention for introductory and intermediate-level microeconomics skills. Our institution regularly administers standard learning assessments at the end of our introductory and intermediate courses, and we have supplemented these data by following up with students in these courses one to 2.5 years after the original courses ended. During the follow-up surveys, we gave these students the same assessments and asked them about their interim academic and job-related activities. Demographic data were collected as part of end-of-semester surveys in each course.
Our analysis shows that initial performance levels, some student demographic characteristics, and interim behavior all have significant effects on how much of what was learned in the original course is retained over time. These results have practical implications for how we teach economics, given that our primary goal as instructors is to impart useful skills that stick with students over the long term.
The most clearly applicable lesson is that students who enter with better preparation, especially in math, retain more economics skills than other students. These math skills are also major determinants of learning during the course, implying that whatever we do to support students and build these skills before or during the course should yield both short-run and long-run dividends.
We also find that students who take additional economics courses retain, on average, more of what they learned in the introductory course. Most instructors already encourage students to take additional courses in their field, but this finding gives us even more reason to continue doing so. At the intermediate level, additional courses do not have a positive impact on retention. The reason may in part be that these courses build on economic concepts but tend not to use the technical models emphasized in our intermediate microeconomic theory courses. We believe there needs to be better synchronization between the intermediate-level course and the more advanced classes.
Finally, we find that applying and building on economics skills in jobs may help students retain skills learned in intermediate microeconomics. If true, we should invest more in supporting students as they search for economics-related summer internships and as they start their careers after graduation.
We acknowledge that there are some limitations of our current work. Educational processes are inherently complex with a myriad of decisions made by instructors and students based on a range of factors that are both observed and unobserved. The measures we use in this paper are by their nature imperfect: Test scores are incomplete measures of student knowledge, application of economic skills in future courses and jobs is subjective, and our measure of knowledge retention does not distinguish between students remembering concepts taught during the term and re-learning concepts after the term is over. Given these issues, the reader should not interpret our results as definitive. The same can be said of most educational research.
A more concrete potential issue relates to sample selection. Because not all students whom we recruited for the study chose to participate, the analysis sample may not be representative of the original course participants. That said, based on the statistical tests of demographic characteristics and exam scores discussed above, we believe our samples reasonably represent the original population of students.
A related concern is external validity. Our analysis sample is determined by the data collection design and is limited to data from six terms of two courses taught by two instructors at one institution. While Cornell University is not representative of all universities, most economics departments around the world offer courses with content very similar to these courses. In addition, Cornell is a selective R1 university with a student body very much like those of other selective R1 universities in the US. Future research could replicate this work at other institutions, with other instructors, in other courses, or even in other disciplines to determine whether long-term retention of knowledge and skills operates similarly in different contexts.
In the current paper, we highlight differences in knowledge retention across gender and minority status, and we leave to future work a deeper exploration of the mechanisms underlying the fact that, for example, female students seem to retain less material than their male counterparts. Another important aspect of differences in retention across demographic lines is their potential intersectionality: there may be mechanisms specific, for example, to female students who come from under-served student populations. Given the size of our final analysis sample, we are unable to explore differences in retention within subgroups defined by gender and race, but we hope that future research using larger data sets will shed light on these issues.
In our own future research, we plan to supplement our intermediate microeconomics student data with retrospective surveys on study behavior that were collected during the course and at the time of the final exam. This should help us understand how differences in study behavior might affect retention of information. Moreover, we intend to examine retention question by question, separately analyzing the determinants of retention, loss of knowledge, and improved performance relative to the original end-of-semester assessment. We also plan to analyze assessment scores based on subsets of questions to see how retention of different types of skills varies across students. For example, it may be that students retain conceptual understanding better than the ability to solve more technical problems. Finally, we plan to analyze the roles that modality of delivery (online vs. in-person) and pedagogical approach (pure lecture vs. use of active learning methods) play in student knowledge retention over time.
Perhaps the most important contribution of our paper is the process we have developed for following undergraduate students in the years after they complete courses. While this requires significant time and resources, we have shown it is still quite feasible to collect this kind of data and obtain reasonable response rates and representative samples. We hope other researchers will learn from our experience and go on to collect their own data to learn about long-term learning in other contexts.
Acknowledgments
This study was determined to be exempt from review by Cornell University’s Institutional Review Board (Protocol No. 1708007347).
The authors would like to thank Nahid Hassan and Sarah Almosawari for their research assistance, and Carolyn Aslan and Lisa Sanfillipo from the Cornell Center for Teaching Innovation for their support during the IRB process. We are grateful to Peter Lepage for founding the Active Learning Initiative at Cornell and believing in our vision. We also thank all of the students who participated in this study by taking numerous assessments and surveys for their trust and hard work. Last but not least, we thank Robin Taylor (RTRES Consulting) for her outstanding contributions as the project’s external evaluator.