Abstract
The importance of online learning in higher education settings is growing, not only in the wake of the Covid-19 pandemic. Therefore, metrics to evaluate and increase the quality of online instruction are crucial for improving student learning. Whereas instructional quality is traditionally evaluated with course observations or student evaluations, course syllabi offer a novel approach to predict course quality even prior to the first day of classes. This study develops an online course design characteristics rubric for science course syllabi. Utilizing content analysis, inductive coding, and deductive coding, we established four broad high-quality course design categories: course organization, course objectives and alignment, interpersonal interactions, and technology. Additionally, this study applied the rubric to 11 online course syllabi (N = 635 students) in an exploratory analysis and found that these design categories explained variation in student performance.
Citation: Fischer C, McPartlan P, Orona GA, Yu R, Xu D, Warschauer M (2022) Salient syllabi: Examining design characteristics of science online courses in higher education. PLoS ONE 17(11): e0276839. https://doi.org/10.1371/journal.pone.0276839
Editor: Heng Luo, Central China Normal University, CHINA
Received: August 5, 2021; Accepted: October 15, 2022; Published: November 3, 2022
Copyright: © 2022 Fischer et al. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Data Availability: All relevant data are within the paper and its Supporting information files. Additional data cannot be shared publicly because of FERPA restrictions. Raw data may be requested for eligible researchers through the University of California, Irvine Education Research Initiative (ERI) website (https://dtei.uci.edu/tlrc-requesting-data-from-the-tlrc/).
Funding: This work is supported by the National Science Foundation through the EHR Core Research Program (ECR) through an award (1535300) given to MW. This study was also supported by the Open Access Publishing Fund of University of Tübingen through an award given to CF. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.
Competing interests: The authors have declared that no competing interests exist.
Introduction
The importance of online coursework in higher education reached new heights due to the Coronavirus disease 2019 (Covid-19) induced shift to emergency remote education. However, even before the Covid-19 pandemic, many students enrolled in online courses. For instance, about 34% of all undergraduate students (about 5.7 million students) enrolled in at least one online course [1]. Although studies reported that online learning in higher education can have positive effects similar to face-to-face coursework [2–4], more recent studies underscored that research on the effectiveness of online education yields heterogeneous effects, with ample studies indicating that students may struggle in online learning environments [5–9]. This may lead to lower course persistence rates in online settings compared to their face-to-face counterparts [9, 10]. Students with weaker self-regulation skills are especially challenged in online learning environments [11–14].
To unpack potential areas for improvement of online learning, recent studies examined the structure and design of online courses to identify quality indicators of online course experiences [10, 15]. Much work that examines the effectiveness of college courses relies on course observations or student evaluations [15–18]. However, course observations and student evaluations have several limitations. For instance, student course evaluations tend to disfavor female instructors [19]. In-depth course observations may provide rich information regarding course design features but require an intensive time commitment from observers, making data collection at scale difficult. Course syllabi may address these scalability problems as they represent a more cost-effective, less biased, and less invasive approach to examine course quality. Thus, this study utilizes course syllabi as its primary data source and adapts existing classification rubrics, primarily used in observational studies, for course syllabi.
This study has implications for multiple stakeholders. From a theoretical perspective, it represents the first step in identifying links between syllabus-derived course design features and student performance. From a practitioner perspective, identifying important course design features may guide instructional practice. From an administrative perspective, this study responds to the need for colleges to benchmark their online course quality as departments may want to continue including online course offerings in their teaching portfolios after the Covid-19 pandemic. Inferences may support the development of cost-effective (and feasible) ways to assess many online courses and to identify courses that are not fully leveraging the affordances of online learning in a timely manner, for instance, as part of a course-level early warning system.
Online courses in college environments
The role of online education in the higher education landscape will only increase in the decades to come. With the Covid-19 pandemic forcing many departments to shift to emergency distance learning and fully online education, departments may choose to incorporate some online courses into their regular teaching portfolios after the pandemic. Although online courses should certainly not be viewed as a one-size-fits-all solution to the future of learning, they provide some affordances that could potentially benefit learners. For example, online courses present opportunities for differentiated instruction and the ability to go at one’s own pace [20, 21]. Also, online courses allow universities greater capacity to accommodate non-traditional students who may need additional preparatory course work or the flexibility of an asynchronous schedule to balance academic coursework with work and family responsibilities [22–24]. Furthermore, online education may be more cost-effective not only for universities but also for students, as increased access to learning opportunities may accelerate their time-to-degree [25].
Potential adverse characteristics include a reduction of social presence through fewer face-to-face interactions, which can reduce students’ motivation to carry out their study plans, and a greater need for self-regulation abilities [26, 27]. Although self-regulation is important for learning in any context, it is especially important in online learning as learner-centered approaches require greater self-directed learning skills to succeed [11–13, 28]. This has implications for both educational quality and equity. For example, institutions that enroll many students with insufficient self-regulation abilities would face greater challenges in achieving learning outcomes in online coursework. Indeed, existing studies have largely approached this heterogeneity by investigating the context in which online courses are taught. Online courses composed of students from traditionally disadvantaged backgrounds are more likely to see worse achievement outcomes online compared to face-to-face courses [5, 29–31]. Because of this, institution type may also play a role, with negative effects coming from community college populations [8, 32], which tend to enroll more students from disadvantaged backgrounds. However, less research has focused on how online courses are taught and the subsequent impact of specific course elements on student success [15]. Thus, this study explores course design features that are common in online education to examine the potential quality of online course offerings.
Design characteristics of online courses
Among studies that have investigated the quality of online courses, a handful of consistent themes emerge. Jaggars and Xu provided a comprehensive framework for understanding indicators of quality online course delivery [15]. Their review spanned research from practitioner-oriented literature, surveys of student and instructor opinion regarding elements of high-quality online courses, experimental studies manipulating specific design features, and rubrics of quality assessment developed by educational associations. The resultant framework suggested four main areas of quality: course organization and presentation, learning objectives and assessment alignment, interpersonal interactions, and technology. This study builds on this framework but adapts it for course syllabi as its primary data source.
Course organization.
A popular metric of online course quality, course organization, involves the clarity and consistency of the course structure. Research from opinion surveys and practitioner literature emphasizes the importance of the course’s presentation to course quality, finding associations with students’ appreciation of the instructor [33–35]. Students consider “ease of use” and course organization important criteria for determining course quality [36]. Course organization can be evident in course outlines, hyperlink structures, instructions for assignments, and grading policies [37, 38]. Creating a navigable infrastructure (e.g., organization of weekly assignments, instructions for getting started) supports student autonomy within the course, which is essential for asynchronous learning [39, 40]. Supporting students’ autonomy not only applies to their ability to find resources online, but also to their ability to self-regulate their learning appropriately based on the organization of course expectations. Course organization may improve students’ experiences when it clearly conveys assignment due dates, preparation time, and policies towards late submissions [14, 15, 41].
Learning objectives and alignment.
Especially in online learning environments, where students are expected to study more independently, carefully choosing and presenting a course’s learning objectives is important [34, 42]. Several quality rubrics highlight that learning objectives should be measurable and consistent [34, 43]. Accordingly, assessments should be aligned with those learning objectives, for instance, utilizing detailed assignment descriptions. Instructors are encouraged to align course elements with overall learning objectives by precisely stating the learning objectives that course activities then serve to measure, especially when it comes to exams [36, 39]. Within each course activity, instructors can support students’ achievement of each learning objective by suggesting strategies for successful study behavior [44]. This may include clear instructions on the sequence in which course elements should be started and completed to optimize understanding [36, 40], as well as encouragement for students to self-test their knowledge and space their study time into multiple sessions throughout a week [45–47].
Interpersonal interactions.
In online courses, interpersonal interactions stand out as a means of promoting cognitive engagement with the course and students’ psychological connection to the course [15]. The separation in time and space between instructors and students that is inherent to online courses creates greater transactional distance, which may threaten students’ psychological connection to the course [48]. Instructors can counteract this by cultivating a stronger social presence, that is, students’ perceptions that the instructor is a “real person.” In practice, this may be approached through an instructor’s use of humor and self-disclosure, both of which help convey the uniqueness of an instructor’s personality in the absence of non-verbal communication [49]. Similarly, communication that uses a tone that is friendly, open, and caring can help students feel a sense of identity with the course, as can preparing students for the hazards of discussing sensitive topics in the classroom [39, 50]. Instructors can even increase the effectiveness of student-peer interactions by providing guidelines for “netiquette,” or preparing students to use a “connected voice” when working with each other [39]. Furthermore, the effectiveness of interactions also depends on their frequency and accessibility [51]. Students’ perceptions of instructor accessibility can be supported by quickly replying to email communications and facilitating social integration [39, 40]. Connecting with students through multiple digital channels, including social networking sites, can also increase instructor accessibility [49]. Students can also benefit if the instructor facilitates student-peer communication that has a clear purpose [52]. Similarly, students may benefit from voluntary peer communication, for instance, on course-specific discussion boards [53].
Technology.
How instructors utilize technology is associated with student satisfaction and performance [33, 36]. It is important to note that using technology for the sake of technology does not necessarily improve student learning, as it may also induce extraneous cognitive load [54]. Therefore, theoretical reviews have encouraged instructors to make the challenges of online learning explicit to students, along with information about the course and technical support [39, 55]. For instance, using technology to optimize the potential for learners’ autonomy is critical beyond just its mere presence [56]. “Linkability” to important resources outside the course website, for instance, increases instructors’ ability to moderate online courses effectively [57]. Similarly, easily accessible and downloadable technologies are important course quality indicators, as are tools that help students connect with the course material beyond simply reading text [33]. Furthermore, the effective use of technology should coincide with lowering barriers to participation so that students have sufficient access to required technology [34].
Research questions
This study connects to the research base on online courses in higher education, ultimately attempting to support improvements of instructional practices in online settings. This study adapts and examines commonly used course design rubrics for course syllabi. In addition, we provide an exploratory empirical investigation of potential correlational associations of online course design features with student grades, which may inspire future research pursuits, including testing causal processes and systematically analyzing larger databases of course syllabi. This study is guided by the following research questions (RQs):
- RQ1: What are trends in course design characteristics across online courses?
- RQ2: What are associations between online course design characteristics and student grades?
Material and methods
Study setting
This project is situated at a large public research university in California. This institution enrolls a more diverse student body compared to peer institutions in the United States, holding federal designations as both an Asian American and Native American Pacific Islander-Serving Institution and a Hispanic-Serving Institution.
Data for this study were provided by multiple units on campus including Admissions, the Office of Institutional Research, and the Registrar’s Office. Course syllabi were collected through targeted emails to science course instructors (i.e., Biology, Chemistry, Physics) who taught at least one four-unit online lecture course for undergraduates between 2014 and 2017.
Instructors were asked to share their course syllabi or information about their course in the learning management system, which we treated as de facto syllabi. Notably, there were no departmental or university standards for the content of syllabi, which were solely designed by the instructors of each course. If an instructor taught the same online course multiple times, we only requested the syllabus for the latest course iteration. In total, 47 courses fit our inclusion criteria with a total of 18 unique course-instructor pairings. We collected course syllabi from 11 course-instructor pairings (61.1% response rate). That said, this institution’s overall proportion of online science courses is relatively small (3.3% online courses in 2014–2017). Notably, this study took place before the shift to emergency distance education, at a time when the university did not centrally organize online course offerings. Instead, instructors could often choose whether to offer their courses in an online or face-to-face modality.
These course syllabi represent four biological science and four physics courses. None of the eligible chemistry instructors provided us with course syllabi. Topics of the biological sciences courses included cell biology and genetics (biological sciences course 1); ecosystems and evolutionary processes (biological sciences course 2); proteins, properties and pathways (biological sciences course 3); and molecular, cellular, and other structural features of the human body (biological sciences course 4). Topics of the physics courses included basic concepts of force, motion, and vectors (physics course 1); oscillations, waves, fluids, and optics (physics course 2); fundamental physics and mathematical principles used to distinguish science fiction from scientific facts and discoveries (physics course 3); and the history and methodology of studying solar systems (physics course 4).
Notably, the small course-level sample size did not impede our ability to develop the course design rubric for college science courses, as the occurrence of additional categories and items within categories was saturated even with a subset of these syllabi. A subsequent quantitative analysis applying the coding rubric to these 11 courses (which enrolled a total of 635 students) allows for some initial exploration of potential correlations of syllabi-derived course design characteristics with student outcomes. However, the limited statistical power for subsequent multi-level quantitative analysis (i.e., nesting students within courses) requires caution when interpreting associations of course design characteristics with student grades.
Development of coding rubric
Our analysis modeled the steps of content analysis in identifying sources of data, developing categories and codes, and clustering codes in themes [58]. Content analysis considers which data are to be analyzed, how they are defined, relevant population(s), context, boundaries, and intended inferences [58, 59]. The coding rubric was developed through a six-stage process:
The first stage consisted of an extensive review of the online teaching and learning literature. This included a broad search for studies investigating course design features and components. Key search words included online course design features, online course components, syllabi, coding frameworks, and online teaching and learning. Articles that exhibited topics concerning online course features, observation protocols, and coding schemes were selected for detailed review. This review was used to identify important online course design features.
In the second stage, we classified online course design characteristics derived from the literature search into general course design categories and subcategories in a series of meetings with the research team. For instance, categories such as interpersonal communication and peer-to-peer interaction emerged from the deliberation of the online course design literature and were vetted against seminal works in the field.
The third stage utilized four existing course syllabi to verify the completeness and accuracy of the coding rubric and added, removed, merged, and split course design categories and subcategories as appropriate.
The fourth stage consisted of the item writing process. More specifically, once the category headers for various course design features were identified, we began to develop items under each domain.
In the fifth stage we randomly selected two syllabi to undergo review. Four researchers on the team examined the same two syllabi to determine if items needed to be refined. Interrater reliability prior to refinement of the coding schema was at κ = 0.547, indicating moderate agreement, using the Kappa coefficient derived from Davies and Fleiss’ recommendation for a fully-crossed design with three or more coders [60, 61].
In the sixth stage we further refined items that exhibited low reliability to finalize the coding rubric. This led to an improved interrater reliability of κ = 0.678, representing substantial agreement [61]. Afterwards, all course syllabi were split among the four researchers who developed the coding rubric and subsequently coded according to the revised coding rubric.
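To illustrate how a multi-rater agreement statistic of this kind can be computed, the sketch below applies Fleiss’ kappa from statsmodels to hypothetical ratings from four coders. It is an illustrative stand-in for the Davies and Fleiss coefficient used in this study, not the authors’ analysis code, and the rating matrix is invented.

```python
# Illustrative only: multi-rater agreement on syllabus item codes, using
# Fleiss' kappa as a stand-in for the Davies and Fleiss (1982) coefficient.
import numpy as np
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

# Hypothetical data: rows = coded syllabus items, columns = the four raters,
# cell values = assigned rating category for that item.
ratings = np.array([
    [1, 1, 1, 0],
    [2, 2, 1, 2],
    [0, 0, 0, 0],
    [1, 2, 1, 1],
    [0, 1, 0, 0],
    [2, 2, 2, 2],
])

counts, _ = aggregate_raters(ratings)        # items x categories count table
print(f"kappa = {fleiss_kappa(counts):.3f}")
```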
Measures
Qualitative syllabi measures.
The coding rubric includes a total of 23 items; however, only 22 were deemed relevant for science syllabi (one item pertained to the discussion of sensitive social issues, which we found to be largely absent in the examined science courses). All items are nested within the four larger course design categories, namely (a) technology, (b) course organization, (c) learning objectives and alignment, and (d) interpersonal interactions. Notably, all individual items were coded as either categorical or dichotomous variables. None of the items are reverse-coded. Table 1 describes the full coding rubric used in this study.
The descriptive quantitative analysis used sum scores that added the item ratings within each course design category. These sum scores were then scaled to a maximum of 100 for each category to allow for cross-category comparisons.
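As a concrete illustration of this aggregation, the sketch below sums toy item ratings within two categories and rescales each category score to a 0–100 range; the item names, category assignments, and maximum ratings are hypothetical placeholders for the rubric in Table 1, and scaling by the maximum possible category score is one plausible reading of the procedure.

```python
# Illustrative only: sum item ratings within each design category and rescale
# each category sum to a 0-100 range (relative to the maximum possible score).
import pandas as pd

items = pd.DataFrame({                        # one row per course, toy ratings
    "O1": [1, 0, 1], "O2": [2, 1, 2],         # Course Organization items
    "T1": [1, 2, 0], "T2": [1, 1, 1],         # Technology items
})
category_items = {"organization": ["O1", "O2"], "technology": ["T1", "T2"]}
item_max = {"O1": 1, "O2": 2, "T1": 2, "T2": 1}   # maximum possible rating per item

scores = pd.DataFrame({
    cat: items[cols].sum(axis=1) / sum(item_max[c] for c in cols) * 100
    for cat, cols in category_items.items()
})
print(scores.round(1))
```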
Institutional data.
This study used student-level institutional variables (Table 2). Variables included a continuous variable representing students’ course grades (A+, A, A-: 4.0, 4.0, 3.7; B+, B, B-: 3.3, 3.0, 2.7; C+, C, C-: 2.3, 2.0, 1.7; D+, D, D-: 1.3, 1.0, 0.7; F: 0.0). Please note that these grades represent students’ final course grades, which typically aggregate different assignments (e.g., quizzes, exams, homework) across the course. While most courses included information on the grading policies on their course syllabi, the specific grading policies often vary across courses and instructors, as university courses do not usually employ standardized assessments to measure student performance. Additional continuous variables included the SAT/ACT mathematics score, which was z-score transformed, and the years of enrollment in college. Categorical student-level variables included gender, underrepresented racial/ethnic minority status, first-generation college student status, low-income status, English language learner status, and whether a student is a transfer student.
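A minimal sketch of the two transformations described above (letter-grade conversion and z-scoring of the SAT/ACT mathematics score), assuming hypothetical column names and toy values:

```python
# Illustrative only: map letter grades to grade points as listed above and
# z-score the SAT/ACT mathematics score.
import pandas as pd

grade_points = {
    "A+": 4.0, "A": 4.0, "A-": 3.7, "B+": 3.3, "B": 3.0, "B-": 2.7,
    "C+": 2.3, "C": 2.0, "C-": 1.7, "D+": 1.3, "D": 1.0, "D-": 0.7, "F": 0.0,
}

students = pd.DataFrame({"letter_grade": ["A-", "B", "C+", "A"],
                         "sat_math": [710, 640, 580, 730]})
students["grade"] = students["letter_grade"].map(grade_points)
students["sat_math_z"] = (
    students["sat_math"] - students["sat_math"].mean()
) / students["sat_math"].std()
print(students)
```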
Analytical methods.
To answer RQ1, descriptive analyses illustrate the distributions of scores across every item and course design category. Also, the additive scores of course design categories were visualized with a dot chart and examined within and across courses. Finally, pairwise Pearson’s correlation coefficients between course design category scores were computed and discussed.
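The sketch below illustrates these descriptive steps, computing pairwise Pearson correlations between category scores and drawing a simple dot chart; the scores are toy values, not the study data.

```python
# Illustrative only: pairwise Pearson correlations between scaled category
# scores and a dot chart of scores by course (toy values).
import pandas as pd
import matplotlib.pyplot as plt

scores = pd.DataFrame({
    "organization": [55, 60, 45, 70], "objectives": [30, 70, 50, 40],
    "interactions": [65, 40, 55, 60], "technology": [50, 45, 80, 35],
}, index=["course 1", "course 2", "course 3", "course 4"])

print(scores.corr(method="pearson").round(3))    # pairwise Pearson coefficients

fig, ax = plt.subplots()
for category in scores.columns:                  # one dot per course and category
    ax.plot(scores[category], scores.index, "o", label=category)
ax.set_xlabel("Scaled category score (0-100)")
ax.legend()
plt.show()
```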
To answer RQ2, we used linear regression models with standard errors clustered at the course level to account for the nesting of students in courses [62]. Note that this analysis is meant to be exploratory to identify potentially interesting trends and not an attempt at rigorous hypothesis testing. In this exploratory analysis, student grades represent the dependent variable. Continuous z-score transformed course design category score variables represent the independent variables. Covariates include student demographics. Model 1 includes all institutional student-level covariates, whereas Model 2 includes all institutional student-level covariates and the course design category score independent variables. Of interest are both the additional percentage of variance explained by the inclusion of all four aggregate course design characteristics and the associations of each course design category with student grades.
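A minimal sketch of such a model on synthetic data, using OLS with course-clustered standard errors in statsmodels; the variable names are assumptions and the covariate set is abbreviated relative to the full models reported below.

```python
# Illustrative only: OLS regression of course grades on z-scored design
# category scores and selected covariates, with standard errors clustered
# at the course level (synthetic data, abbreviated covariate set).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 120
df = pd.DataFrame({
    "course_id": rng.integers(0, 11, n),                  # 11 courses
    "org_z": rng.normal(size=n), "obj_z": rng.normal(size=n),
    "inter_z": rng.normal(size=n), "tech_z": rng.normal(size=n),
    "sat_math_z": rng.normal(size=n),
    "female": rng.integers(0, 2, n), "first_gen": rng.integers(0, 2, n),
})
df["grade"] = (3.0 + 0.15 * df["org_z"] + 0.3 * df["sat_math_z"]
               + rng.normal(0, 0.5, n))

model = smf.ols(
    "grade ~ org_z + obj_z + inter_z + tech_z + sat_math_z + female + first_gen",
    data=df,
).fit(cov_type="cluster", cov_kwds={"groups": df["course_id"]})
print(model.summary())
print(f"R-squared: {model.rsquared:.3f}")
```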
Notably, we examine the explained percentages of variance in student scores and the associations of the course design with student grades for both all students and students historically underserved in STEM college environments (i.e., first-generation college students, low-income students, female students, underrepresented minority students). Modeling assumptions were tested to verify the appropriateness of the models. For instance, the absence of multicollinearity was determined through the calculation of variance inflation factors. This study applied a Markov Chain Monte Carlo multiple imputation approach with 150 iterations and 200 imputations to address missing data in the covariates [63, 64].
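The two model checks described above can be sketched as follows; the chained-equations imputation shown here (statsmodels MICEData) is an illustrative stand-in for the Markov Chain Monte Carlo multiple imputation procedure, and the data are synthetic.

```python
# Illustrative only: variance inflation factors for the category score
# predictors, plus a chained-equations imputation pass standing in for the
# MCMC multiple imputation described above.
import numpy as np
import pandas as pd
from statsmodels.imputation.mice import MICEData
from statsmodels.stats.outliers_influence import variance_inflation_factor
from statsmodels.tools import add_constant

rng = np.random.default_rng(1)
n = 120
df = pd.DataFrame(rng.normal(size=(n, 5)),
                  columns=["org_z", "obj_z", "inter_z", "tech_z", "sat_math_z"])

X = add_constant(df)                           # VIFs are computed against a constant
vif = pd.Series(
    [variance_inflation_factor(X.values, i) for i in range(1, X.shape[1])],
    index=df.columns,
)
print(vif.round(2))                            # values near 1 suggest no multicollinearity

df.loc[rng.choice(n, 10, replace=False), "sat_math_z"] = np.nan   # inject missingness
imp = MICEData(df)
imp.update_all(20)                             # 20 imputation cycles for illustration
print(int(imp.data["sat_math_z"].isna().sum()))   # 0: missing values now filled in
```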
Results
Online course design trends
Course organization.
Descriptive information on each item for each course is presented in the appendix (S1 Table). Course organization items had exceptionally little variance. In all 11 courses (100%), course materials were stored across multiple websites or platforms including Canvas, Applia, Mastering Biology, and Smartwork (O1). Nine of 11 syllabi (82%) did not mention the availability of redo opportunities (O4), and again 9 of 11 (82%) did not mention the availability of extra credit opportunities (O5). Eight of 11 syllabi (73%) presented required course assignments in a way considered “somewhat clear,” indicating that although some combination of assignment descriptions, grade breakdowns, and calendars was present in many syllabi, a very clear presentation of assignments combining all of these was present in few syllabi (O2). Additionally, 10 of 11 syllabi (91%) did not mention adjustments of routine course elements when introducing non-routine course elements, or else did not provide sufficient information (e.g., course calendar) to portray this adjustment (O3). Ten of 11 syllabi (91%) either did not allow or mention late submissions (O6). These results suggest that online course syllabi do not vary substantially in terms of course organization. Course materials are likely distributed across multiple websites, with information about required course components presented somewhat clearly. Syllabi are unlikely to exhibit a relaxed load of smaller, routine assignments when larger assignments are due (e.g., tests, essays), and are also unlikely to mention policies regarding re-doing work, extra credit, or late submissions.
Learning objectives and alignment.
Contrary to course organization, high levels of variability among syllabi emerged when looking at the presence of learning objectives and their alignment with required course elements (S1 Table). Whereas most syllabi gave no instructions for a recommended sequence of assignments, several syllabi did provide guidance beyond listing all due dates (L1). Whereas over half of the syllabi provided thorough recommendations for how students might improve their study behavior, for instance, offering study resources or ways to decide if more preparation is needed before an exam, over a third did not offer such guidance (L2). Just over half of the syllabi mentioned higher level learning objectives, whereas the other half did not (L3). Over a third of the syllabi offered in-depth details on the overall grading and rubrics for specific assignments, whereas about half provided no more information than assignment titles and due dates (L4). Finally, roughly half of the syllabi explicitly connected course requirements with course objectives whereas the other half did not (L5). In sum, instructors present very different amounts of explicit information regarding the learning objectives of their course and their expectations for assignment completion, with some thoroughly detailing their expectations for grading and rationales for required assignments and others simply not mentioning what the goals of the course are or what to expect from the assignments therein.
Interpersonal interactions.
Across these online course syllabi, there appeared to be consistency in some forms of interpersonal interaction, but much more heterogeneity in others (S1 Table). For example, almost all syllabi (82%) mentioned the availability of regularly scheduled interactions with the instructor such as face-to-face office hours or online office hours (I1). Similarly, almost all syllabi (91%) specified that students would have regular opportunities to voluntarily interact with peers through discussion forums or class meetings (I4). Conversely, very few syllabi (27%) explicitly stated the timeframe in which students could expect the instructor to respond to student emails or discussion posts (I2). Analysis of all 11 syllabi revealed that the possibility of preparing students for sensitive topics was “not applicable,” suggesting the topics of the course did not require discussion of sensitive topics (I8).
Other interpersonal interactions items differed across courses. Just over half of the syllabi (55%) did not require interacting with classmates through discussion posts or groupwork (I3). Of the five that required peer interactions, four emphasized that grading would be based on quality of content and participation, rather than on participation alone. Just under half of the syllabi (45%) did not include any notes about etiquette for interpersonal interactions (I5). Of the six courses including etiquette notes, often in a section titled “netiquette,” two syllabi offered specific examples of how to interact (or not interact) with classmates. Similarly, just over half the syllabi (55%) did not explicitly mention that students should be aware of how online interactions can work differently from face-to-face interactions, whereas the remaining syllabi did (I6). Finally, instructors evoked very different levels of social presence through their syllabi (I7). Just over half (55%) gave at least some indication of the instructor’s unique personality or tone through deliberately placed punctuation (e.g., exclamation points) or text emphasis tools (e.g., bolding, capital letters, underlining). Of those six syllabi, four included only up to two examples of this, but two syllabi conveyed the instructor’s tone and personality through several examples throughout the syllabus. Overall, syllabi consistently provide opportunities for interacting with the instructor and classmates throughout the term, but less consistently provide guidelines for how to interact.
Technology.
Finally, high levels of variation appeared regarding the technological affordances in the course syllabi (S1 Table). Almost all syllabi (91%) highlighted the unique challenges and affordances of online courses in some way (T1). Seven of those 10 syllabi provided suggestions about how to specifically change behavior in online courses to increase success (e.g., the advantages of altering communication when conducted online), whereas three only mentioned that students should prepare for differences in the online environment compared to typical face-to-face experiences. Just over half (55%) of syllabi pointed out cheaper means to access or purchase course materials (T2). Just over half (55%) provided no contact information or FAQs regarding technical issues (T3). Of those that did, two provided basic contact information for when problems arise, and three gave specific advice on potential technical problems. Finally, syllabi were split in mentioning specialized technologies required for the course (T4). About a third (36%) required at least two additional special software packages or web services on top of the course website (e.g., ProctorU, ALEKS). Another third (36%) required a single additional software package or web service, whereas the remaining syllabi (27%) required only the course website. Overall, these syllabi suggest that technological requirements can be very different across courses, as can the recommendations that instructors provide for navigating, purchasing, and troubleshooting those technologies.
Aggregate course design characteristics
Descriptive analysis of the aggregate course design characteristics across courses indicates that the highest and lowest rated design categories varied across courses (Fig 1).
Please note that in a few cases, multiple categories have identical scores leading to obscured data points. In such cases, please consult the raw data in the appendix (S2 Table).
There seemed to be no consistent trend among design features that were rated higher in the course syllabi included in this study. For instance, whereas Learning Objectives and Alignment is the highest rated design characteristic in courses 8, 10, and 11, it represented the lowest rated design characteristic in courses 2, 3, 7, and 9. Interpersonal Interactions was the highest rated design characteristic in courses 5, 7, and 9, but the lowest rated design characteristic in courses 1, 4, and 6. Technology was the highest rated design characteristic in courses 1, 4, and 6, but the lowest rated design characteristic in courses 5, 8, and 10. Meanwhile, Course Organization was almost always between the other quality categories. This large amount of course-level heterogeneity among the four quality categories is mirrored in an analysis of Pearson’s pairwise correlation coefficients (Table 3). Notably, all four categories were positively associated with each other. However, all pairwise correlations were below a moderate effect size in strength. Notably, the correlation between Course Organization and Learning Objectives and Alignment (r = 0.131) and the correlation between Technology and Interpersonal Interactions (r = 0.039) were below recommended minimum effect sizes [65].
Associations of course design characteristics with performance
Multiple linear regression models explored associations of the course design characteristics with student performance (Table 4). Subgroup analyses of students who are historically underserved in college examined potential heterogeneity in the estimates.
The inclusion of the four course design characteristics variables in the full sample regression models led to a 6.2% increase in the explained percentage of variance of student grades (Model 1: R2 = 0.086, Model 2: R2 = 0.148). This increase in the explained percentage of variance in student grades was even greater for student groups who are historically underserved in college, with a 14.7% increase for first-generation college students (Model 1: R2 = 0.106, Model 2: R2 = 0.253), an 11.3% increase for low-income students (Model 1: R2 = 0.167, Model 2: R2 = 0.280), a 9.8% increase for female students (Model 1: R2 = 0.089, Model 2: R2 = 0.187), and a 15.2% increase for underrepresented minority students (Model 1: R2 = 0.094, Model 2: R2 = 0.246).
The regression models provided some indication of design characteristics that may be associated with student grades. In the full sample, each standard deviation increase in the Course Organization rating was associated with a 0.15 letter grade increase, b = 0.153, t = 2.79, p < 0.05. In contrast, each standard deviation increase in the Learning Objectives and Alignment rating was associated with a 0.26 letter grade decrease, b = -0.258, t = -2.60, p < 0.05. Notably, both Technology and Interpersonal Interactions course design ratings were not significantly associated with students’ grades.
Heterogeneity analysis on student groups who are traditionally underserved in college indicated similar trends. Both Technology and Interpersonal Interactions course design ratings were consistently not associated with students’ course grades across all models. Similarly, the Course Organization rating had a consistent significant association with student course grades. Each standard deviation increase in the Course Organization rating was associated with a 0.30 letter grade increase for first-generation college students, b = 0.298, t = 3.18, p < 0.05; a 0.24 letter grade increase for low-income students, b = 0.243, t = 4.32, p < 0.01; a 0.20 letter grade increase for female students, b = 0.199, t = 2.81, p < 0.05; and a 0.48 letter grade increase for underrepresented minority students, b = 0.481, t = 7.28, p < 0.001. Similarly, the Learning Objectives and Alignment rating was negatively associated with student course grades for students who are traditionally underserved in college. Each standard deviation increase in the Learning Objectives and Alignment rating was associated with a 0.34 letter grade decrease for first-generation college students, b = -0.341, t = -2.96, p < 0.05; a 0.36 letter grade decrease for low-income students, b = -0.355, t = -4.72, p < 0.01; and a 0.30 letter grade decrease for female students, b = -0.298, t = -3.08, p < 0.05. The appendix includes a table with all regression coefficients (S3 Table).
Discussion
This mixed-methods study describes the qualitative development of a rubric that identifies syllabi-derived course design characteristics of college-level science online courses. This study is positioned to contribute to the research base on online learning in higher education as it represents one of the first efforts to systematically utilize course syllabi to generate inferences on the quality of online course instruction and student learning. In contrast to more resource-intensive course observations [66, 67], course syllabi and institutional data are more readily available at colleges [68–71]. Therefore, this study can provide insights on how universities could apply syllabi-based rubrics to generate inferences for educational policies, for instance, when deciding what courses to keep in an online format after the Covid-19 pandemic [72], or in an effort to use predictive analytics to enhance learning outcomes [73–75]. Instead of traditional early warning systems that intend to identify at-risk students using institutional data during a college career and/or clickstream data during a specific course [74, 76, 77], syllabi-based early warning systems would identify courses not leveraging the affordances of online learning, and these courses could be addressed even before students are exposed to the instructional enactments. This also contrasts with research assessing teaching quality through course evaluations [78]. The three main findings are as follows:
First, course syllabi allow for an identification of online course design characteristics. In particular, it is possible to use course syllabi to detect course design categories related to Technology, Course Organization, Learning Objectives and Alignment, and Interpersonal Interactions, which were also used in observational studies [15]. Interestingly, there were no consistent trends across all courses in this study; for instance, we did not find that certain course design categories had consistently higher ratings than other design categories. In addition, the course design characteristics were not substantially correlated with each other. This indicates that these four categories can be viewed as distinct categories, which should not be collapsed in subsequent analyses.
Second, the syllabi-derived online course design characteristics can explain some variance in student grades. Although prior performance is, as one would expect, a considerably better predictor of student performance [79, 80], it is still promising that the comparatively low-cost effort of examining course syllabi may provide additional insights in explaining student-level learning outcomes. In comparison, classroom observations are often resource-intensive while not necessarily providing valid and reliable estimates of student learning [18, 67]. Furthermore, in contrast to classroom observations, which occur during a term, syllabi can be collected before a term begins, allowing for identification of course design weaknesses before students are exposed to the course.
Third, the explorative analysis of design characteristics and student performance provides first indications that higher ratings on the Course Organization design characteristic may be related to greater student performance. Notably, Course Organization refers to the clarity and consistency of the navigational structure, for instance, through a clear presentation of core components and requirements of the course. This finding would be in line with prior research that suggests that students tend to struggle in online courses due to lower self-regulation skills [12–14]. In courses with more transparent course organization, instructors may be able to better support their students’ self-regulatory skills [81].
Limitations and future work
The largest limitation of this study is the assumption that the content in syllabi directly translates to instructional practice, and ultimately student learning experiences. Validation studies are highly encouraged to confirm sufficient accuracy of course syllabi compared to the current gold standard of course observations. This is mirrored in this study’s relatively large percentage of unexplained variance in student grades. Although the inclusion of online course design characteristics improved the percentage of explained variance, many potentially important constructs that relate to student learning and performance were not captured. These may include variables on students’ motivation, beliefs and goals, self-regulation, and study skills [45, 47, 81–84]. Similarly, many teacher and teaching characteristics influence learning, including teachers’ knowledge, teaching experience, and instructional practices [80, 85–89]. However, these variables are not easily available at scale to institutions; the goal of this paper was to utilize data that are readily available to universities without an additional resource-intensive data collection.
The second important limitation of this study is the small sample size, and thus limited statistical power [90], of the examined course syllabi. While this sample size is sufficient to develop the coding rubric, inferences from the quantitative analysis need to be interpreted with caution. Robust hypothesis testing is limited due to the small statistical power, as the source of variation for related hypotheses comes from between-course differences. Although this small sample size does not allow us to employ advanced quantitative modeling to thoroughly examine the impact of course design characteristics on student performance, this study’s descriptive and exploratory nature represents a first step in the research process, providing insights to inform future research.
Another limitation of this study is related to the analyzed scope of course syllabi. On the one hand, we capture a range of different aspects of constructs related to online course design. To keep the coding scheme manageable, we measure most aspects (e.g., social presence) within a construct (e.g., interpersonal interaction) with single items. A study focusing on a particular aspect of online course design (for instance, to examine the social presence in a course [91]) would need a substantially larger item count to comprehensively map the related features (e.g., also including items related to affective responses and cohesive responses related to social presence). On the other hand, this study only examined course syllabi in undergraduate online courses in science disciplines. In particular, this study reviewed online course syllabi in only biological science and physics. While it may seem reasonable to believe that similar trends would be identified in chemistry online courses [92, 93], extensions to other STEM and non-STEM disciplines are unclear. Similarly, this study was situated at a large public research university. In order to generalize findings, replication studies in other contexts with other student demographics are encouraged.
Potential future research directions that use syllabi-derived course design characteristics may ascertain whether these design characteristics help explain differences in student learning across face-to-face and online versions of the same course. The most carefully controlled study would include only courses that had the same instructor for both modalities. However, we would have to consider selection effects and the discipline of the subject. An ideal sample would intentionally balance an equal number of courses that are better and worse than their respective face-to-face counterparts. Another direction for research relates to the rise of educational data mining and learning analytics research focused on corpora of writing [69, 94, 95]. These developments have inspired researchers to use college course syllabi as a data source to better understand teaching and learning. For instance, a current research project led by Peter Bearman at Columbia utilizes machine learning, natural language processing, and social network analysis techniques to examine text corpora of hundreds of thousands of syllabi from universities all across the country to generate multidimensional measures of “liberal artsness” of student college experiences and their relations to post-graduation outcomes [96]. Similar to Peter Bearman’s work, future research could apply deep learning and machine learning algorithms to a corpus of historic course syllabi to detect underlying online course design features that are associated with student performance. Afterwards, the classification of course syllabi could be automated for any new syllabi so that this tool could serve as an early detection system for departments and administrators to identify courses and instructors that may benefit from additional institutional support. Furthermore, future research may also compare and psychometrically validate the myriad of available online course design tools (for an overview, see [97]) to help guide higher education administration on the best instruments for their individual contexts and use cases.
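As a rough illustration of the automated syllabus classification envisioned above (and not the approach of the Columbia project or of this study), the sketch below trains a simple TF-IDF and logistic regression pipeline on toy syllabus snippets with hypothetical human-coded rubric labels.

```python
# Illustrative only (toy data): screen syllabus text for one rubric feature,
# e.g., clearly stated learning objectives, with TF-IDF + logistic regression.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

syllabi = [
    "By the end of this course you will be able to analyze cell structures.",
    "Weekly quizzes are due Friday. Late work is not accepted.",
    "Learning objectives: explain Newton's laws and apply vector analysis.",
    "Office hours are held online on Tuesdays via Zoom.",
]
has_objectives = [1, 0, 1, 0]      # hypothetical human codes from the rubric

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(syllabi, has_objectives)
print(clf.predict(["Students will be able to evaluate evolutionary evidence."]))
```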
Supporting information
S1 Table. Raw data of course design characteristic ratings for each item and course.
https://doi.org/10.1371/journal.pone.0276839.s001
(DOCX)
S2 Table. Raw data of aggregate course design characteristic ratings.
https://doi.org/10.1371/journal.pone.0276839.s002
(DOCX)
S3 Table. Multiple linear regression analysis of student grades with clustered standard errors, subgroup analyses.
https://doi.org/10.1371/journal.pone.0276839.s003
(DOCX)
Acknowledgments
This work is supported by the Teaching and Learning Research Center at the University of California at Irvine. The views contained in this article are those of the authors and not of their institutions.
References
- 1. Hussar B, Zhang J, Hein S, Wang K, Roberts A, Cui J, et al. The Condition of Education 2020. Washington, DC: National Center for Education Statistics; 2020. Report No.: NCES 2020–144.
- 2. Allen IE, Seaman J. Changing Course: Ten Years of Tracking Online Education in the United States. Babson Park, MA: Babson Survey Research Group; 2013.
- 3. Fischer C, Baker R, Li Q, Orona GA, Warschauer M. Increasing Success in Higher Education: The Relationships of Online Course Taking With College Completion and Time-to-Degree. Educational Evaluation and Policy Analysis. 2022;44(3):355–79.
- 4. U.S. Department of Education, Office of Planning, Evaluation, and Policy Development. Evaluation of evidence-based practices in online learning: A meta-analysis and review of online learning studies. Washington, DC: U.S. Department of Education, Office of Planning, Evaluation, and Policy Development; 2009.
- 5. Figlio D, Rush M, Yin L. Is it live or is it internet? Experimental estimates of the effects of online instruction on student learning. Journal of Labor Economics. 2013;31(4):763–84.
- 6. Fischer C, Xu D, Rodriguez F, Denaro K, Warschauer M. Effects of course modality in summer session: Enrollment patterns and student performance in face-to-face and online classes. The Internet and Higher Education. 2020;45.
- 7. Lack K. Current Status of Research on Online Learning in Postsecondary Education [Internet]. New York, NY: Ithaka S+R; 2013 [cited 2019 Dec 5]. http://sr.ithaka.org/?p=22463
- 8. Xu D, Jaggars SS. The impact of online learning on students’ course outcomes: Evidence from a large community and technical college system. Economics of Education Review. 2013 Dec;37:46–57.
- 9. Xu D, Jaggars SS. Performance gaps between online and face-to-face courses: Differences across types of students and academic subject areas. The Journal of Higher Education. 2014 Sep;85(5):633–59.
- 10. Xu D, Xu Y. The Promises and Limits of Online Higher Education: Understanding how distance education affects access, cost, and quality. Washington, DC: American Enterprise Institute; 2019.
- 11. Broadbent J. Comparing online and blended learner’s self-regulated learning strategies and academic performance. The Internet and Higher Education. 2017 Apr;33:24–32.
- 12. Kizilcec RF, Pérez-Sanagustín M, Maldonado JJ. Self-regulated learning strategies predict learner behavior and goal attainment in Massive Open Online Courses. Computers & Education. 2017 Jan;104:18–33.
- 13. Parkes M, Stein S, Reading C. Student preparedness for university e-learning environments. The Internet and Higher Education. 2015 Apr;25:1–10.
- 14. You JW. Identifying significant indicators using LMS data to predict course achievement in online learning. The Internet and Higher Education. 2016 Apr;29:23–30.
- 15. Jaggars SS, Xu D. How do online course design features influence student performance? Computers & Education. 2016 Apr;95:270–84.
- 16. Reimer LC, Schenke K, Nguyen T, O’dowd DK, Domina T, Warschauer M. Evaluating promising practices in undergraduate STEM lecture courses. RSF: The Russell Sage Foundation Journal of the Social Sciences. 2016;2(1):212–33.
- 17. Solanki SM, Xu D. Looking Beyond Academic Performance: The Influence of Instructor Gender on Student Motivation in STEM Fields. American Educational Research Journal. 2018 Aug;55(4):801–35.
- 18. Wieman C. A Better Way to Evaluate Undergraduate Teaching. Change: The Magazine of Higher Learning. 2015 Jan 2;47(1):6–15.
- 19. Kogan LR, Schoenfeld-Tacher R, Hellyer PW. Student evaluations of teaching: perceptions of faculty based on gender, position, and rank. Teaching in Higher Education. 2010 Dec;15(6):623–36.
- 20. Means B, Toyama Y, Murphy R, Baki M. The effectiveness of online and blended learning: A meta-analysis of the empirical literature. Teachers College Record. 2013;115(3):1–47.
- 21. Zhou N, Fischer C, Rodriguez F, Warschauer M, King S. Exploring how enrolling in an online organic chemistry preparation course relates to students’ self-efficacy. J Comput High Educ [Internet]. 2019 Nov 28 [cited 2019 Dec 5]; Available from: http://link.springer.com/10.1007/s12528-019-09244-9
- 22. Bailey M, Gosper M, Ifenthaler D, Ware C, Kretzschma M. On-campus, distance or online? Influences on student decision-making about study modes at university. Australasian Journal of Educational Technology. 2018 Apr 12;34(5):72–85.
- 23. Castillo M. At issue: Online education and the new community college student. The Community College Enterprise. 2013;19(2):35.
- 24. Fischer C, Zhou N, Rodriguez F, Warschauer M, King S. Improving College Student Success in Organic Chemistry: Impact of an Online Preparatory Course. J Chem Educ. 2019 May 14;96(5):857–64.
- 25. Shea P, Bidjerano T. Does online learning impede degree completion? A national study of community college students. Computers & Education. 2014 Jun;75:103–11.
- 26. Moore MG, Kearsley G. Distance education: A systems view of online learning. 3rd ed. Belmont, CA: Wadsworth; 2011.
- 27. Zhan Z, Mei H. Academic self-concept and social presence in face-to-face and online learning: Perceptions and effects on students’ learning achievement and satisfaction across environments. Computers & Education. 2013 Nov;69:131–8.
- 28. Baker R, Evans B, Li Q, Cung B. Does Inducing Students to Schedule Lecture Watching in Online Classes Improve Their Academic Performance? An Experimental Analysis of a Time Management Intervention. Res High Educ. 2019;60(4):521–52.
- 29. Alpert WT, Couch KA, Harmon OR. A Randomized Assessment of Online Learning. American Economic Review. 2016 May;106(5):378–82.
- 30. Bettinger EP, Fox L, Loeb S, Taylor ES. Virtual classrooms: How online college courses affect student success. American Economic Review. 2017 Sep;107(9):2855–75.
- 31. Kaupp R. Online penalty: The impact of online instruction on the Latino-White achievement gap. Journal of Applied Research in the Community College. 2012;19(2):3–11.
- 32. Johnson H, Mejia MC. Online Learning and Student Outcomes in California’s Community Colleges. San Francisco, CA: Public Policy Institute of California; 2014.
- 33. Grandzol CJ, Grandzol JohnR. Best Practices for Online Business Education. IRRODL. 2006 Jun 13;7(1):1–18.
- 34. Quality Matters Program. Course Design Rubric Standards [Internet]. 2014. https://www.qualitymatters.org/qa-resources/rubric-standards/higher-ed-rubric
- 35. Young S. Student Views of Effective Online Teaching in Higher Education. American Journal of Distance Education. 2006 Jun;20(2):65–77.
- 36. Ralston-Berg P, Buckenmeyer J, Barczyk C, Hixon E. Students’ Perceptions of Online Course Quality: How Do They Measure Up to the Research? IL. 2015;4(1):38–55.
- 37. Ausburn LJ. Course design elements most valued by adult learners in blended online education environments: an American perspective. Educational Media International. 2004 Sep;41(4):327–37.
- 38. Grigorovici D, Nam S, Russill C. The effects of online syllabus interactivity on students’ perception of the course and instructor. The Internet and Higher Education. 2003 Jan;6(1):41–52.
- 39. Rovai AP. In search of higher persistence rates in distance education online programs. The Internet and Higher Education. 2003;6(1):1–16.
- 40. Swan K. Virtual interaction: Design factors affecting student satisfaction and perceived learning in asynchronous online courses. Distance Education. 2001 Jan;22(2):306–31.
- 41. Conrad DL. Engagement, Excitement, Anxiety, and Fear: Learners’ Experiences of Starting an Online Course. American Journal of Distance Education. 2002 Dec;16(4):205–26.
- 42. Naidu S. Instructional design models for optimal learning. In: Moore MG, editor. Handbook of Distance Education. 3rd ed. New York, NY: Routledge; 2013. p. 268–81.
- 43. Phipps R, Merisotis J. Quality on the line: Benchmarks for success in internet-based distance education. Washington, D.C.: The Institute for Higher Education Policy; 2000.
- 44. Williams PE, Hellman CM. Differences in self-regulation for online learning between first-and second-generation college students. Research in Higher Education. 2004;45(1):71–82.
- 45. Hartwig MK, Dunlosky J. Study strategies of college students: Are self-testing and scheduling related to achievement? Psychon Bull Rev. 2012 Feb;19(1):126–34. pmid:22083626
- 46. Rodriguez F, Kataoka S, Rivas MJ, Kadandale P, Nili A, Warschauer M. Do spacing and self-testing predict learning outcomes? Active Learning in Higher Education. 2018 May 14;146978741877418.
- 47. Rodriguez F, Fischer C, Zhou N, Warschauer M, Massimelli Sewall J. Student spacing and self-testing strategies and their associations with learning in an upper division microbiology course. SN Soc Sci. 2021;1:1–24.
- 48. Moore MG. The theory of transactional distance. In: Moore MG, editor. Handbook of Distance Education. New York, NY: Routledge; 2013. p. 66–85.
- 49. Imlawi J, Gregg D, Karimi J. Student engagement in course-based social networks: The impact of instructor credibility and use of communication. Computers & Education. 2015 Oct;88:84–96.
- 50. Morgan Consoli ML, Marin P. Teaching diversity in the graduate classroom: The instructor, the students, the classroom, or all of the above? Journal of Diversity in Higher Education. 2016;9(2):143–57.
- 51. Bernard RM, Abrami PC, Borokhovski E, Wade CA, Tamim RM, Surkes MA, et al. A meta-analysis of three types of interaction treatments in distance education. Review of Educational Research. 2009 Sep 1;79(3):1243–89.
- 52. Baran E, Correia A. Student-led facilitation strategies in online discussions. Distance Education. 2009 Nov;30(3):339–61.
- 53. Ho CH, Swan K. Evaluating online conversation in an asynchronous learning environment: An application of Grice’s cooperative principle. The Internet and Higher Education. 2007 Jan;10(1):3–14.
- 54. Skulmowski A, Xu KM. Understanding Cognitive Load in Digital and Online Learning: a New Perspective on Extraneous Cognitive Load. Educ Psychol Rev. 2022 Mar;34(1):171–96.
- 55. Hew KF. Student perceptions of peer versus instructor facilitation of asynchronous online discussions: further findings from three cases. Instr Sci. 2015 Jan;43(1):19–38.
- 56. Balaji MS, Chakrabarti D. Student Interactions in Online Discussion Forum: Empirical Research from ‘Media Richness Theory’ Perspective. 2010;9(1):1–22.
- 57. Dias SB, Diniz JA. Towards an Enhanced Learning Management System for Blended Learning in Higher Education Incorporating Distinct Learners’ Profiles. Journal of Educational Technology & Society. 2014;17(1):307–19.
- 58. Stemler S. An overview of content analysis. Practical Assessment, Research & Evaluation. 2001;7(17):1–6.
- 59. Krippendorff K. Content Analysis. In: Barnouw E, Gerbner G, Schramm W, Worth TL, Gross L, editors. International encyclopedia of communication. New York, NY: Oxford University Press; 1989. p. 403–7.
- 60. Davies M, Fleiss JL. Measuring Agreement for Multinomial Data. Biometrics. 1982 Dec;38(4):1047–51.
- 61. Hallgren KA. Computing Inter-Rater Reliability for Observational Data: An Overview and Tutorial. TQMP. 2012 Feb 1;8(1):23–34. pmid:22833776
- 62. Montgomery DC, Peck EA, Vining GG. Introduction to linear regression analysis. 5th ed. Hoboken, NJ: John Wiley & Sons; 2012. (Wiley Series in Probability and Statistics).
- 63. Cheema JR. A review of missing data handling methods in education research. Review of Educational Research. 2014;84(4):487–508.
- 64. Graham JW. Missing data analysis: Making it work in the real world. Annual Review of Psychology. 2009;60(1):549–76. pmid:18652544
- 65. Ferguson CJ. An effect size primer: A guide for clinicians and researchers. Professional Psychology: Research and Practice. 2009;40(5):532–8.
- 66. Wragg T. An Introduction to Classroom Observation. 1st ed. London, UK: Routledge; 2011.
- 67. Waxman HC, Hilberg RS, Tharp RG. Future directions for classroom observation research. In: Waxman HC, Tharp RG, Hilberg RS, editors. Observational Research in US Classrooms: New Approaches for Understanding Cultural and Linguistic Diversity. Cambridge, UK: Cambridge University Press; 2004. p. 266–77.
- 68. Fiesler C, Garrett N, Beard N. What Do We Teach When We Teach Tech Ethics?: A Syllabi Analysis. In: Proceedings of the 51st ACM Technical Symposium on Computer Science Education [Internet]. Portland, OR, USA: ACM; 2020 [cited 2022 Aug 26]. p. 289–95. https://dl.acm.org/doi/10.1145/3328778.3366825
- 69. Fischer C, Pardos ZA, Baker RS, Williams JJ, Smyth P, Yu R, et al. Mining big data in education: Affordances and challenges. Review of Research in Education. 2020;44(1).
- 70. Williamson B. The hidden architecture of higher education: building a big data infrastructure for the ‘smarter university.’ Int J Educ Technol High Educ. 2018 Dec;15(1):12.
- 71. Daniel B. Big Data and analytics in higher education: Opportunities and challenges: The Value of Big Data in Higher Education. Br J Educ Technol. 2015 Sep;46(5):904–20.
- 72. O’Dea X (Christine), Stern J. Virtually the same?: Online higher education in the post Covid-19 era. Br J Educ Technol. 2022 May;53(3):437–42. pmid:35600417
- 73. Bird KA, Castleman BL, Mabel Z, Song Y. Bringing Transparency to Predictive Analytics: A Systematic Comparison of Predictive Modeling Methods in Higher Education. AERA Open. 2021 Jan;7:233285842110376.
- 74. Jayaprakash SM, Moody EW, Lauría EJM, Regan JR, Baron JD. Early Alert of Academically At-Risk Students: An Open Source Analytics Initiative. JLA. 2014;1(1):6–47.
- 75. Kemper L, Vorhoff G, Wigger BU. Predicting student dropout: A machine learning approach. European Journal of Higher Education. 2020;10(1):28–47.
- 76. Arnold KE, Pistilli MD. Course signals at Purdue: using learning analytics to increase student success. In: Proceedings of the 2nd International Conference on Learning Analytics & Knowledge [Internet]. Vancouver, Canada: ACM Press; 2012 [cited 2019 May 1]. p. 267. http://dl.acm.org/citation.cfm?doid=2330601.2330666
- 77. Harrison S, Villano R, Lynch G, Chen G. Measuring financial implications of an early alert system. In: Proceedings of the Sixth International Conference on Learning Analytics & Knowledge [Internet]. Edinburgh, UK: ACM Press; 2016 [cited 2019 May 1]. p. 241–8. http://dl.acm.org/citation.cfm?doid=2883851.2883923
- 78. Clayson DE. Student Evaluations of Teaching: Are They Related to What Students Learn?: A Meta-Analysis and Review of the Literature. Journal of Marketing Education. 2009 Apr;31(1):16–30.
- 79. Ouellette JA, Wood W. Habit and Intention in Everyday Life: The Multiple Processes by Which Past Behavior Predicts Future Behavior. Psychological Bulletin. 1998;124(1):54–74.
- 80. Hattie J. Visible learning for teachers: Maximizing impact on learning. New York, NY: Routledge; 2012.
- 81. Greene JA. Self-Regulation in Education. 1st ed. New York, NY: Routledge; 2017.
- 82. Eccles JS, Wigfield A. Motivational beliefs, values, and goals. Annual Review of Psychology. 2002;53:109–32. pmid:11752481
- 83. Eccles JS, Wigfield A. From expectancy-value theory to situated expectancy-value theory: A developmental, social cognitive, and sociocultural perspective on motivation. Contemporary Educational Psychology. 2020;61.
- 84. Wu LL, Fischer C, Rodriguez F, Washington GN, Warschauer M. Project-based engineering learning in college: associations with self-efficacy, effort regulation, interest, skills, and performance. SN Soc Sci. 2021 Dec;1(12):287. pmid:34901878
- 85. Shulman LS. Those who understand: Knowledge growth in teaching. Educational Researcher. 1986;15(2):4–14.
- 86. Ball DL, Thames MH, Phelps G. Content knowledge for teaching: What makes it special? Journal of Teacher Education. 2008 Nov 1;59(5):389–407.
- 87. Papay JP, Kraft MA. Productivity returns to experience in the teacher labor market: Methodological challenges and new evidence on long-term career improvement. Journal of Public Economics. 2015;130:105–19.
- 88. Fischer C, Fishman B, Dede C, Eisenkraft A, Frumin K, Foster B, et al. Investigating relationships between school context, teacher professional development, teaching practices, and student achievement in response to a nationwide science reform. Teaching and Teacher Education. 2018 May;72:107–21.
- 89. Fischer C, Foster B, McCoy A, Lawrenz F, Dede C, Eisenkraft A, et al. Identifying Levers Related to Student Performance on High-Stakes Science Exams: Examining School, Teaching, Teacher, and Professional Development Characteristics. Teachers College Record. 2020;122(2).
- 90. Cohen J. Statistical power analysis for the behavioral sciences. 2nd ed. Hillsdale, NJ: Lawrence Erlbaum Associates; 1988.
- 91. Dunlap JC, Lowenthal PR. Tweeting the Night Away: Using Twitter to Enhance Social Presence. Journal of Information Systems Education. 2010;20(2):129–35.
- 92. Matz RL, Koester BP, Fiorini S, Grom G, Shepard L, Stangor CG, et al. Patterns of Gendered Performance Differences in Large Introductory Courses at Five Research Universities. AERA Open. 2017 Oct;3(4):233285841774375.
- 93. Fischer C, Witherspoon E, Nguyen H, Feng Y, Fiorini S, Vincent-Ruz P, et al. Advanced placement course credit and undergraduate student success in gateway science courses. J Res Sci Teach. 2022 Jul 23;tea.21799.
- 94. Young T, Hazarika D, Poria S, Cambria E. Recent Trends in Deep Learning Based Natural Language Processing. IEEE Computational Intelligence Magazine. 2018 Nov 24;13(3):55–75.
- 95. Hirschberg J, Manning CD. Advances in natural language processing. Science. 2015;349(6245):261–6. pmid:26185244
- 96. Bearman P, McAllister W. Incite. Measuring liberal arts [Internet]. 2022. https://incite.columbia.edu/measuring-liberal-arts/
- 97. Baldwin S, Ching YH, Hsu YC. Online Course Design in Higher Education: A Review of National and Statewide Evaluation Instruments. TechTrends. 2018 Jan;62(1):46–57.