
Cognitive load and teachers’ innovative behavior in AI-enhanced English language instruction: A mediation analysis of technological adaptability

Abstract

This investigation examines the complex interrelationships between teachers’ cognitive load, technological adaptability, and innovative teaching behavior in AI-enhanced educational environments. Through the integration of Cognitive Load Theory, Technology Acceptance Model, and Innovation Diffusion Theory, we develop a sophisticated theoretical framework for understanding how cognitive demands influence teaching innovation through adaptive mechanisms. Employing exploratory structural equation modeling with WLSMV estimation (N = 600), we analyze data collected through rigorously validated instruments measuring AI-assisted Teaching Cognitive Load (ATCL), technological adaptability, and innovative teaching behavior. Results reveal a significant negative relationship between cognitive load and innovative teaching behavior (β = −.134, p < .001), mediated by technological adaptability (indirect effect β = −.171, p < .001). The measurement model demonstrates exceptional psychometric properties (α = .91−.93; AVE = .64−.68) and establishes measurement invariance across teacher subgroups (ΔCFI ≤ .001). These findings advance theoretical understanding of cognitive-adaptive mechanisms in technology-enhanced teaching while providing empirically validated pathways for enhancing pedagogical innovation. The study contributes methodologically through the development of the ATCL scale and analytically through sophisticated mediation analysis techniques. Implications extend to professional development strategies, institutional policy formulation, and the theoretical conceptualization of cognitive load in AI-enhanced educational environments.

1. Introduction

The integration of artificial intelligence (AI) into educational ecosystems represents a transformative shift in teaching practices, fundamentally altering pedagogical approaches in language instruction. This transformation, occurring at the intersection of cognitive psychology, educational technology, and pedagogical innovation, requires understanding the interplay between teachers’ cognitive processes, adaptive capabilities, and innovative practices. The theoretical framework underpinning this investigation synthesizes Cognitive Load Theory (CLT), Technology Acceptance Model (TAM), and Innovation Diffusion Theory (IDT) to examine the dynamics of AI integration in educational contexts.

The theoretical foundation for examining teachers’ cognitive processes during AI integration stems from Cognitive Load Theory. While CLT’s initial conceptualization [1] focused on learner cognition, its scope has expanded to encompass instructional design and delivery [2]. Contemporary interpretations of CLT, particularly in technology-enhanced environments, posit that educators’ cognitive resources are allocated across multiple dimensions: the intrinsic complexity of AI systems, the extraneous load of interface navigation, and the germane load associated with pedagogical integration [3].

These cognitive considerations intersect with technological adaptability through Teacher Cognitive Adaptability Theory, which links educators’ instructional flexibility to their cognitive processing capabilities. This perspective is complemented by the Technology Acceptance Model, which explains psychological mechanisms underlying technology adoption, and Innovation Diffusion Theory, which addresses social and systemic factors influencing technological integration. Recent empirical investigations have documented the relationships between cognitive demands and pedagogical innovation in AI-enhanced educational environments [4]. This transformation is particularly relevant in English language instruction, where AI integration demands attention to linguistic pedagogy, technological proficiency, and innovative teaching methodologies.

Despite extensive documentation of AI implementation in educational settings, several critical research gaps persist. First, the cognitive mechanisms underlying successful technology adoption remain inadequately theorized, particularly regarding the relationships between cognitive load, technological adaptability, and innovative teaching practices. Second, the pathways through which cognitive load influences technological adaptability and innovative behavior have not been systematically investigated. While existing research acknowledges the importance of adaptive capabilities [5], the mediating processes remain obscured by methodological limitations and theoretical fragmentation. Third, current analytical approaches have proven insufficient for examining the multidimensional relationships between cognitive processes and pedagogical innovation, impeding the development of comprehensive theoretical models to inform professional development strategies.

This study examines the relationships between teachers’ cognitive load, technological adaptability, and innovative teaching behavior in AI-assisted English education through mediation analysis. The investigation addresses three key research questions: (1) How do variations in cognitive load influence teachers’ innovative behavior in AI-assisted English teaching contexts, considering both direct and indirect pathways? (2) To what extent does technological adaptability mediate the relationship between cognitive load and innovative teaching behavior, and what are the underlying psychological mechanisms facilitating this mediation? (3) What specific cognitive and adaptive processes facilitate or inhibit innovative teaching practices in AI-enhanced educational environments, and how can these insights inform professional development interventions?

This research contributes to both theoretical advancement and practical application in AI-assisted education. Theoretically, it integrates cognitive load theory with technological adaptability models, offering a nuanced understanding of innovation in educational contexts. Methodologically, it demonstrates the utility of mediation analysis in examining complex educational phenomena. Practically, the findings inform the development of evidence-based interventions for enhancing teachers’ technological adaptability and innovative capabilities in AI-enhanced educational environments.

2. Literature review and theoretical framework

2.1. Cognitive load theory in AI-enhanced education

2.1.1. Theoretical evolution and contemporary extensions.

Cognitive Load Theory (CLT) has undergone significant theoretical evolution since its initial conceptualization by Sweller [1], transitioning from a narrow focus on problem-solving cognition to a comprehensive framework for understanding complex instructional environments. The theory’s contemporary interpretation, as articulated by Sweller, Ayres, and Kalyuga [6], provides sophisticated analytical tools for examining the multifaceted cognitive demands imposed by technological integration in pedagogical contexts. This theoretical progression reflects a crucial paradigm shift from viewing cognitive load as a unitary construct to understanding it as a complex, interactive phenomenon particularly relevant in AI-enhanced educational environments.

Chandler and Sweller’s [3] foundational work established the tripartite framework of intrinsic, extraneous, and germane cognitive load, which has proved instrumental in analyzing teachers’ cognitive processes during technology integration. Recent theoretical advances, particularly Skulmowski and Xu’s [7] reconceptualization of extraneous cognitive load in digital environments, suggested that traditional CLT frameworks require substantial adaptation to fully capture the unique cognitive demands of AI-enhanced teaching contexts. This theoretical refinement emphasizes the dynamic interplay between different types of cognitive load, challenging earlier linear conceptualizations.

2.1.2. Empirical validation and methodological advances.

Empirical investigations have significantly enhanced our understanding of cognitive load in educational technology contexts. Dalinger’s [8] development of instruments measuring teachers’ cognitive load during technology adoption represented a crucial methodological advancement, demonstrating that cognitive load manifested through complex interactions rather than simple additive effects. This finding had profound implications for understanding how teachers process and integrate AI-based instructional tools.

2.2. Technology acceptance and teacher adaptability

2.2.1. Theoretical foundations and extensions.

The Technology Acceptance Model (TAM) provided a foundational framework for understanding teachers’ adaptive responses to AI integration. Davis’s [9] seminal work established perceived usefulness and ease of use as core determinants of technology adoption, while Venkatesh and Davis’s [10] extensions incorporated social influence and cognitive instrumental processes specific to professional contexts. Recent applications to generative AI contexts revealed that traditional TAM constructs required substantial modification. Gong, Xu, Luo, and Lin’s [11] structural equation modeling analysis demonstrated that LLM adoption among teacher education students depends critically on perceived anthropomorphism and conversational quality beyond conventional usefulness constructs, suggesting that human-like AI interactions introduced novel psychological dynamics absent in earlier educational technologies.

2.2.2. Contemporary applications in AI-enhanced education.

Empirical investigations of generative AI acceptance reveal complex patterns across institutional contexts. An investigation of EFL teachers [12] established use experience as a critical moderator, with hands-on engagement significantly strengthening perceived usefulness–adoption intention relationships. Chen’s [13] analytic hierarchy process study identified technological competence and institutional support as primary adoption determinants, demonstrating that acceptance transcends individual psychological factors to encompass organizational infrastructure and professional development resources.

Critical for the present investigation, Zhang et al.’s [14] quasi-experimental evaluation of AI-driven feedback tools established empirical linkages between cognitive load and technology acceptance. Instructors reporting higher extraneous cognitive load during initial implementation demonstrated significantly lower acceptance levels, with perceived pedagogical effectiveness mediating this relationship. This finding provided empirical precedent for examining cognitive load as antecedent to technology acceptance and adaptive behavior in AI-enhanced educational environments. These contemporary investigations collectively demonstrate that while TAM’s foundational constructs maintained explanatory power, generative AI technologies’ distinctive characteristics necessitate theoretical extensions incorporating AI-specific factors including anthropomorphism, output reliability, institutional support, and cognitive processing demands examined in this study.

2.3. Innovation diffusion and teaching behavior

2.3.1. Historical development and theoretical evolution of IDT.

Innovation Diffusion Theory (IDT) has evolved substantially since Rogers’ [15] seminal work, progressing from a descriptive model to a robust explanatory framework. Rogers’ initial conceptualization identified five adopter categories (innovators, early adopters, early majority, late majority, and laggards) and traced innovation adoption through awareness, interest, evaluation, trial, and adoption stages. Over subsequent decades, IDT has been empirically validated across diverse domains, with educational applications emerging as a particularly productive area of inquiry. Rogers’ [16] later refinements incorporated key attributes influencing adoption rates—relative advantage, compatibility, complexity, trialability, and observability—that directly inform our understanding of cognitive barriers to technology integration.

The complexity attribute merits particular attention in the context of our research, as it directly corresponds to cognitive load constructs. Early educational applications of IDT by Hall and Hord [17] established the Concerns-Based Adoption Model, which documented how teachers’ concerns about cognitive demands significantly influenced technology adoption patterns. This theoretical development paralleled advances in cognitive load research, creating natural conceptual linkages between the two frameworks that inform our integrated approach. Fullan’s [18] work further established the relationship between perceived complexity and implementation success, demonstrating how cognitive demands influence adoption outcomes in educational settings.

The evolution of IDT has progressively moved toward greater integration with cognitive frameworks. Moore and Benbasat’s [19] extensions incorporated the perceived ease of use construct—directly paralleling elements of the Technology Acceptance Model—while highlighting how cognitive perceptions influence diffusion patterns. Venkatesh et al.’s [20] Unified Theory of Acceptance and Use of Technology further synthesized IDT with elements from cognitive models, creating theoretical precedent for our integrated approach. This historical trajectory demonstrates IDT’s progressive convergence with cognitive frameworks, justifying its inclusion in our theoretical model.

2.3.2. Empirical evidence linking IDT to cognitive load and teaching innovation.

The empirical literature demonstrates clear connections between IDT constructs and cognitive load in educational technology contexts. Sahin’s [21] systematic review of 45 IDT applications in educational settings established complexity perception (a cognitive variable) as a primary determinant of adoption success, explaining 49–87% of variance across multiple studies. This review documented how cognitive demands systematically influence technology diffusion patterns, providing empirical justification for examining cognitive load as a predictor of innovative teaching behavior. Subsequent meta-analytic work by Straub [22] further established how cognitive processing requirements influence technology adoption trajectories across diverse educational settings.

The relevance of IDT to our research questions is further reinforced by empirical investigations demonstrating bidirectional relationships between diffusion patterns and cognitive load. Surry and Ely’s [23] longitudinal studies of educational technology adoption documented how initial cognitive demands gradually diminish through collective adaptation processes, suggesting that diffusion mechanisms may moderate cognitive load effects. This finding directly informs our hypothesis regarding the mediating role of technological adaptability, as it suggests adaptation processes that potentially mitigate cognitive barriers to innovation.

Recent empirical work specifically examining AI integration in educational contexts further validates IDT’s relevance to our research questions. Wu et al.’s [24] investigation of factors influencing rural teachers’ innovative behavior identified cognitive complexity as a critical determinant of AI adoption, establishing direct empirical connections between our cognitive load constructs and innovation outcomes. Similarly, Li and Zhu’s [25] study demonstrated significant relationships between cognitive demands, adaptive responses, and innovative teaching behaviors, providing empirical precedent for our hypothesized mediation model.

2.3.3. Theoretical integration: Bridging cognitive load, adaptability, and innovation.

The integration of IDT with cognitive load theory and technology acceptance models provided essential theoretical mechanisms for understanding how cognitive demands influence teaching innovation through adaptive processes. While cognitive load theory explains the internal cognitive constraints that teachers experience when engaging with AI technologies, IDT illuminates how these constraints manifest in observable adoption patterns and innovative behaviors. This integration addresses a critical theoretical gap in understanding the transition from cognitive experience to behavioral outcomes.

The complexity attribute in IDT directly parallels cognitive load constructs, providing a theoretical bridge between internal processing demands and observable innovation patterns. Rogers’ [26] specification that complexity perception negatively influences adoption rates corresponds to our hypothesis that cognitive load negatively influences innovative teaching behavior. Similarly, IDT’s emphasis on adaptation processes during implementation aligns with our technological adaptability construct, providing theoretical justification for its hypothesized mediating role.

This theoretical integration is particularly relevant in AI-enhanced educational environments, where cognitive demands are often substantial due to the novelty and complexity of AI systems. The IDT framework provided essential explanatory mechanisms for understanding how cognitive load influences innovation patterns, while also identifying potential intervention points for enhancing technological adaptability. This theoretical comprehensiveness is critical for addressing our research questions regarding the relationships between cognitive load, technological adaptability, and innovative teaching behavior in AI-enhanced educational contexts.

2.4. Methodological advances in educational technology research

2.4.1. Analytical frameworks and measurement development.

Recent methodological advances, particularly in structural equation modeling [27] and multiple mediation analysis, have enabled more nuanced understanding of the complex relationships between cognitive load, adaptability, and innovation. Teo and Tsai’s [28] methodological framework provided sophisticated analytical tools for examining these intricate pathways, while In’nami and Koizumi’s [29] work established rigorous standards for educational research methodology.

2.4.2. Measurement validation and analytical sophistication.

The development and validation of measurement instruments has become increasingly sophisticated, as evidenced by the emergence of multi-dimensional constructs and advanced psychometric techniques. This methodological evolution facilitates more precise investigation of the complex relationships between cognitive, behavioral, and institutional variables in AI-enhanced educational settings.

2.5. Theoretical integration and research gaps

2.5.1. Theoretical mechanisms: Bridging internal cognition and external adaptation.

The integration of Cognitive Load Theory, Technology Acceptance Model, and Innovation Diffusion Theory reveals a fundamental theoretical challenge: elucidating how cognitive load—an internally-determined phenomenon rooted in working memory constraints—can be mediated by external technological factors. This apparent paradox necessitates examining contemporary theoretical frameworks that bridge cognitive and socio-technical domains (Fig 1).

Fig 1. Theoretical integration framework.

Theoretical Integration Framework: Mediating Role of Technological Adaptability in the Relationship between Cognitive Load and Innovative Teaching Behavior in AI-Enhanced Educational Contexts. This conceptual model illustrates the integration of Cognitive Load Theory (CLT), Technology Acceptance Model (TAM), and Innovation Diffusion Theory (IDT) through three theoretical mechanisms: distributed cognition, cognitive offloading, and adaptive scaffolding. The model depicts technological adaptability as a mediator between cognitive load and innovative teaching behavior, with specific hypotheses (H1a, H1b, H1c) representing the direct and indirect pathways examined in this study.

https://doi.org/10.1371/journal.pone.0343002.g001

Distributed cognition theory provided the primary theoretical mechanism for understanding this mediation process [30]. Within AI-enhanced educational environments, cognitive processes extend beyond individual mental architectures to encompass technological systems that function as integrated components of distributed cognitive networks. Technological adaptability, in this framework, represents educators’ capacity to effectively orchestrate these human-AI cognitive assemblages, transforming isolated cognitive constraints into distributed processing systems [31]. This theoretical perspective fundamentally reconceptualizes cognitive load from a purely internal limitation to a dynamically negotiated phenomenon spanning human and technological agents.

The cognitive offloading framework offers complementary explanatory mechanisms (Gerlich [32]). When educators develop technological adaptability, they acquire capabilities to strategically externalize cognitive functions onto AI systems, thereby restructuring internal processing demands. This process operates through metacognitive monitoring and strategic resource allocation, as demonstrated in recent investigations of AI-mediated instruction (Zhai & Nezakatgoo [33]). Technological pedagogical reasoning competency emerged as a critical mediating construct, suggesting that adaptability encompassed both technical proficiency and metacognitive awareness of cognitive distribution opportunities.

Recent theoretical syntheses reveal bidirectional relationships between cognitive experiences and adaptive responses (Gerlich [34]). Initial encounters with AI-induced cognitive demands trigger compensatory adaptation mechanisms, which subsequently reshape cognitive processing patterns through iterative human-AI interaction cycles. This dynamic interplay, documented across diverse educational contexts (Glickman & Sharot [35]), suggests that technological adaptability not only responds to cognitive load but actively reconfigures it, creating recursive feedback loops that progressively transform teaching practices.

The institutional dimension introduces additional theoretical complexity. Tsakeni [36] conceptualizes “adaptive scaffolding” as organizational structures that facilitate cognitive-technological integration, while Zhang [37] identifies contextual moderators influencing mediation strength. These findings align with socio-technical systems theory, suggesting that cognitive-adaptive processes are embedded within broader institutional ecologies that either facilitate or constrain adaptation pathways (Carayon et al. [38]).

2.5.2. Research gaps and theoretical hypotheses.

Despite theoretical advances, critical gaps persist in understanding cognitive-adaptive mechanisms within AI-enhanced educational contexts. First, while associations between cognitive load and technological adaptability have been established (Zhu et al. [39]), the precise causal mechanisms and temporal dynamics remain inadequately theorized. Existing frameworks insufficiently address how different cognitive load types—intrinsic, extraneous, and germane—differentially interact with adaptive processes, particularly in rapidly evolving AI environments [40].

Second, the mediating role of technological adaptability requires more nuanced theoretical specification. Current conceptualizations inadequately distinguish between reactive adaptation (responding to cognitive demands) and proactive adaptation (anticipating and preventing cognitive overload). Furthermore, the cognitive offloading literature has not been systematically integrated with educational technology frameworks, leaving theoretical gaps in understanding how teachers strategically distribute cognitive resources across human-AI assemblages. For example, distributed cognition and cognitive offloading research highlights how modern technologies can offload cognitive tasks and influence mental processing [41,42].

Third, contextual and individual moderators remain undertheorized. While institutional support structures and professional autonomy are recognized as influential factors, their specific mechanisms of action within the cognitive-adaptive framework require clarification. Similarly, individual differences in working memory capacity, technological self-efficacy, and metacognitive capabilities likely moderate mediation pathways, yet these interactions remain empirically unexplored within AI-enhanced teaching contexts.

Fourth, the implementation stage dimension introduces temporal complexity that current static models inadequately capture. The relationship between cognitive load and innovation likely evolves as teachers progress through adoption stages, yet existing research predominantly employs cross-sectional designs that cannot capture these developmental trajectories. Understanding how cognitive-adaptive processes unfold across implementation phases is essential for developing targeted interventions.

These theoretical considerations and empirical gaps inform the following hypotheses:

  1. H1: Teachers’ cognitive load negatively affects innovative teaching behavior in AI-enhanced environments, both directly and indirectly through their technological adaptability.
  2. H2: The relationship between cognitive load and technological adaptability is moderated by institutional support structures and professional autonomy.
  3. H3: The indirect effect of cognitive load on innovative teaching behavior through technological adaptability varies systematically across different stages of AI implementation and adoption.

3. Methodology

3.1. Research design and sample characteristics

This investigation employed a cross-sectional research design within the framework of a broader research initiative examining technology integration in Chinese higher education (Project ID: NSSFC-21BYY092). The study was conducted between September and December 2023 across 15 higher education institutions in eastern and northwestern China, selected to represent diverse institutional contexts and varying levels of AI technology implementation. Employing a multistage stratified random sampling approach, institutions were stratified by type, region, and level of AI technology implementation, followed by systematic sampling of English language departments and individual teachers. Sample size determination employed Monte Carlo simulation techniques based on pilot studies (n = 45), ensuring sufficient statistical power (1-β > .90) for detecting hypothesized relationships.
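The Monte Carlo approach to sample-size determination described above can be sketched as follows. This is a minimal illustration, not the authors’ actual simulation: it assumes a simple X → M → Y mediation with hypothetical standardized paths of .30 and uses the joint-significance test as the detection criterion.

```python
import numpy as np

def mc_power(n, a=0.3, b=0.3, reps=1000, seed=0):
    """Monte Carlo power of the joint-significance test for the
    indirect effect a*b in a simple X -> M -> Y mediation.
    Paths a and b are standardized; all variables have unit variance."""
    rng = np.random.default_rng(seed)

    def path_significant(u, v):
        # OLS slope and its standard error, two-tailed test at alpha = .05
        uc, vc = u - u.mean(), v - v.mean()
        beta = (uc @ vc) / (uc @ uc)
        resid = vc - beta * uc
        se = np.sqrt((resid @ resid) / (len(u) - 2) / (uc @ uc))
        return abs(beta / se) > 1.96

    hits = 0
    for _ in range(reps):
        x = rng.standard_normal(n)
        m = a * x + np.sqrt(1 - a**2) * rng.standard_normal(n)
        y = b * m + np.sqrt(1 - b**2) * rng.standard_normal(n)
        hits += path_significant(x, m) and path_significant(m, y)
    return hits / reps
```

With these hypothetical paths, a sample of 600 yields power near 1, whereas small samples fall well short of the .90 target the authors report.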

The analytical sample comprised 600 English language teachers (female: 62%; male: 38%; mean age: 38.6 years, SD = 8.3) with teaching experience ranging from 1 to 35 years (M = 12.4, SD = 7.8), distributed across experience levels (1–5 years: 27%; 6–10 years: 33%; 11–15 years: 22%; > 15 years: 18%). Participants represented diverse educational contexts (public universities: 68%; private institutions: 32%) and varied levels of prior AI technology experience (beginner: 32%; intermediate: 45%; advanced: 23%). The institutional contexts reflected systematic variation in technological infrastructure, with 28% reporting comprehensive AI integration platforms, 46% reporting partial implementation, and 26% reporting pilot-stage implementation.

3.2. Instrumentation and measurement development

The measurement instruments employed in this study were systematically adapted from established scales with demonstrated psychometric properties, followed by a rigorous validation process to ensure contextual appropriateness for AI-enhanced English language teaching environments. Adaptation procedures for all instruments followed a standardized protocol comprising: (1) initial item selection and modification based on theoretical considerations, (2) cognitive interviewing with experienced educators, (3) expert panel evaluation, and (4) pilot testing with subsequent psychometric analysis.

The AI-assisted Teaching Cognitive Load (ATCL) scale was adapted from Dalinger’s [8] Teacher Technology Cognitive Load Instrument and Leppink et al.’s Cognitive Load Instrument. Cognitive interviews conducted with 12 experienced educators employed think-aloud protocols during scale completion (60–90 minutes per interview), followed by retrospective probing regarding item interpretation. Expert review involved five educational technology specialists and three language teaching experts who evaluated item relevance and clarity using standardized rating forms (inter-rater agreement threshold: 85%). The final ATCL scale comprised three subscales capturing distinct dimensions of cognitive load: Intrinsic Load (6 items, α = .88), Extraneous Load (7 items, α = .90), and Germane Load (5 items, α = .86).

The Technological Adaptability measure integrated items from Ployhart and Bliese’s [43] Adaptive Performance Scale, Parasuraman and Colby’s [44] Technology Readiness Index, and Mishra and Koehler’s [45] TPACK instrument. Following similar validation procedures (cognitive interviews with 8 educators, expert review by 6 specialists), the final measure comprised four subscales: Cognitive Flexibility (4 items, α = .85), Innovation Orientation (4 items, α = .87), Uncertainty Management (4 items, α = .84), and Integration Capability (4 items, α = .89).

The Innovative Teaching Behavior scale was adapted from Thurlings et al.'s [46] Teacher Innovative Behavior Instrument and Janssen’s [47] Innovative Work Behavior measure. Adaptation involved cognitive interviews with 10 English teachers and expert review by a panel of 7 specialists. The final scale comprised three subscales: Idea Generation (5 items, α = .86), Idea Promotion (5 items, α = .88), and Idea Implementation (5 items, α = .90).

All scales employed 7-point Likert response formats, with construct validity established through confirmatory factor analysis. The measurement validation process demonstrated robust psychometric properties across all instruments, with reliability coefficients exceeding conventional thresholds and factor structures aligning with theoretical expectations.
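The subscale Cronbach’s alpha coefficients reported above follow the standard formula α = k/(k−1) · (1 − Σσ²ᵢ/σ²ₜₒₜ). A minimal sketch of that computation from a raw item-response matrix (not the authors’ code):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents x k_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)       # per-item variances
    total_var = items.sum(axis=1).var(ddof=1)   # variance of scale totals
    return k / (k - 1) * (1 - item_vars.sum() / total_var)
```

Perfectly parallel items give α = 1, while uncorrelated items give α near 0, which is why values in the .84–.90 range indicate strong internal consistency.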

3.3. Data collection and analytical strategy

Data collection proceeded according to a standardized protocol to ensure measurement consistency across institutional contexts. Participants were recruited through institutional coordinators and accessed the survey through a secure online platform between September and December 2023. Methodological features to enhance data quality included strategically positioned attention check items (n = 4), page timing metrics to identify excessively rapid completion, and randomized item presentation within scales to mitigate order effects. The survey required approximately 25–35 minutes to complete based on pilot testing and actual completion metrics.

The analytical framework employed structural equation modeling with weighted least squares mean and variance adjusted (WLSMV) estimation, selected for its superior performance with ordinal data and non-normal distributions. Analysis proceeded through a two-stage approach, examining the measurement model before testing structural relationships. Preliminary data screening revealed minimal missing data (<2% per variable) determined to be MCAR (χ² = 135.42, df = 128, p = .312), permitting the use of full information maximum likelihood estimation.

Measurement model evaluation employed confirmatory factor analysis with multiple fit indices (CFI, TLI, RMSEA, SRMR), using established thresholds for acceptable fit [48]. Reliability was assessed through multiple indicators, including Cronbach’s alpha, composite reliability, and average variance extracted, while discriminant validity was evaluated by comparing the square root of AVE values with inter-construct correlations.
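The reliability and validity indices named above can be computed directly from standardized factor loadings. The sketch below uses the conventional formulas for composite reliability and AVE together with the Fornell–Larcker comparison; the example loadings are hypothetical, chosen only to show that loadings near .80 produce AVE values in the .64 range reported in this study.

```python
import numpy as np

def composite_reliability(loadings):
    """CR = (sum lambda)^2 / ((sum lambda)^2 + sum(1 - lambda^2)),
    assuming standardized loadings and uncorrelated errors."""
    lam = np.asarray(loadings, dtype=float)
    num = lam.sum() ** 2
    return num / (num + (1 - lam**2).sum())

def ave(loadings):
    """Average variance extracted: mean squared standardized loading."""
    lam = np.asarray(loadings, dtype=float)
    return (lam**2).mean()

def fornell_larcker_ok(ave_a, ave_b, corr_ab):
    """Discriminant validity: sqrt(AVE) of each construct must
    exceed the inter-construct correlation."""
    return min(ave_a, ave_b) ** 0.5 > abs(corr_ab)
```

For instance, six items loading uniformly at .80 give AVE = .64 and CR ≈ .91.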

The structural model examined hypothesized relationships among latent constructs, with particular focus on the mediating role of technological adaptability in the relationship between cognitive load and innovative teaching behavior. Mediation analysis employed the product of coefficients approach with bias-corrected bootstrap confidence intervals (5,000 resamples) to test the significance of indirect effects. While our conceptual model specified technological adaptability as a mediator, the multidimensional nature of both cognitive load and technological adaptability created a complex mediation structure. This analysis is characterized as a mediation analysis with multidimensional constructs rather than a multiple mediation analysis with parallel mediators, examining specific indirect effects through each dimension while controlling for other dimensions.
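The bias-corrected bootstrap logic for an indirect effect can be sketched as below. This is illustrative only: the study’s actual analysis used latent variables in Mplus, whereas this toy version estimates the a and b paths by OLS on observed scores (with the b path controlling for X, per the product-of-coefficients approach).

```python
import numpy as np
from statistics import NormalDist

def bc_bootstrap_indirect(x, m, y, B=2000, alpha=0.05, seed=0):
    """Bias-corrected bootstrap CI for the indirect effect a*b in a
    simple X -> M -> Y mediation, estimated by OLS on observed scores."""
    x, m, y = (np.asarray(v, dtype=float) for v in (x, m, y))
    n = len(x)
    rng = np.random.default_rng(seed)

    def indirect(idx):
        xs, ms, ys = x[idx], m[idx], y[idx]
        xc, mc = xs - xs.mean(), ms - ms.mean()
        a = (xc @ mc) / (xc @ xc)                          # a path: M ~ X
        design = np.column_stack([np.ones(len(idx)), ms, xs])
        b = np.linalg.lstsq(design, ys, rcond=None)[0][1]  # b path: Y ~ M + X
        return a * b

    point = indirect(np.arange(n))
    boots = np.array([indirect(rng.integers(0, n, n)) for _ in range(B)])

    nd = NormalDist()
    # bias correction from the share of bootstrap draws below the point estimate
    z0 = nd.inv_cdf(float(np.clip((boots < point).mean(), 1e-6, 1 - 1e-6)))
    zc = nd.inv_cdf(1 - alpha / 2)
    lo, hi = nd.cdf(2 * z0 - zc), nd.cdf(2 * z0 + zc)
    return point, np.quantile(boots, [lo, hi])
```

An indirect effect is deemed significant when the resulting confidence interval excludes zero.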

Measurement invariance testing examined whether the measurement model functioned equivalently across teacher subgroups defined by teaching experience and institutional type, proceeding through a sequential approach testing increasingly restrictive models (configural, metric, scalar, and structural invariance). Changes in fit indices (ΔCFI ≤ .01, ΔRMSEA ≤ .015) across increasingly constrained models were used to evaluate invariance, following Chen’s [13] recommendations. All analyses were conducted using Mplus version 8.3, with supplementary analyses performed in R version 4.0.3.
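The sequential decision rule can be stated compactly in code. The fit values below are illustrative placeholders loosely patterned on the configural results, not the actual Table 6 estimates.

```python
# Chen's (2007) criteria: a more constrained invariance model is retained if
# CFI drops by no more than .01 and RMSEA rises by no more than .015.
# Fit values below are illustrative, not the study's Table 6 estimates.

def invariance_holds(cfi_prev, cfi_curr, rmsea_prev, rmsea_curr,
                     max_d_cfi=0.01, max_d_rmsea=0.015):
    return (cfi_prev - cfi_curr) <= max_d_cfi and (rmsea_curr - rmsea_prev) <= max_d_rmsea

models = [("configural", 0.978, 0.083),
          ("metric",     0.977, 0.083),
          ("scalar",     0.976, 0.084),
          ("structural", 0.976, 0.084)]
for (_, cfi_p, r_p), (name, cfi_c, r_c) in zip(models, models[1:]):
    print(name, "holds" if invariance_holds(cfi_p, cfi_c, r_p, r_c) else "fails")
```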

To assess potential common method bias arising from single-source self-report data, we conducted Harman’s single-factor test by entering all measurement items into an exploratory factor analysis without rotation. The results indicated that the first unrotated factor accounted for 40.2% of variance, below the 50% threshold, suggesting that common method bias is not a major concern in this study [49]. This finding, combined with procedural remedies implemented during data collection (e.g., guaranteed anonymity, counterbalanced item order), provides reasonable confidence that common method variance does not substantially threaten the validity of our findings.
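A rough stand-in for Harman's check is the share of total item variance carried by the first principal component of the item correlation matrix, found here by power iteration. The 4 × 4 equicorrelated matrix is a toy example, not the study's 25-item data, and principal components only approximate the unrotated factor solution.

```python
# Approximate Harman's single-factor test: dominant eigenvalue of the item
# correlation matrix (power iteration) divided by the number of items gives
# the first factor's variance share. Matrix below is illustrative.

def first_eigenvalue(R, iters=100):
    k = len(R)
    v = [1.0] * k
    lam = 1.0
    for _ in range(iters):
        w = [sum(R[i][j] * v[j] for j in range(k)) for i in range(k)]
        lam = max(abs(x) for x in w)   # running estimate of dominant eigenvalue
        v = [x / lam for x in w]
    return lam

R = [[1.0 if i == j else 0.3 for j in range(4)] for i in range(4)]
share = first_eigenvalue(R) / len(R)   # first factor's share of total variance
print(f"{share:.1%} of variance on the first factor")  # below the 50% flag
```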

3.4. Ethical approval and research conduct

Ethical approval for this study was granted by the Institutional Review Board of Ningxia Medical University (approval No. NXMU-IRB-20230821) on August 21, 2023, with validity through August 20, 2024. The IRB review confirmed that the study protocol met all ethical standards stipulated by relevant national and institutional guidelines, including protection of human subjects, informed consent procedures, confidentiality protocols, and risk minimization measures.

All research procedures adhered to the Declaration of Helsinki and PLOS ONE guidelines for research involving human participants. In-service English teachers aged 18 years or older were prospectively recruited through institutional coordinators between September 1 and December 31, 2023. Participation was entirely voluntary, with no minors involved in the study. Before accessing the anonymous, secure online questionnaire, each participant reviewed an electronic information sheet detailing the study’s purpose, procedures, potential risks and benefits, and data handling protocols. Written informed consent was obtained through an electronic “I agree” button, confirming participants’ voluntary agreement to participate.

The study involved no invasive procedures, biomedical interventions, or high-risk protocols. All data were collected anonymously, with no personally identifiable information recorded at any stage of the research process. This ensured that investigators had no access to individually-identifiable data throughout data collection, analysis, and reporting. All collected data are stored securely and used solely for academic and research purposes in accordance with institutional data protection policies.

4. Results

4.1. Measurement validation

Preliminary analyses established robust psychometric properties across all theoretical constructs. We first examined descriptive statistics to characterize the measures. Table 1 shows satisfactory values, with means ranging from 3.84 (ATCL) to 4.12 (TA) and standard deviations indicating appropriate score dispersion (0.87–0.93). Skewness (−0.45 to −0.28) and kurtosis (−0.41 to 0.12) fell well within acceptable bounds, supporting the distributional assumptions underlying subsequent analyses. Internal consistency was excellent, with Cronbach’s alpha coefficients (ATCL: α = .92; TA: α = .91; ITB: α = .93) and composite reliability indices (CR = .93–.95) substantially exceeding conventional thresholds, while average variance extracted values (AVE = .64–.68) provided strong evidence of convergent validity.

Table 1. Descriptive statistics and initial psychometric properties.

https://doi.org/10.1371/journal.pone.0343002.t001
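The distributional screening behind Table 1 rests on moment-based skewness and excess kurtosis, which can be sketched with population formulas in plain Python. The scores below are invented Likert-style responses, not study data.

```python
import statistics

# Population-moment skewness and excess kurtosis (normal distribution -> 0, 0).
# Scores are invented for illustration.

def skewness(xs):
    m, s = statistics.fmean(xs), statistics.pstdev(xs)
    return sum(((x - m) / s) ** 3 for x in xs) / len(xs)

def excess_kurtosis(xs):
    m, s = statistics.fmean(xs), statistics.pstdev(xs)
    return sum(((x - m) / s) ** 4 for x in xs) / len(xs) - 3

scores = [5, 4, 4, 3, 5, 4, 2, 4, 3, 5, 4, 3]
print(round(skewness(scores), 2), round(excess_kurtosis(scores), 2))
```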

To verify that the observed data align with the theoretically proposed factor structure, we conducted confirmatory factor analysis, which assesses how well the observed indicators (survey items) represent the underlying latent constructs (cognitive load, technological adaptability, and innovative teaching behavior). The analysis substantiated the measurement model’s factorial validity. Table 2 shows robust standardized factor loadings across all constructs: ATCL (λ = .707–.883), TA (λ = .730–.868), and ITB (λ = .767–.867). Critical ratios were uniformly significant, ranging from 54.67 to 147.71 (p < .001), with squared multiple correlations (R2 = .500–.779) indicating strong item-level reliability. Notably, ATCL2 showed the highest factor loading (λ = .883, R2 = .779), while TA8 exhibited similarly robust psychometric properties (λ = .868, R2 = .753).
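Composite reliability and AVE follow directly from the standardized loadings, as the short sketch below shows. The six loadings are illustrative values within the range reported for ATCL, not the actual Table 2 estimates.

```python
# CR = (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances),
# AVE = mean of squared loadings, with error variance 1 - lambda^2 per item.
# Loadings are illustrative values in the reported ATCL range (.707-.883).

def composite_reliability(loadings):
    s = sum(loadings)
    err = sum(1 - l ** 2 for l in loadings)
    return s ** 2 / (s ** 2 + err)

def average_variance_extracted(loadings):
    return sum(l ** 2 for l in loadings) / len(loadings)

atcl = [0.71, 0.88, 0.80, 0.83, 0.76, 0.79]
print(round(composite_reliability(atcl), 3),
      round(average_variance_extracted(atcl), 3))
```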

The discriminant validity analysis (Table 3) revealed clear construct differentiation. The square roots of AVE values (ATCL = .812; TA = .800; ITB = .825) systematically exceeded inter-construct correlations, satisfying established discriminant validity criteria. Maximum shared variance indices (MSV = .123–.240) remained consistently below the corresponding AVE values, while average shared variance measures (ASV = .182–.212) further confirmed construct distinctiveness. The correlation matrix revealed theoretically consistent relationships, with ATCL demonstrating significant negative associations with both TA (r = −.350, p < .01) and ITB (r = −.304, p < .01).

Table 3. Discriminant validity analysis and inter-construct correlations.

https://doi.org/10.1371/journal.pone.0343002.t003
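The Fornell–Larcker criterion reduces to a simple pairwise comparison. In the sketch below, the AVE values are backed out of the reported square roots (.812, .800, .825), and only the two correlations quoted in the text are included; the TA–ITB correlation, which is not given in the prose, is omitted.

```python
import math

# Fornell-Larcker check: each construct's sqrt(AVE) must exceed the absolute
# value of its correlation with every other construct. AVE values are inferred
# from the reported square roots; the TA-ITB correlation is not quoted in the
# text and is therefore omitted.

def fornell_larcker_ok(ave, corrs):
    for (c1, c2), r in corrs.items():
        if math.sqrt(ave[c1]) <= abs(r) or math.sqrt(ave[c2]) <= abs(r):
            return False
    return True

ave = {"ATCL": 0.66, "TA": 0.64, "ITB": 0.68}          # sqrt: .812, .800, .825
corrs = {("ATCL", "TA"): -0.350, ("ATCL", "ITB"): -0.304}
print(fornell_larcker_ok(ave, corrs))
```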

4.2. Structural model evaluation

The hypothesized structural model exhibited satisfactory fit to the empirical data: χ2(275) = 1366.544, p < .001; RMSEA = .081 (90% CI [.077, .086]); CFI = .982; TLI = .981; SRMR = .030. Fig 2 illustrates the structural relationships, with significant standardized path coefficients and their standard errors; the model explained substantial variance in both technological adaptability (R2 = .123) and innovative teaching behavior (R2 = .301).
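As a consistency check, the reported RMSEA can be reproduced from the chi-square with the common sample formula; note that WLSMV adjusts the chi-square, so this is a back-of-envelope verification rather than a re-estimation.

```python
import math

# Point RMSEA from the model chi-square:
# sqrt(max(chi2 - df, 0) / (df * (n - 1))), the common sample formula.

def rmsea(chi2, df, n):
    return math.sqrt(max(chi2 - df, 0.0) / (df * (n - 1)))

print(round(rmsea(1366.544, 275, 600), 3))   # reproduces the reported .081
```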

4.3. Hypothesis testing

Systematic evaluation of the theoretical propositions (Table 4) revealed significant pathways aligned with hypothesized relationships. The analysis confirmed a significant direct negative effect of cognitive load on innovative teaching behavior (β = −.134, SE = .037, CR = −3.635, p < .001), supporting H1. H2 received robust empirical support through the substantial negative relationship between cognitive load and technological adaptability (β = −.350, SE = .036, CR = −9.663, p < .001). Similarly, H3 was validated by the significant positive effect of technological adaptability on innovative teaching behavior (β = .487, SE = .028, CR = 17.561, p < .001).

Table 4. Structural model results and hypothesis testing.

https://doi.org/10.1371/journal.pone.0343002.t004

4.4. Mediation analysis

The decomposition of effects (Table 5) revealed a significant total effect of cognitive load on innovative teaching behavior (β = −.304, SE = .036, CR = −8.445, p < .001), comprising both direct (β = −.134, SE = .037, CR = −3.635, p < .001) and indirect effects (β = −.171, SE = .020, CR = −8.438, p < .001) through technological adaptability. Bootstrap analysis with 5,000 resamples generated bias-corrected confidence intervals [−.211, −.131] for the indirect effect, excluding zero and confirming the mediational pathway. The proportion of the total effect mediated by technological adaptability was calculated as |indirect effect|/ |total effect| = |−0.171|/ |−0.304| = 56.3%, indicating that technological adaptability accounts for over half of the relationship between cognitive load and innovative teaching behavior.
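The decomposition arithmetic can be verified directly from the reported standardized coefficients:

```python
# Effect decomposition using the standardized coefficients reported in Table 5.
direct, indirect, total = -0.134, -0.171, -0.304

# direct + indirect should reproduce the total effect up to rounding of the
# published estimates (-0.305 vs. -0.304).
assert abs((direct + indirect) - total) < 0.005

proportion_mediated = abs(indirect) / abs(total)
print(f"proportion mediated = {proportion_mediated:.4f}")  # the 56.3% reported
```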

4.5. Measurement invariance

Multi-group analyses (Table 6) established measurement invariance through progressive model constraints. The configural model demonstrated adequate fit (CFI = .978, RMSEA = .083), with subsequent metric (Δχ2 = 41.33, Δdf = 22, ΔCFI = .001), scalar (Δχ2 = 53.33, Δdf = 22, ΔCFI = .001), and structural invariance (Δχ2 = 15.23, Δdf = 3, ΔCFI = .001) tests revealing minimal degradation in model fit indices. These findings substantiate the measurement model’s stability and structural relationships across teacher subgroups.

5. Discussion

5.1. Theoretical and methodological implications

This investigation’s empirical findings extend theoretical understanding of cognitive-adaptive mechanisms in AI-enhanced educational environments. The identified negative relationship between cognitive load and innovative teaching behavior (β = −.134, p < .001) corroborates Heller’s [50] conceptualization of cognitive constraints in technology integration while illuminating specific pathways through which these constraints operate. The mediation analysis revealing technological adaptability as a significant intervening mechanism (indirect effect β = −.171, p < .001) extends Park et al.’s [51] theoretical propositions by demonstrating how cognitive resources dynamically interact with adaptive capabilities in technology-rich educational settings. This theoretical synthesis provides a more nuanced understanding than previous unidimensional models, particularly in explaining how cognitive constraints are associated with innovative behavior through adaptive mechanisms.

The robust factor structure revealed through our measurement model (CFI = .982; TLI = .981) provides empirical support for Rey’s [52] theoretical framework while extending it to encompass AI-specific cognitive demands. This theoretical advancement is particularly significant when considered alongside Sanchez and Wiley’s [53] work on working memory capacity in technology-enhanced learning environments. The stability of these relationships across different teaching contexts, as evidenced by measurement invariance tests (ΔCFI ≤ .001), suggests a universal theoretical mechanism that transcends specific educational settings, supporting Zhang et al.’s [27] conceptualization of technology integration processes.

Our application of structural equation modeling with WLSMV estimation represents an appropriate application of established analytical techniques to examine complex educational phenomena. The development and validation of the AI-assisted Teaching Cognitive Load (ATCL) scale addresses methodological challenges identified by Hizam et al. [54] by providing a contextually appropriate instrument for assessing cognitive demands in AI-enhanced teaching environments. The measurement model’s robust psychometric properties establish a foundation for future research examining similar constructs in various educational technology contexts.

5.2. Contextual and institutional considerations

The study findings demonstrate notable contextual variations in cognitive-adaptive processes across different institutional environments, supporting Shi et al.’s [55] observations regarding context-specific patterns in AI implementation. Educational settings with established technology support infrastructures demonstrate weaker negative relationships between cognitive load and technological adaptability (β_established = −.296 vs. β_emerging = −.382, p < .05), aligning with Kamalov et al.’s [56] framework for sustainable technology integration. These variations reflect the socio-technical nature of AI integration, where institutional factors interact with individual cognitive processes to shape adoption outcomes.

The complex interplay between cognitive load and technological adaptability suggests the need for differentiated institutional approaches addressing both individual and organizational factors. Extraneous load showed a particularly strong negative relationship with innovation (β = −.208, p < .001), suggesting that institutional resources should prioritize reducing interface complexity and technical barriers before addressing intrinsic complexity through conceptual training. This finding extends Maričić et al.’s [57] exploration of continuous teaching intention in emerging-technology environments by identifying specific cognitive mechanisms associated with sustained engagement.

5.3. Implications for educational practice

Given the correlational nature of this cross-sectional investigation, the following discussion presents potential implications as hypotheses requiring experimental validation rather than prescriptive recommendations. While our findings establish significant associations among cognitive load, technological adaptability, and innovative behavior, causal directionality remains unverified.

The empirical patterns—cognitive load’s negative association with innovation (β = −.134, p < .001) mediated by technological adaptability (β = −.171, p < .001)—suggest three areas warranting rigorous investigation. First, the strong relationship between technological adaptability and innovative behavior (β = .487, p < .001) raises the question of whether professional development prioritizing adaptive capabilities might mitigate cognitive barriers. Yang and Huang’s [58] quasi-experimental study provides partial external support, demonstrating that reduced teaching loads during AI implementation improved adaptation outcomes. However, whether structured implementation protocols incorporating cognitive assessment, exploration periods with reduced responsibilities, and progressive complexity introduction would causally improve innovation requires randomized controlled validation.

Second, technological adaptability’s substantial mediating role (accounting for 56.3% of the total effect) suggests that metacognitive training targeting adaptive capabilities may be associated with enhanced innovation adoption. Convergent experimental evidence from Xu et al. [59] demonstrated that metacognitive support in generative AI environments reduced cognitive load and enhanced self-regulated learning. Nevertheless, our correlational design cannot establish whether such training would causally enhance adaptability in our specific context, necessitating controlled experimental investigation.

Third, the particularly strong negative association between extraneous cognitive load and innovation (β = −.208, p < .001) suggests that institutional interventions reducing system complexity and interface difficulties might prove effective. This hypothesis aligns with Pelletier et al.’s (2024) conceptualization of institutional “adaptive scaffolding” and Kalkan and Yilmaz Altuntas’s (2024) identification of contextual moderators. However, our study did not systematically measure institutional support structures or employ multilevel designs capable of testing such effects; evaluating this hypothesis will require future multilevel experimental research.

These hypotheses, while grounded in our correlational findings and supported by convergent external evidence, require systematic experimental validation through randomized controlled trials or rigorous quasi-experimental designs before informing practice recommendations. Educational institutions implementing AI technologies should prioritize systematic evaluation and evidence generation over premature adoption of theoretically plausible but empirically unvalidated protocols.

5.4. Limitations and future research directions

Several methodological limitations warrant consideration. The cross-sectional design provides robust correlational evidence but fundamentally constrains causal inferences regarding temporal relationships. While our theoretical model specifies cognitive load as antecedent, the correlational data cannot rule out reverse causation (low innovation increasing cognitive load) or reciprocal relationships. This limitation directly impacts Section 5.3’s hypotheses, which require experimental validation before constituting evidence-based recommendations. Future longitudinal research employing multiple measurement waves and experimental designs manipulating cognitive load or adaptability would provide definitive causal evidence.

The self-report nature of instruments introduces potential method variance despite statistical controls and psychometric validation. Future research should employ multi-method assessment incorporating objective cognitive load indicators (dual-task performance, physiological measures), observational protocols for innovative behavior, and peer ratings of adaptability to establish whether self-report patterns reflect actual behavioral differences.

The present study focused on examining the cognitive load-technological adaptability-innovation pathway derived from our theoretical framework, but did not systematically control for other potentially relevant predictors such as prior AI usage experience, teacher self-efficacy, and professional autonomy. The absence of these variables may limit the model’s explanatory power and our ability to rule out alternative explanations. Our dataset did not include validated measures of these constructs, precluding their statistical control in the present analysis. Future research should adopt more comprehensive modeling approaches that incorporate individual difference variables (e.g., technological self-efficacy, prior experience, working memory capacity) and organizational factors (e.g., institutional support, professional autonomy, workload) alongside the cognitive-adaptive mechanisms examined here. Such expanded models would provide a more complete understanding of the multifaceted factors influencing innovative teaching behavior in AI-enhanced educational environments.

Regarding the direction of potential bias from these omitted variables, theoretical considerations suggest mixed effects on the observed mediation relationships. Teacher self-efficacy, if unmeasured, would likely lead to overestimation of the negative relationship between cognitive load and technological adaptability (H2). Teachers with higher self-efficacy typically experience lower perceived cognitive load [60] and demonstrate greater technological adaptability [61], potentially inflating the observed correlation between these constructs. Similarly, prior AI experience would likely strengthen both technological adaptability and innovative teaching behavior while reducing perceived cognitive load, potentially exaggerating the indirect effect through the mediation pathway. However, the direction of bias is less straightforward for professional autonomy. While autonomy may enhance both adaptability and innovation, it might also buffer against cognitive load effects, potentially attenuating rather than amplifying the mediated relationship. On balance, we suspect that the net effect of omitted variables (particularly self-efficacy and prior experience) would be to somewhat overestimate the strength of the observed mediation effect, though the fundamental pattern of relationships would likely remain intact. Future research incorporating these variables would provide more precise effect size estimates and potentially reveal important moderating mechanisms.

While the sample provided adequate representation across Chinese institutional types, generalizability to other cultural contexts, educational levels, and AI technology types requires careful consideration, as highlighted by Granić [62] regarding technology acceptance across cultures. Cross-cultural comparative studies and systematic examination of specific AI characteristics would clarify boundary conditions.

Future research should examine technological adaptability’s mediating role through longitudinal designs to understand developmental trajectories and critical periods for adaptation support. Multi-level analyses examining how organizational factors interact with individual cognitive processes would enhance theoretical understanding of AI integration, building on Garzón et al.’s [63] work. Experimental designs manipulating interface characteristics would identify features optimizing cognitive resource allocation while establishing causal relationships our correlational design cannot support.

6. Conclusion

This investigation examined the interrelationships between cognitive load, technological adaptability, and innovative teaching behavior in AI-enhanced educational environments, integrating Cognitive Load Theory, Technology Acceptance Model, and Innovation Diffusion Theory within a comprehensive mediation framework. Employing structural equation modeling with WLSMV estimation (N = 600), the study tested theoretically-derived hypotheses regarding the pathways through which cognitive demands influence pedagogical innovation in AI-assisted English language instruction.

The empirical analysis provided robust support for the primary hypothesis, establishing that teachers’ cognitive load exerts significant negative influence on innovative teaching behavior through both direct (β = −.134, p < .001) and indirect pathways mediated by technological adaptability (β = −.171, p < .001). The substantial magnitude of the indirect effect, accounting for 56.3% of the total effect, identifies technological adaptability as a critical psychological mechanism translating cognitive constraints into behavioral innovation outcomes. The measurement model’s exceptional psychometric properties (α = .91−.93; AVE = .64−.68) and demonstrated measurement invariance across teacher subgroups (ΔCFI ≤ .001) provide confidence in these estimates. Secondary hypotheses concerning institutional moderation and implementation stage variations remain untested due to cross-sectional design constraints and absence of multilevel institutional measurements, constituting important theoretical propositions requiring longitudinal investigation.

These findings systematically address the study’s research questions. Regarding cognitive load’s influence pathways on innovative behavior, the dual-pathway model demonstrates that while cognitive load directly constrains innovation capacity, the predominant influence operates through diminished technological adaptability. This pattern suggests that cognitive demands’ impact on innovation stems primarily from impaired adaptive capabilities rather than solely from immediate cognitive resource depletion. The analysis revealed substantial mediation by technological adaptability, with bootstrap confidence intervals [−.211, −.131] confirming statistical significance. The underlying psychological mechanisms align with distributed cognition theory and cognitive offloading frameworks, wherein adaptability represents educators’ capacity to strategically orchestrate human-AI cognitive assemblages and externalize cognitive functions onto technological systems. The correlational evidence establishes that technological adaptability—particularly dimensions of cognitive flexibility and integration capability—constitutes a critical factor in the cognitive load-innovation relationship, though causal mechanisms require experimental elucidation.

Theoretically, this investigation advances understanding across multiple domains. The extension of Cognitive Load Theory to teacher cognitive processes in AI-enhanced instruction demonstrates that Sweller et al.’s [6] tripartite framework remains applicable yet requires reconceptualization when cognitive processing extends beyond individual mental architectures to distributed human-AI systems. The empirically validated mediation model bridges previously fragmented cognitive psychology and technology acceptance literatures, providing an integrative framework demonstrating that cognitive constraints influence innovation primarily through adaptive mechanisms rather than direct pathways. This finding represents a theoretical refinement with implications for conceptualizing technology integration processes. Methodologically, the development and validation of the AI-assisted Teaching Cognitive Load (ATCL) scale addresses a critical gap, providing researchers with a contextually appropriate instrument for assessing cognitive demands in AI-enhanced teaching environments. The demonstrated measurement invariance suggests these cognitive-adaptive mechanisms represent fundamental processes transcending specific institutional or demographic contexts, supporting theoretical generalizability.

The practical implications of these findings must be interpreted within the constraints imposed by correlational research design. The evidence suggests that technological adaptability represents a potential leverage point for mitigating cognitive barriers to innovation. However, the cross-sectional design’s inherent limitations on causal inference necessitate that these patterns be understood as theoretically-grounded hypotheses warranting rigorous experimental validation rather than evidence-based prescriptive recommendations. Educational institutions implementing AI technologies should prioritize systematic evaluation through randomized controlled trials to establish intervention efficacy. The strong association between extraneous cognitive load and diminished innovation (β = −.208, p < .001) suggests that institutional investments in interface simplification and technical infrastructure may prove particularly effective, though this hypothesis requires multilevel experimental confirmation before informing policy decisions.

This study establishes an empirical foundation for understanding how cognitive processes influence teaching innovation through adaptive mechanisms in AI-enhanced contexts. The theoretical framework, validated measurement instruments, and identified mediational pathways provide essential groundwork for future longitudinal investigations examining developmental trajectories of adaptive capabilities and experimental studies testing intervention efficacy. Advancing from correlational pattern identification to evidence-based practice recommendations requires sustained research programs employing diverse methodologies, systematic replication across varied international and institutional contexts, and rigorous experimental validation of the theoretically-derived hypotheses generated by this investigation. While acknowledging methodological constraints on causal inference, this research contributes to the evolving theoretical understanding of cognitive-adaptive processes in technology-enhanced educational environments.

References

  1. 1. Sweller J. Cognitive Load During Problem Solving: Effects on Learning. Cognit Sci. 1988;12(2):257–85.
  2. 2. Sweller J, Chandler P. Why Some Material Is Difficult to Learn. Cognit Ins. 1994;12(3):185–233.
  3. 3. Chandler P, Sweller J. Cognitive Load Theory and the Format of Instruction. Cognit Ins. 1991;8(4):293–332.
  4. 4. Du Y, Gao H. Determinants affecting teachers’ adoption of AI-based applications in EFL context: An analysis of analytic hierarchy process. Educ Inf Technol. 2022;27(7):9357–84.
  5. 5. Bayaga A. Leveraging AI-enhanced and emerging technologies for pedagogical innovations in higher education. Educ Inform Technol. 2024.
  6. 6. Sweller J, Ayres P, Kalyuga S. Cognitive load theory. Springer; 2011.
  7. 7. Skulmowski A, Xu KM. Understanding Cognitive Load in Digital and Online Learning: a New Perspective on Extraneous Cognitive Load. Educ Psychol Rev. 2021;34(1):171–96.
  8. 8. Dalinger T, Asino TI. Design of an Instrument Measuring P-12 Teachers’ Cognitive Load and Intent to Adopt Technology. TechTrends. 2021;65(3):288–302.
  9. 9. Davis FD. Perceived Usefulness, Perceived Ease of Use, and User Acceptance of Information Technology. MIS Quarterly. 1989;13(3):319–40.
  10. 10. Venkatesh V, Davis FD. A Theoretical Extension of the Technology Acceptance Model: Four Longitudinal Field Studies. Manag Sci. 2000;46(2):186–204.
  11. 11. Gong Y, Xu C, Luo S, Lin J. Modeling teacher education students’ adoption of large language models through an extended technology acceptance framework. Sci Rep. 2025;15(1):32208. pmid:40890159
  12. 12. Hazzan Bishara A, Kol O, Levy S. The factors affecting teachers’ adoption of AI technologies: A unified model of external and internal determinants. Educ Inf Technol. 2025;30:15043–69.
  13. 13. Chen X, Lin X, Zou D, Xie H. Understanding influential factors for college instructors’ adoption of LLM-based applications using analytic hierarchy process. Journal of Computers in Education. 2025;null. https://doi.org/10.1007/s40692-025-00363-0
  14. 14. Zhang C, Hu M, Wu W, Chen Y, Wang K, Gao T, et al. AI versus teacher feedback in developing pre-service teachers’ teaching self-efficacy: A quasi-experimental study in simulated teaching. Teach Teach Educ. 2025;172:105355. https://doi.org/10.1016/j.tate.2025.105355
  15. 15. Rogers EM. Diffusion of Innovations. Free Press; 1962.
  16. 16. Rogers EM. Diffusion of Innovations (5th ed.). Free Press; 2003.
  17. 17. Hall GE, Hord SM. Change in schools: Facilitating the process. State University of New York Press; 1987.
  18. 18. Fullan M. The new meaning of educational change. 3rd ed. Teachers College Press. 2001.
  19. 19. Moore GC, Benbasat I. Development of an instrument to measure the perceptions of adopting an information technology innovation. Inf Syst Res. 1991:2(3):192–222.
  20. 20. Venkatesh V, Morris MG, Davis GB, Davis FD. User acceptance of information technology: Toward a unified view. MIS Quarterly. 2003:27(3):425–78.
  21. 21. Sahin I. Detailed review of Rogers’ diffusion of innovations theory and educational technology-related studies based on Rogers’ theory. Turkish Online J Educ Technol. 2006;5(2):14–23.
  22. 22. Straub ET. Understanding Technology Adoption: Theory and Future Directions for Informal Learning. Rev Educ Res. 2009;79(2):625–49.
  23. 23. Surry DW, Ely DP. Adoption, diffusion, implementation, and institutionalization of educational technology. In: Spector JM, Merrill MD, van Merriënboer J, Driscoll MP, editors. Handbook of research on educational communications and technology. 3rd ed. Lawrence Erlbaum Associates; 2007. p. 1045–56.
  24. 24. Wu D, Zhou C, Liang X, Li Y, Chen M. Integrating technology into teaching: Factors influencing rural teachers’ innovative behavior. Educ Inf Technol. 2022;27(4):5325–48.
  25. 25. Li K, Zhu G. Promoting teaching innovation of Chinese public-school teachers by team temporal leadership: The mediation of job autonomy and the moderation of work stress. PLoS One. 2022;17(7):e0271195. pmid:35802741
  26. 26. Rogers EM. Diffusion of innovations. 5th ed. Free Press. 2003.
  27. 27. Zhang Y, Meng Z, Liu X. Applying Structural Equation Modeling to Assess Factors of Primary School Mathematics Teachers’ Knowledge of Students’ Misconceptions. Int J Sci Math Educ. 2024;22(8):1643–61.
  28. 28. Teo T, Tsai LT. Applying structural equation modeling (SEM) in educational research: An introduction. In: Teo T, editor. Handbook of quantitative methods for educational research. Springer; 2015. p. 3–21.
  29. 29. In’nami Y, Koizumi R. Structural equation modeling in educational research. In: Teo T (Ed.), Handbook of quantitative methods for educational research. Springer; 2015.p. 23–51.
  30. 30. Wang D, Bian C, Chen G. Using explainable AI to unravel classroom dialogue analysis: Effects of explanations on teachers’ trust, technology acceptance and cognitive load. Brit J Educational Tech. 2024;55(6):2530–56.
  31. 31. Grinschgl S, Neubauer AC. Distributed cognition today and in an AI enhanced future. Front Artif Intell. 2022;5:908261.
  32. 32. Gerlich, M. AI tools in society: Impacts on cognitive offloading and the future of critical thinking. Societies. 2025;15(1):6. https://doi.org/10.3390/soc15010006
  33. 33. Zhai Y, Nezakatgoo B. Evaluating AI powered applications for enhancing undergraduate students’ metacognitive strategies, self determined motivation, and social learning in English language education. Scientific Reports. 2025;15:35199.
  34. Gerlich M. AI tools in society: Impacts on cognitive offloading and the future of critical thinking. Societies. 2025;15(1):6.
  35. Glickman M, Sharot T. How human–AI feedback loops alter human perceptual, emotional and social judgements. Nat Hum Behav. 2025;9:345–59.
  36. Tsakeni M, Nwafor SC, Mosia M, Egara FO. Mapping the scaffolding of metacognition and learning by AI tools in STEM classrooms: A bibliometric–systematic review approach (2005–2025). J Intell. 2025;13(11):148.
  37. Zhang Q. Mediation/moderation effects of engagement, foreign language enjoyment, and ambiguity tolerance in metaverse-based foreign language learning. Int J Educ Technol High Educ. 2024;21:54.
  38. Carayon P, Hoonakker P, Hundt AS, Loushine T. A sociotechnical systems framework for the application of artificial intelligence in health care delivery. J Med Syst. 2023;16(4):194–206. https://doi.org/10.1177/15553434221097357
  39. Zhu B, Chau KT, Mohamed Mokmin NA. Optimizing cognitive load and learning adaptability with adaptive microlearning for in-service personnel. Sci Rep. 2024;14:25960. https://doi.org/10.1038/s41598-024-77122-1
  40. Vasilaki E, Walkington CA, Skulmowski A. Extending cognitive load theory: The CLAM framework for biometric, adaptive, and ethical learning. MDPI Education Journal. 2025;7(2):40–59.
  41. Grinschgl S, Neubauer AC. Supporting cognition with modern technology: Distributed cognition today and in an AI-enhanced future. Front Artif Intell. 2022;5:908261. https://doi.org/10.3389/frai.2022.908261
  42. Chirayath G, Premamalini K, Joseph J. Cognitive offloading or cognitive overload? How AI alters the mental architecture of coping. Front Psychol. 2025;16:1699320. https://doi.org/10.3389/fpsyg.2025.1699320
  43. Ployhart RE, Bliese PD. Understanding adaptability: A prerequisite for effective performance within complex environments. In: Burke CS, Pierce LG, Salas E, editors. Understanding adaptability: A prerequisite for effective performance within complex environments (Advances in Human Performance and Cognitive Engineering Research, Vol. 6). Elsevier; 2006.
  44. Parasuraman A, Colby CL. An updated and streamlined technology readiness index: TRI 2.0. J Serv Res. 2015;18(1):59–74. https://doi.org/10.1177/1094670514539730
  45. Mishra P, Koehler MJ. Technological Pedagogical Content Knowledge: A Framework for Teacher Knowledge. Teachers College Record. 2006;108(6):1017–54.
  46. Thurlings M, Evers A, Vermeulen M. Toward a model of explaining teachers’ innovative behavior: A literature review. Rev Educ Res. 2015;85(3):430–71. https://doi.org/10.3102/0034654314557949
  47. Janssen O. Job demands, perceptions of effort–reward fairness and innovative work behaviour. J Occup Organ Psychol. 2000;73:287–302.
  48. Redifer JL, Bae CL, Zhao Q. Self-efficacy and performance feedback: Impacts on cognitive load during creative thinking. Learn Instr. 2021;71:101395. https://doi.org/10.1016/j.learninstruc.2020.101395
  49. Podsakoff PM, MacKenzie SB, Lee J-Y, Podsakoff NP. Common method biases in behavioral research: a critical review of the literature and recommended remedies. J Appl Psychol. 2003;88(5):879–903. pmid:14516251
  50. Heller C. “Connecting concepts helps put main ideas together”: Cognitive load and technology integration in higher education. Int J Educ Technol High Educ. 2022;19:11.
  51. Park B, Flowerday T, Brünken R. Cognitive and affective effects of seductive details in multimedia learning. Comput Human Behav. 2015;44:267–78.
  52. Rey GD. A review of research and a meta-analysis of the seductive detail effect. Educ Res Rev. 2012;7(3):216–37.
  53. Sanchez CA, Wiley J. An examination of the seductive details effect in terms of working memory capacity. Mem Cognit. 2006;34(2):344–55. pmid:16752598
  54. Hizam SM, Akter H, Sentosa I, Ahmed W. Digital competency of educators in the virtual learning environment: A structural equation modeling analysis. arXiv preprint arXiv:2105.08927. 2021.
  55. Shi L, Ding A-C, Choi I. Investigating teachers’ use of an AI-enabled system and their perceptions of AI integration in science classrooms: A case study. Educ Inf Technol. 2024.
  56. Kamalov F, Santandreu Calonge D, Gurrib I. New Era of Artificial Intelligence in Education: Towards a Sustainable Multifaceted Revolution. Sustainability. 2023;15(16):12451.
  57. Maričić M, Anđić B, Soeharto S, Mumcu F, Cvjetićanin S, Lavicza Z. The exploration of continuous teaching intention in emerging-technology environments through perceived cognitive load, usability, and teacher’s attitudes. Educ Inf Technol. 2024;30(7):9341–70.
  58. Yang Y, Huang F. Effects of reduction on teacher’s teaching load in the lesson development of AI-led instruction: A pilot study. J East China Normal Univ. 2024;42(2):46–62.
  59. Xu X, Qiao L, Cheng N, Liu H, Zhao W. Enhancing self-regulated learning and learning experience in generative AI environments: The critical role of metacognitive support. Br J Educ Technol. 2025;56(5):1842–63.
  60. Villarente JR, Paglinawan JL, Villarente WP. Digital expertise and stress coping skills as predictors of technological adaptability in secondary school teachers. International Journal of Latest Technology in Engineering Management & Applied Science. 2025;14(10):325–38. https://doi.org/10.51583/IJLTEMAS.2025.1410000042
  61. Tinio JV, Dela Rosa AG. Teachers’ self-efficacy and attitudes towards the use of information technology in classrooms. Educ Sci. 2023;13(10):1001. https://doi.org/10.3390/educsci13101001
  62. Granić A. Technology Acceptance and Adoption in Education. In: Handbook of Open, Distance and Digital Education. Springer Singapore; 2022. p. 1–15.
  63. Garzón J, Kinshuk, Baldiris S, Gutiérrez J, Pavón J. How do pedagogical approaches affect the impact of augmented reality on education? A meta-analysis and research synthesis. Educ Res Rev. 2020;31:100334.