
Predictive validity of early and mid-year literacy assessments for end-of-year word reading fluency

  • Saeed Saad Alqahtani

    Roles Conceptualization, Data curation, Formal analysis, Funding acquisition, Investigation, Methodology, Project administration, Resources, Software, Supervision, Validation, Visualization, Writing – original draft, Writing – review & editing

    saeed.alqahtani@psau.edu.sa

    Affiliation Department of Special Education, College of Education, Prince Sattam bin Abdulaziz University, Al-Kharj, Saudi Arabia

Abstract

This study explores the predictive validity of early and mid-year literacy assessments—Letter Naming Fluency (LNF), Letter Sound Fluency (LSF), and Phoneme Segmentation Fluency (PSF)—for end-of-year Word Reading Fluency (WRF) in kindergarten and grade 1 Arabic-speaking students. Using a longitudinal design, 412 students were assessed at three intervals (fall, mid-year, and year-end) to examine how these foundational literacy skills contribute to predicting WRF and identifying at-risk readers. Results from multiple regression analyses indicate that both LNF and LSF are significant predictors of end-of-year WRF across both fall and mid-year assessments, with mid-year measures demonstrating enhanced predictive accuracy. In contrast, PSF showed limited predictive value in comparison to LNF and LSF. Accuracy analyses revealed that mid-year assessments, particularly LNF and LSF, were more reliable in identifying at-risk students compared to fall assessments. Mid-year assessments correctly identified 64–73% of at-risk readers, emphasizing their value in early detection and intervention. These findings underline the importance of dynamic, multi-point assessments in improving educational practices and outcomes, particularly for students at risk of reading difficulties. The study highlights the critical role of foundational literacy skills in early reading development and provides actionable insights to enhance intervention strategies during key stages of literacy acquisition.

Introduction

Learning to read involves developing skills to understand written language. It’s a process where language skills and code-related skills work together to help students read words and understand their meaning [1]. Thus, one of the first challenges for young learners is connecting symbols to the sounds to read words accurately and quickly [2]. To decode accurately at this stage, students must first acquire foundational literacy skills, including phonemic awareness and alphabetic knowledge [3].

Early literacy skills, as identified by the National Early Literacy Panel, are essential for reading development even before the acquisition of formal reading skills among English-language learners in the United States [4]. These skills encompass letter-sound relationships and phonemic awareness [5,6]. Early literacy skills in the early grades can predict students’ decoding ability in subsequent years [7,8] and overall reading success [5], underscoring the importance of nurturing these skills during the early grades.

Predictive power of foundational literacy skills

Alphabetic knowledge and phonemic awareness are key predictors of reading achievement [9–11]. Alphabetic knowledge refers to the capacity to recognize printed letters and understand their associations with letter names and sounds. It comprises two main components: letter name knowledge, which indicates a student’s ability to match printed letters with their corresponding names, and letter sound knowledge, which indicates a student’s ability to match printed letters with their respective sounds [12,13]. Both skills are assessed using fluency-based measures that require students to name or pronounce letters within a limited time (e.g., one minute) [14,15]. Alphabetic knowledge in the early grades has a strong relationship with reading achievement [16–18]. Thus, students who are at risk or who have reading disabilities will have difficulty learning letter names and sounds [19,20].

Phonemic awareness refers to the ability to identify and manipulate individual sounds, or phonemes, in spoken language [3]. It involves recognizing that words are made up of separate sounds and being able to work with those sounds through activities such as blending, segmenting, and manipulating phonemes in spoken words. One of the most common measures of phonemic awareness is phoneme segmentation, which requires students to break spoken words into their constituent sounds. Phonemic awareness is essential for developing strong reading and spelling skills [21,22] and is considered one of the best predictors of reading skills [23–25]. As with alphabetic knowledge, students who are at risk or who have reading disabilities have difficulties with phonemic awareness [25,26].

Letter naming fluency, letter sound fluency, and phoneme segmentation fluency are widely recognized as reliable predictors of word reading fluency (WRF). These measures provide crucial insights into students’ reading readiness, particularly for those at risk for reading difficulties. The speed and accuracy with which students perform these tasks correlate strongly with their ability to decode and recognize words fluently [16,18]. Consequently, these predictors are integral to identifying students who may require targeted interventions to achieve reading proficiency.

Word reading fluency as a key literacy outcome

After mastering foundational literacy skills, students develop the ability to decode and read words proficiently. This proficiency serves as a prerequisite for progressing to more complex reading competencies, including vocabulary acquisition, fluency development, and comprehension strategies [27–29]. Word reading refers to the ability to accurately decode and recognize individual words in isolation, without the context of a larger text [30,31]. Proficient word reading requires students to read words with age- and grade-appropriate speed and accuracy [32,33]. The accepted levels of speed and accuracy vary depending on the student’s developmental stage and grade-level expectations [34,35]. Within the Curriculum-Based Measurement (CBM) framework, word reading proficiency can be assessed by requiring students to read a list of words in one minute [36,37]. Word reading skill can predict later reading abilities and reading comprehension [38,39].

Significance of the study

Prior research has emphasized the predictive value of early assessments such as Letter Naming Fluency (LNF), Letter Sound Fluency (LSF), and Phoneme Segmentation Fluency (PSF) for end-of-year outcomes [40]. However, much of this work has centered on single timepoints, such as fall or year-end assessments, overlooking the potential of mid-year data to enhance early identification and intervention.

The current study investigates the evolving predictive accuracy of LNF, LSF, and PSF between fall and mid-year assessments, demonstrating their ability to predict end-of-year Word Reading Fluency (WRF). Research supports the utility of mid-year assessments in identifying at-risk students earlier in the academic year [41,42]. This approach provides educators with the opportunity to implement timely, targeted interventions during the school year, an improvement over practices that rely solely on end-of-year data, which often leave limited time for effective remediation.

Although several studies have explored the development and validation of Arabic reading assessments [43–45], research in this area remains limited in scope and methodology. Prior studies have primarily focused on specific CBM tools such as Letter Naming Fluency (LNF) and Letter Sound Fluency (LSF) [46], word reading fluency [47,48], oral reading fluency [49], and reading comprehension using Maze tasks [50]. However, many of these studies are characterized by small sample sizes and limited technical adequacy, particularly in their ability to distinguish between skilled readers and students with reading difficulties. For example, Al-Hmouz [46] examined LNF and LSF performance in a sample of 150 students, while Abu-Hamour [48] evaluated word identification fluency in 75 students, divided between skilled readers and those with reading difficulties. Despite offering valuable benchmarks for instruction, these studies often faced methodological and analytical challenges that may limit the generalizability and practical application of their findings. Moreover, few studies have comprehensively assessed multiple components of early literacy, such as phonemic awareness, which are critical for the accurate identification of reading difficulties. This study addresses a significant gap in research by emphasizing how sequential assessments can be used to refine predictions over time. While studies have shown that LNF and LSF are robust predictors of WRF [42,51], few have systematically evaluated how predictive accuracy evolves from fall to mid-year. Additionally, while PSF is widely acknowledged for its role in phonological awareness and early reading skills [25], limited research has explored its specific predictive value at mid-year.

By addressing these gaps, this research provides both theoretical and practical contributions. The findings highlight the potential of mid-year assessments not only to enhance predictive accuracy but also to support real-time educational decision-making, offering insights into how assessment practices can be aligned with the dynamic nature of literacy development [52]. This perspective is supported by studies demonstrating that predictive models that incorporate multiple assessment periods outperform static models [53].

Research questions

  1. How do fall and mid-year assessments of Letter Naming Fluency (LNF), Letter Sound Fluency (LSF), and Phoneme Segmentation Fluency (PSF) compare in their ability to predict end-of-year Word Reading Fluency (WRF) for kindergarten and grade 1 students?
  2. What is the unique contribution of Letter Naming Fluency (LNF), Letter Sound Fluency (LSF), and Phoneme Segmentation Fluency (PSF) to the prediction of end-of-year Word Reading Fluency (WRF) across fall and mid-year assessments?
  3. To what extent do fall and mid-year assessments of Letter Naming Fluency (LNF), Letter Sound Fluency (LSF), and Phoneme Segmentation Fluency (PSF) accurately identify students at risk for reading failure by the end of the academic year?

Methodology

Participants

The study encompassed a sample of 412 Arabic-speaking students from kindergarten and first grade, enrolled during the academic year 2024. These participants were drawn from nine schools located in the central province of Saudi Arabia. The demographic composition of the sample was broadly representative of the Middle Eastern population, with no specific ethnic or social groups deliberately excluded. The gender distribution was balanced, with equal participation of males and females (50% each). Furthermore, the sample maintained an equal distribution across educational levels, with 50% of participants in kindergarten and 50% in first grade. The selection criteria involved choosing two classrooms from each participating school—one from kindergarten and one from first grade—resulting in a total of 18 classrooms being included in the study.

CBM materials

Letter Naming Fluency (LNF): The researcher developed three distinct versions of the test materials. Each version consisted of an A4 sheet displaying the 28 Arabic letters arranged randomly across 10 rows and 15 columns, with some letters appearing more than five times throughout the sheet. Participants were instructed to read the letter names aloud, earning a point for each correctly identified letter within a one-minute time limit. The average correlation coefficient among the three versions was .96.
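
To make the sheet-construction procedure concrete, the sketch below generates a randomized letter grid of the stated dimensions (10 rows × 15 columns, every letter appearing at least once). The letter set, grid parameters, and the `make_lnf_sheet` helper are illustrative assumptions, not the study's actual materials pipeline:

```python
import random

# The 28 basic Arabic letters (illustrative material set)
LETTERS = list("ابتثجحخدذرزسشصضطظعغفقكلمنهوي")

def make_lnf_sheet(rows=10, cols=15, seed=None):
    """Return a rows x cols grid in which every letter appears at least
    once and the remaining cells are filled at random."""
    rng = random.Random(seed)
    cells = LETTERS + [rng.choice(LETTERS) for _ in range(rows * cols - len(LETTERS))]
    rng.shuffle(cells)
    return [cells[r * cols:(r + 1) * cols] for r in range(rows)]

sheet = make_lnf_sheet(seed=1)
print(len(sheet), "rows x", len(sheet[0]), "columns")
```

Each alternate form would simply use a different random seed, so the same letter pool yields different layouts.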

Letter Sound Fluency (LSF): The LSF test followed a procedure similar to the LNF test, except that participants were asked to articulate letter sounds rather than letter names, receiving a point for each accurately produced sound within one minute. The average correlation coefficient among the versions was .94.

Phoneme Segmentation Fluency (PSF): This assessment required participants to isolate the phonemes of words presented orally. The researcher created three versions of the test materials, each containing 30 Arabic words and 96 syllables, aligning with the easyCBM [54] criteria. The words were randomly distributed across the prepared sheets. Participants earned a point for each correctly identified phoneme. The average correlation coefficient among the versions was .92.

Word Reading Fluency (WRF): Three versions of the test materials were developed, each featuring an A4 sheet with 76 Arabic words arranged across 4 rows and 19 columns. Participants were tasked with reading the words aloud. The words ranged in length from 2 to 6 letters and were randomly distributed across the sheet rather than organized by word length. Participants received a point for each correctly read word. The average correlation coefficient among the versions was .80.

The materials used in this study consisted of four assessment tools, each with three alternate forms, developed by the researcher based on well-established English-language CBM tools such as easyCBM, FAST, and DIBELS [54–56]. The Letter Naming Fluency (LNF) and Letter Sound Fluency (LSF) assessments consisted solely of Arabic letters. For the Phoneme Segmentation Fluency (PSF) and Word Reading Fluency (WRF) assessments, the word lists were developed using vocabulary drawn from the Saudi Arabian elementary Arabic school curriculum, ensuring that the most common and relevant words were represented. These tools have been used in previous research and were refined over time to improve their appropriateness and alignment with the target population. Additionally, the reliability of each test form was evaluated, and all forms demonstrated high internal consistency, with alternate form reliability coefficients exceeding 0.92.
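
The alternate-form reliability coefficients reported above amount to averaging the pairwise Pearson correlations among the three forms. A minimal sketch with invented scores (the study's raw data are not reproduced here):

```python
import numpy as np

def alternate_form_reliability(forms: np.ndarray) -> float:
    """Average pairwise Pearson correlation across alternate forms.

    `forms` has shape (n_forms, n_students): one row of scores per form.
    """
    r = np.corrcoef(forms)             # pairwise correlation matrix
    iu = np.triu_indices_from(r, k=1)  # upper triangle = distinct form pairs
    return float(r[iu].mean())

# Hypothetical scores for 6 students on three alternate forms
forms = np.array([
    [12, 25, 31, 8, 19, 27],
    [13, 24, 33, 9, 18, 28],
    [11, 26, 30, 8, 20, 26],
])
print(round(alternate_form_reliability(forms), 2))
```

With three forms there are three pairwise correlations, so the reported .96/.94/.92/.80 values would each be a mean over three coefficients.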

The assessments were administered by two educators, both of whom held bachelor’s degrees in linguistics and had experience teaching language courses to elementary students. Prior to the study, the educators participated in a training session covering the research design and test implementation procedures. Over the course of one week, the researcher provided comprehensive explanations of the procedures, demonstrated the process, and allowed the educators to practice. Using a procedural checklist, the educators achieved an average accuracy of 95% when implementing the procedures. The research team conducted regular meetings to review the process and address any questions.

This study was reviewed and approved by the Standing Committee of Bioethics Research at Prince Sattam bin Abdulaziz University. The ethics approval number is [SCBR-195/2023], and the approval was granted on [12/19/2023]. After obtaining approval from the Institutional Review Board (IRB) and securing permission from the school district, the researcher contacted the school principals and obtained consent to conduct the study. Since the participants were young students, informed consent was obtained from their parents or legal guardians. Consent forms detailing the study’s purpose, procedures, and ethical considerations were sent to the parents, and written consent was secured prior to their children’s participation in the study. The assessments were administered individually to the participants. The same students were tested three times during the academic year (at the end of the fall, winter, and spring semesters). The recruitment of participants for this study took place between [1/1/2024] and [25/11/2024]. The study took place in the resource rooms of the selected schools. The educators provided participants with copies of the tests (except for the phonemic test) and utilized a stopwatch. The assessments were conducted daily over a two-week period at the end of each semester. In addition to the four tests, the educators completed a cover page for each participant, recording demographic information (e.g., age, grade, and sex). The educators were also instructed to record their voices during the test to facilitate reliability and procedural integrity checks. The procedures and instructions were adapted from the CBM manual (Hosp et al., 2007). Furthermore, the classroom teachers assessed participants’ reading abilities based on their knowledge, assigning scores ranging from 1 to 10 each semester.

Reliability and procedural integrity

To ensure the reliability of the scoring process, an independent educator, not involved in the scoring or rating, was assigned to review 10% of the data, selected at random. The educator listened to the recordings of each participant, evaluating both reliability and procedural integrity. Cronbach’s alpha values for the four tests demonstrated high agreement between the primary and secondary scorers, averaging .998, .992, .999, and .987. The educator utilized a checklist to assess procedural integrity, resulting in an average score of 95% (range, 92–100%). This indicates that the educators responsible for implementation diligently followed the assessment instructions.

Analysis method

To answer the research questions, a series of analyses were conducted, including descriptive statistics, multiple regression analyses, hierarchical multiple regression analyses, and accuracy analysis for predicting at-risk students. The data were analyzed using JASP software [57].

1. Descriptive statistics

Descriptive statistics were calculated to summarize baseline data for each literacy measure (LNF, LSF, PSF) across the fall, winter, and spring semesters for kindergarten and grade 1. Means, standard deviations, minimum, and maximum scores were computed for each measure and are presented in Table 1.

2. Multiple regression analysis

To predict end-of-year word reading fluency (WRF), multiple regression analyses were conducted using fall and winter scores of Letter Naming Fluency (LNF), Letter Sound Fluency (LSF), and Phoneme Segmentation Fluency (PSF) as predictors, with each semester analyzed separately. Each model includes the three measures as predictors, with end-of-year word reading fluency serving as the dependent variable. Separate analyses were run for kindergarten and grade 1, allowing for an assessment of each measure’s unique contribution to the prediction of end-of-year WRF. Each predictor’s unstandardized coefficient (B), standard error (SE), standardized coefficient (β), t-value, and significance level (p) are provided in Table 2.
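
As a rough illustration of such a simultaneous model, the sketch below fits an ordinary least squares regression of a simulated end-of-year WRF score on three simulated predictors. All variable values and effect sizes are invented for demonstration; the study itself fit these models in JASP:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200  # hypothetical students

# Simulated fall scores (illustrative only, not the study's data)
lnf = rng.normal(20, 8, n)
lsf = rng.normal(15, 6, n)
psf = rng.normal(25, 10, n)
# End-of-year WRF driven mainly by LNF and LSF, echoing the reported pattern
wrf = 2.0 + 0.5 * lnf + 0.4 * lsf + 0.02 * psf + rng.normal(0, 4, n)

X = np.column_stack([np.ones(n), lnf, lsf, psf])  # design matrix with intercept
b, *_ = np.linalg.lstsq(X, wrf, rcond=None)       # OLS coefficients

resid = wrf - X @ b
r2 = 1 - resid.var() / wrf.var()
print("coefficients:", np.round(b, 2), "R^2:", round(r2, 2))
```

In a real analysis one would also compute standard errors, t-values, and standardized β coefficients, as reported in Table 2.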

Table 2. Multiple Regression: Predicting End-of-Year WRF from Fall and Winter LNF, LSF, PSF for Kindergarten and Grade 1.

https://doi.org/10.1371/journal.pone.0327242.t002

3. Hierarchical multiple regression analysis

To further investigate the incremental contributions of each literacy measure to WRF, hierarchical multiple regression was performed. Initial word reading ability (WRF1) was entered in the first block to control for prior reading skill, followed by LNF, LSF, and PSF in successive blocks to assess their unique contributions to end-of-year WRF. This method provided a clear picture of how each measure influenced end-of-year outcomes beyond prior ability. The hierarchical model results, including R2, F-values, p-values, and coefficients, are shown in Tables 3 and 4. These results clarify the impact of each predictor in explaining variance in end-of-year WRF.
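
The block-entry logic can be sketched as follows: fit the model after each block is added and report the change in explained variance (ΔR²). The simulated data and effect sizes below are purely illustrative of the procedure, not a reproduction of the study's results:

```python
import numpy as np

def r2(X, y):
    """R^2 from an OLS fit of y on X (X includes an intercept column)."""
    b, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ b
    return 1 - resid.var() / y.var()

rng = np.random.default_rng(1)
n = 200
wrf1 = rng.normal(10, 5, n)                  # initial WRF (control block)
lnf = 0.6 * wrf1 + rng.normal(0, 3, n)       # predictors correlated with WRF1
lsf = 0.5 * wrf1 + rng.normal(0, 3, n)
psf = rng.normal(25, 10, n)                  # weakly related, as reported
wrf_end = 1.2 * wrf1 + 0.3 * lnf + rng.normal(0, 3, n)

ones = np.ones(n)
blocks = [wrf1, lnf, lsf, psf]
names = ["WRF1", "+LNF", "+LSF", "+PSF"]
prev = 0.0
cols = [ones]
for name, col in zip(names, blocks):
    cols.append(col)                          # enter the next block
    cur = r2(np.column_stack(cols), wrf_end)
    print(f"{name}: R^2 = {cur:.3f} (dR^2 = {cur - prev:.3f})")
    prev = cur
```

Because predictors are entered after the control, each ΔR² isolates the variance a measure explains beyond prior reading skill, which is the quantity interpreted in Tables 3 and 4.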

Table 3. Hierarchical Multiple Regression Results: Predicting End-of-Year WRF from Fall LNF, LSF, PSF for Kindergarten and Grade 1.

https://doi.org/10.1371/journal.pone.0327242.t003

Table 4. Hierarchical Multiple Regression Results: Predicting End-of-Year WRF from Mid-Year LNF, LSF, PSF for Kindergarten and Grade 1.

https://doi.org/10.1371/journal.pone.0327242.t004

4. Accuracy analysis for identifying at-risk students

An accuracy analysis was performed to assess each measure’s ability to identify students who fell below the 25th percentile in end-of-year WRF. Students were categorized based on their fall and spring LNF, LSF, and PSF scores, with accuracy rates calculated for each measure to determine predictive reliability. Table 5 summarizes the percentage accuracy for each measure in predicting end-of-year WRF performance. This analysis highlighted the efficacy of each measure, particularly emphasizing the predictive power of LNF and LSF in spring assessments over PSF.
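
One simple way to compute such an accuracy (sensitivity) figure is to flag students below the 25th percentile on a predictor and then check what share of the truly at-risk students (below the 25th percentile on end-of-year WRF) were flagged. A sketch on simulated scores; the cutoff rule and data are assumptions for illustration:

```python
import numpy as np

def at_risk_hit_rate(predictor, outcome, pct=25):
    """Share of students below the outcome's `pct` percentile who were
    also below the predictor's `pct` percentile (sensitivity)."""
    truly_at_risk = outcome < np.percentile(outcome, pct)
    flagged = predictor < np.percentile(predictor, pct)
    return flagged[truly_at_risk].mean()

rng = np.random.default_rng(2)
n = 400
lnf_mid = rng.normal(30, 10, n)                 # simulated mid-year LNF
wrf_end = 0.8 * lnf_mid + rng.normal(0, 6, n)   # mid-year LNF tracks WRF

print(f"{at_risk_hit_rate(lnf_mid, wrf_end):.0%} of at-risk readers flagged")
```

The stronger the predictor's relationship with end-of-year WRF, the closer this hit rate gets to 100%, which matches the finding that mid-year LNF and LSF flagged more at-risk readers than fall measures did.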

Table 5. Accuracy in predicting students’ WRF performance.

https://doi.org/10.1371/journal.pone.0327242.t005

Results

Descriptive statistics

Table 1 presents the descriptive statistics for each grade (Kindergarten and First Grade) by measure and time point. The scores for each measure increased across the semesters (Fall, Winter, and Spring) for each grade. As expected, a large variation was observed across all measures.

Multiple regression analysis

Table 2 presents the results of multiple regression analyses conducted to predict end-of-year word reading fluency (WRF) for kindergarten and first-grade students based on fall and winter assessment scores. The models were analyzed simultaneously for each grade to examine the contribution of early literacy predictors measured at different time points (fall and winter). The table includes the predictor variables included in each model, regression coefficients (B), standard error (SE), standardized coefficients (β), t-values, and p-values for each predictor.

To predict end-of-year word reading fluency (WRF) for kindergarten and grade 1 students, a multiple regression analysis was conducted using Letter Naming Fluency (LNF), Letter Sound Fluency (LSF), and Phoneme Segmentation Fluency (PSF) as predictors. The results are presented separately for each grade level.

For kindergarten, the regression model showed that LNF1 significantly predicted end-of-year WRF (β = 0.33, p = .002), and LSF1 also contributed significantly (β = 0.38, p = .001). However, PSF1 did not significantly contribute to the prediction of WRF (p = .830). For the second semester, LNF2 was the largest significant predictor (β = 0.492, p < .001), followed by LSF2 (β = 0.303, p < .001). PSF2 did not contribute significantly (p = .107).

For grade 1, LNF1 (β = 0.39, p < .001) and LSF1 (β = 0.35, p < .001) were similarly strong significant predictors of WRF, while PSF1 was not significant (p = .394). In the second semester, LNF2 (β = 0.364, p = .004) and LSF2 (β = 0.365, p = .003) remained significant predictors, while PSF2 did not significantly contribute (p = .982).

In summary, for both grades, initial and mid-year letter naming fluency (LNF) and letter sound fluency (LSF) were consistently significant predictors of end-of-year word reading fluency. However, phoneme segmentation fluency (PSF), both at the start and mid-year, did not show significant predictive power in either grade, suggesting it may play a less prominent role in end-of-year word reading outcomes.

Hierarchical multiple regression analyses for fall assessments

Table 3 presents the results of hierarchical multiple regression analyses conducted to predict end-of-year word reading fluency (WRF) for kindergarten and first-grade students based on fall assessment scores. For each grade, four models were analyzed, with predictor variables added sequentially to examine their incremental contribution to predicting WRF. The table includes the predictor variables included in each model, the explained variance (R2) for each model, the results of the analysis of variance (ANOVA) test, and the regression coefficients for each predictor.

To predict end-of-year word reading fluency (WRF) for kindergarten and grade 1 students, a hierarchical multiple regression analysis was conducted with fall word reading (WRF1) entered in the first block to control for initial reading skill, followed by Letter Naming Fluency (LNF1), Letter Sound Fluency (LSF1), and Phoneme Segmentation Fluency (PSF1) in subsequent blocks. Results for each grade are as follows:

For kindergarten, initial WRF1 accounted for 48.2% of the variance in end-of-year WRF (R2 = 0.482, F = 121.19, p < .001), with a significant coefficient (β = .695, p < .001). Adding LNF1 increased the explained variance by 3.0% to a total of 51.2% (R2 = 0.512, F = 67.70, p < .001), with LNF1 significantly predicting WRF (β = .512, p = .006). Including LSF1 further increased the explained variance to 53.6% (R2 = 0.536, F = 49.27, p < .001), with a significant coefficient (β = .264, p = .012). However, adding PSF1 contributed only 0.1% additional variance, leaving the model at 53.7% explained variance (R2 = 0.537, F = 36.76, p < .001), and PSF1 did not significantly predict WRF (β = −.030, p = .660).

For grade 1, initial WRF1 accounted for 62.3% of the variance in end-of-year WRF (R2 = 0.623, F = 215.09, p < .001), with a significant coefficient (β = .789, p < .001). Adding LNF1 slightly increased the explained variance to 62.9% (R2 = 0.629, F = 109.45, p < .001), but LNF1 did not significantly predict WRF (β = .104, p = .168). Including LSF1 kept the explained variance at 62.9% (R2 = 0.629, F = 72.40, p < .001), with no significant contribution (β = .001, p = .987). Finally, adding PSF1 did not meaningfully improve the model (R2 = 0.639, F = 54.55, p < .001), and PSF1 showed no significant effect (β = −.055, p = .319).

In summary, for kindergarten, WRF1, LNF1, and LSF1 were significant predictors of end-of-year WRF, with LNF1 and LSF1 each adding unique variance; PSF1 did not contribute significantly. For grade 1, only initial WRF1 was a significant predictor, while LNF1, LSF1, and PSF1 did not contribute additional significant variance in predicting end-of-year WRF.

Hierarchical multiple regression analyses for mid-year assessments

Table 4 presents the results of hierarchical multiple regression analyses conducted to predict end-of-year word reading fluency (WRF) for kindergarten and first-grade students based on winter assessment scores. For each grade, four models were analyzed, with predictor variables added sequentially to examine their incremental contribution to predicting WRF. The table includes the predictor variables included in each model, the explained variance (R2) for each model, the results of the analysis of variance (ANOVA) test, and the regression coefficients for each predictor.

For Kindergarten, initial WRF2 accounted for 66.8% of the variance in end-of-year WRF (R2 = 0.668, F = 233.450, p < .001), with a significant coefficient (β = 0.817, p < .001). Adding LNF2 increased the explained variance by 5% to a total of 71.8% (R2 = 0.718, F = 146.623, p < .001), with LNF2 significantly predicting WRF (β = 0.325, p < .001). Including LSF2 slightly increased the explained variance to 72.0% (R2 = 0.720, F = 97.670, p < .001), with LSF2 not contributing significantly (β = 0.069, p = .421). Adding PSF2 did not further improve the explained variance, leaving the model at 72.0% (R2 = 0.720, F = 72.776, p < .001). PSF2 did not significantly predict WRF (β = 0.024, p = .667).

For grade 1, initial WRF2 accounted for 87.3% of the variance in end-of-year WRF (R2 = 0.873, F = 459.530, p < .001), with a significant coefficient (β = 0.934, p < .001). Adding LNF2 slightly increased the explained variance to 87.8% (R2 = 0.878, F = 236.439, p < .001), though LNF2 was not a significant predictor (β = 0.084, p = .114). Including LSF2 raised the explained variance to 88.0% (R2 = 0.880, F = 159.168, p < .001), with LNF2 becoming significant (β = 0.130, p = .049) but LSF2 not contributing significantly (β = −0.084, p = .234). Adding PSF2 increased the explained variance to 88.9% (R2 = 0.889, F = 127.525, p < .001), with PSF2 showing a small but significant contribution (β = 0.094, p = .032).

In summary, for kindergarten, WRF2 and LNF2 were significant predictors of end-of-year WRF, with LNF2 adding unique variance to the model; LSF2 and PSF2 did not contribute significantly to the prediction. For grade 1, WRF2 remained the primary significant predictor of end-of-year WRF. LNF2 and PSF2 each added small but significant contributions at certain stages, while LSF2 did not contribute additional significant variance in predicting end-of-year WRF.

Accuracy in predicting students at risk for reading failure

The performance of students who scored below the 25th percentile in word reading fluency (WRF) by the end of the academic year was examined. These students were categorized, and their scores on Letter Naming Fluency (LNF), Letter Sound Fluency (LSF), and Phoneme Segmentation Fluency (PSF) from the fall and spring semesters were compared to assess the accuracy of these measures in predicting end-of-year WRF performance. The analysis aimed to determine the extent to which students who fell below the 25th percentile in end-of-year WRF could have been identified from their fall and spring assessments. A total of 47 kindergarten students and 37 grade 1 students fell below the 25th percentile in WRF by the end of the academic year. A t-test comparing these students with the remainder of the sample showed significant group differences in each grade: for kindergarten, M = 3.19 for at-risk students versus M = 14.29 for non-at-risk students (t = −8.49, p < .001); for grade 1, M = 9.02 versus M = 28.36 (t = −9.64, p < .001).
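
A group comparison of this kind can be sketched with a Welch-type t statistic. The exact t-test variant used in the study is not specified, and the simulated scores below merely echo the reported kindergarten group sizes and means:

```python
import numpy as np

def welch_t(a, b):
    """Welch's t statistic for two independent samples (unequal variances)."""
    va, vb = a.var(ddof=1) / len(a), b.var(ddof=1) / len(b)
    return (a.mean() - b.mean()) / np.sqrt(va + vb)

rng = np.random.default_rng(3)
# Hypothetical end-of-year WRF scores: 47 at-risk vs 365 other students
at_risk = rng.normal(3.2, 2.0, 47)
not_at_risk = rng.normal(14.3, 8.0, 365)

print("t =", round(welch_t(at_risk, not_at_risk), 2))
```

A large negative t here simply reflects that the at-risk group's mean WRF sits far below the rest of the sample, as in the reported comparisons.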

For kindergarten students using fall measures, 45% of students were correctly identified as at-risk based on LNF1, while LSF1 showed a higher accuracy of 55% in predicting WRF performance by the year’s end. PSF1 was less predictive, with an accuracy of 43%. Spring measures showed improved accuracy, with 64% of students correctly classified using LNF2 and 60% with LSF2. PSF2 identified 53% accurately, indicating better predictive alignment in spring assessments.

For grade 1 students, fall LNF1 and LSF1 scores identified 57% and 59% of students, respectively, as at-risk, with PSF1 showing a much lower accuracy of 14%. Spring measures proved more effective, with LNF2 achieving 68% accuracy, LSF2 73%, and PSF2 identifying 38% correctly. These results highlight that spring assessments, particularly LNF2 and LSF2, had higher predictive accuracy for both kindergarten and grade 1 students, while fall assessments showed moderate accuracy. PSF measures were generally less predictive, especially in the fall, indicating the greater reliability of letter fluency measures over phoneme segmentation in predicting end-of-year WRF performance.

Discussion

This study aimed to investigate the predictive validity of fall and mid-year literacy assessments, specifically Letter Naming Fluency (LNF), Letter Sound Fluency (LSF), and Phoneme Segmentation Fluency (PSF), in predicting end-of-year Word Reading Fluency (WRF) among kindergarten and grade 1 Arabic-speaking students. These assessments serve various educational purposes, including screening, monitoring progress, and identifying children at risk for reading difficulties. The findings contribute to the existing body of research by highlighting the evolving predictive power of these measures over the academic year, with a particular emphasis on the utility of mid-year assessments.

Predictive role of letter naming fluency and letter sound fluency

Letter naming fluency in the fall and winter was a significant predictor of WRF by the end of the year for both grades. In the hierarchical multiple regression, the strongest unique predictor of WRF was winter LNF for kindergarten, with otherwise similar predictive contributions across the fall and winter semesters, ranging from .33 to .39 for both grades. In other words, assessing LNF in kindergarten and first grade in either fall or winter can predict students’ word reading skills by the end of the academic year. LNF contributes to the prediction of later decoding acquisition, a result that has been replicated in many studies [4,18,40]. Learning letter names helps young students, especially kindergarten children, move to the next stage in reading by supporting word recognition and spelling. Letter name knowledge enhances the connection between letters and sounds, which is fundamental for reading [58]. At the same time, improvements in LNF during the fall semester significantly predict better word reading outcomes later, suggesting that early gains in LNF are crucial. Thus, teaching letter names alongside letter sounds strengthens this relationship, highlighting its instructional value in early education [14].

Letter sound fluency requires students to pronounce the sounds of letters, which demands deeper decoding abilities than LNF; consequently, average LSF scores were lower than LNF scores. LSF in fall and winter was a significant predictor of end-of-year WRF for both grades. In the hierarchical multiple regression, fall and winter LSF were uniquely strong predictors of WRF in both grades, with similar amounts of explained variance. LSF contributes to the prediction of later decoding acquisition, a result consistent with several studies [18,40,51,59]. Compared to LNF, learning letter sounds is a more advanced step: it requires linking sounds to the letters that represent them (i.e., grapheme–phoneme correspondences), which is the fundamental step in understanding words and reading them correctly. At the same time, improvements in LSF during the fall semester significantly predicted better word reading outcomes later, suggesting that early gains in LSF are crucial. Thus, teaching letter sounds alongside letter names strengthens this relationship, highlighting its instructional value in early education. Teaching letter-sound fluency significantly improves reading fluency and word reading skills in children, including those with dyslexia [60,61].

Limited predictive value of phoneme segmentation fluency

Although PSF showed weak to moderate correlations with the other measures, it did not significantly contribute to the prediction of word reading compared to LNF or LSF; phoneme segmentation fluency was not a significant predictor of WRF for either grade at any time point. In other studies, phonemic segmentation is a strong predictor of reading and spelling abilities in children, outperforming other phonological skills such as rhyme and alliteration sound categorization [21,62–64]. However, Clemens et al. [40] found a weaker effect of PSF on word reading development compared to LSF. The null result here may nonetheless seem surprising, given that training in phoneme segmentation significantly improves reading readiness and early reading skills in kindergarten students (Ball & Blachman, 1988; Spector, 1992). One explanation is the timing of test administration: in this study, PSF was administered at the end of the first semester, whereas other studies [40] administered PSF at the beginning of the academic year, before students had begun formal instruction. In their study, the mean PSF score increased from 7 at the beginning of the fall semester to 19 by the end of the same semester.

Fall vs. mid-year assessments

Mid-year assessments demonstrated significantly greater predictive power for end-of-year Word Reading Fluency (WRF) than fall assessments, particularly in grade 1. For kindergarten, mid-year assessments explained 72% of the variance in WRF, compared with 53.7% in the fall, a 34% relative improvement in predictive accuracy. This trend was even more pronounced in grade 1, where mid-year R² values exceeded 87%, compared with 63.9% for fall assessments, a substantial 36% relative increase. These findings are consistent with prior research highlighting the cumulative benefits of literacy instruction, which provides students with greater exposure to foundational skills, such as grapheme–phoneme correspondences and fluency development, over the course of the academic year [14,40]. The enhanced predictive accuracy of mid-year assessments underscores their value in capturing the progression of literacy skills, particularly as students begin to integrate phonological and orthographic knowledge into fluent word recognition [58,59].
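
The relative-improvement figures quoted above follow directly from the reported R² values, as this short check shows (the R² values are taken from the text; the function name is our own):

```python
def relative_improvement(r2_fall: float, r2_mid: float) -> float:
    """Percentage gain in explained variance from fall to mid-year."""
    return (r2_mid - r2_fall) / r2_fall * 100

# Kindergarten: mid-year R^2 = .72 vs fall R^2 = .537
print(round(relative_improvement(0.537, 0.72)))   # prints 34

# Grade 1: mid-year R^2 = .87 vs fall R^2 = .639
print(round(relative_improvement(0.639, 0.87)))   # prints 36
```

Note these are relative gains (the increase expressed as a share of the fall R²), not absolute differences in variance explained.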

Accuracy of fall and mid-year assessments in identifying at-risk students

The accuracy analyses demonstrated that mid-year assessments are consistently more reliable in identifying at-risk students compared to fall assessments. For kindergarten, mid-year LNF correctly identified 64% of students below the 25th percentile in WRF, a marked improvement over the 45% accuracy observed in the fall. Similarly, mid-year LSF outperformed its fall counterpart (60% vs. 55%), further supporting the assertion that additional literacy instruction enhances the predictive validity of mid-year assessments [41,43]. These findings are in line with studies indicating that LNF and LSF are strong indicators of reading risk, particularly when measured after students have received foundational literacy instruction [18,60]. In grade 1, the accuracy gains were even more pronounced, with mid-year LSF achieving the highest accuracy across all measures (73%), underscoring its importance for identifying at-risk readers at this stage of literacy development [19,61]. PSF, while the least accurate measure overall, improved from fall to mid-year in both grades, a finding consistent with research suggesting that phoneme segmentation skills develop more slowly and may have limited utility for predicting word reading fluency [40,62]. These results suggest that educators should prioritize mid-year assessments, particularly LNF and LSF, to maximize the effectiveness of literacy screenings and interventions during critical instructional periods.
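
The classification accuracy reported above corresponds to sensitivity: among students who finish the year below the 25th percentile in WRF, the share the screener had already flagged. A minimal sketch on synthetic scores (not the study's data; the correlation strength is an arbitrary assumption) shows the computation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical screener (e.g., mid-year LNF) and outcome (end-of-year WRF)
# scores for 200 students, with the outcome partly driven by the screener.
lnf_mid = rng.normal(40, 10, 200)
wrf_end = 0.8 * lnf_mid + rng.normal(0, 6, 200)

# Students below the 25th percentile on each measure.
at_risk_true = wrf_end < np.percentile(wrf_end, 25)   # truly at risk at year end
flagged = lnf_mid < np.percentile(lnf_mid, 25)        # flagged by the screener

# Sensitivity: share of truly at-risk students the screener caught.
sensitivity = (flagged & at_risk_true).sum() / at_risk_true.sum()
print(f"{sensitivity:.0%}")
```

Sensitivity is only half the picture in practice; a screener can also over-flag students who go on to read adequately, so specificity matters alongside the hit rates discussed here.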

Implications

The results of this study offer several important implications for educational practice, particularly in the context of early literacy instruction for Arabic-speaking students. The findings emphasize the importance of using multiple assessment points throughout the academic year, rather than relying solely on initial screening at the beginning of the school year. Mid-year assessments of Letter Naming Fluency (LNF) and Letter Sound Fluency (LSF) were found to substantially improve the accuracy of identifying students at risk for reading difficulties. This has direct relevance for schools seeking to implement effective early intervention strategies.

First, these results support the use of dynamic, curriculum-based measurement (CBM) tools as part of a systematic progress-monitoring framework. Teachers can use mid-year CBM assessments to track students’ development in foundational literacy skills and to adjust instruction based on individual student needs. Regular assessment enables timely interventions before reading difficulties become more severe, aligning with best practices in early intervention. Second, the findings highlight the importance of data-driven instruction. Providing teachers with professional development on how to administer, interpret, and utilize CBM data can empower them to make informed instructional decisions. Such training ensures that teachers are not only using assessments for identification but also integrating results into instructional planning.

Third, school administrators and policymakers should consider embedding multi-point assessments within the curriculum and literacy policies. This may involve allocating resources for assessment materials, teacher training, and intervention programs designed to support students who are identified as at-risk based on mid-year performance. Lastly, the study contributes to the broader field of literacy education in Arabic, where empirical research on early assessment practices remains limited. Implementing evidence-based assessment practices tailored to the linguistic characteristics of Arabic can support more effective teaching practices and improve reading outcomes for students in the early grades.

Limitations

Although the results are promising, several limitations should be taken into consideration. First, the study was conducted at the end of each semester. This timing was intentionally selected to represent real-life scenarios in which teachers identify students with reading difficulties after working with them for an extended period. The goal of this approach was to capture real-world challenges and examine the issue from different perspectives. However, this method may limit the ability to capture ongoing learning progress throughout the semester, although such progress monitoring was not the primary focus of the current study. Second, the study relied on a relatively short-term follow-up period, focusing on a single academic year, which restricts the ability to examine the long-term predictive power of these measures on reading outcomes. Third, although phoneme segmentation fluency (PSF) is widely used to measure phonological awareness, it may not fully capture the predictive validity of phonemic skills for end-of-year word reading fluency. Other phonemic measures, such as phoneme blending tasks, where students combine individual phonemes to form words, or elision tasks, which require students to manipulate phonemes by adding or omitting sounds, have been shown to provide stronger predictive power for reading outcomes [18,65].

References

  1. Foorman B, Beyler N, Borradaile K, Coyne M, Denton CA, Dimino J, Furgeson J, Hayes L, Henke J, Justice L, Keating B. Foundational skills to support reading for understanding in kindergarten through 3rd grade (NCEE 2016-4008). Washington (DC): National Center for Education Evaluation and Regional Assistance (NCEE), Institute of Education Sciences, US Department of Education; 2016. Available from: http://whatworks.ed.gov
  2. Castles A, Rastle K, Nation K. Ending the Reading Wars: Reading Acquisition From Novice to Expert. Psychol Sci Public Interest. 2018;19(1):5–51. pmid:29890888
  3. Caravolas M, Volín J, Hulme C. Phoneme awareness is a key component of alphabetic literacy skills in consistent and inconsistent orthographies: evidence from Czech and English children. J Exp Child Psychol. 2005;92(2):107–39. pmid:16038927
  4. Lonigan CJ, Shanahan T. Developing early literacy: report of the National Early Literacy Panel. Washington (DC): National Institute for Literacy; 2008.
  5. Suggate S, Schaughency E, McAnally H, Reese E. From infancy to adolescence: The longitudinal links between vocabulary, early literacy skills, oral narrative, and reading comprehension. Cognitive Development. 2018;47:82–95.
  6. Molfese VJ, Beswick JL, Jacobi-Vessels JL, Armstrong NE, Culver BL, White JM, et al. Evidence of alphabetic knowledge in writing: connections to letter and word identification skills in preschool and kindergarten. Read Writ. 2010;24(2):133–50.
  7. Sparks RL, Patton J, Ganschow L, Humbach N, Javorsky J. Early first-language reading and spelling skills predict later second-language reading and spelling skills. Journal of Educational Psychology. 2008;100(1):162–74.
  8. Lonigan CJ, Burgess SR, Anthony JL. Development of emergent literacy and early reading skills in preschool children: evidence from a latent-variable longitudinal study. Dev Psychol. 2000;36(5):596–613. pmid:10976600
  9. Elbro C, de Jong PF. Orthographic learning is verbal learning: the role of spelling pronunciations. In: Cain K, Compton DL, Parrila RK, editors. Theories of reading development. Amsterdam (NL): John Benjamins Publishing Company; 2017. p. 169–89. https://doi.org/10.1075/swll.15.10elb
  10. Seidenberg MS. Connectionist Models of Word Reading. Curr Dir Psychol Sci. 2005;14(5):238–42.
  11. Share DL. Orthographic learning, phonological recoding, and self-teaching. Adv Child Dev Behav. 2008;36:31–82. pmid:18808041
  12. Treiman R, Rodriguez K. Young Children Use Letter Names in Learning to Read Words. Psychol Sci. 1999;10(4):334–8.
  13. McBride-Chang C. The ABCs of the ABCs: the development of letter-name and letter-sound knowledge. Merrill Palmer Q. 1999;45(2):285–308.
  14. Piasta SB, Wagner RK. Developing Early Literacy Skills: A Meta-Analysis of Alphabet Learning and Instruction. Read Res Q. 2010;45(1):8–38. pmid:20671801
  15. Ellefson MR, Treiman R, Kessler B. Learning to label letters by sounds or names: a comparison of England and the United States. J Exp Child Psychol. 2009;102(3):323–41. pmid:18675428
  16. Sunde K, Furnes B, Lundetræ K. Does Introducing the Letters Faster Boost the Development of Children’s Letter Knowledge, Word Reading and Spelling in the First Year of School? Sci Stud Read. 2019;24(2):141–58.
  17. Hjetland HN, Lervåg A, Lyster S-AH, Hagtvet BE, Hulme C, Melby-Lervåg M. Pathways to reading comprehension: A longitudinal study from 4 to 9 years of age. J Educ Psychol. 2019;111(5):751–63.
  18. Schatschneider C, Fletcher JM, Francis DJ, Carlson CD, Foorman BR. Kindergarten Prediction of Reading Skills: A Longitudinal Comparative Analysis. J Educ Psychol. 2004;96(2):265–82.
  19. Piasta SB, Park S, Fitzgerald LR, Libnoch HA. Young children’s alphabet learning as a function of instruction and letter difficulty. Learn Individ Differ. 2022;92:102113.
  20. Torppa M, Lyytinen P, Erskine J, Eklund K, Lyytinen H. Language development, literacy skills, and predictive connections to reading in Finnish children with and without familial risk for dyslexia. J Learn Disabil. 2010;43(4):308–21. pmid:20479461
  21. Muter V, Hulme C, Snowling M, Taylor S. Segmentation, not rhyming, predicts early progress in learning to read. J Exp Child Psychol. 1997;65(3):370–96. pmid:9178965
  22. O’Connor RE. Phoneme awareness and the alphabetic principle. In: O’Connor RE, Vadasy PE, editors. Handbook of reading interventions. New York (NY): Guilford Press; 2011. p. 9–26.
  23. Romupal EL, Rubio CM, Toquero CM. Learning by Doing Intervention: Addressing Phonological Difficulties of Children through Audiolingual Method and Total Physical Response. Elsya. 2021;3(3).
  24. Wolff U. RAN as a predictor of reading skills, and vice versa: results from a randomised reading intervention. Ann Dyslexia. 2014;64(2):151–65. pmid:24803174
  25. Melby-Lervåg M, Lyster S-AH, Hulme C. Phonological skills and their role in learning to read: a meta-analytic review. Psychol Bull. 2012;138(2):322–52. pmid:22250824
  26. Fraser J, Goswami U, Conti-Ramsden G. Dyslexia and Specific Language Impairment: The Role of Phonology and Auditory Processing. Scientific Studies of Reading. 2010;14(1):8–29.
  27. National Reading Panel. Teaching children to read: an evidence-based assessment of the scientific research literature on reading and its implications for reading instruction. Washington (DC): National Institute of Child Health and Human Development; 2000. Available from: https://www.nichd.nih.gov/sites/default/files/publications/pubs/nrp/Documents/report.pdf
  28. Gough PB, Tunmer WE. Decoding, Reading, and Reading Disability. Rem Spec Educ. 1986;7(1):6–10.
  29. Hoover WA, Gough PB. The simple view of reading. Read Writ. 1990;2(2):127–60.
  30. Hudson RF, Lane HB, Pullen PC. Reading Fluency Assessment and Instruction: What, Why, and How? The Read Teach. 2005;58(8):702–14.
  31. Adams MJ. Beginning to read: thinking and learning about print [book review]. Language. 1991;67(2):388.
  32. Rasinski TV. Assessing reading fluency. Honolulu (HI): Pacific Resources for Education and Learning (PREL); 2004.
  33. Hasbrouck J, Tindal GA. Oral Reading Fluency Norms: A Valuable Assessment Tool for Reading Teachers. The Reading Teacher. 2006;59(7):636–44.
  34. Fuchs LS, Fuchs D, Hosp MK, Jenkins JR. Oral Reading Fluency as an Indicator of Reading Competence: A Theoretical, Empirical, and Historical Analysis. Sci Stud Read. 2001;5(3):239–56.
  35. Miura Wayman M, Wallace T, Wiley HI, Tichá R, Espin CA. Literature Synthesis on Curriculum-Based Measurement in Reading. J Spec Educ. 2007;41(2):85–120.
  36. Deno SL. Curriculum-based measurement: the emerging alternative. Except Child. 1985;52(3):219–32. pmid:2934262
  37. Deno SL, Reschly A, Lembke E, Magnusson D, Callender S, Windram H, et al. Developing a school-wide progress-monitoring system. Psychol Sch. 2001;46(1):44–55.
  38. Martin-Chang SL, Levy BA. Word reading fluency: A transfer appropriate processing account of fluency transfer. Read Writ. 2006;19(5):517–42.
  39. Perfetti C. Reading Ability: Lexical Quality to Comprehension. Sci Stud Read. 2007;11(4):357–83.
  40. Clemens NH, Lee K, Henri M, Simmons LE, Kwok O-M, Al Otaiba S. Growth on sublexical fluency progress monitoring measures in early kindergarten and relations to word reading acquisition. J Sch Psychol. 2020;79:43–62. pmid:32389248
  41. Nelson JM. Psychometric Properties of the Texas Primary Reading Inventory for Early Reading Screening in Kindergarten. Assess Eff Interv. 2009;35(1):45–53.
  42. Ritchey KD, Speece DL. From letter names to word reading: The nascent role of sublexical fluency. Contemp Educ Psychol. 2006;31(3):301–27.
  43. Tibi S, Tock JL, Kirby JR. The development of a measure of root awareness to account for reading performance in the Arabic language: A development and validation study. Appl Psycholinguist. 2019;40(2):303–22.
  44. Abou-Elsaad T, Ali R, Abd El-Hamid H. Assessment of Arabic phonological awareness and its relation to word reading ability. Logoped Phoniatr Vocol. 2016;41(4):174–80. pmid:26556648
  45. Abou El-Ella MY, Sayed EM, Farghaly WM, Abdel-Haleem EK, Hussein ES. Construction of an Arabic reading test for assessment of dyslexic children. Neurosciences (Riyadh). 2004;9(3):199–206. pmid:23377428
  46. Al-Hmouz H. The relationship between letter fluency measures and Arabic GPA. Int J Spec Educ. 2013;28(3):140–9.
  47. Mahfouz MAS, Mohamed AHH. Using Word Reading Fluency Curriculum-Based Measurements to Monitor Students’ Reading Progress in Grade 2. Educ Sci. 2023;13(2):217.
  48. Abu-Hamour B. Using Arabic word identification fluency to monitor first-grade reading progress. Dyslexia. 2014;20(2):167–74. pmid:24375873
  49. Abu-Hamour B. A pilot study for standardizing curriculum-based measurement oral reading fluency (CBM ORF) in Arabic. J Int Assoc Spec Educ. 2014;15(1).
  50. Abu-Hamour B. The use of the Arabic CBM maze among three levels of achievers in Jordan. Int J Spec Educ. 2013;28(3).
  51. Clemens NH, Shapiro ES, Dunlosky J, Williams KM, Davis JL. Interrelations of growth in letter naming and sound fluency in kindergarten and implications for subsequent reading fluency. Sch Psychol Rev. 2017;46(3):272–87.
  52. Boykin A, Tognatta N, Aikens N, Rennie R. The relationship between quarterly and end-of-grade reading assessments 2007–08. Washington (DC): U.S. Department of Education; 2009.
  53. Koon S, Petscher Y, Foorman B, Smith K. Identifying North Carolina students at risk of scoring below proficient in reading at the end of grade 3. REL 2020–030. Washington (DC): U.S. Department of Education, Institute of Education Sciences, National Center for Education Evaluation and Regional Assistance, Regional Educational Laboratory Southeast; 2020.
  54. easyCBM. Eugene (OR): University of Oregon; [cited 2025 Jan 15]. Available from: https://www.easycbm.com/.
  55. FastBridge. FastBridge Learning: Formative Assessment System for Teachers (FAST). Minneapolis (MN): Renaissance Learning; [cited 2025 Jan 15]. Available from: https://www.fastbridge.org/.
  56. University of Oregon. DIBELS: Dynamic Indicators of Basic Early Literacy Skills. Eugene (OR): University of Oregon; [cited 2025 Jan 15]. Available from: https://dibels.uoregon.edu/.
  57. JASP Team. JASP (Version 0.18) [Computer software]. Amsterdam (NL): University of Amsterdam; 2023. Available from: https://jasp-stats.org/.
  58. Ehri LC. Phases of development in learning to read words by sight. J Res Read. 1995;18(2):116–25.
  59. Caravolas M, Lervåg A, Mousikou P, Efrim C, Litavsky M, Onochie-Quintanilla E, et al. Common patterns of prediction of literacy development in different alphabetic orthographies. Psychol Sci. 2012;23(6):678–86. pmid:22555967
  60. Fraga González G, Žarić G, Tijms J, Bonte M, Blomert L, van der Molen MW. A Randomized Controlled Trial on The Beneficial Effects of Training Letter-Speech Sound Integration on Reading Fluency in Children with Dyslexia. PLoS One. 2015;10(12):e0143914. pmid:26629707
  61. Hulme C, Bowyer-Crane C, Carroll JM, Duff FJ, Snowling MJ. The causal role of phoneme awareness and letter-sound knowledge in learning to read: combining intervention studies with mediation analyses. Psychol Sci. 2012;23(6):572–7. pmid:22539335
  62. Oslund EL, Hagan-Burke S, Taylor AB, Simmons DC, Simmons L, Kwok O-M, et al. Predicting Kindergarteners’ Response to Early Reading Intervention: An Examination of Progress-Monitoring Measures. Reading Psychology. 2012;33(1–2):78–103.
  63. Kim Y-S, Pallante D. Predictors of reading skills for kindergartners and first grade students in Spanish: a longitudinal study. Read Writ. 2012;25(1):1–22.
  64. Nation K, Hulme C. Phonemic Segmentation, Not Onset‐Rime Segmentation, Predicts Early Reading and Spelling Skills. Read Res Q. 1997;32(2):154–67.
  65. Goldstein H, Olszewski A, Haring C, Greenwood CR, McCune L, Carta J, et al. Efficacy of a Supplemental Phonemic Awareness Curriculum to Instruct Preschoolers With Delays in Early Literacy Development. J Speech Lang Hear Res. 2017;60(1):89–103. pmid:28056468