Systems-Level Consultation to Improve Intervention Fidelity and Student Outcomes: A Collective Case Study of a Blended Learning Reading Program
ABSTRACT
School psychologists have the potential to improve students' reading skills by helping schools select evidence-based programs and then supporting their implementation. Blended learning offers a flexible implementation format to address long-standing challenges in implementing and scaling reading programs in schools. The purpose of this study was to explore the unique implementation challenges associated with blended learning programs, how systems-level consultation can be used to support implementation and foster intervention fidelity, and the extent to which intervention fidelity of a blended learning reading program was associated with student reading skills. This collective case study included qualitative and quantitative data from 20 schools, 666 educators, and 6,208 students in the U.S. We employed mixed methods, which included inductive thematic analysis of 47 implementation plans and mean difference testing to examine differences in educator engagement, student-level intervention fidelity, and student outcomes across districts with varying levels of implementation supports. Results identified several challenges encountered throughout implementation and the types of strategies employed to address those challenges. Further, results suggested that there were greater levels of fidelity and more positive student outcomes in schools accessing systems-level consultation to support implementation.
Summary
- Blended learning programs, which combine face-to-face and online instruction, are gaining in popularity.
- Challenges encountered when implementing a blended learning reading program in schools can be addressed through systems-level consultation.
- Systems-level consultation may improve implementation of a blended learning reading program and positively impact student outcomes.
Only 33% of fourth-grade students achieve standards of reading proficiency (National Assessment of Educational Progress [NAEP] 2022), and these students are at heightened risk of experiencing subsequent academic and social difficulties (Connor et al. 2014). School psychologists have the potential to improve reading outcomes for these students by helping schools select evidence-based programs (EBPs) and then supporting their implementation (Aarons et al. 2011; Lyon et al. 2022). EBPs are standardized procedures, including programs, interventions, and practices, that have been shown to improve students' reading skills through rigorous research studies (e.g., randomized controlled trials; Every Student Succeeds Act [ESSA] 2015). It is imperative to support the development of fundamental reading skills in the early grades to improve long-term employment, behavioral, and mental health outcomes for students (National Center for Education Statistics [NCES] 2012–2015), yet how to successfully implement EBPs that support early reading skill development remains a critical challenge facing schools today.
A significant body of research documents the challenges of implementing EBPs in schools, which include poor intervention fidelity due to limited time and resources, lack of support, and misalignment between research settings and authentic school contexts (Fixsen et al. 2005; Lyon et al. 2022). Research from adjacent fields (e.g., organizational management, public health) illustrates the importance of implementation leadership as a key factor that influences how well EBPs are implemented and sustained (Aarons et al. 2014). Implementation leadership is defined as behaviors that promote organizational practices or procedures that align with a particular initiative or criterion (in this case, reading EBP implementation) (Aarons et al. 2014; Lyon et al. 2022). However, very few, if any, empirical studies have examined how to best facilitate strong implementation leadership to promote the implementation of reading EBPs and subsequent student outcomes, though some research has examined the role of implementation leadership on the implementation of mental health EBPs in schools (Lyon et al. 2022).
Further, no studies to date have examined the process of implementing reading EBPs that employ a blended learning approach, which leverages the strengths of both face-to-face and online intervention delivery (Pytash and O'Byrne 2018). This knowledge gap is critical to address given that approximately 20% of schools in the U.S. implement blended learning programs (Schwartz et al. 2020) and 35% of classroom teachers employ educational technology to address their students' skill deficits (Digital Promise Global 2019). Blended learning is gaining in popularity because it offers the promise of individualized, data-driven instruction in a flexible implementation format that frees up human resources to increase scalability and sustainability (Wilkes et al. 2020). However, it may be more complex to support the implementation of blended learning given its multiple modalities. The overarching purpose of this collective case study was to understand how implementation supports provided to local leadership teams improved the intervention fidelity of one blended learning reading EBP, Lexia Core5 Reading, and subsequent student reading outcomes.
1 Description of Blended Learning and Lexia Core5 Reading
Blended learning programs combine face-to-face instruction with online learning to offer a promising means of addressing poor reading outcomes (Pytash and O'Byrne 2018). A recent meta-analysis demonstrated the efficacy of blended learning in improving academic performance (Li and Wang 2022). The current study examined the implementation of Lexia Core5 Reading (hereafter referred to as Core5), a blended learning program designed to target fundamental literacy skills from Prekindergarten through Grade 5. During the 2022–2023 school year, Core5 was implemented in over 18,300 U.S. schools serving approximately 3.7 million students (Lexia 2023a).
Core5 is based on a research-validated theoretical framework (Gough and Tunmer 1986; Hoover and Gough 1990; Perfetti 1985). Its scope and sequence align with the Essential Elements of Reading identified by the National Reading Panel and the National Institute of Child Health and Human Development (2000) and more recent recommendations for effective elementary reading instruction from the Institute of Education Sciences (IES; Foorman et al. 2016). A large body of research evidence demonstrates Core5's efficacy in improving reading skills among students in Grades PreK-5 (Lexia 2023b), including several randomized controlled trials (e.g., Hurwitz and Vanacore 2022; O'Callaghan et al. 2016; Schechter et al. 2015).
As a blended learning program, Core5 provides an optimal focus for this collective case study. Blended learning programs combine face-to-face instruction from an educator with individualized instruction, assessment, and intervention delivered online. Thus, to implement blended learning programs with fidelity, implementation leaders are tasked with simultaneously supporting the implementation of both face-to-face and online educational activities. Despite the unique challenges of balancing these two modalities, blended learning offers a promising solution to the scalability and sustainability challenges of reading EBPs that have plagued education for decades (e.g., Fixsen et al. 2005). Further, its flexible implementation format can be adapted to meet the needs of a variety of schools with differing structures and schedules (Powell et al. 2015).
Students begin Core5 by completing an adaptive assessment to identify an appropriate starting point in the program based on their current skill level. Then, students work at their own pace through individualized, explicit, systematic online instruction, and complete offline teacher-directed activities to supplement and extend their learning. The online instruction leverages embedded assessment technology to identify personalized usage targets, which are determined by students' risk level and predicted likelihood of achieving end-of-year, grade-level benchmarks. Personalized usage targets generally range from 20 to 80 min per week depending on students' risk level, with students at greater risk of not meeting end-of-year, grade-level benchmarks receiving higher usage targets. Personalized usage targets are adjusted monthly based on real-time performance data collected as students progress through the program.
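To make the target-setting logic concrete, a minimal sketch follows. It is illustrative only: Core5's actual placement and target-setting algorithm is proprietary, and the tier names, minute values, and mapping below are our assumptions, constrained only by the 20–80 min range, the 0-minute case for students already at benchmark, and the monthly adjustment cycle described in this article.

```python
# Hypothetical sketch of personalized usage-target assignment. The actual
# Core5 algorithm is proprietary; the risk tiers and minute values here
# are assumptions consistent with the 20-80 min/week range described above.

def weekly_usage_target(risk_level: str, met_benchmark: bool) -> int:
    """Return a weekly online usage target in minutes for one student."""
    if met_benchmark:
        # Students who have already met end-of-year, grade-level
        # benchmarks receive a 0-minute target.
        return 0
    targets = {"minimal": 20, "moderate": 40, "elevated": 60, "high": 80}
    return targets[risk_level]

# Targets are recomputed monthly as new performance data arrive.
print(weekly_usage_target("high", met_benchmark=False))  # 80
```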
When students demonstrate skill mastery by completing a program level, educators can print a certificate of completion to reward student progress. Students can also complete offline activities (Skill Builders) to help generalize their learning. Students who do not make sufficient progress from the online instruction receive scaffolded support in the online program and/or face-to-face, teacher-led intervention (Lexia Lessons). Thus, Core5 not only provides evidence-based assessment and intervention in reading, but also helps schools monitor students' response to intervention to differentiate instruction and provide increasingly intensive and individualized supports to improve all students' fundamental reading skills (Baron et al. 2019).
As previously described, blended learning programs like Core5 require educators to consider multiple facets of instruction to implement the program with fidelity. A large body of literature demonstrates the importance of intervention fidelity to achieve the effects found in research (Durlak and DuPre 2008; Prescott et al. 2017; Schechter et al. 2015). In the context of Core5 specifically, intervention fidelity is achieved when the following occur: (1) students meet their personalized usage targets, which requires teachers to allocate sufficient time for those activities, (2) teachers deliver offline program components (Skill Builders and Lexia Lessons) as appropriate, and (3) teachers use program data and data-based instructional recommendations to modify and inform classroom instruction to align with student skill development (Lexia 2023c). Therefore, achieving fidelity requires implementation leaders to support educators across various facets of implementation.
2 Implementation Leadership and Systems-Level Strategies to Improve Fidelity
Research has highlighted the importance of implementation leadership to improve the intervention fidelity of EBPs (Lyon et al. 2022) and this may be particularly true for complex or multifaceted blended learning programs. Implementation leadership differs from molar or general leadership styles (e.g., transformational leadership; Bass et al. 1993), which describe the types of behaviors or interactions leaders demonstrate with their employees and have been linked with numerous outcomes, such as employee well-being (Glisson et al. 2008; Meidelina et al. 2023). However, research has suggested that molar leadership styles are too broad to be effectively linked with EBP implementation and that strategic implementation climate, or the perceived importance of the innovation in the organization, is a stronger predictor of intervention fidelity (Aarons et al. 2014).
In schools, implementation leadership consists of seven components: (1) proactive leadership or the ability to anticipate and address implementation challenges, (2) knowledgeable leadership or deep understanding of the EBP, (3) perseverance or the ability to consistently respond to implementation challenges, (4) support for EBP implementation, (5) bidirectional communication between leaders and implementers related to the EBP, (6) vision or how well EBP implementation is integrated into the primary purpose of the school, and (7) the extent to which the leader is available and responsive to staff needs regarding EBP implementation (Lyon et al. 2022). Thus, implementation leadership includes behaviors in service of a specific objective (in this case, implementation of a reading EBP) and is critical to foster a climate conducive to EBP implementation (Aarons et al. 2014; Lyon et al. 2022).
Blended learning is a new and evolving educational approach, and little is known about how to best support its implementation (Mohammed 2017). However, a considerable body of research has examined the effectiveness of implementation strategies to increase the intervention fidelity of school-based EBPs in general. Cook et al. (2019) identified 75 unique implementation strategies employed in schools to promote intervention fidelity. These 75 strategies fall under nine conceptual categories: (1) evaluative and iterative strategies (e.g., progress monitoring), (2) providing interactive assistance to implementers, (3) adapting and tailoring the EBP to fit the local context, (4) developing relationships with all parties invested in the EBP, (5) training and educating all parties invested in the EBP, (6) supporting implementers, (7) engaging members of the school community, (8) using financial strategies, and (9) changing the infrastructure to support implementation. Strong implementation leaders and leadership teams identify the needs of the school and implementers, and then select and employ support strategies based on those needs to improve implementation.
Thus, fostering strong implementation leadership is crucial to promoting intervention fidelity and, ultimately, student outcomes. Implementation leaders may be school administrators or other leaders, but may also include other personnel who become “EBP champions” and support implementation efforts within a local context (Lyon et al. 2018). School psychologists could serve in either of these roles. Importantly, implementation leaders themselves often need support to understand and apply the most effective strategies to facilitate a strong implementation climate, promote fidelity, and address the inevitable challenges of any new school-based initiative (e.g., Aarons et al. 2015; Lyon et al. 2022).
Systems-level consultation delivered to local implementation leadership teams offers one promising method of providing such support. School-based consultation is frequently used by a variety of professionals, including school psychologists, to assist teachers and other personnel in addressing problems and developing solutions, and research has demonstrated that consultation can improve a variety of student outcomes (Newell et al. 2020). Consultation is considered an indirect service in that the consultant generally does not provide direct intervention to the student(s) in question (e.g., Kampwirth and Powers 2015). School psychologists may be particularly well-positioned to deliver these consultative services, given their preservice training in consultation (e.g., Newman et al. 2015) and that consultation is outlined as a core competency by the National Association of School Psychologists (2010) and the American Psychological Association (2006).
While traditional models of school-based consultation have emphasized a triadic relationship between consultant, consultee, and student (Kampwirth and Powers 2015), a consultative approach can also be applied at the systems level. Systems-level consultation involves the provision of consultation services to systems-level stakeholders and is often used to address systemic or organizational barriers and promote implementation of EBPs (Kratochwill et al. 2008). In systems-level consultation, the triadic relationship is established between the consultant, one or more systems-level consultees such as school administrators or teachers, and a systems-level client; the goal of consultation in this case is to address problems and improve functioning at the systems level so as to benefit individuals (e.g., students) impacted by the system (Meyers et al. 2009). Systems-level consultation, and the provision of systems-level services more broadly, has emerged as a core professional competency for school psychologists, and research attention to systems change efforts in the school psychology literature has grown in recent years (Coleman and Hendricker 2020). Research has demonstrated that systems-level consultation can effectively promote the establishment of data systems to monitor fidelity (Barrett 2021) and promote organizational change (Meyers et al. 2012). However, no studies to date have examined the types of challenges faced when implementing a blended learning reading EBP and how systems-level consultation provided to implementation leadership can address those barriers to increase intervention fidelity and student outcomes. Our study fills this gap in the literature.
3 Current Study and Research Questions
In the current study, we sought to expand upon prior research by examining the challenges implementation leadership faces when implementing a blended learning reading EBP, the implementation supports used in the context of systems-level consultation to address those challenges, and the impact of those supports on intervention fidelity and student outcomes. We employed a collective case study methodology to answer the following research questions: (1) How do implementation supports address the challenges implementation leadership faces when implementing a blended learning reading EBP? (2) To what extent do schools receiving systems-level consultation to support implementation have greater levels of intervention fidelity and student reading outcomes compared to schools without implementation supports?
4 Methods
We selected collective case study methodology given its examination of multiple cases simultaneously to generate a broad appreciation for the complex implementation process in an everyday setting (Crowe et al. 2011). Within our case study methodology, we employed mixed methods secondary data analysis, with equal weight placed on the qualitative and quantitative data. We began with qualitative analyses to answer the first research question and identify the types of challenges encountered and the implementation supports used to address those challenges. We then proceeded to quantitative analyses to answer the second research question examining the impact of systems-level consultation to support implementation (QUAL→QUAN; Palinkas et al. 2011). Before data analysis, the study was deemed nonhuman subjects research by the Michigan State University Institutional Review Board (IRB). Pseudonyms are used throughout the remainder of the article to maintain the anonymity of the school districts.
4.1 Case Selection and Participants
To increase comparability among the multiple cases selected for inclusion, we bounded this collective case study by geographic region (one Midwestern state), time (the 2022–2023 school year), and scope (inclusive of student outcome data, implementation processes, and systems-level implementation supports). Case selection was also bounded by access (Crowe et al. 2011), such that we needed to have access to student outcome data for all schools selected for participation and to systems-level consultation data for all schools that received this implementation support.
Within these bounds, we ultimately selected 3 districts (N = 20 schools) for inclusion in this collective case study. One district, hereafter referred to as Washington School District, implemented Core5 in 10 schools (N = 3,489 students and 370 educators). Each school in Washington School District received systems-level consultation intended to support implementation of Core5. One systems-level consultant provided support to all ten schools in Washington School District. Washington was selected from all possible cases meeting our inclusion criteria because it had the highest number of schools implementing Core5 and the most comprehensive data regarding implementation.
We selected two comparison districts from the same state using publicly available demographic information from the 2022–2023 school year obtained from the state department of education's website. We downloaded these data and sorted by district size. We then selected comparison districts based on their similarity to Washington School District in terms of student enrollment, urban location, racial/ethnic composition, the percentage of students qualifying for free or reduced-price lunch, the percentage of students with disabilities, and the percentage of students who were English Learners. For comparison purposes, we sought to identify schools that had implemented Core5 during the 2022–2023 school year but had not received systems-level consultation to support implementation. Therefore, for each close match identified, the first author searched Lexia's database to determine whether the district met these criteria. We identified one district that implemented Core5 but did not enroll in systems-level implementation support (Clinton School District; N = 6 schools, 1,592 students, 211 educators). A second district (Arlington School District; N = 4 schools, 1,127 students, 85 educators) implemented Core5 and did enroll in systems-level implementation support; however, due to a change in district leadership, they did not access (i.e., receive) the available consultation services.
In sum, our collective case study included a total sample of 20 schools, 666 educators, and 6,208 students across 3 districts who were engaged with Lexia Core5 during the 2022–2023 school year. Table 1 presents additional characteristics of the sample. All three school districts were in their second year of implementation.
Characteristics | Washington School District (N or %) | Clinton School District (N or %) | Arlington School District (N or %) |
---|---|---|---|
School/district characteristics | |||
Total FTE teachers in district | 515.7 | 220.4 | 161.9 |
Teachers of color | 9.9% | 0.9% | 1.2% |
Total FTE school psychologists in district | 6 | 4 | 8 |
Schools using Core5 | 10 | 6 | 4 |
Educators using Core5 | 370 | 211 | 85 |
Student characteristics | |||
Students using Core5 | 3,489 | 1,592 | 1,127 |
Race/Ethnicity | |||
White | 48.7% | 95.8% | 58.6% |
Black/African American | 24.6% | 0.4% | 6.4% |
Hispanic/Latinx | 10.0% | 1.1% | 20.2% |
Multiracial | 16.0% | 2.5% | 14.4% |
Free/reduced meals | 100% | 100% | 100% |
Students with disabilities | 19.0% | 22.1% | 22.1% |
English learners | 5.6% | < 0.1% | 6.2% |
Third grade reading proficiency | 37.8% | 85.2% | 46.9% |
- Note: Pseudonyms are used for the names of the school districts. Reading proficiency reflects performance on the state assessment.
- Abbreviation: FTE, Full-Time Equivalent.
5 Procedure
5.1 Standard Implementation
Students in each school district began using Core5 early in the 2022–2023 school year, with the majority (92%) of the total sample recording their first login by the end of September 2022. Each district used Core5 as a universal (Tier 1) supplemental literacy program for students in grades K through 5 in conjunction with the standard English Language Arts curriculum. Students' initial placement (starting) levels within the program were comparable across districts and are summarized in Table 2.
District | At or above grade level (%) | One level below current grade (%) | Two or more levels below current grade (%) |
---|---|---|---|
Washington School District | 14% | 33% | 51% |
Clinton School District | 20% | 36% | 43% |
Arlington School District | 9% | 30% | 59% |
- Note: A small number of students in each district were manually placed into a starting level by their teachers rather than completing the placement assessment. These students are not reflected in the above percentages, which therefore may not sum to exactly 100%.
Staff within each district had access to an online progress monitoring dashboard (myLexia), a standard program component available to all Core5 users. The myLexia dashboard allows teachers and administrators to regularly review student-level progress monitoring data and clearly identify how to differentiate instruction to best meet the needs of all students. The dashboard can be tailored for teachers and administrators so that teachers view their class-wide data, whereas administrators view school- or district-wide data. Specifically, the myLexia dashboard presents teachers with the following data to monitor student progress: (1) each student's personalized usage targets and whether they have been met, (2) which students need offline practice activities to promote skill generalization (Skill Builders), (3) which students need teacher-led instruction (Lexia Lessons), and (4) data-based recommendations to inform and guide teachers' instructional planning. These data reflect students' real-time performance in the online program, and teachers can use these data to plan for or adapt offline instruction to best meet the needs of each student (Hilliard 2015; Horn and Staker 2011). In sum, the myLexia platform is an example of an evaluative and iterative implementation strategy (Cook et al. 2019), and educator use of the myLexia platform reflects the degree to which this implementation support strategy is delivered (Carroll et al. 2007). All schools also had access to the Lexia Helpdesk and standard informational materials provided to all Core5 users.
5.2 Systems-Level Implementation Support
In addition to the standard implementation support described above, schools implementing Lexia programs may opt to receive additional supports provided by systems-level consultants. These systems-level consultants are employed by Lexia and have extensive training in literacy and language learning, product knowledge, and implementation best practices, in addition to significant practitioner experience. Consultants leverage this expertise to support school-level implementation leadership teams. The exact composition of these teams varies by school and is determined on an individual basis based on needs, personnel, and other factors. In general, teams include at least one building- and/or district-level administrator (e.g., principal, curriculum director) in addition to teacher leaders and other relevant personnel (e.g., instructional coaches, reading specialists). As noted above, both Washington and Arlington enrolled in these supports, but only Washington accessed them during the 2022–2023 school year.
A systems-level consultant partnered with implementation leadership teams at each of the 10 schools in the Washington School District that used Core5 during the 2022–2023 school year. One consultant provided services to all schools in our sample. The overarching goal of the systems-level consultation was to support school teams in improving their implementation leadership skills and in selecting and using implementation support strategies identified by Cook et al. (2019) to promote intervention fidelity. Specifically, the systems-level consultant and implementation leadership teams met one to two times before implementation began to develop implementation plans that outlined the unique needs, goals, milestones, and timelines of the school and/or district, as well as the specific resources, training, or other supports necessary for program implementation. Then, throughout the school year, the systems-level consultant and implementation leadership teams met two to three additional times to monitor Core5 intervention fidelity and student progress monitoring data, revise the implementation plan as needed, discuss implementation experiences and challenges, and identify implementation strategies that best met ongoing needs.
Each meeting between the systems-level consultant and implementation leadership teams lasted approximately 1 h. Meetings were led by the systems-level consultant, who engaged implementation leadership teams in a structured process of goal setting and planning. Throughout the school year, these meetings generally began with a summary overview of Core5 usage and progress monitoring data, and insights gleaned from this data informed subsequent conversations and decisions. Consistent with a collaborative consultation model (Kampwirth and Powers 2015), the consultant engaged each leadership team as partners in the decision-making and problem-solving process.
One of the primary implementation strategies employed by these teams was professional learning (PL) to train and educate implementers on how best to implement the multiple components of blended learning programs (Cook et al. 2019). Together, the systems-level consultant and local implementation teams collaboratively identified PL needs of implementation leadership and educators and coordinated appropriate PL sessions to address those needs. PL sessions were scheduled in between implementation leadership team meetings and occurred as needed throughout the school year. PL included sessions intended to help leadership and educators engage with the myLexia platform, use student data to plan and/or modify instruction, use school or district data to monitor overall progress, or use the program's offline resources to provide intervention, support, or practice matched to student needs.
6 Measures
6.1 Measures of Implementation Support Receipt and Access
Permanent products (e.g., implementation plans, meeting activities and notes) were reviewed and coded to assess systems-level implementation support receipt, or the extent to which Washington and Arlington School Districts received the systems-level consultation services as intended. These permanent products were proactively recorded by the systems-level consultant assigned to provide services to each district. Records indicated that Arlington School District did not access any systems-level consultation services. Therefore, no implementation support data were available for this district, confirming its appropriateness as a comparison district.
For Washington School District, permanent products included implementation support plans developed collaboratively in meetings between the systems-level consultant and leadership teams at each of the 10 participating schools. These semi-structured plans included narrative descriptions of goal setting, status updates, professional learning activities, and next step planning. We reviewed and coded implementation plans from Washington School District to assess the quality of the implementation support services provided.
A key implementation support available to all schools was the myLexia dashboard, which provides implementers with student-level progress monitoring data and instructional recommendations. Educator-level access, or educators' use of the myLexia dashboard, was assessed by (1) number of logins or the total number of times that an educator logged into the desktop myLexia platform during the 2022–2023 school year and (2) the number of unique days across the school year in which an educator logged into the desktop myLexia platform. For instance, an educator who logged into the platform three total times, with each login occurring on September 15, would have a value of 3 for number of logins and a value of 1 for days of use.
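As an illustration, the sketch below computes these two access metrics from a list of login timestamps and reproduces the worked example in the text. The data layout and field names are our assumptions, not the actual myLexia export format.

```python
# Sketch of the two educator access metrics. The timestamp-list input is
# an assumption for illustration, not the actual myLexia data format.
from datetime import datetime

def access_metrics(logins: list[datetime]) -> dict[str, int]:
    """Total logins and unique days of use for one educator."""
    return {
        "number_of_logins": len(logins),
        "days_of_use": len({ts.date() for ts in logins}),
    }

# The example from the text: three logins, all on September 15, yield
# a value of 3 for number of logins and a value of 1 for days of use.
sept15 = [datetime(2022, 9, 15, h) for h in (8, 12, 15)]
print(access_metrics(sept15))  # {'number_of_logins': 3, 'days_of_use': 1}
```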
6.2 Student-Level Measures of Intervention Fidelity
Student-level intervention fidelity was assessed by two measures: (1) the average number of minutes per week that students engaged with the online instruction and (2) the number of weeks in which students' average usage time was equal to or greater than the personalized usage targets. Personalized usage targets were based on student risk level and generally ranged from 20 to 80 min per week. Students at higher risk of not achieving end-of-year, grade-level benchmarks received higher usage targets. Conversely, students who had already met grade-level benchmarks had usage targets of 0 min.
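A minimal sketch of how these two fidelity measures can be computed from weekly usage records follows; the parallel-list data layout and variable names are illustrative assumptions, not the study's actual data structures.

```python
# Sketch of the two student-level fidelity measures, assuming parallel
# lists of weekly minutes used and weekly personalized targets
# (hypothetical layout, for illustration only).

def fidelity_metrics(minutes: list[float], targets: list[float]) -> dict:
    """Average minutes per week and number of weeks meeting the target."""
    return {
        "avg_minutes_per_week": sum(minutes) / len(minutes),
        "weeks_met_usage": sum(m >= t for m, t in zip(minutes, targets)),
    }

# Three example weeks against a constant 40-minute target:
print(fidelity_metrics([45, 60, 20], [40, 40, 40]))
# {'avg_minutes_per_week': 41.666..., 'weeks_met_usage': 2}
```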
6.3 Educator-Level Measures of Intervention Usage
Educator-level intervention usage was assessed by the number of offline activities teachers delivered, which included: (1) offline practice activities to promote generalization (Skill Builders), (2) teacher-led intervention activities (Lexia Lessons), and (3) printing certificates of completion when a student demonstrated mastery in the online instruction. These data were self-reported by teachers and recorded in the myLexia platform. Educator-level intervention usage was calculated as the sum of the three types of offline activities.
6.4 Student-Level Outcome Data
Student-level outcome data reflected students' growth in reading skills, assessed through two data sources: (1) number of Core5 program levels completed and (2) grade-level material gained. The number of Core5 levels completed was calculated by subtracting the level at which a student started the program from the student's end-of-year level. Core5 has 21 total levels that span material from Grades PreK-5; thus, the maximum number of program levels that could be completed was 21. Grade-level material gained was calculated by subtracting the grade level of material a student was working on when they started the program from the grade level of material they were working on at the end of the school year. For example, a student who started the year working on material at the 1st grade level and ended the year working on material at the 3rd grade level would have a grade-level material gained value of 2.
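Both outcome measures are simple end-minus-start differences, as the short sketch below illustrates; the numeric coding of levels and grade-level material is assumed for illustration.

```python
# Sketch of the two outcome calculations. Core5 levels are coded 1-21;
# grade-level material is coded numerically (e.g., 1.0 = Grade 1 material).

def levels_completed(start_level: int, end_level: int) -> int:
    """Number of Core5 program levels completed during the year."""
    return end_level - start_level

def glm_gained(start_grade: float, end_grade: float) -> float:
    """Grade-level material gained during the year."""
    return end_grade - start_grade

# The worked example from the text: a student who starts the year on
# Grade 1 material and ends on Grade 3 material gains 2 grade levels.
print(glm_gained(1.0, 3.0))  # 2.0
```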
7 Data Analysis
At the end of the 2022–2023 school year, the first author downloaded all deidentified secondary data from the Lexia data warehouse. This included the qualitative implementation support data, student-level intervention fidelity data, educator-level intervention usage data, and student outcome data. We extracted and organized the qualitative systems-level implementation support data (in the form of implementation support plans) in an Excel file before data analysis. The quantitative fidelity and outcome data were cleaned (e.g., to remove a small number of educators who transferred from the elementary to high school setting, and a small number of students outside our grade range of interest) and transferred to SPSS for analysis.
To ensure the trustworthiness of our qualitative analyses and decrease the potential that our own viewpoints or biases affected the results, we employed several strategies throughout the research process, as outlined by Ahmed (2024). First, we established credibility by building rapport between the Lexia staff and external researcher over a prolonged period of time (i.e., approximately 1 year), and explicitly incorporated discussions regarding reflexivity and potential biases throughout the research process. Second, our mixed methods approach triangulated multiple sources of data to support our overarching findings. Third, we sought to provide significant detail regarding our systematic sampling strategy and contextual factors to promote transferability. Finally, we employed a rigorous documentation system, audit trails, and memos to increase the dependability and confirmability of our findings.
7.1 Research Question #1: Qualitative Analysis
To better understand challenges implementation leadership teams face when implementing a blended learning EBP and the implementation supports delivered to address those challenges, we coded the implementation support plans for each of the ten schools in Washington School District using inductive thematic analysis procedures described by Braun and Clarke (2006) and Maguire and Delahunt (2017). All coding was conducted by the two authors, neither of whom were involved in the systems-level consultation or implementation process. In total, we coded 47 distinct implementation plans across schools. Each plan documented the discussions and activities that occurred during meetings between the systems-level consultant and school-level implementation leadership teams. Each of these meetings lasted approximately 1 h and teams met 2–3 times across the school year. Each plan documented a specific stage of the implementation support process: goal setting (10 out of 47, or 21% of plans coded), status updates (40% of plans coded), and next step planning (38% of plans coded). Additionally, when PL was identified as the appropriate implementation support, implementation plans noted the PL sessions selected.
After extracting, cleaning, and organizing all the qualitative data, the first author engaged in repeated reading of the data, noting common patterns or themes (Step 1). Once thoroughly familiar with the data, the first author generated an initial set of codes to reflect themes identified in Step 1. Although we were aware of implementation leadership and implementation strategies before coding, we did not assume a priori that the implementation plans would precisely map onto the constructs described in the literature. Rather, we allowed these codes to emerge from the data. These initial codes were applied to the data in several systematic, comprehensive reviews. Several initial codes were revised, condensed, or deleted based on this iterative review. When the first author was satisfied that the initial coding scheme had been exhaustively applied to the data, codes were analyzed and sorted into themes (Braun and Clarke 2006). At this stage, the first author developed a visual representation of the emerging themes to aid in sorting and organizing the information.
Next, these themes were reviewed by (1) rereading all coded segments in relation to identified themes, and (2) rereading the entire data set in relation to identified themes. As this process progressed, the definitions of individual codes, themes, and their application to the data set were iteratively refined. This process continued until additional reviews did not result in any substantial changes to either theme/code definitions or their application to the data. At this point, the codebook, visual representation of initial thematic structure, and full data set were shared with the second author who served as the second coder. The second author independently analyzed the data set using the codebook developed by the first author.
After the second coder had completed her analyses, we compared the results obtained by the first and second coders. Discussion between both coders resulted in the decision to collapse several initial codes to better reflect the thematic structure of the data. All data were recoded using these collapsed codes and agreement between coders was calculated. We aimed to achieve at least 80% agreement (Miles and Huberman 1994), calculated as the number of agreements divided by the sum of agreements and disagreements between coders. Our calculations indicated 81% overall agreement between coders; as we had achieved our predetermined threshold, we proceeded to interpret the results of this analysis.
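For transparency, the agreement index reduces to a few lines of code, as sketched below; the theme labels in the example are hypothetical, not actual study data.

```python
# Sketch of the inter-coder agreement index (Miles and Huberman 1994):
# the number of agreements divided by agreements plus disagreements.
# Theme labels below are hypothetical examples, not actual study data.

def percent_agreement(coder1: list[str], coder2: list[str]) -> float:
    agreements = sum(a == b for a, b in zip(coder1, coder2))
    return agreements / len(coder1)

codes_a = ["Data", "Student Usage", "Blended Learning", "Data", "Contextual Fit"]
codes_b = ["Data", "Student Usage", "Blended Learning", "Data", "Motivation/Buy-In"]
print(percent_agreement(codes_a, codes_b))  # 0.8, i.e., 80% agreement
```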
7.2 Research Question #2: Quantitative Analyses
Our second research question addressed the extent to which there were differences in intervention fidelity and student outcomes across schools with varying levels of systems-level supports. We calculated district-level descriptive statistics (M, SD) for all educator-level access and usage and student-level fidelity and outcome measures for our full sample. We then compared means for each variable across districts using one-way ANOVAs and/or Kruskal-Wallis tests as appropriate. When these tests indicated significant differences, we conducted post hoc pairwise comparisons using Fisher's least significant difference (LSD) test (ANOVA) or Dunn's test (Kruskal-Wallis) to determine the nature of these differences. To reduce the risk of Type I error, we adjusted the significance level for all post hoc tests using the Bonferroni correction for multiple comparisons. Because each post hoc test involved 3 comparisons, our adjusted α was 0.05/3 ≈ 0.017.
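The analysis logic (Levene's test to choose between ANOVA and Kruskal-Wallis, then Bonferroni-adjusted post hoc comparisons) could be sketched as follows. The study itself used SPSS; this scipy-based version and its simulated district data are ours, for illustration only.

```python
# Illustrative sketch of the analysis pipeline using scipy (the study
# itself used SPSS). The simulated district data are hypothetical.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
washington = rng.normal(80, 38, 300)  # e.g., average minutes per week
clinton = rng.normal(49, 27, 300)
arlington = rng.normal(57, 33, 300)
groups = [washington, clinton, arlington]

# 1. Test homogeneity of variance with Levene's test.
_, levene_p = stats.levene(*groups)

# 2. One-way ANOVA if variances are homogeneous; Kruskal-Wallis otherwise.
if levene_p >= 0.05:
    stat, p = stats.f_oneway(*groups)
else:
    stat, p = stats.kruskal(*groups)

# 3. If the omnibus test is significant, follow up with pairwise post hoc
#    tests (Fisher's LSD or Dunn's test) at a Bonferroni-adjusted alpha.
adjusted_alpha = 0.05 / 3  # three pairwise comparisons, about 0.017
print(f"omnibus p = {p:.4g}; post hoc alpha = {adjusted_alpha:.4f}")
```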
8 Results
8.1 RQ1: Challenges Encountered and Implementation Supports Delivered
Six major themes were identified in the qualitative coding of implementation support plans: Student Usage, Data, Building Knowledge, Blended Learning, Contextual Fit, and Motivation/Buy-In. These themes, summarized in Table 3, highlight the challenges encountered by implementation leadership teams and the implementation supports delivered to address those challenges.
Theme | Definition | Frequency |
---|---|---|
Student Usage | Emphasizing student usage and/or progress in Core5 (e.g., helping students meet personalized weekly usage targets, celebrating student usage or progress). | 72% |
Data | Reviewing and using the data provided through the myLexia platform to monitor student progress, identify needs, and plan or modify instruction. | 43% |
Building Knowledge | Developing school staff's knowledge and skills related to literacy teaching and learning to facilitate program implementation. | 40% |
Blended Learning | Implementing the blended learning model, with an emphasis on using offline program components to supplement and extend the instruction provided in the online program. | 28% |
Contextual Fit | Addressing challenges pertaining to implementing Core5 within each individual school's context. | 26% |
Motivation/Buy-In | Recognizing the importance of student motivation to use the program, staff buy-in or motivation to implement the program, or both. | 19% |
- Note: Frequency reflects the percent of the 47 distinct implementation plans coded in which both coders agreed that the theme was present.
8.2 Student Usage
Student Usage was the most common theme discussed during meetings between the systems-level consultant and local implementation leadership teams, and was present in 34 of the 47 implementation plans coded. Teams noted that student usage could be a concern at the student or teacher level: individual students struggled to meet weekly usage targets, and some teachers had difficulty allocating sufficient time for students in their class to meet their usage targets.
When individual students struggled to meet usage targets, one implementation plan indicated that teachers would meet with these students one-on-one to “investigate why and then create a plan to increase progress in relation to their usage.” When teachers struggled to allocate time for their students to meet weekly usage targets, implementation supports often involved targeted assistance from a building-level instructional coach. For example, one implementation plan indicated that a coach would work with these teachers individually to “dig into the ‘why’ and support them.” Leadership teams noted that these individualized student and teacher supports were generally successful in increasing the numbers of students meeting their weekly usage targets.
8.3 Data
Data was the next most common theme and was present in 20 of the 47 implementation plans. Implementation leadership teams discussed the importance of ensuring that educators regularly accessed, interpreted, and used the class-level data provided in myLexia to monitor student progress, identify needs, and plan or modify instruction. For example, one implementation plan noted, “Teachers are ready to dig deeper into myLexia reports to identify where kids are struggling and plan small-group instruction that meets those skill needs.” School teams also noted the challenges involved in using literacy data to inform instruction. For example, another implementation plan identified a need to “Improve [teachers'] Lexia data analysis skills to design strategic instruction based on the data…We really want [teachers] to use the data in a meaningful way.”
This challenge was frequently addressed via PL. One implementation plan outlined a PL session in which teachers received guidance and resources to improve their analysis of myLexia reading data and help them identify instructional changes to support students based on this data. Another implementation plan outlined a PL session in which educators received a pedagogical overview of word-reading automaticity and reading fluency, explored Core5's word-reading automaticity and reading fluency strand, and were provided links to all Core5 resources related to word-reading automaticity and reading fluency to support the development of instructional action plans based on student data.
8.4 Building Knowledge
Building Knowledge was the next most common theme and was present in 19 of the 47 implementation plans. Building Knowledge included identifying educator PL needs, scheduling and delivering PL sessions to address those needs, and/or sharing specific resources (e.g., informational or training materials) designed to build the knowledge and skills necessary to implement Core5. Delivering appropriate PL was a common implementation support used to address a variety of challenges, and the Building Knowledge theme highlighted this process. For example, one implementation plan developed at the start of the school year stated that the leadership team would “Identify educator needs to help focus professional learning and next steps needed for support.” Implementation leadership teams also noted the challenges of translating PL into practice. Another implementation plan addressed this challenge by “Intentionally [planning] time for teachers to process and apply new learning” gained through PL sessions.
Leadership teams themselves also needed to build knowledge so that they could best support teachers implementing the program. One implementation plan noted that, in preparation for upcoming grade-level meetings, building leadership received a reflection guide to “help guide conversations and ask probing questions around best practices” pertaining to Core5. Another plan outlined a process in which building-level instructional coaches met with grade-level teams to discuss how they used Core5 to “intentionally plan interventions;” these discussions, in turn, provided the leadership team with a “better idea [of] how to focus PL,” for instance by “considering instructional grouping and/or progress monitoring sessions by grade levels.”
8.5 Blended Learning
Blended Learning was present in 13 of the implementation plans, highlighting the unique challenges of this pedagogical approach. For example, one implementation plan stated, “Teachers will use Lexia as a teaching tool and not just a computer program…. We'll begin by making sure teachers understand the blended learning model.” Another implementation plan noted that the blended learning model required that teachers “are doing more than just putting kids on the computer to meet minutes.”
Implementation plans outlined strategies intended to ensure that the blended learning model was implemented appropriately. These strategies focused on familiarizing teachers with the program's offline resources (such as Lexia Lessons) and encouraging them to use these resources as needed (e.g., ‘Encourage more regular use of Lexia Lessons’). For example, one implementation plan stated, “Lexia Lessons will be delivered as students are flagged on the Needs Instruction tab.” Another implementation plan stated, “Teachers should deliver targeted skill instruction with Lexia Lessons when indicated.”
8.6 Contextual Fit
The next theme, termed Contextual Fit, was present in 12 of the 47 implementation plans. Challenges pertaining to Contextual Fit included aligning Core5 with other data sources or curriculum products; logistical considerations such as scheduling, rostering, or technological issues; or balancing Core5 with other priorities specific to the school context. For example, one implementation plan illustrated a logistical barrier: “Kids have historically been allowed to take [iPads] home, and we now have many missing. This is a major barrier for implementation of all of our tech programs.” Following its identification, this challenge was addressed via a policy change (students in this school were no longer permitted to take iPads home).
In another example, an implementation plan highlighted how competing priorities within a school context could impact implementation: “One 3rd grade teacher is piloting [a math program]. Even though she is still using Core5, we expect her usage to decline as she's struggling for time.” The need for support strategies addressing challenges related to contextual fit was highlighted in another implementation plan, which stated, “When meeting with district leaders, [the systems-level consultant] will discuss the need for schools to have guidance around balancing data and instruction between [interim assessment] and Lexia.”
8.7 Motivation/Buy-In
Finally, Motivation/Buy-In was present in 9 of the implementation plans coded. Some implementation plans highlighted a need to increase motivation or buy-in. For example, one plan identified a need to “motivate and engage older students” and noted that “this starts with motivating and engaging staff.” Other plans celebrated growth in this area as the result of implementation supports. For example, one plan indicated an intention to incorporate school-wide celebrations, noting that “teachers are excited about Lexia and there's much more buy-in” and “we want to continue that momentum.”
9 RQ2: Implementation Supports, Intervention Fidelity, and Student Outcomes
We addressed our second research question by quantitatively examining and comparing educator-level implementation support access, educator-level intervention usage, student-level intervention fidelity, and student-level outcomes across the 3 districts included in our sample: Washington (which received implementation support via systems-level consultation services), Arlington (which enrolled in systems-level consultation services but did not access or receive these services), and Clinton (which did not enroll in or receive systems-level consultation services).
9.1 Educator-Level Implementation Support Access
Means and standard deviations for all educator implementation variables are summarized in Table 4. Results from the ANOVA indicated significant differences across districts in both total logins, F(2, 663) = 5.93, p = 0.003, η² = 0.018, and days of use, F(2, 663) = 10.55, p < 0.001, η² = 0.031. Post hoc comparisons using Fisher's LSD indicated that educators in Washington School District had a significantly greater number of logins (p = 0.001) and days of use (p < 0.001) compared to Clinton School District. Arlington School District also had significantly greater days of use (p = 0.002) compared to Clinton School District. There was no significant difference between Washington School District and Arlington School District on either variable (number of logins p = 0.88; days of use p = 0.92).
Characteristics | Washington School District¹ M (SD) | Clinton School District² M (SD) | Arlington School District³ M (SD) |
---|---|---|---|
Educator implementation | |||
Number of logins | 64.12 (82.25) | 40.51 (80.23) | 65.69 (95.73) |
Days of use | 39.48 (42.30) | 23.49 (40.93) | 40.00 (43.74) |
Offline components delivered | 66.37 (145.33) | 18.75 (43.54) | 49.51 (126.27) |
Student implementation | | | |
Average minutes per week | 79.55 (37.60) | 49.40 (27.20) | 57.34 (32.76) |
Weeks met usage | 18.52 (10.39) | 14.03 (12.66) | 13.36 (11.77) |
Student outcomes | | | |
Levels completed | 3.80 (2.60) | 3.08 (2.49) | 2.82 (2.31) |
GLM gained | 1.13 (0.87) | 0.89 (0.80) | 0.85 (0.78) |
- Note: Pseudonyms are used for the names of the school districts.
- Abbreviation: GLM, Grade Level Material.
- 1 Washington received systems-level implementation supports.
- 2 Clinton did not enroll in systems-level supports and did not receive them.
- 3 Arlington enrolled in systems-level supports but did not access or receive them.
9.2 Educator-Level Intervention Usage
Levene's test indicated that the assumption of homogeneity of variance was violated for the offline components delivered variable, F(2, 663) = 25.73, p < 0.001. Therefore, we used the Kruskal-Wallis nonparametric test to examine differences in this variable across districts. Results showed significant differences in offline components delivered, H(2) = 31.98, p < 0.001, η² = 0.05. Post hoc pairwise comparisons using Dunn's test found that educators in Washington and Arlington School Districts delivered significantly more offline components than educators in Clinton (both p < 0.001). The difference between Washington and Arlington was not significant (p = 0.94).
9.3 Student-Level Intervention Fidelity
Means and standard deviations for the student implementation variables are also reported in Table 4. Levene's test indicated that the assumption of homogeneity of variance was violated for both the average minutes per week variable, F(2, 6205) = 73.91, p < 0.001, and the weeks met usage variable, F(2, 6205) = 93.04, p < 0.001. Therefore, we proceeded with Kruskal-Wallis tests to examine differences in student-level intervention fidelity across districts.
Results indicated that average minutes per week differed significantly across districts, H(2) = 1000.90, p < 0.001, η² = 0.16. Post hoc pairwise comparisons using Dunn's test found that students in Washington School District used Core5 for significantly more minutes per week, on average, than students in either Clinton (p < 0.001) or Arlington (p < 0.001). Average minutes per week also differed significantly between Clinton and Arlington (p < 0.001).
The number of weeks in which students met recommended usage targets also differed significantly across districts, H(2) = 310.10, p < 0.001, η² = 0.05. Post hoc comparisons indicated that students in Washington School District met recommended usage targets in significantly more weeks than did students in either Clinton (p < 0.001) or Arlington (p < 0.001). Clinton and Arlington did not differ significantly on this variable (p = 0.26).
9.4 Student-Level Outcomes
Means and standard deviations for student outcome variables are reported in Table 4. Levene's test indicated that the assumption of homogeneity of variance was violated for the number of levels completed variable, F(2, 6205) = 3.93, p = 0.02. Therefore, we examined differences in this variable across districts using the Kruskal-Wallis test. Results indicated that the number of Core5 levels completed differed significantly across districts, H(2) = 167.84, p < 0.001, η² = 0.03. Post hoc pairwise comparisons using Dunn's test indicated that students in Washington School District completed significantly more Core5 levels than students in Clinton (p < 0.001) or Arlington (p < 0.001). Levels completed also differed significantly between Clinton and Arlington (p = 0.01).
Finally, we examined differences in grade level material (GLM) gained using one-way ANOVA, as the assumption of homogeneity of variance was satisfied for this variable, F(2, 6205) = 2.70, p = 0.07. Results indicated significant differences in GLM gained across districts, F(2, 6205) = 70.67, p < 0.001, η² = 0.02. Post hoc LSD tests indicated that GLM gains in Washington School District were significantly higher than in Clinton (p < 0.001) or Arlington (p < 0.001). The difference in GLM gained between Clinton and Arlington was not significant (p = 0.21).
10 Discussion
Adopting and supporting EBP implementation is essential to improve students' reading proficiency. Despite the critical need to improve student reading skills, research has highlighted the challenges of adopting, implementing, and scaling evidence-based literacy programs (e.g., Fixsen et al. 2005). Blended learning programs offer a unique approach to tailor supports for individual students in a flexible, scalable format. This study was the first to explore how systems-level consultation services can support implementation leadership, increase intervention fidelity of a blended learning program, and ultimately improve student reading skills. We employed mixed methods to identify the challenges encountered during implementation, understand how implementation supports addressed those challenges, and examine the extent to which these supports were associated with increased intervention fidelity and student outcomes. Our results align with prior research regarding effective implementation leadership and implementation supports (Lyon et al. 2018; Cook et al. 2019) and extend this literature by highlighting the unique dynamics of blended learning reading programs, which we describe next.
10.1 Challenges Implementing Blended Learning Reading Programs and How to Address Them
Our qualitative results aligned with prior research, suggesting that educator knowledge, skills, and buy-in are common challenges faced during school-based EBP implementation (Fixsen et al. 2005), that implementation strategies are needed to address those challenges (Cook et al. 2019), and that implementation leadership is needed to ensure that those strategies are delivered to the educators who need them (Lyon et al. 2022). Our results identified topic areas that implementation leadership may want to be aware of when implementing blended learning reading programs, as these may represent common areas in need of additional training and coaching for educators.
First, the most common challenge faced by implementation leadership pertained to ensuring that students had adequate time to engage with the EBP. This challenge was evident at both the student and teacher levels and is consistent with a large body of research identifying time as an important resource that can impact implementation (e.g., Fixsen et al. 2005). Here, this challenge was identified in the context of regular reviews of progress monitoring data (inclusive of usage time) that were coordinated by the systems-level consultant. When usage concerns surfaced, implementation leadership identified the level at which the barrier occurred (e.g., student or teacher) and addressed it via individualized supports. When students struggled to meet usage targets despite being provided adequate time to do so, teachers were encouraged to meet with them individually to troubleshoot and offer support. When teachers struggled to provide students adequate time, building-level instructional coaches met with them individually to understand the time barriers they faced and to develop a plan to address them. Thus, systems-level consultants supported implementation leadership in using evaluative and iterative strategies and interactive assistance to implementers (Cook et al. 2019) to identify and address this important barrier.
Second, several themes highlighted the need for knowledge of the specific content of the EBP, in this case, knowledge about literacy and blended learning. More specifically, these themes highlighted how complex it is to foster student reading skills (Build Knowledge), use data to inform and adjust instruction (Data), balance and integrate individualized online activities with in-person whole-group instruction (Blended Learning), and manage implementation within the context of multiple competing demands and priorities (Contextual Fit). Implementation leaders must have sufficient knowledge of these topics to answer questions about EBP implementation (Lyon et al. 2022), and qualitative findings highlighted how systems-level consultants could simultaneously identify needs and deliver PL to leaders and educators managing these activities in the classroom (Cook et al. 2019), as well as foster knowledgeable leadership during systems-level consultation sessions (Lyon et al. 2022).
Next, results highlighted the need for buy-in across leadership, teachers, and students, with implementation plans identifying specific challenges with buy-in among teachers and students and how to address those challenges. Cook et al. (2019) highlight the need to engage all parties affected by EBP implementation, and implementation plans often noted the need to engage teachers and students individually through instructional coaches and other leaders. This was further nuanced by the buy-in needed to implement blended learning with fidelity rather than simply placing students on technology for a predetermined number of minutes per week. It was critical for educators to view blended learning as a means to better understand their students' instructional needs, address those needs by adjusting their classroom instruction, and provide more intensive, face-to-face instruction to students who needed it.
In sum, these challenges were typically addressed through a suite of strategies identified by Cook et al. (2019), including both in-person and online professional learning, engaging and developing relationships with all parties, supporting implementers through praise and other incentive structures, tailoring the EBP to fit the local context, and providing interactive assistance to implementers through a combination of activities such as consultation and coaching.
10.2 Systems-Level Implementation Supports Are Associated With Improved Outcomes
Results from our quantitative analyses provide promising evidence that systems-level consultation can foster implementation leadership and the use of implementation strategies (Cook et al. 2019; Lyon et al. 2022), improve student-level intervention fidelity and educator-level intervention usage, and positively impact student reading outcomes. Students in schools that received implementation support via systems-level consultation demonstrated significantly higher levels of intervention fidelity than students in either comparison district: they used the program for more minutes per week, on average, and met personalized usage targets in significantly more weeks than students in districts that did not receive implementation support. Further, students in schools that received implementation support made substantially more progress in the program than students in comparison schools. Specifically, students in Washington School District advanced through almost 4 of the program's 21 total levels, on average, across the school year, whereas students in comparison districts completed, on average, 3 or fewer levels. Additionally, students in Washington School District gained more than one grade level across the school year, while students in comparison districts gained less than one grade level in the same period. These results are noteworthy because this study is the first to provide preliminary evidence that systems-level consultation to support implementation of a blended learning reading program is empirically linked with positive student outcomes.
The relationship between systems-level services and educator-level engagement with the program is somewhat less clear. Washington School District, which received implementation support, had high levels of educator access and usage on all measures, including use of the myLexia platform and delivery of offline reading lessons. Arlington School District also had high levels of myLexia platform use; however, this did not translate into greater delivery of offline reading lessons, student-level fidelity or program usage, or student reading outcomes. It may be that educators need to do more than simply access progress monitoring data on an online platform; rather, they need PL and coaching to translate those data into instructional strategies that can be implemented in the classroom (Joyce and Showers 2002).
This hypothesis aligns with our qualitative results, which illustrated that incorporating literacy data from a blended learning program into classroom instruction was a common challenge faced by educators. When these needs were identified by implementation leadership during systems-level consultation, leaders were able to engage the consultants to provide PL to their educators to improve these skills and increase fidelity. Importantly, PL sessions were selected based on identified needs of implementers, provided coaching or expert support to address those needs, and included adequate time for participants to reflect on what they learned and plan for applications to classroom practice (e.g., Darling-Hammond et al. 2017).
Consistent with the notion of Contextual Fit, contextual factors may also have impacted fidelity and student outcomes in Arlington School District. Despite the availability of systems-level implementation supports for schools in this district, records indicated that Arlington schools did not access or receive these supports. It is possible that this lack of receipt was due to a change in district leadership that occurred just before the school year in which this study was conducted: the prior administrator enrolled in the systems-level supports, but the subsequent administrator did not have buy-in regarding Core5 implementation or systems-level services. In their second year of implementation, educators in Arlington may have been motivated to individually log in and review their progress monitoring data, but without implementation leadership and PL delivered by systems-level consultants, this willingness did not translate into intervention usage or student outcomes.
11 Limitations and Suggestions for Future Research
Despite the merits of this study, there were also several limitations that are important to note. First, although data from 20 schools, 666 educators, and 6,208 students were included in this study, this is a very limited representation of the 18,300 schools and approximately 3.7 million students currently engaged with Core5 in the U.S. This small sample was necessary to employ a mixed methods approach, which included a resource-intensive process of individually coding 47 implementation plans. However, the small sample drawn from one state limits the generalizability of the study. This will be important to address in future research given the growing popularity of blended learning (e.g., Wilkes et al. 2020).
Second, the study lacked an experimental design, so we cannot make causal inferences that the systems-level supports were the primary mechanism for improved intervention fidelity and student outcomes. Any number of other factors may have contributed to the improved outcomes, including selection bias or other district-level reading initiatives. Indeed, it is worth noting that the systems-level consultation services described here require an additional financial commitment on the part of participating districts; districts that purchase them may therefore already be more invested in the success of program implementation. Future research that controls for these and other potential confounds via random assignment to condition will be imperative in establishing the true impact of these services.
Additionally, our reliance on indirect measures to assess educator engagement with the online platform and delivery of offline program components is a limitation of this study, particularly the use of self-report data to assess educator intervention usage. Limitations of self-report data include the potential for social desirability bias, reporter error, and the overall subjectivity of data obtained in this manner (e.g., Nederhof 1985; Bauhoff 2023). Future research should address this limitation by incorporating direct observation methods to assess educator implementation fidelity of offline program components delivered to students.
Further, we did not employ explicit measures of implementation leadership (Lyon et al. 2022) or of school-based implementation strategies (Cook et al. 2019). However, we believe our approach, which used secondary data, provided a more authentic assessment of how blended learning reading programs are implemented in contemporary schools, thereby increasing external validity. Future research may wish to employ a greater variety of qualitative methods, such as interviews, focus groups, and direct observations, to gain a deeper understanding of how implementation leadership is developed, the unique challenges of implementing blended learning reading programs across diverse contexts, and how blended learning reading programs translate into student outcomes. Future research may also wish to employ a greater variety of quantitative data, including standardized assessments of academic skills external to the EBP (e.g., state assessment data, computer-adaptive assessments), validated measures of implementation leadership (Lyon et al. 2022), and other validated measures of leadership and organizational factors, to better understand the systems-level implementation process of blended learning reading programs. Finally, future research may wish to employ more rigorous causal research designs and quantitative methods to increase confidence that systems-level consultation to support implementation does, in fact, improve educator engagement and student-level intervention fidelity, in turn improving student reading skills.
12 Conclusion
School psychologists must support students' reading skill development, and they may do this at scale by supporting schools to adopt and implement blended learning reading programs. This study was the first to document the unique challenges faced by implementation leadership throughout the implementation process, explore how systems-level consultation and implementation strategies can be used to improve intervention fidelity, and examine the extent to which intervention fidelity is associated with improved student outcomes. Given school psychologists' competency in systems change and systems-level consultation (National Association of School Psychologists 2020), they are well positioned to deliver services such as those described here. Results of this study highlighted the importance of implementation leadership in enacting strategies to support educators and students. This study offers a starting point for researchers and practitioners to deepen their understanding of the unique role of implementation leadership in fostering student reading proficiency.
Acknowledgments
The authors wish to acknowledge Jennie Tober, John Abner, Raj Chattergoon, and Liz Brooke for their assistance in conducting this study.
Ethics Statement
This study was conducted in accordance with the ethical standards described in the 1964 Declaration of Helsinki and determined to be exempt by the Michigan State University Human Research Protection Program, Office of Regulatory Affairs on August 21, 2023.
Consent
When schools purchase Lexia® products or services, they provide consent for their anonymized data to be used by Lexia for research purposes.
Conflicts of Interest
This submission evaluates the effectiveness of a commercial product and services. Cara M. Bliss is employed by Lexia®, the company offering the product and services evaluated herein. Cara M. Bliss does not receive commission on sales of these products or services. The results and conclusions presented in this article are those of the authors and do not necessarily reflect the views of Lexia®. The other author declares no conflicts of interest.
Open Research
Data Availability Statement
Supporting data for this study are not available. This study was conducted using data that Lexia receives as part of its educational programming, through which Lexia obtains consent from the data subjects to use their anonymized data solely for Lexia's own research and analysis purposes. Participants have not given specific consent to share such data, in any form, with third parties.