Leslie Villegas
Senior Policy Analyst, Education Policy
Recent iterations of the Elementary and Secondary Education Act of 1965 (ESEA), from No Child Left Behind (NCLB) in 2002 to, most recently, the Every Student Succeeds Act (ESSA) in 2015, have brought improvements to the accountability systems used to track how well schools are serving the academic and linguistic development of students identified as English learners (ELs). Before NCLB, it was common for schools not to assess ELs' content knowledge in areas such as English language arts (ELA) and math, and until ESSA, ELs' progress towards English proficiency was excluded from these systems. This means that ELs' linguistic and academic outcomes were not considered when determining how well a school was performing and meeting the needs of its students.
Today, ELs' performance on statewide summative assessments in subjects like math and ELA, and whether they are making progress towards English language proficiency (ELP), are integral parts of how we hold schools accountable. Despite these evolutions, researchers and advocates have for years been calling attention to the limitations of the data produced by statewide summative assessments and raising questions about how useful these data actually are in differentiating how well schools are serving EL students. These limitations are rooted in the fact that for students identified as ELs, these tests measure their language proficiency even when that is not the intended purpose. As a result, it is difficult to parse out what these students actually know in the subject being tested (e.g., math or science).
A new report by the Migration Policy Institute (MPI) dives into this issue by examining whether there is a more nuanced way to measure ELs' academic performance by taking into consideration their ELP. The report also explores how information about the learning environments in which students are immersed, such as access to qualified teachers, could provide useful context for their academic and linguistic outcomes.
The report is composed of two parts. The first, directed by Dr. Pete Goldschmidt of California State University, Northridge, examined how states' current statistical models can be refined to better align ELs' ELP progress with their outcomes and growth expectations on state content assessments. The second, led by Dr. Megan Hopkins of the University of California, San Diego and Dr. Julie Sugarman of MPI, sought to identify a set of opportunity-to-learn (OTL) indicators that could be used to better contextualize the academic and linguistic outcomes of students identified as ELs. Together, the findings from these two studies seek to provide a more accurate understanding of schools' contributions to EL success.
Accounting for English learners' ELP level on academic assessments
The first study focused on parsing out the impact an EL student's language proficiency may have on both their academic performance on statewide assessments in the current year (their "status") and their progress over time (their "growth"). Currently, states calculate an EL student's status and growth in ELA and math solely based on the score they get on a standardized test. But as research has shown, these test scores may not accurately reflect what students know and how much they have learned within a given timeframe if they are still in the process of developing their academic English proficiency.
As a result, the lead researcher makes the case for developing "adjusted academic proficiency" cut scores, or expectations about where a student's performance should be based on their ELP level and grade level. According to the author, these differentiated proficiency expectations (i.e., status) could provide a more complete picture of ELs' academic performance, as it would be "misleading to say that EL students were behind in ELA without considering their ELP level." But that is exactly what we have been doing up until now.
For example, under the current system a state would judge an EL student's performance on the ELA test based on the statewide standardized grade-level expectation for all students in their respective grade. However, as the paper showed, an EL student's ELP level considerably influences their ability to show their skills and knowledge in ELA and math compared to their non-EL peers. Under a more refined system, an EL student would be judged on how their performance compares to grade-level expectations for students at a certain ELP level.
To be clear, this does not mean states should create lower expectations for students identified as ELs. Instead, it allows the state to pinpoint whether their performance is a function of their ELP and grade level, or truly an issue with their grasp of the content. According to the author, doing so "offers more explicit insight into how ELs are performing in academic content areas and thus how schools are serving them." And as their ELP progresses, these academic expectations will increase accordingly until they are reclassified.
In terms of measuring an EL student's growth from one year to the next in a given subject, the author proposes two alternative statistical methods to determine if an EL's growth (or lack thereof) in a particular subject is due to a school's contributions to the student or simply the byproduct of the student having a better grasp of English since the last time they were assessed.
Currently, states measure growth by calculating how many points a student has increased or decreased on a given subject, as measured by a statewide assessment, from one year to the next, without considering the student's ELP level in either year. This method, however, does not provide insight as to whether any growth reflected was because the student learned more math or because they made progress in their English abilities. As a result, the author proposes two statistical methods that a state could use to capture the influence of ELP in academic growth model calculations.
These methods are capable of "generating less biased estimates about schools' contribution to ELs' academic growth," and they increase "the likelihood that test scores capture schools' true contribution to student learning." Neither is currently being used for accountability.
Looking beyond assessment scores for EL accountability
In the second study, the authors talked to state and local education agency staff, community advocacy organizations, and parents of EL students to identify EL-specific indicators that could tell us more about the quality of language instruction support provided to these students. ESSA required states to adopt opportunity-to-learn (OTL) indicators, but none to date have focused solely on measuring EL program quality, as was explored in the study.
According to the participants, four indicators would be helpful in assessing the quality of EL programming at any given school:
These OTL indicators would supplement and complement academic assessment data, and according to the authors, these data would "contextualize EL student outcomes in order to understand patterns of progress or lack thereof, and gauge whether schools are meeting their civil rights obligations to provide support for EL students that facilitates their ELP and academic achievement."
Although the report was mainly geared towards how states can improve their accountability for EL education, it was rounded out with implications for how to improve both state and federal data collection infrastructure. Together, the findings make great strides to dismantle the perceived achievement gap between English learners and their non-EL peers, a gap that has been perpetuated by a lack of attention to the impact that English learners' ELP level has on their performance on academic assessments. And without changes like these, we will continue to have a misguided understanding of not only ELs' perceived academic abilities, but also how well schools are serving these students.