How to have assessment without testing—and without losing valuable instructional time
By Chiae Byun-Kitayama
Most educators agree that frequent progress monitoring is critical to building a data-driven culture. However, under my direction at Cahuenga Elementary School in Los Angeles, we employed a different strategy for monitoring students' reading skill development: We tested less.
Sounds counterintuitive, doesn’t it? Yet, our results demonstrate remarkable success—and they’ve led to a newfound focus on instruction.
Located in the Koreatown neighborhood just west of downtown Los Angeles, Cahuenga is a year-round school with 870 K–5 students, nearly three-quarters of whom are on campus on any given day. Nineteen ethnic groups are represented: nearly 70 percent of the students are Latino, and the remaining 30 percent are of Korean or other Asian heritage.
In 2009, we began piloting Lexia Reading to support a period of intensive intervention with our at-risk students. The software provided students with independent, individualized instruction in foundational reading skills. During the course of this instruction, the program identified the students at greatest risk of reading failure and recommended teacher-led, direct instruction to address specific skill gaps. It's important to note, however, that the program was not only for our at-risk students; it served all students, regardless of ability.
Our teachers used the data gleaned from the program to guide their small-group instruction, from intensive intervention groups to gifted students. Students at or above grade level also benefited, because the program advanced them to the next skill level. In the end, this approach allowed us to gather detailed, skill-specific data without interrupting the flow of instruction to administer a test. That is a welcome respite, and a strategy we will employ for years to come.
This approach gave my teachers real-time student data, based on norm-referenced predictions of each student's chance of reaching the end-of-year benchmark, expressed as a percentage. They say that hindsight is 20/20, and it's easy to be a Monday-morning quarterback. By using predictive data to show each student's likely end-of-year outcome, however, we could adjust instruction in real time and improve each child's chance of meeting his or her grade-level benchmarks.