How to have assessment without testing—and without losing valuable instructional time

Lexia Reading “has allowed us to gather detailed, skill-specific data—without interrupting the flow of instruction to administer a test.”

Most educators can agree that frequent progress monitoring is critical to a data-driven culture. However, under my direction at Cahuenga Elementary School in Los Angeles, we employed a different strategy for monitoring reading skill development: We tested less.

Sounds counterintuitive, doesn’t it? Yet, our results demonstrate remarkable success—and they’ve led to a newfound focus on instruction.

Located in the Koreatown neighborhood just a few miles west of downtown Los Angeles, Cahuenga is a year-round school where, on any given day, nearly three-quarters of its 870 K–5 students are on campus. Nineteen different ethnic groups are represented in all; nearly 70 percent of the students are Latino, with the remaining 30 percent made up of students of Korean or other Asian heritage.

In 2009, we began piloting Lexia Reading to support a period of intensive intervention with our at-risk students. The software provided students with independent, individualized instruction on foundational reading skills. During the course of this instruction, the program identified the students at greatest risk of reading failure and recommended teacher-led, direct instruction to address specific skill gaps. It's important to note, however, that this program was not only for our at-risk students; it served all students, regardless of ability.

Our teachers were able to use the data gleaned from this program to guide their small-group instruction, from intensive to gifted. Students who were at or above grade level also benefited from the program, because it took them to the next level. In the end, this program has allowed us to gather detailed, skill-specific data—without interrupting the flow of instruction to administer a test. This is a welcome respite, and a strategy that we will employ for years to come.

This approach gave my teachers real-time student data, based on norm-referenced predictions of each student's chance of reaching the end-of-year benchmark (expressed as a percentage). Think about it: They say that hindsight is 20/20, and it's easy to be a Monday morning quarterback. By using the predictive data to show each student's likely end-of-year outcomes, however, we could adjust instruction in real time and improve each child's chance of meeting his or her grade-level benchmarks.

Lexia accurately identified and prioritized students for small-group or individual instruction. Equipped with this information, our teachers were now well versed in what our students needed to improve their reading skills; however, not all teachers were equipped to provide targeted instruction. Most helpful were the teacher resources that accompanied these data: Based on each student's assessment data, the software automatically provided my teachers and intervention specialists with helpful, targeted instructional strategies and structured lesson plans, including the minutes per week of software usage that would help each student get back on track.

Unlike other software programs that we used at Cahuenga, this software genuinely engaged the students—and they were learning. I don't think they realized just how much they were improving (but the staff sure did!). We could see how the program automatically advanced the students to higher levels as they demonstrated proficiency. Because each of the activities is aligned with the Common Core State Standards, the program also helped us prepare to meet the new expectations.

Following our first year of this new approach, we posted an Academic Performance Index (API) of 835—up 10 points from the previous year. During the following school year (2010–11), we stepped up our efforts and created a unique learning center built around the software. Our most at-risk students were identified automatically through the software, as well as through other district and state-level assessments. With six computers, a teaching assistant, and a dedicated teacher, our lowest-performing students rotated among these three stations within the learning center for 60 minutes, four times a week. In addition, students of all abilities received at least 20 minutes on the program as a central part of their regular classroom instruction.

In our second year of implementation, we registered an API score of 867—more than tripling our gains from the previous year. All of our subgroups—including English language learners, Latinos, and students supported by Supplemental Educational Services—also met AYP goals, lifting us out of Program Improvement status.

I give a great deal of credit to the teachers, who truly embraced this tech-driven approach and relied less on traditional instructional and testing methods. We witnessed tremendous growth in the percentage of our students meeting grade-level standards in comprehension, fluency, and overall reading ability, as measured by the district's periodic literacy assessments and DIBELS—results validated by closely correlated results in the software's norm-referenced measures.

With these proof-positive data in hand, I could speak to the program's cost-effectiveness: It was highly effective in driving student gains, and it saved us valuable time that could be spent teaching instead of testing. Furthermore, we weren't spending as much money on administering, grading, and analyzing tests. Even though the district had adopted DIBELS for assessing the acquisition of early literacy skills in grades K–3, I moved quickly to secure a waiver from our district's leadership, exempting us from those assessments in favor of the assessments embedded in the Lexia Reading software.

As the school year progressed, we continued to translate the performance data gathered while students worked in the program into student-specific action plans that informed individualized instruction. We now had an actionable strategy for improving student outcomes in reading, without stopping to administer a test.

From impressions to impact: A new culture of reading instruction

In my opinion, one of the key differentiators was our decision to train our teaching assistants to maximize the value of the data by producing individual, class, and grade-level reports as an integral part of every grade-level meeting. We also provided workshops for our parents so they, too, could understand what the data reflected about their child's reading proficiency. Everyone—teachers and parents alike—could see the progress we were making and exactly what we were doing to address any reading skill deficiencies.

Districts like Los Angeles Unified are making huge investments in assessments, but the return depends not merely on the quality of the test but on what teachers are able to glean from the data. In our case, we took the guesswork out by investing in a solution that has become an essential component of our reading curriculum.

Perhaps the most compelling aspect of our three-year journey: Teachers now can focus the vast majority of their time on teaching—using technology-driven performance data to see which students have hit an obstacle, and providing highly targeted instruction to help them progress.

Although the concept of “test less, teach more” is within educators’ grasp, it requires school districts to make a significant leap from timeworn traditions that have been ingrained in our educational system for decades. We have made that move, and our students are excelling because of it.

Dr. Chiae Byun-Kitayama is the instructional director for the Educational Service Center in East Los Angeles. She’s the former principal at the city’s Cahuenga Elementary School.
