How to have assessment without testing—and without losing valuable instructional time

Lexia accurately identified and prioritized students for small-group or individual instruction. Armed with this information, our teachers were now well versed in what our students needed to improve their reading skills; however, not all of them were prepared to provide targeted instruction. Most helpful were the teacher resources that accompanied the data: based on each student's assessment results, the software automatically provided my teachers and intervention specialists with targeted instructional strategies and structured lesson plans, including the minutes per week of software usage that would help each student get back on track.

Unlike other software programs we had used at Cahuenga, this one genuinely engaged the students, and they were learning. I don't think they realized just how much they were improving (but the staff sure did!). We could see the program automatically advance students to higher levels as they demonstrated proficiency. And because each activity is aligned with the Common Core State Standards, the program also helped prepare us to meet the new expectations.

Following our first year of this new approach, we posted an Academic Performance Index (API) score of 835, up 10 points from the previous year. During the following school year (2010–11), we stepped up our efforts and created a unique learning center built around the software. Our most at-risk students were identified automatically through the software, as well as through other district- and state-level assessments. Equipped with six computers, a teaching assistant, and a dedicated teacher, the learning center had our lowest-performing students rotate among these three stations for 60 minutes, four times a week. In addition, students of all abilities spent at least 20 minutes on the program as a central part of their regular classroom instruction.

In our second year of implementation, we registered an API score of 867, more than tripling our gains from the previous year. All of our subgroups, including English language learners, Latinos, and students supported by Supplemental Educational Services, also met Adequate Yearly Progress (AYP) goals, which lifted us out of Program Improvement status.

I give a great deal of credit to the teachers, who truly embraced this tech-driven approach and relied less on traditional instructional and testing methods. We witnessed tremendous growth in the percentage of our students meeting grade-level standards in comprehension, fluency, and overall reading ability, as measured by the district's periodic literacy assessments and DIBELS; those gains were validated by closely correlated results in the software's norm-referenced measures.

With these proof-positive data in hand, I could speak to the program's cost-effectiveness: it was highly effective in driving student gains, and it saved us valuable time that could be spent teaching instead of testing. We also spent less money on administering, grading, and analyzing tests. Even though the district had adopted DIBELS for assessing the acquisition of early literacy skills in K–3 students, I moved quickly to secure a waiver from our district's leadership, exempting us from those assessments in favor of the embedded assessments built into the Lexia reading software.
