At the beginning of the 2016-17 school year, 82 percent of Morgen Owings Elementary School’s students were working below grade level. Now, six months later, just 40 percent are working below grade level. We have work to do, but shifting our mindset regarding assessment has made a huge impact.
We all know the purpose of assessment is to gather information that will lead to improved instruction and learning, and I’m quite certain we all agree that, in some form or fashion, it’s absolutely essential. But deciding which measure can and should be used to gather data for each area of elementary literacy can sometimes be daunting for administrators.
With only so many hours in a day, and only so many days in a week, how do we decide which assessments we need? Do we test students at timed intervals: once a week, a month, a quarter? Analyze student work samples? Observe students performing literacy tasks, or interview students about their reading skills? Do we use all of these methods to collect data? How do we choose the best method for measuring reading progress?
Assessment Fatigue and Frustration
At Morgen Owings Elementary School, just like at many schools and districts around the country, we had assessment fatigue and frustration over which method was best for evaluating reading progress.
Past discussions about assessments had been met with resistance; our teachers were feeling the fatigue of frequent assessments and the frustration of not understanding the purpose and goal of the seemingly unending series of testing requirements. And then there were our numerous concerns about deciphering the data and the need for immediate, real-time data analysis.
What we now know is that when assessments are properly administered and integrated into instruction, the resulting data can provide valuable information about progress towards instructional goals, success of interventions, and overall curriculum implementation.
This all came to light after we implemented Lexia Reading Core5 and Lexia’s RAPID Assessment. This assessment was developed in partnership with the experts at the Florida Center for Reading Research, and using it in tandem with the literacy program is what changed our mindset on testing.
We are Teachers, Not Cobblers
Before we used these two new programs, our teachers were often left trying to cobble together an account of student progress and needs from assessments that were never designed to give such a cohesive report.
We started with DIBELS, and while it tested fluency skills, it didn’t provide direction for how a teacher could fill those holes in fluency. As a result, our teachers really didn’t know what to do with the information once they received it.
We also used NWEA MAP. Again, we were getting information and data, but once more, we weren’t sure what to do with it. We didn’t have the detailed data that identified the underlying skill gaps keeping students from becoming proficient readers.
In addition, we progress monitored students every three to four weeks, and it took a team of paraprofessionals and teachers to manage the process. This approach meant the teachers had to interrupt their schedule, putting aside their instruction time with students, to administer a test.
When the testing was completed, our intervention specialist would spend hours upon hours inputting data and making spreadsheets so the data was in a somewhat usable format for the teachers. Assessment results were not real-time or immediate. Sound familiar?
The Right Assessment Path
We were introduced to the possibilities of the Core5 program when we used it with our bottom quartile of students during the 2015-16 school year. It was during this initial implementation that we saw tremendous growth opportunities to not only fill those proficiency holes but also move students forward and accelerate their learning. We finally had a personalized learning program that was student-driven online and teacher-directed through small-group instruction. At the beginning of the 2016-17 school year, we began using the program with all of our students, along with RAPID, which we also adopted in 2016.
Now, we have replaced both the DIBELS assessment and NWEA MAP literacy assessment with the powerful combination of the differentiated literacy instruction program and the computer-adaptive assessment that provides an in-depth and reliable measurement of critical reading and language skills. The teachers also appreciate that the assessment program provides an actionable profile for each student and that it connects them to just the right instructional strategies to target the student’s needs.
We can now check the “test less, teach more” box, the “usable data” box, and the “resources at our fingertips” box. Because of this, our teachers have everything they need at their fingertips, including on their phones, to monitor what their students know and what they need to learn next. They can finally answer the question “What do I do now?”
And what they ARE doing is helping our students. Here at Morgen Owings, we now have the tools to support our teachers in making strategic connections to their classroom instruction quickly, effortlessly, and on an individualized level. These connections are driving consistency across classrooms and grade levels; they are driving confidence around instruction; and they are supporting a common language for teachers and students alike around literacy instruction and assessment.