The chorus singing the praises of data in education has been ever-present for years now, but it’s not always clear how educators can effectively put that data to use. Should we be using data to solve problems at the individual student level, the school level, or the district level?
And in the final analysis, how can the constant stream of data we’re faced with help us improve teaching and learning? In our experience, the right data can help solve a variety of issues, from improving student literacy to helping school leaders make better-informed purchasing decisions.
As a reading intervention specialist at Franklin Local School District, I rely on actionable data from one of my most indispensable tools, Renaissance Star Reading, to differentiate learning and help students boost their reading ability and achievement scores.
Recently, I’ve been able to “bring it all together” by using screening and progress-monitoring data to inform my intervention planning decisions. In Ohio, that means planning for both high-stakes tests and the state’s Third Grade Reading Guarantee, which mandates that students in third grade must pass an English language arts test with a set level of proficiency. This law was prompted by research demonstrating that a child’s reading proficiency in third grade has a positive correlation with that student’s success as an adult.
For this planning, I’ve used data from bi-monthly assessments to set goals and guide instruction for the upcoming months. An instructional planning report shows me the focus skills that each group needs additional work on, along with areas where individual students need further support and practice.
Soon, I’ll be taking things one step further with Star Custom, using data to solve problems beyond intervention planning. Once progress-monitoring and screening data have helped me identify individual student needs, I’ll be able to tailor assessment even further for each particular student. These assessments can be constructed around the specific skill or standard I choose, based on monitoring reports, instead of the full, broad tests I’ve used in the past for formative assessment. Talk about differentiation and personalized learning!
Using data to solve problems in purchasing decisions
As an executive director of technology, I used to be in the dark about the actual usage of the web applications we purchased. I could log into the administrative dashboard for each program and look for reports, but each one reported login statistics differently. That made it difficult to draw any “apples to apples” comparison of usage across apps, and challenging to justify the renewal of software subscriptions based on usage data.
At Pickens County School District, we’ve adopted ClassLink Analytics, which produces login reports for all the apps we’ve purchased using a common methodology and a consistent format. While usage data doesn’t show us everything, it does give our administrators, teachers, and students a common understanding of each web application’s worth. We were even pleasantly surprised to find that we could track usage data on teacher-requested apps that individual schools had purchased.
The simple fact of the matter is, you don’t know what you don’t know. We thought we had a fairly good idea of which apps were actually being used, but the “apples to apples” analytics surprised us. Some of the apps we thought were in heavy use actually weren’t, and some we thought were largely ignored were being put to frequent use.
We’ve even been able to use our usage data to help inform statewide education policies. Ours was one of five districts selected to pilot e-learning days for South Carolina, and with login data we were able to demonstrate that students were actively learning online during snow days.