While technology may be useful to monitor students with special needs, it cannot replace the effectiveness of a skilled educator in the classroom
With the reauthorization of the Individuals With Disabilities Education Improvement Act in 2004 (IDEIA 2004), Congress introduced the Response to Intervention and Instruction (RtII) framework as a way to address the diversity of students and learning issues in U.S. schools.
Through early identification and intervention with students who have language and cultural differences and learning delays, the framework promises to address problems early on, thereby decreasing the number of students incorrectly assigned to receive special education services.
States are likely to mandate RtII systems for local schools because the framework reduces the cost of providing special education services, a cost that falls heavily on state budgets.
Given the dearth of resources currently available to assist schools with the difficult implementation process, it is important to investigate how different tools, such as technology-based versus teacher-led assessment and instruction, contribute to the successful execution of the RtII framework.
This year, I had the opportunity to study the implementation of the RtII framework in a Philadelphia charter school that used both types of tools for the assessment (problem identification) component and for the intervention (problem-solving) component. This case study allowed me to form the following opinions on the framework:
The advantages of using technology-based tools, given that RtII must be administered universally, include the possibility of greater efficiency and, perhaps, the elimination of teacher subjectivity.
For example, the adaptation of technology for assessment promises “real-time feedback” and greater instructional responsiveness. However, research on the use of technology also points to issues with depending on it for assessment and for instruction.
The chance that a test administered in two different modes (i.e., computer vs. paper) will yield the same results is only about 50 percent. This discrepancy is due both to the technology itself and to the individual participant’s level of computer familiarity.
For instance, problems with computer interface legibility, or how user-friendly the software is, can create testing barriers that reflect not a student’s understanding of the material being tested but the student’s computer literacy.
Moreover, without a human proctor present to keep them on track, students may lose focus and guess at answers.
On the plus side, given the size of the task of universal testing, technology might prove cost-effective. Schools can purchase software licenses at relatively low cost and test all students within a 20-minute window.
The software can generate and perform preliminary analysis of the data quickly and efficiently, allowing teachers to organize their intervention plan of choice. Well-designed software can also account for the issues of legibility and unfamiliarity.
While the absence of a teacher will create some barriers, an extremely user-friendly interface can ensure that students do not need comprehensive computer skills to take computer-based tests.
Additionally, technology-based assessment tools prove to be more efficient and more uniform screeners because they provide a detailed numerical measure of each student’s performance. Ultimately, when administered correctly, technology-based assessment has sizable diagnostic potential.
In my personal comparison of the use of technology-based assessment with teacher-based assessment, I did not find a test mode effect. Students’ scores did not vary significantly when assessment was technology based or teacher based. However, comparing technology-based with teacher-guided instruction, I found discrepancies.
The use of computer software for instruction created an environment in which students continually lost focus, became frustrated, or simply grew tired as they stared absently at the computer screen. I saw students constantly guess the answers to multiple-choice literacy questions using the unscientific “eeny, meeny, miny, moe” method, or demonstrate false growth when a teacher or tutor “helped” by supplying the correct answer.
In contrast, in my observations of teacher-led “guided reading” sessions students demonstrated immense growth through structured, regulated instruction. There is a trade-off between efficiency and effectiveness. When it comes to learning how to read or do math, effectiveness must be the top priority.
My observation of the use of both technology tools and teacher-led strategies for implementing RtII at one school suggests that, while technology may be useful for assessment and monitoring progress of students identified as needing extra attention to bring them up to speed, there is no substitute for a well-trained teacher in the instructional or intervention process.
As a society we have a general temptation to do everything as efficiently as possible, especially when required to operate at a large scale. However, efficiency does not always lead to accuracy in instruction. When addressing the challenges of students with special needs, we need thoughtful teachers who can make valid judgments about the most appropriate interventions and carry them out sensitively.
Nicole Survis is a graduating senior at the University of Pennsylvania and incoming TFA Corps Member.