
Can assessments be used to eliminate inequities in education? AI could help


Stakeholders can harness the opportunities offered by AI to re-imagine what personalized assessments can look like at a time when students need them the most

This contributed guest piece is by Dr. Mahnaz R. Charania, former education senior research fellow at the Christensen Institute. It was originally published on the Christensen Institute’s blog and is reposted with permission.

What’s being called the craziest college admission season ever is also proving to be a natural experiment for the American education system. 

Test-optional and test-blind admissions in recent years could mean a radical expansion of access to selective colleges. Yet, despite well-intentioned efforts to disrupt systemic inequities in who gets admitted, universities are reverting to standardized assessments in hopes of better predicting which students will succeed in their environment and graduate on time. While reverting to what’s known may be more efficient, it also risks perpetuating the very pre-existing inequities these institutions are working so tirelessly to eliminate.

The patchwork of admission test policies in college admissions underscores a larger challenge—and opportunity—for both K-12 and higher education. While the purpose and aspirations of education systems keep shifting, how schools define and measure student success has not kept pace—a disconnect that’s spurring higher ed’s current admissions conundrum. To catch up, measurement in education must go beyond using one set of scores (SATs) to predict another (postsecondary success) and produce data that enhances opportunities for all learners—inside and outside the classroom. Emerging technologies, like artificial intelligence, may be able to help.

Artificial intelligence to personalize, not standardize, assessments

Technology has long shaped schools’ approach to assessment. In the early 2000s, I gained first-hand experience in how large districts make decisions about edtech adoption and the rollout of AI-enabled personalized learning. At that time, adoption of adaptive learning and diagnostic solutions such as DreamBox, i-Ready, IXL, and even NWEA MAP was exploding across the nation. These edtech tools were viewed as breakthrough technology, offering classrooms real-time reporting and analytics to track and adjust teaching as students played. Since then, online learning has continued to shatter the boundaries of traditional, monolithic approaches to K-12 teaching and learning. Digital Promise’s work on digital equity and safety and Getting Smart’s synthesis of the evolution of AI-enabled innovations shaping teaching and learning are a testament to just how much progress we have made.

Generative AI could supercharge those existing approaches. But it could also disrupt them. Used properly, AI could usher in a generation of assessments that mitigate our over-reliance on standardization in favor of a far more personalized—and equitable—approach. 

This raises a particular challenge for system leaders: How can we unleash AI to enable measurement of the things we know matter but aren’t yet good at measuring? How can we leverage AI to personalize, not standardize, assessments so every student is supported in equitable ways for success inside and outside the classroom?

The answer lies in expanding our efforts in at least three areas: learner-centered assessments; integrated, invisible assessments; and disaggregated data. 

1. Develop learner-centered assessments aligned with learner-centered systems

The skills that make us uniquely human are the skills that a learner-centered framework champions. They are also the skills that will be very difficult, if not impossible, for technology to replicate reliably. Instead of focusing our energy on teaching kids what robots can do, we need to focus on teaching them what only humans can do.

For example, to help students become better writers, readers, and critical thinkers, Quill.org offers low-income students an AI-powered literacy tutor that provides real-time coaching and feedback on literacy activities pairing nonfiction reading with informational writing. In addition, Quill’s new Reading for Evidence tool offers students the opportunity to demonstrate their comprehension of nonfiction texts by writing arguments, with feedback from Quill’s AI tool on how to strengthen the logic, evidence, and syntax in their responses. As a result, students receive the feedback they need equitably, particularly those from under-resourced communities and multilingual learners who may benefit from additional scaffolding.

AI-powered literacy tools also have the potential to strengthen students’ capacity for historical thinking, and in turn, civic dialogue—an increasingly necessary skill for all individuals. Thinking Nation, for instance, a nonprofit dedicated to improving social studies education, recently switched from paying educators to grade essays against a rubric to using an AI chatbot trained on that same rubric, which gives students instant feedback on their ability to critically evaluate historical texts. This, in turn, can free up teacher bandwidth to elevate student voice and engage learners in the art of negotiation and debate—activities that nurture students’ ability to show empathy, understanding, and respect in order to carry out individual and collective actions.

2. Shift from pen-and-paper assessments to integrated, invisible assessments

Assessment methods that are woven into the fabric of learning and invisible to students offer another opportunity to leverage AI to transform how we measure student progress. During COVID, particularly in the first year when school doors remained shut, stealth assessments became a lifeline for many families. Stealth assessments have also been found to reduce test anxiety and increase student engagement. This type of assessment offers unbounded opportunities to measure higher-order thinking skills. Video game-based assessments, for example, are particularly attractive as a means to cultivate skills that are unique to the human brain and can help increase engagement.

A recent survey from Gallup and the Walton Family Foundation found that less than half of Gen Z students enrolled in middle and high school felt motivated to go to school. Only about half reported doing something interesting in school every day. A contributing factor to this increasing level of disengagement is the narrow focus on curriculum, coupled with high-stakes testing as the primary means to measure student knowledge and skills.

To counter this decline in student engagement, programs like Labster are on a mission to democratize access to education by making it possible for remote students to participate in virtual science labs. Students join this virtual community and receive simulated, real-world learning with real-time feedback, on their own time and at their own pace. This shift from pen-and-paper tests to real-life simulations has increased not only student engagement but also student interest in STEM careers.

3. Disaggregate data to shift the focus from the average student to every student

From an equity lens, norm-referenced tests—essentially all standardized tests—are particularly problematic. First, they are rarely appropriate for students with limited English proficiency or for speakers of dialects other than General American English. The format of these tests can also introduce bias because it reflects traditional Western values, which may show up embedded in the logic of questions as well as in expectations about speed of completion. Those with access to resources may be able to work around these challenges through tutors or test prep services.

Leveraging AI to employ analytic techniques that allow for disaggregated data can shift the focus from the dominant group to ensuring that every kid—including students who are Black, Hispanic, low-income, immigrants, English learners, or have special needs—is viewed through an asset-based lens, with their expertise and strengths understood relative to their own reference group. A joint effort between the Carnegie Foundation for the Advancement of Teaching and the ETS Research Institute holds deep promise for the crucial transition needed from standardization to adaptive personalization in assessment.
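To make the idea of disaggregation concrete, here is a minimal sketch of what this kind of analysis could look like in practice. It assumes a simple tabular dataset with hypothetical column names (student_id, reference_group, score) and ranks each student within their own reference group rather than against a single population-wide norm; it is an illustration, not a description of any particular vendor’s tooling.

```python
import pandas as pd

# Hypothetical assessment records; all column names and values are illustrative.
df = pd.DataFrame({
    "student_id": [1, 2, 3, 4, 5, 6],
    "reference_group": ["EL", "EL", "EL", "non-EL", "non-EL", "non-EL"],
    "score": [62, 71, 80, 85, 90, 78],
})

# Percentile of each student within their own reference group,
# instead of a single population-wide norm.
df["within_group_percentile"] = (
    df.groupby("reference_group")["score"]
      .rank(pct=True)
      .mul(100)
      .round(1)
)

# Group-level summaries surface strengths and growth per subgroup.
print(df)
print(df.groupby("reference_group")["score"].describe())
```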

Effectively leveraging AI will also require a change in the computational tools we use. One promising technique for ensuring that authentic and game-based assessments provide meaningful insights is evidence-centered assessment design. This design includes a student model that describes the traits, skills, or abilities to be assessed; a task model that describes the activities students will do to produce evidence that they are building those traits; and an evidence model that describes the variables and statistical techniques that will be used to connect the evidence to those traits. These features are especially useful for computer-based simulations and can be automated by AI to tease apart the desired student-level outcomes.
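As a rough illustration of how those three models fit together, here is a minimal, hypothetical sketch. Every name in it (the construct, the task, the indicators, the simple averaging rule) is invented for illustration; a real evidence model would use a proper statistical technique rather than a plain average.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class StudentModel:
    # Traits, skills, or abilities the assessment aims to measure.
    constructs: List[str]

@dataclass
class TaskModel:
    # Activities students complete that produce observable evidence.
    name: str
    observable_behaviors: List[str]

@dataclass
class EvidenceModel:
    # Rules connecting observed behaviors back to a construct.
    construct: str
    indicators: List[str]
    scoring_rule: Callable[[Dict[str, float]], float]

# Illustrative wiring for an invisible, simulation-based assessment.
student = StudentModel(constructs=["scientific reasoning"])
task = TaskModel(
    name="virtual titration lab",
    observable_behaviors=["controls variables", "revises hypothesis after new data"],
)
evidence = EvidenceModel(
    construct="scientific reasoning",
    indicators=task.observable_behaviors,
    scoring_rule=lambda obs: sum(obs.values()) / len(obs),  # stand-in for a real statistical model
)

# Behaviors (scored 0-1) logged quietly while the student works through the simulation.
observations = {"controls variables": 1.0, "revises hypothesis after new data": 0.5}
print(f"{evidence.construct}: {evidence.scoring_rule(observations):.2f}")
```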

A call to invest in assessment R&D to eliminate inequities

For students not served well by the limited lens offered by standardized tests, particularly for predicting success outside the classroom, amplifying the power of AI-driven assessments can be a game-changer. 

These new approaches hold immense disruptive potential: at first blush, this growing list of AI-powered opportunities in assessments may seem “lower quality” compared to the tried-and-tested standardized assessments dominating the current education market. But they can gain a foothold in the vast pockets of nonconsumption of assessment, where the only alternative is not to measure these outcomes at all.

But to ensure that AI-powered assessments don’t scale in ways that reinforce the status quo, weaken human relationships, or worsen inequality, R&D dollars should help these disruptive approaches take root, ensure that measures are created with fairness and transparency, and align them with the programs that exist to support students. As I’ve suggested before, our inability to facilitate deep learning across peer organizations has hampered our ability to scale solutions that could have the biggest impact on student learning, skill proficiency, and upward mobility. With facilitated knowledge sharing and readily accessible, nuanced insights, system leaders across K-12 will be better positioned to rapid-cycle test and scale what works, for whom, and under what conditions.

As schools continue to develop roadmaps and policies to drive the best use of technology and technology-integration tools, there is an emerging opportunity for educators, policymakers, and technologists to work together—and alongside students and their families—to harness the opportunities offered by AI to re-imagine what personalized assessments can look like at a time when students need them the most.

