

Why AI in the classroom needs its own ‘doll test’ 70 years post-Brown


AI chatbots could become the new face of discrimination in education: they have the potential to exacerbate existing inequalities and create new ones.


As we mark the 70th anniversary of the landmark Brown v. Board of Education decision, it’s worth reflecting on a simple experiment’s role in dismantling the doctrine of “separate but equal.” In the 1940s, psychologists Kenneth and Mamie Clark conducted the now-famous “doll test,” which revealed the negative impact of segregation on Black children’s self-esteem and racial identity. The Clarks’ findings helped overturn the “separate but equal” doctrine and win the case against school segregation.

Seven decades later, as artificial intelligence chatbots increasingly make their way into classrooms, we face a new challenge: ensuring that these seemingly helpful tools don’t perpetuate the inequalities Brown v. Board of Education sought to eradicate. Just as the “doll test” exposed the insidious effects of Jim Crow, we need a new metaphorical “doll test” to uncover the hidden biases that may lurk within AI systems and shape the minds of our students.

At first glance, AI chatbots offer a world of promise. They can provide personalized support to struggling students, engage learners with interactive content, and help teachers manage their workload. However, these tools are not neutral: they are only as unbiased as the data they’re trained on and the humans who design them.

If we’re not careful, AI chatbots could become the new face of discrimination in education. They have the potential to both exacerbate existing inequalities and create new ones. For instance, AI chatbots might favor certain ways of speaking or writing, leading students to believe that some dialects or language patterns are more “correct” or “intelligent” than others. AI chatbots can also perpetuate biases through the content they generate, producing racially homogeneous or even stereotypical images and text. Additionally, AI chatbots might respond differently to students based on race, gender, or socioeconomic background. Because these biases are often subtle and difficult to detect, they can be even more insidious than overt forms of discrimination.

The reality is that AI chatbots are already here, and their presence in our students’ lives will only grow. We cannot afford to wait for a perfect understanding of their impact before engaging with them responsibly. Instead, we need a broader commitment to responsible AI integration in education, which includes ongoing research, monitoring, and adaptation.

To address this challenge, we need a comprehensive evaluation, a metaphorical “doll test,” that can reveal how AI shapes students’ perceptions, attitudes, and learning outcomes, especially when used extensively and at early ages. This evaluation should aim to uncover the subtle biases and limitations that may lurk within AI chatbots and impact our students’ development.

We need to develop robust frameworks for assessing AI chatbots’ effects on learning outcomes, social-emotional development, and equity. We also need to provide teachers with the training and resources necessary to use these tools effectively and ethically, foster a culture of critical thinking and media literacy among students, and empower them to navigate the complexities of an AI-driven world. Moreover, we need to promote public dialogue and transparency around AI’s risks and benefits and ensure that the communities most affected by these technologies have a voice in decision-making.
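One concrete starting point for such an assessment framework is the paired-testing design long used in discrimination audits: send a chatbot the same prompt varied only by a single attribute, such as a student’s name, and compare the responses. The Python sketch below is illustrative only; `toy_chatbot` is a hypothetical stand-in for a real model, and response length is a deliberately crude disparity proxy. A real audit would query the actual system and compare richer dimensions such as tone, depth, and reading level.

```python
def paired_audit(chatbot, template, variants):
    """Send the same prompt varied by one attribute; collect responses."""
    return {v: chatbot(template.format(name=v)) for v in variants}

def length_gap(responses):
    """Crude disparity proxy: spread in response length across variants."""
    lengths = [len(r) for r in responses.values()]
    return max(lengths) - min(lengths)

# Hypothetical stand-in model that (deliberately) answers unevenly,
# so the audit has something to detect.
def toy_chatbot(prompt):
    if "Emily" in prompt:
        return "Sure! Here is a thorough, step-by-step explanation for you."
    return "OK."

responses = paired_audit(
    toy_chatbot,
    "A student named {name} asks: can you explain photosynthesis?",
    ["Emily", "Lakisha"],
)
# A nonzero gap flags uneven treatment worth deeper investigation.
print(length_gap(responses))
```

Even a simple probe like this makes bias measurable rather than anecdotal, which is the precondition for the monitoring and adaptation described above.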

As we confront the challenges and opportunities of AI in education, we must recognize that the rise of AI chatbots presents a new frontier in the fight for educational equity. We cannot ignore the potential for these tools to introduce new forms of bias and discrimination into our classrooms, reinforcing the injustices that Brown v. Board of Education sought to address 70 years ago.

We must ensure that AI chatbots do not become the new face of educational inequity by shaping our children’s minds and futures in ways that perpetuate historical injustices. By approaching this moment with care, critical thinking, and a commitment to ongoing learning and adaptation, we can work towards a future where AI is a tool for educational empowerment rather than a force for harm.

However, if we fail to be proactive, we may find ourselves needing to conduct real doll tests to uncover the damage done by biased AI chatbots. It is up to us to ensure that the integration of AI in education does not undermine the progress we have made towards educational equity and justice.
