
The Importance of Identifying and Understanding AI Bias in 3 Parts


In an era where artificial intelligence (AI) is becoming a ubiquitous part of our educational tools and resources, it's imperative for educators and administrators to understand and teach about AI bias. This understanding is crucial not just for the ethical development and use of AI but for preparing students to navigate and shape a future where technology and humanity intersect more deeply every day.

Part 1: Identifying Bias in AI

At the heart of our journey into AI is the concept of bias. But what exactly is bias, and how can we identify it? To unravel this, we must encourage students to ask critical questions about AI outputs: Who is being included? How are they being represented? And crucially, who is being left out? This can be difficult to see in text outputs, but it becomes much more obvious in AI-generated images. Take, for example, an image generated by Bing AI. I provided the prompt: "An image that displays people in the workforce, each person representing a different job" and received the following image:

[Bing AI's generated image of people in the workforce appears here.]

What do you notice? Who is being included? How are they being represented? And crucially, who is being left out?

These questions are not just academic exercises; they are essential tools for dissecting the complex ways in which AI mirrors, and sometimes amplifies, the biases inherent in our societies. By prompting students to explore these questions, we equip them with the lenses needed to critically examine the digital world around them.
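If you would like to recreate this exercise with students programmatically, the short Python sketch below shows one way to do it. It uses the OpenAI Images API as a stand-in for Bing's image tool (which, to my knowledge, does not offer a comparable public interface), so the package, model name, and API key setup here are assumptions rather than a description of how the image above was actually made.

```python
# A minimal sketch of the classroom exercise, assuming the `openai` package
# is installed and an OPENAI_API_KEY is set in your environment.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

prompt = ("An image that displays people in the workforce, "
          "each person representing a different job")

result = client.images.generate(
    model="dall-e-3",  # assumed model choice; any image model will do
    prompt=prompt,
    n=1,
    size="1024x1024",
)

# Open the generated image with students and ask the same questions:
# Who is included? How are they represented? Who is left out?
print(result.data[0].url)
```

Running the same prompt several times and comparing the results makes the patterns (or the lack of them) much easier for students to see.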

Part 2: Understanding the Genesis of AI Bias


AI, in its essence, learns from the data it's fed. This learning process is where the root of bias often takes hold. If the training data is skewed, lacking in diversity, or imbued with historical prejudices, the AI will inherently reflect these biases in its outputs. This understanding is fundamental for students, illuminating why AI might generate biased outputs and highlighting the importance of diverse, inclusive data sets in training AI systems.
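To make this concrete, here is a toy illustration in Python. The "training data" is invented and deliberately skewed, and the "model" is nothing more than a frequency counter, but it shows the core mechanism: a system that learns only from the patterns in its data will faithfully reproduce whatever imbalance that data contains.

```python
from collections import Counter, defaultdict

# Invented, deliberately skewed "training data"
training_sentences = [
    "the doctor said he would review the chart",
    "the doctor said he was running late",
    "the doctor said she would call back",
    "the nurse said she would check in",
    "the nurse said she had the results",
    "the nurse said he would check in",
]

# "Training": count which pronoun follows "said" for each job title
counts = defaultdict(Counter)
for sentence in training_sentences:
    words = sentence.split()
    for i, word in enumerate(words):
        if word in ("doctor", "nurse") and i + 2 < len(words):
            counts[word][words[i + 2]] += 1

# "Inference": the model simply repeats its most frequent association
for job, pronouns in counts.items():
    guess, freq = pronouns.most_common(1)[0]
    total = sum(pronouns.values())
    print(f"{job}: guesses '{guess}' ({freq} of {total} training examples)")
```

Swap in a more balanced set of sentences and the "predictions" change accordingly, which is exactly the argument for diverse, inclusive training data.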



This video from code.org does a great job of explaining how and why we see bias in AI.


Part 3: The Tricky Problem of Bias


The challenge of addressing AI bias is exemplified by Google's efforts with its AI model, Gemini, to ensure diversity and representation in image generation. Intentions and outcomes, however, don't always align neatly. In the winter of 2024, Gemini's image outputs included historically inaccurate representations. When someone asked for images of America's Founding Fathers, for example, it showed Native American men; worse still, when someone asked for an image from World War II, it showed a Black man in a Nazi uniform. These instances starkly illustrate the double-edged sword of AI bias correction: striving for inclusivity but veering into inaccuracy and even cruelty.



Such examples underscore the nuanced nature of AI bias, demonstrating how efforts to correct one form of bias can inadvertently introduce new problems. This complexity is precisely why the conversation around AI bias is so critical and challenging.


A Path Forward with Optimism


Despite these challenges, the potential benefits of AI in education and beyond are immense. From personalized learning experiences to unlocking new frontiers of knowledge, AI holds promise for transformative advancements. However, realizing these benefits without exacerbating existing inequalities requires careful, intentional approaches to AI development and use.


As educators, our role in navigating this landscape is twofold: to instill in our students a critical understanding of AI bias and to model responsible, informed engagement with AI technologies. By doing so, we not only prepare them for a future shaped by AI but also contribute to the development of more equitable, thoughtful AI systems.


In the spirit of moving forward, I invite you to explore my lesson plans on identifying bias as part of a larger unit on digital citizenship (or contact me if you are interested in having me speak to your staff for PD on AI and digital citizenship).


Together, we can equip our students with the knowledge and skills they need to thrive in an AI-driven world, ensuring they are not just passive consumers of technology but informed, critical participants in its evolution.


This post was written in collaboration with ChatGPT. Here is how to properly cite AI in MLA style:


“Blog post about the importance of introducing students to the concept of AI bias...” follow-up prompt. ChatGPT, 13 Feb. version 4, OpenAI, 4 Mar. 2024, chat.openai.com/chat.

