Diversity Digest – AI and Bias

By Cindy Adair, Cross Campus Principal

Artificial Intelligence (AI) is a powerful tool, but it can sometimes be biased. This happens because AI systems learn from data created by people, and that data may include stereotypes or unfair patterns. For example, an AI trained mainly on images of men as “leaders” might wrongly assume women cannot be leaders. Bias in AI can affect things like search results, hiring tools, or even voice recognition. The good news is that by questioning results, using diverse data, and keeping humans involved in decisions, we can make AI fairer, more accurate, and more trustworthy for everyone.

  • Be Aware – Notice that AI can make mistakes if it learns from unfair data.
  • Use Diverse Data – Train AI on examples from many groups, cultures, and voices.
  • Spot Stereotypes – Question results that repeat unfair labels or assumptions.
  • Test and Check – Regularly review AI decisions to see if they are fair.
  • Stay Human – Remember AI is a tool; people must make the final judgement.
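The "Test and Check" step above can be sketched in code. Below is a minimal, illustrative Python check, using made-up data and a hypothetical `approval_rates` helper, that compares how often an AI hiring tool approves candidates from different groups; a large gap between groups would be a signal to investigate further, not proof of bias on its own.

```python
# Hypothetical example: a simple fairness spot-check for an AI hiring tool.
# The records below are invented purely for illustration.

decisions = [
    {"group": "A", "approved": True},
    {"group": "A", "approved": True},
    {"group": "A", "approved": False},
    {"group": "B", "approved": True},
    {"group": "B", "approved": False},
    {"group": "B", "approved": False},
]

def approval_rates(records):
    """Return the share of approved decisions for each group."""
    totals, approved = {}, {}
    for r in records:
        totals[r["group"]] = totals.get(r["group"], 0) + 1
        if r["approved"]:
            approved[r["group"]] = approved.get(r["group"], 0) + 1
    return {g: approved.get(g, 0) / totals[g] for g in totals}

rates = approval_rates(decisions)
print(rates)  # a large difference between groups is a warning sign worth reviewing
```

Real-world fairness auditing is far more involved than this, but even a check this simple reflects the point of the list: people, not the AI, must look at the outcomes and make the final judgement.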


© 2025 Bangkok Patana School

Issue: 4
Volume: 28