Can AI be biased?
Answered by Hector Herrera

Yes, and it regularly is. AI models learn from training data created by humans, and that data carries the biases of the people and systems that produced it. Common examples: hiring AI that favors male candidates because historical hiring data skewed male, facial recognition that performs worse on darker skin tones because training data overrepresented lighter skin, and language models that associate certain professions with specific genders.

Bias in AI is not a bug that gets fixed with a patch — it requires ongoing auditing, diverse training data, and human oversight. Companies deploying AI in hiring, lending, healthcare, or criminal justice must actively test for and mitigate bias.
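One concrete way teams test for bias is the "four-fifths rule" from US EEOC guidance: compare each group's selection rate against the highest group's rate, and treat a ratio below 0.8 as a red flag for adverse impact. Here is a minimal sketch of that check; the group labels and counts are invented purely for illustration:

```python
# Minimal disparate-impact audit using the four-fifths rule.
# Group names and applicant counts below are hypothetical examples.

def selection_rate(selected: int, total: int) -> float:
    """Fraction of applicants from a group who were selected."""
    return selected / total

def four_fifths_check(rates: dict) -> dict:
    """Ratio of each group's selection rate to the highest group's rate.
    A ratio below 0.8 is a common red flag for adverse impact."""
    top = max(rates.values())
    return {group: rate / top for group, rate in rates.items()}

# Hypothetical hiring-model decisions, split by demographic group.
rates = {
    "group_a": selection_rate(45, 100),  # 45% selected
    "group_b": selection_rate(30, 100),  # 30% selected
}

ratios = four_fifths_check(rates)
for group, ratio in ratios.items():
    flag = "FLAG" if ratio < 0.8 else "ok"
    print(f"{group}: impact ratio {ratio:.2f} ({flag})")
```

In this made-up example, group_b's selection rate is about 67% of group_a's, which would fail the four-fifths threshold and trigger a closer manual review. A single metric like this is a starting point, not a complete audit.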