AI programs are trained on datasets that humans select and prepare. Both the datasets themselves and the humans who assemble them can introduce biases into these AI programs. These biases may be explicit, such as non-inclusive language directed at certain demographics, or more implicit, such as a preference for certain languages or a bias against newer ideas.
Hanna, M., Pantanowitz, L., Jackson, B., Palmer, O., Visweswaran, S., Pantanowitz, J., Deebajah, M., & Rashidi, H. (2024). Ethical and bias considerations in artificial intelligence (AI)/machine learning. Modern Pathology, 100686. https://doi.org/10.1016/j.modpat.2024.100686
Types of Bias in AI
"Biases can lead to severe repercussions, especially when they contribute to social injustice or discrimination. This is because biased data can strengthen and worsen existing prejudices, resulting in systemic inequalities. Hence, it is crucial to stay alert in detecting and rectifying biases in data and models and aim for fairness and impartiality in all data-driven decision-making processes.
- Selection bias: This happens when the data used to train an AI system is not representative of the reality it's meant to model. It can occur due to various reasons, such as incomplete data, biased sampling, or other factors that may lead to an unrepresentative dataset. If a model is trained on a dataset that only includes male employees, for example, it will not be able to predict female employees' performance accurately.
- Confirmation bias: This type of bias happens when an AI system is tuned to rely too much on pre-existing beliefs or trends in the data. This can reinforce existing biases and fail to identify new patterns or trends.
- Measurement bias: This bias occurs when the data collected differs systematically from the actual variables of interest. For instance, if a model is trained to predict students' success in an online course, but the data collected is only from students who have completed the course, the model may not accurately predict the performance of students who drop out of the course.
- Stereotyping bias: This happens when an AI system reinforces harmful stereotypes. An example is when a facial recognition system is less accurate in identifying people of color or when a language translation system associates certain languages with certain genders or stereotypes.
- Out-group homogeneity bias: This happens when an AI system is less capable of distinguishing between individuals who are not part of the majority group in the training data. This may result in misclassification or inaccuracy when dealing with minority groups."
Bias in AI. (n.d.). Chapman University. Retrieved January 28, 2025, from https://www.chapman.edu/ai/bias-in-ai.aspx
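The selection bias described in the list above can be made concrete with a short, hypothetical sketch. The example below is a minimal illustration, assuming synthetic two-feature data and scikit-learn's LogisticRegression (names such as make_group, group A, and group B are invented for the demonstration): a classifier is trained on only one subgroup and then evaluated on a second subgroup that was left out of the training sample, and the resulting accuracy gap is the practical effect of an unrepresentative dataset.

    # Minimal sketch of selection bias using synthetic (hypothetical) data:
    # a classifier fit only on one subgroup performs well on that subgroup
    # but degrades on the subgroup missing from the training set.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)

    def make_group(n, center):
        # Toy subgroup: features are centred at `center`, and the positive
        # class is defined relative to the subgroup's own centre, so the
        # two subgroups follow different feature/label relationships.
        X = rng.normal(loc=center, scale=1.0, size=(n, 2))
        y = (X[:, 0] > center).astype(int)
        return X, y

    X_a, y_a = make_group(1000, center=0.0)  # over-represented group
    X_b, y_b = make_group(1000, center=1.5)  # group absent from training data

    # The unrepresentative sample: the model never sees group B in training.
    model = LogisticRegression().fit(X_a, y_a)

    print("accuracy on group A:", round(model.score(X_a, y_a), 3))
    print("accuracy on group B:", round(model.score(X_b, y_b), 3))

In this sketch the second group is not inherently harder to classify; the accuracy gap appears only because that group was absent from the training sample, which is the core of selection bias.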