NWACC Library

Ethical AI: Using AI for Your Research

Identify Bias

AI programs are trained on datasets curated by humans, and both the datasets and the people who assemble them can introduce bias. These biases range from explicit, such as non-inclusive language toward certain demographics, to more implicit, such as favoring certain languages or disfavoring newer ideas.

Hanna, M., Pantanowitz, L., Jackson, B., Palmer, O., Visweswaran, S., Pantanowitz, J., Deebajah, M., & Rashidi, H. (2024). Ethical and bias considerations in artificial intelligence (AI)/machine learning. Modern Pathology, 100686. https://doi.org/10.1016/j.modpat.2024.100686

Types of Bias in AI

"Biases can lead to severe repercussions, especially when they contribute to social injustice or discrimination. This is because biased data can strengthen and worsen existing prejudices, resulting in systemic inequalities. Hence, it is crucial to stay alert in detecting and rectifying biases in data and models and aim for fairness and impartiality in all data-driven decision-making processes.

  • Selection bias: This happens when the data used to train an AI system is not representative of the reality it's meant to model. It can occur due to various reasons, such as incomplete data, biased sampling, or other factors that may lead to an unrepresentative dataset. If a model is trained on a dataset that only includes male employees, for example, it will not be able to predict female employees' performance accurately.
  • Confirmation bias: This type of bias happens when an AI system is tuned to rely too much on pre-existing beliefs or trends in the data. This can reinforce existing biases and fail to identify new patterns or trends.
  • Measurement bias: This bias occurs when the data collected differs systematically from the actual variables of interest. For instance, if a model is trained to predict students' success in an online course, but the data collected is only from students who have completed the course, the model may not accurately predict the performance of students who drop out of the course.
  • Stereotyping bias: This happens when an AI system reinforces harmful stereotypes. An example is when a facial recognition system is less accurate in identifying people of color or when a language translation system associates certain languages with certain genders or stereotypes.
  • Out-group homogeneity bias: When this happens, an AI system is less capable of distinguishing between individuals who are not part of the majority group in the training data; it's a form of out-group homogeneity bias. This may result in misclassification or inaccuracy when dealing with minority groups."
Bias in AI. (n.d.). Chapman University. Retrieved January 28, 2025, from https://www.chapman.edu/ai/bias-in-ai.aspx.
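Selection bias, the first item above, can be seen in a few lines of Python. The scores, groups, and pass/fail cutoff below are invented purely for illustration and are not drawn from any real model or dataset:

```python
import statistics

# Hypothetical toy data: scores (0-100) for two demographic groups.
# Group B simply scores on a different scale than group A.
group_a = [72, 75, 78, 80, 83, 85, 88, 90]   # the only group sampled for training
group_b = [45, 50, 52, 55, 58, 60, 63, 65]   # never represented in training

# "Train": the pass cutoff is learned only from the unrepresentative sample.
cutoff = statistics.mean(group_a)            # 81.375

def predict_pass(score):
    return score >= cutoff

# Evaluate: because the cutoff reflects only group A's distribution,
# every member of group B is labeled "fail" -- even its strongest performers.
passes_a = sum(predict_pass(s) for s in group_a)   # 4 of 8 pass
passes_b = sum(predict_pass(s) for s in group_b)   # 0 of 8 pass
print(passes_a, passes_b)
```

The point is not the arithmetic but the pattern: a model trained on a sample that excludes a group cannot be expected to judge that group fairly.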

Weigh the Risks & Benefits*

Fig. 3. A cartoon illustration of a scale with the word "RISK" on one side and the word "BENEFIT" on the other. The scale is balanced, with the risk and benefit being equal in weight. DALL·E, version 2024-09-17, OpenAI, 17 Sep. 2024, openai.com/dall-e.

Consider: What risks do we face using AI tools? What benefits do we gain?

Here's what our students said, as analyzed by Gemini:

THE RISKS

  • Academic Integrity: Plagiarism, lack of authenticity, and the risk of not learning are major concerns.
  • Skill Development: AI could hinder the development of writing, problem-solving, and research skills.
  • Dependence: Overreliance on AI could lead to a loss of independence and creativity.
  • Bias and Misinformation: AI can perpetuate biases and spread misinformation.
  • Security and Reliability: There are concerns about the security and reliability of AI tools.

THE BENEFITS

  • Efficiency and Time-Saving: AI can save time on tasks like research, writing, and editing.
  • Support and Assistance: AI can provide ideas, perspectives, and feedback.
  • Creativity and Innovation: AI can help spark creativity and generate new ideas.
  • Learning and Development: AI can help students learn new skills and improve their writing.
  • Mental Health: AI can offer emotional support and reduce stress.

What would you add or remove from our list?

Use Responsibly*

What specific tasks can AI help us with if we use it ethically?

Here's what our students said, as analyzed by Gemini:

KEY THEMES

  • Writing and Editing: AI can help with proofreading, rewording, and structuring writing.
  • Learning and Understanding: AI can explain complex topics, summarize information, and create study plans.
  • Organization and Planning: AI can assist with tasks like time management, outlining, and meal planning.
  • Research and Analysis: AI can help with research, finding information, and data entry.

SPECIFIC TASKS

  • Writing: Proofreading, rewording, and structuring writing.
  • Learning: Explaining complex topics, summarizing information, and creating study plans.
  • Organization: Time management, outlining, and meal planning.
  • Research: Finding information, data entry, and creating bibliographies in MLA format.
  • Problem-solving: Breaking down complex topics into simpler terms and understanding math problems.

What would you add to our brainstorm?

Fig. 4. A single, intricate key with various designs etched into its surface, symbolizing multiple themes or concepts. DALL·E 2, version 2024-09-17, OpenAI, 17 Sep. 2024, openai.com/index/dall-e-2/.