Addressing Bias in Artificial Intelligence (AI) - A Practical Guide
Guest Author · Jan 6 · 3 min read
By John Kenyon & Martin Dooley
Artificial Intelligence (AI) and Large Language Model (LLM) systems learn from human-generated content, which means they can perpetuate stereotypes, exclude perspectives, or favor certain groups over others. AI is a mirror of humanity, reflecting the biases present in the people, data, and decisions that shape it. Recognizing and addressing these biases is essential for responsible and ethical use of these tools.
In the image output below, an AI tool was asked to generate images of a “Confident Fast Food Worker” and a “Confident Accountant”. The fast food workers presented are a mix of genders and skin colors, while the accountants are all white-appearing men (with facial hair, for some reason). The result is clearly biased and perpetuates stereotypes, with whole groups omitted. Where is the equity? Where are the older people? Where are the female accountants, the accountants of color, or simply the clean-shaven ones? These are some of the critical questions to consider when reviewing AI and LLM inputs and outputs.

Recognizing Bias in Your Prompts
Before submitting a prompt, examine your assumptions. Are you using language that stereotypes groups by race, gender, age, profession or other dimensions? Are you requesting outputs that might reinforce harmful generalizations? Awareness of your own potential biases is the first step in crafting more equitable prompts.
Biased: "Generate a professional communication style guide that avoids 'aggressive' or 'unprofessional' speech patterns."
Mitigated: "Generate a professional communication style guide that recognizes diverse communication styles as equally valid, while providing context-specific guidance that doesn't penalize cultural linguistic differences or label them as unprofessional."
This challenges white supremacy culture's elevation of white communication norms as the "professional" standard and validates diverse cultural expressions.
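If you are calling an LLM from code rather than typing into a chat window, this kind of mitigation can be attached to every request as a standing system message. The following is a minimal sketch assuming the OpenAI Python SDK; the guidance text, helper name, and model choice are our own illustration, and the same pattern works with any chat-style API.

from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

# Illustrative guidance text; adapt it to your own equity goals.
EQUITY_GUIDANCE = (
    "Recognize diverse communication styles as equally valid. Do not label "
    "cultural or linguistic differences as unprofessional, and avoid "
    "stereotyping by race, gender, age, or profession."
)

def ask_with_equity_guidance(user_prompt: str, model: str = "gpt-4o-mini") -> str:
    """Send a prompt with a standing system message that asks for equitable output."""
    response = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system", "content": EQUITY_GUIDANCE},
            {"role": "user", "content": user_prompt},
        ],
    )
    return response.choices[0].message.content

# Example: the mitigated style-guide request from above.
print(ask_with_equity_guidance(
    "Generate a professional communication style guide that recognizes "
    "diverse communication styles as equally valid."
))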

Evaluating AI Outputs Critically
Never accept AI-generated content at face value. Review outputs carefully for stereotypical representations, missing perspectives, or language that excludes certain groups. Ask yourself: Does this reflect diverse experiences? Are certain groups portrayed negatively or omitted entirely? Is it framed for the relevant context (US vs. North America vs. global)? Bias can present in subtle ways that require critical thinking:
Output: Our mental health services cater to those who are too unstable to function in regular society.
Bias: Stigmatizes mental health service users as "unstable" and separate from "regular society."
Output: Our homeless outreach program helps addicts get back on their feet.
Bias: Equates homelessness with addiction, ignoring diverse causes of homelessness.
Practical Corrections
To reduce bias in prompts, use inclusive language and explicitly request diverse perspectives. For example, instead of asking for "a doctor's perspective," specify "perspectives from doctors of various backgrounds and specialties."
When you identify bias in outputs, refine your prompt to request more balanced representation or directly instruct the AI to avoid stereotypes. You can also edit outputs yourself to ensure fair and inclusive content.
Ask the tool you are using to identify bias or prejudice present in the outputs it provides. The tools have gotten better at recognizing these issues and offering alternatives, so ask for suggestions on how to mitigate them.
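In a scripted workflow, that review step can be automated by sending the draft back to the model with a request to flag bias and propose more inclusive wording. The sketch below uses the same assumed OpenAI SDK setup as the earlier example; the review prompt wording is ours and should be tuned to your context.

from openai import OpenAI

client = OpenAI()  # same assumed setup as the earlier sketch

def review_for_bias(draft: str, model: str = "gpt-4o-mini") -> str:
    """Ask the model to flag stereotypes or exclusions in a draft and suggest fixes."""
    review_prompt = (
        "Review the following text for bias: stereotypes, stigmatizing language, "
        "missing perspectives, or groups portrayed negatively or omitted. "
        "List each issue you find and suggest a more inclusive rewording.\n\n"
        "Text to review:\n" + draft
    )
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": review_prompt}],
    )
    return response.choices[0].message.content

# Example: check one of the biased outputs quoted above.
print(review_for_bias(
    "Our homeless outreach program helps addicts get back on their feet."
))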
By staying vigilant and proactive, we can harness AI's potential while minimizing the harm of unchecked bias.
Learn More at the Online Facilitation Unconference
Join us at the Online Facilitation Unconference (OFU) for the track “Social Impact and Regenerative Futures”, including our session with practical advice and real-world examples to help you in Addressing Bias in AI: ofuexchange.net