Question and Answer

How are generative AI models biased, and how can I avoid biased results?

Generative AI models like ChatGPT will often output biased results. For example, if you ask ChatGPT to write a short story about a boy and a girl choosing their careers, the story will likely portray the boy choosing engineering, while the girl chooses nursing. If you ask an image creation model for a depiction of a doctor, it will likely portray the doctor as a man. Why does this happen?

These models are typically trained on large amounts of data from the Internet, and that data contains more examples from some countries, languages, and cultures than others, so it does not represent the entire world. The model learns what to output from that skewed data.

Many of the developers of these large AI models have implemented guardrails to address this problem. But those developers may not have anticipated every type of bias, since, like all of us, they see the world from their own viewpoint.

Some models have behind-the-scenes instructions telling the model to depict different ethnic groups and genders with equal probability when generating images of people. Other models estimate the skin tone distribution of the user's country and randomly apply a tone from that distribution to any human they generate. But neither approach solves the problem in every situation.
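
As a rough sketch of that second approach, the short Python example below randomly samples a skin-tone descriptor and appends it to an image prompt before it is sent to the model. The descriptor list, the weights, and the add_random_skin_tone function are made-up placeholders for illustration, not any vendor's actual implementation; a real system would estimate the weights from demographic data for the user's country.

```python
import random

# Hypothetical descriptors and weights, for illustration only.
# A real system would estimate the weights per country rather than hard-code them.
SKIN_TONE_DESCRIPTORS = ["light-skinned", "medium-skinned", "dark-skinned"]
SKIN_TONE_WEIGHTS = [0.3, 0.4, 0.3]

def add_random_skin_tone(prompt: str) -> str:
    """Append a randomly sampled skin-tone descriptor to an image prompt."""
    descriptor = random.choices(SKIN_TONE_DESCRIPTORS, weights=SKIN_TONE_WEIGHTS, k=1)[0]
    return f"{prompt}, {descriptor} person"

print(add_random_skin_tone("a portrait of a doctor"))
# e.g. "a portrait of a doctor, medium-skinned person"
```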

So what can you do? Keep an eye out for biased outputs and modify your prompts to correct for them. For example, you could say, “Write a story about a boy and a girl choosing their careers. Choose careers that avoid gender stereotypes. For example, the boy should not choose engineering or computer science and the girl should not choose teaching or nursing.”
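
If you reach a model through its API rather than a chat window, the same correction can simply be built into the prompt you send. The sketch below assumes the OpenAI Python SDK and the model name "gpt-4o-mini"; both are illustrative assumptions, so substitute whatever library and model you actually use.

```python
from openai import OpenAI

client = OpenAI()  # reads the API key from the OPENAI_API_KEY environment variable

# The same prompt as above, with the bias-correcting instruction built in.
prompt = (
    "Write a story about a boy and a girl choosing their careers. "
    "Choose careers that avoid gender stereotypes. For example, the boy "
    "should not choose engineering or computer science and the girl "
    "should not choose teaching or nursing."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model name; use one you have access to
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```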
