Intro to AI Bias

Understanding AI Bias & Representation


The Problem of Visual Bias in AI

Generative AI is riddled with bias that reflects the systemic power imbalances in the world, and an AI-driven culture must demand accountability and ecological responsibility as well. Gooey.AI, in collaboration with the Goethe-Institut, is reimagining a future where generative AI embraces and reflects the rich diversity of human experiences.

By crafting culturally representative image datasets and fine-tuning AI models, we aim to confront biases head-on and develop tools that are culturally sensitive, technically innovative, and built through participatory design processes.

Why does this happen?

AI models learn from massive datasets scraped from the internet. These datasets overrepresent Western content while sidelining Indigenous knowledge, oral traditions, and non-Latin scripts. Research shows up to 38.6% of "facts" used by AI models contain bias (USC Viterbi, 2022). The result? AI systems that erase diverse experiences and flatten complex cultural traditions into consumable "styles."
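To make the imbalance concrete, here is a minimal sketch of a dataset audit. It assumes a hypothetical image-caption corpus stored as JSONL with a "language" field on each record (the file name and field are illustrative, not from any specific dataset); tallies like this on web-scraped corpora typically show English and other Western languages dominating.

```python
# Minimal sketch: tally the share of captions per language in a JSONL corpus.
# Assumes a hypothetical file "captions.jsonl" where each record carries a
# "language" tag; both are illustrative assumptions, not a real dataset schema.
import json
from collections import Counter


def language_distribution(path: str) -> dict[str, float]:
    """Return each language's share of captions in a JSONL caption file."""
    counts: Counter[str] = Counter()
    with open(path, encoding="utf-8") as f:
        for line in f:
            record = json.loads(line)
            counts[record.get("language", "unknown")] += 1
    total = sum(counts.values())
    return {lang: n / total for lang, n in counts.most_common()}


if __name__ == "__main__":
    for lang, share in language_distribution("captions.jsonl").items():
        print(f"{lang:>12}: {share:6.1%}")
```

Even a rough audit like this makes the skew visible before any model is trained, which is one reason dataset curation sits at the center of this project.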


Bias in AI training data

Key data on AI bias reveals that most models are Western-centric, influencing users to adapt to dominant cultural norms and often sidelining local languages and values. Access to advanced AI tools remains uneven, with significant portions of the global population excluded, which limits diversity and reinforces existing societal inequalities.

Power imbalances accentuate AI model bias

AI training data often reflects historical and intersectional biases, erasing diverse experiences. Tool design and governance show power imbalances and neglect consent, ownership, and community voices. Image-based models raise issues of cultural appropriation, misrepresentation, underrepresentation, and provenance.


How are we doing this?

Addressing bias in AI image models collaboratively

Gooey engaged communities directly, involving local artists and diverse stakeholders. Together with these stakeholders, we developed a collaborative manifesto grounded in ethical AI principles and designed a prototype tool, built on participatory design principles, that enables people to create culturally representative, fine-tuned AI models.

Global consortium of key stakeholders

A global consortium of cultural stakeholders created guidelines for inclusive AI, fair creator compensation, clear provenance, and ecological transparency. Workshops held online and in Seattle, New Delhi, Bengaluru, Mumbai, and Pune fostered dialogue, while an AI fine-tuning tool was co-developed through active public participation.

Community insights lead AI tool development

Community insights shaped our tools toward transparency and accountability. The Flux Image LoRA trainer surfaces both the financial and ecological costs of each image run, making environmental impact visible. Beyond tools, participants valued co-authoring the open manifesto as a clarifying act of collective authorship.
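The sketch below shows one way such a cost readout could be computed; it is illustrative only, not how the Flux trainer is actually implemented, and the GPU power draw, rental price, and grid carbon intensity are assumed placeholder values.

```python
# Illustrative sketch (not Gooey's actual implementation) of reporting the
# financial and ecological cost of a fine-tuning run from assumed parameters:
# GPU hours, card power draw, an hourly rental price, and grid carbon intensity.
from dataclasses import dataclass


@dataclass
class RunCost:
    usd: float       # estimated rental cost
    kwh: float       # estimated energy use
    kg_co2e: float   # estimated emissions


def estimate_run_cost(
    gpu_hours: float,
    gpu_watts: float = 700.0,        # assumed draw of one training GPU
    usd_per_gpu_hour: float = 2.50,  # assumed cloud rental rate
    kg_co2e_per_kwh: float = 0.4,    # assumed grid carbon intensity
) -> RunCost:
    kwh = gpu_hours * gpu_watts / 1000.0
    return RunCost(
        usd=gpu_hours * usd_per_gpu_hour,
        kwh=kwh,
        kg_co2e=kwh * kg_co2e_per_kwh,
    )


if __name__ == "__main__":
    cost = estimate_run_cost(gpu_hours=3.0)  # e.g. a short LoRA fine-tune
    print(f"${cost.usd:.2f}, {cost.kwh:.1f} kWh, {cost.kg_co2e:.2f} kg CO2e")
```

Showing a readout like this next to every run keeps the resource cost of image generation in view rather than hidden behind the interface.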


What You'll Learn Next

In the following modules, you'll discover how to:

  • Train custom AI models that respect your creative ownership

  • Protect your work through consent frameworks and licensing

  • Generate images that authentically represent diverse cultures

You can learn more about our Beyond Bias initiative here:
