Ethics Series Post #2 — Gender Bias and AI: What Nonprofit Leaders Need to Watch For
I have been lucky.
One of my first bosses was a City Commissioner, and later the Executive Director of a major nonprofit. She was brilliant, capable, and effective.
She was also a mother who did not apologize for it.
Long before I was a mother myself, I watched her set aside work priorities to attend her daughter's dance performances. Leave early for a school conference. Take an afternoon for a doctor's appointment. And she was unapologetic about all of it.
She told me point-blank — family comes first. Work comes second.
At the time I was young enough that I did not fully understand what she was giving me. Not just permission to one day make the same choices. But a model. A way of moving through a career and a life that did not require you to pretend one of them did not exist.
She was no slouch. Nobody who watched her work would have said so. She simply refused to accept the premise that her value as a professional was diminished by her commitment to her family. And she quietly but confidently passed that refusal on to everyone around her.
I never forgot it. In the years that followed, I was lucky to work mostly for and with women. That experience shaped everything about how I do this work.
It taught me to view the world through a woman-first lens. To ask whose needs are centered in the programs we design. To notice when women — particularly women of color, low-income women, immigrant and refugee women — are invisible in the data, the planning, and the decision-making. To build programs that start with their lives, their barriers, and their strengths.
It is exactly that lens that makes me pay very close attention to gender bias in technology. And especially in AI.
Because here is what most of us working in the nonprofit sector know from experience — the systems that claim to be neutral rarely are. And the people most harmed by that false neutrality are almost always the people our organizations exist to serve.
What Gender Bias in AI Looks Like
Gender bias in AI is not always obvious. It sneaks in — through the data AI learns from, through the assumptions baked into the models, through the language it defaults to.
Here are some of the ways it shows up that nonprofit leaders need to know about.
In the language AI generates. AI systems trained on historical text absorb historical assumptions. Ask AI to write a job description for a nurse and it may default to "she." Ask it to write one for an engineer and it may default to "he." These are not random errors. They reflect patterns in the data AI learned from — patterns that encode decades of gender stereotyping into every piece of content it produces.
In hiring and recruitment tools. As Reuters reported in 2018, Amazon famously scrapped an AI hiring tool after discovering it was systematically downgrading resumes from women, penalizing resumes that so much as mentioned the word "women's." The tool had been trained on ten years of hiring data — data that reflected a male-dominated industry. It learned that male candidates were preferred and acted accordingly. This is a data problem, and it happens quietly if your organization never checks for it.
In program design and service delivery. When nonprofits use AI to analyze data about the communities they serve — to identify who needs services, who is being reached, who is falling through the cracks — gender bias in those models can make women less visible, less counted, and less served. Particularly women whose lives do not fit the dominant data patterns. Women who are unhoused. Women in the criminal justice system. Women with disabilities. Women whose primary language is not English. If a group of women is underserved in your community, it is likely they will also be underrepresented in your data.
In communications and fundraising. AI writing tools can subtly perpetuate gendered assumptions in donor appeals, program descriptions, and impact stories. Language that frames women as passive recipients of services rather than active agents of change. Stories that center trauma rather than strength. Appeals that reinforce stereotypes rather than challenge them.
Why Nonprofit Leaders Are Uniquely Positioned to Address This
The nonprofit sector — and particularly the organizations in our network — serves more women than almost any other sector. Housing, domestic violence, workforce development, childcare, healthcare, education — the people walking through your doors are disproportionately women. Often low-income women. Often women of color. Often women whose lives have been shaped by systems that were never designed with them in mind.
That means when AI gets gender wrong in your organization, the harm lands on the people you have committed to protect.
But it also means you are uniquely positioned to catch it. You know your community and you know when something does not feel right. You know when the data is telling a story that does not match what you see every day.
Practical Steps for Watching for Gender Bias in Your AI Use
You do not need to be a data scientist to catch gender bias in AI output. You need to be paying attention. Here is where to start.
Read AI output with a gender lens. Every time AI produces content for your organization, ask yourself — whose experience is centered here? Who is visible and who is invisible? What assumptions are baked into this language? It takes thirty seconds and catches a lot.
Check your job descriptions. If you run your volunteer and staff job descriptions through AI, read them carefully for gendered language and assumptions. Ask AI explicitly: "Does this job description contain any gendered language or assumptions? Rewrite it to be fully gender neutral."
Watch your impact stories. When AI helps you draft client impact stories, pay close attention to how women are portrayed. Are they active agents of their own change? Or passive recipients of your organization's help? The difference matters enormously — to your clients, to your donors, and to the broader narrative about the communities you serve.
Ask AI directly about bias. One of the most underused techniques is simply asking AI to check its own work. "Does this content reflect any gender bias? Rewrite it with a gender equity lens." AI will not always catch everything, but asking the question consistently builds a habit of critical review that makes a real difference. (If you want to make this check routine rather than occasional, see the sketch after these steps.)
Diversify your prompts. When AI defaults to gendered language in its output, push back. Ask for multiple versions. Ask for gender-neutral language. Ask it to center the experience of women specifically. You are the author. AI is the tool. You get to decide whose story gets told.
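For teams reviewing AI-generated content at volume, the bias check described above can also be scripted. What follows is a minimal sketch, assuming the OpenAI Python SDK and an API key in your environment; the model name, prompt wording, and helper function are illustrative assumptions of mine, not a prescribed method, and the same question works just as well pasted into whatever AI tool your organization already uses.

```python
# Minimal sketch of an automated gender-bias check.
# Assumes: the OpenAI Python SDK (pip install openai) and an
# OPENAI_API_KEY environment variable. The model name and prompt
# wording below are illustrative choices, not requirements.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

BIAS_CHECK_PROMPT = (
    "Does the following content reflect any gender bias? "
    "List any gendered language or assumptions you find, "
    "then rewrite the content with a gender equity lens.\n\n"
)

def check_for_gender_bias(content: str) -> str:
    """Ask the model to flag gendered language and suggest a rewrite."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; use whatever your organization has access to
        messages=[{"role": "user", "content": BIAS_CHECK_PROMPT + content}],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    draft = "We are seeking a nurse. She will support families in crisis."
    print(check_for_gender_bias(draft))
```

The tool matters less than the habit. The value comes from running the same question against every piece of content, not from any particular model.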
A Personal Note
I have been lucky to work in a women-centered sphere. That helps me notice when something is off — like women’s value being measured by how seamlessly they disappear into their workplace.
AI systems built on historical data carry historical assumptions about women, and those assumptions can minimize or even erase women's presence.
Our job — as nonprofit leaders who have been shaped by women like my boss — is to make sure those assumptions do not sneak into the work we do on behalf of the communities we serve.
Pay attention. Push back. And never stop asking whose experience is centered.
This is Post #2 in the Nonprofit AI Studio Ethics Series. Next up — racial bias in AI and what it means for the communities nonprofits serve. Follow us on LinkedIn and Instagram (@nonprofitAIstudio) so you never miss a post.