Women, Work, and the AI Future

Executive Summary

This panel discussion examines the systemic invisibility of women, particularly in the Global South, within AI's value chain and data infrastructure, arguing that their exclusion from design and governance perpetuates technological bias and economic precarity. The speakers advocate repositioning women from invisible workers to recognized knowledge holders and decision-makers, and call for concrete frameworks and policy interventions that redistribute power alongside representation.

Key Takeaways

  1. Women's invisibility in AI is structural: It results from industry choices (automation aesthetics, profit maximization) and requires intentional policy, design, and governance changes—not just inclusion initiatives.

  2. Knowledge ≠ power: Women hold critical contextual knowledge (language subtleties, community norms, lived experiences). Extracting this knowledge into datasets without redistributing control over those systems perpetuates exploitation and risks weaponizing that knowledge against them.

  3. Multilingual, community-defined frameworks work: The ALIGN Benchmark demonstrates that involving 20,000 women in defining gender bias across six Indian languages produces better, more culturally valid AI systems than English-centric approaches, while still capturing harms directed at men.

  4. Policy precedes practice: Labor protections, algorithmic transparency requirements, and data governance laws (like Kenya's efforts) must precede or accompany technical solutions. Without legal scaffolding, representation becomes performance.

  5. Center communities, not charity: Shift from "how do we include women in AI?" to "how do we let women define what AI should be for them?" This reframes the conversation from access to agency and leadership.

Key Topics Covered

  • Invisible labor in AI: Women's concentration in data annotation, labeling, and platform gig work
  • Data bias & representation: How gendered language, cultural knowledge, and contextual awareness are missing from training datasets
  • Platform precarity: Job insecurity, algorithmic management, care work burden, and wage disparities for women digital workers
  • Governance blind spots: How representation without power redistribution becomes tokenism; risks of datafication of marginalized communities
  • Intersectionality: Caste, class, and geographic privilege create hierarchies within "women" as a category
  • Policy & institutional solutions: Labor protections, collective organizing, rural livelihoods models, and multilingual frameworks
  • Community-centered AI design: Bottom-up frameworks built on women's definitions and lived experiences
  • The ALIGN Benchmark: A multilingual gender bias detection framework co-created with 20,000 women across India

Key Points & Insights

  1. Structural invisibility is intentional: The AI industry's aesthetic of automation and intelligence obscures backend labor—particularly women's work in the Global South—as a strategic feature, not a bug. Companies hide their "factories" just as traditional industries do.

  2. The data labeling paradox: Women perform critical knowledge work (annotation, tagging, evaluation) that shapes model outputs, yet are concentrated in low-wage, invisible segments of the value chain. A Kenyan woman may route her labeling work through a profile carrying a false identity (a Filipino or Indian man's name) to access higher-wage work, sacrificing authenticity and visibility.

  3. Representation without power is dangerous: Adding women to datasets without redistributing control over those datasets or the technologies built from them can enable harm (e.g., facial recognition improving surveillance in contexts with state oppression; Indigenous recipes being monetized by corporations while communities remain impoverished).

  4. Language and bias are inseparable: Gender bias manifests differently across languages. English expresses bias through occupation/action stereotypes; Hindi, Marathi, Malayalam, and Bhojpuri speakers experience and articulate bias through personality attributes and emotional framings—a critical distinction AI systems miss.

  5. Care work creates algorithmic invisibility: Women's domestic and caregiving responsibilities fragment their labor participation, making algorithmic management systems (designed on male-centered assumptions) misclassify them as "not working." Bus tickets and daily commutes become proxies for creditworthiness—mechanisms that exclude women who work from home.

  6. Minimum viable intelligence vs. superhuman AI: Communities define their AI needs locally and pragmatically (e.g., menopause symptom tracking, childcare support, reducing physical labor)—not AGI or large-scale automation. Problem-solving for specific women's contexts matters more than comprehensive datasets.

  7. Intersectionality is non-negotiable: Women from Dalit, Adivasi, and marginalized caste communities experience AI (as workers and consumers) fundamentally differently. Treating "women" as homogeneous reproduces dominant-community narratives and perpetuates caste/class hierarchies through data and policy.

  8. Collective power structures matter: Community-based organizations (self-help groups, federations, unions) provide bargaining power and social protection that platform work strips away. Rural livelihoods models demonstrate how collectivization can shift women from beneficiaries to economic actors and decision-makers.

  9. Stop infantilizing women: Women farmers explicitly rejected "helpful" AI apps built by developers without consultation, demanding labor-saving automation (robots, machines) rather than more information. Designers must listen to what women actually want, not assume their needs.

  10. Measurement gaps enable drift: Without shared language, metrics, and accountability frameworks for tracking women's power (not just participation), progress claims remain invisible and unverifiable. Fast-moving tech spaces require intentional, visible tracking systems.


Notable Quotes or Statements

"Women are present in the AI economy, but sadly they are concentrated in some of its least visible segments. We have to move beyond seeing women as only workers in the AI value chain. They're a lot more than that. They are fundamentally knowledge holders." — Opening speaker (Safia Safdar, implied from context)

"At best, we're going to risk building systems that are technically sophisticated but socially really shallow. And at worst, we're going to create technologies that actively do not work or speak to women's experiences."

"Representation has to come alongside a redistribution of power. Representation in itself does not redistribute power. And in fact, when representation comes without that redistribution of power, representation can easily become tokenism." — Urvashi Anijah (Digital Futures Lab)

"A black girl back in my village might want to label these images correctly... but has to make money to feed her child. She cannot own an account by herself. If you put an African girl's name in the profile, you get less work. So she has to pass it on to a Filipino boy or Indian boy to get great work." — Shiko Gao (Kala)

"Why does everyone think we need information and more information and still more information? How many apps are we supposed to have? Why doesn't anyone build robots and machines so we don't have to do the heavy physical job?" — Woman farmer participant in Bihar (paraphrased by Kalika Bhatnagar)

"Please stop infantilizing women... stop assuming what they need and listen to them." — Kalika Bhatnagar (Microsoft Research)

"When you're talking about structural barriers, you have to think about how representation actually means giving women power—because if you just add women to datasets without changing who controls those datasets, you might actually be doing more damage." — Synthesized from Urvashi Anijah's argument


Speakers & Organizations Mentioned

| Name | Role / Organization |
| --- | --- |
| Safia Safdar | Session moderator; appears to be from Karya (research/AI org) |
| Sachi Bhaala | Deputy Director, Gender Equity, Gates Foundation |
| Shikoh Gitau | Founder & CEO, Qhala (Africa digital transformation, employment) |
| Kalika Bali | Senior Principal Researcher, Microsoft Research; multilingual & inclusive AI |
| Urvashi Aneja | Founder & Executive Director, Digital Futures Lab; AI governance & digital labor |
| Aranya Sahay | Documentary filmmaker ("Humans in the Loop") |
| Manu | Co-founder, Safia's technology partner (mentioned for tech support) |
| Karishma | Author of "The Human Touch" article on data labeling (acknowledged) |
| Lakshmi | Director, AI Kiran (contributed to film research) |

Institutions & Initiatives

  • Gates Foundation — Gender equity & women's livelihoods work
  • Microsoft Research — Multilingual AI and inclusive AI research
  • Digital Futures Lab — AI governance, digital labor policy research
  • Qhala — Digital economy company (Kenya/Africa focus)
  • National Rural Livelihoods Mission (NRLM) — India's rural women's collective organizing program
  • Africa AI Village — Hosts solutions addressing women-centered AI
  • New Zealand — Cited as example of protecting Māori cultural knowledge in AI contexts
  • Kenya — Front-runner in digital economy policy (but lacks digital worker labor protections)
  • India (Bihar, multiple states) — Test sites for women-centered AI development

Technical Concepts & Resources

Frameworks & Tools

  • ALIGN Benchmark — 13-parameter multilingual gender bias detection framework co-created with 20,000 women across 6 Indian languages (Hindi, Marathi, Malayalam, Bhojpuri, Kannada, Tamil); a schematic sketch follows this list

    • Parameters: bias presence/direction, linguistic markers/patterns, target identification (implicit/explicit), harm function
    • Domains tracked: domestic, occupational, personality, emotional, representational
    • Harm types: psychological, social, representational
  • Minimum Viable Intelligence — Community-centered approach defining AI by local problem-solving needs rather than superhuman capability or maximum data scale
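
To make the ALIGN parameter list above concrete, the sketch below shows one way a single annotation record might be structured in code. This is a hypothetical illustration based only on the parameters, domains, and harm types listed in this summary; every class and field name (BiasAnnotation, Domain, HarmType, and so on) is an assumption, not the benchmark's published schema.

```python
# Hypothetical sketch of an ALIGN-style annotation record. All names here
# (BiasAnnotation, Domain, HarmType, field names) are illustrative guesses
# based on the parameter list above, not the benchmark's actual schema.
from dataclasses import dataclass, field
from enum import Enum
from typing import Optional

class Domain(Enum):
    DOMESTIC = "domestic"
    OCCUPATIONAL = "occupational"
    PERSONALITY = "personality"
    EMOTIONAL = "emotional"
    REPRESENTATIONAL = "representational"

class HarmType(Enum):
    PSYCHOLOGICAL = "psychological"
    SOCIAL = "social"
    REPRESENTATIONAL = "representational"

@dataclass
class BiasAnnotation:
    text: str                       # sentence being judged, in its source language
    language: str                   # e.g. "hi" (Hindi), "bho" (Bhojpuri)
    bias_present: bool              # parameter: bias presence
    bias_direction: Optional[str]   # parameter: bias direction (who it disfavors)
    target_explicit: bool           # parameter: implicit vs. explicit target identification
    linguistic_markers: list[str] = field(default_factory=list)  # flagged words/patterns
    domains: list[Domain] = field(default_factory=list)          # domestic/occupational/...
    harm_types: list[HarmType] = field(default_factory=list)     # psychological/social/...
    annotator_id: str = ""          # one of the 20,000 community annotators

# Example record: a Hindi sentence annotated as a personality-attribute
# stereotype, the kind of bias the panel says English occupation-centric
# taxonomies miss.
record = BiasAnnotation(
    text="लड़कियाँ भावुक होती हैं",  # "girls are (inherently) emotional"
    language="hi",
    bias_present=True,
    bias_direction="against women",
    target_explicit=True,
    linguistic_markers=["भावुक"],
    domains=[Domain.PERSONALITY, Domain.EMOTIONAL],
    harm_types=[HarmType.REPRESENTATIONAL],
    annotator_id="annotator-0001",
)
```

A real benchmark would aggregate many such records per item across annotators; the sketch only shows how the listed parameters map onto a structured label rather than a single bias/no-bias bit.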

Key Papers & Works Referenced

  • "Ghost Work" by Mary L. Gray — Foundational text on invisible human labor in AI pipelines (highly recommended by panelists)
  • "The Human Touch" (article) by Karishma — Data labeling and human labor visibility
  • "Humans in the Loop" (documentary film) by Arana Sah — Visual documentation of data annotation labor, gender dynamics, and model failure from women's perspectives

Languages & Regional Focus

  • Hindi, Malayalam, Bhojpuri, Marathi, Kannada, Tamil (6 Indian languages in ALIGN Benchmark)
  • English-centric bias research acknowledged as insufficient for Global South contexts (see the sketch after this list)
  • Gendered language structures vary significantly (e.g., English largely lacks grammatical gender; many Indian languages mark it pervasively)
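
As a minimal illustration of why English taxonomies of bias do not transfer, the hypothetical sketch below shows a naive, English-centric keyword check passing a Hindi sentence that expresses exactly the personality-attribute bias described above. The keyword list and function are invented for this example; no panelist's actual tooling works this way.

```python
# Invented example: an English-centric, occupation-keyword bias check
# fails on bias expressed through personality attributes in Hindi.

ENGLISH_OCCUPATION_STEREOTYPE_TERMS = {"nurse", "secretary", "housewife", "boss"}

def english_centric_bias_flag(text: str) -> bool:
    """Flags text only if it contains an English occupation-stereotype keyword."""
    words = {word.strip(".,!?").lower() for word in text.split()}
    return bool(words & ENGLISH_OCCUPATION_STEREOTYPE_TERMS)

print(english_centric_bias_flag("She is just a secretary."))
# True: the occupation stereotype is visible to an English taxonomy.

print(english_centric_bias_flag("लड़कियाँ भावुक होती हैं"))  # "girls are emotional"
# False: a personality-attribute stereotype in Hindi is invisible to it.
```

The community-defined alternative is the ALIGN-style record sketched earlier, where bias is defined by annotators' own markers and domains rather than a fixed English keyword list.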

Methodologies

  • Ethnographic engagement — Participatory design with women in rural communities, informal settlements, platform gig ecosystems
  • Community co-creation — 20,000 women as collaborators and definers (not subjects) in research
  • Emic vs. etic framing — Inside (community-generated) vs. outside (researcher-imposed) knowledge frameworks; emphasis on emic approaches
  • Platform livelihoods tracking — Longitudinal study of women's digital work across 2020–2024, examining wages, working conditions, profile misrepresentation

Policy & Governance Concepts

  • Algorithmic management — How algorithms make employment decisions; documented biases in creditworthiness proxies (see the sketch after this list)
  • Data governance — Who owns, controls, and monetizes data; extraction vs. ownership models
  • Digital labor — Platform work, gig work, data annotation; informal and formal protections
  • Epistemic injustice — Systems that dismiss or delegitimize certain groups' knowledge (particularly women's contextual, community-grounded knowledge)
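
To make the algorithmic-management point concrete (see also Key Point 5 above), here is a hypothetical sketch of a commute-based activity proxy for creditworthiness: because transit records stand in for work, a home-based worker with identical earnings is scored as not working. All field names and thresholds are invented for illustration.

```python
# Hypothetical sketch of a commute-based "activity" proxy for creditworthiness.
# All names and thresholds are invented; the point is that transit records
# stand in for work, so home-based workers are misclassified as inactive.
from dataclasses import dataclass

@dataclass
class WorkerMonth:
    earnings: float        # actual income earned this month
    bus_tickets: int       # transit records visible to the scorer
    platform_logins: int   # days with app activity

def commute_proxy_score(m: WorkerMonth) -> bool:
    """Naive proxy: 'working' means commuting regularly."""
    return m.bus_tickets >= 15  # roughly a daily commute

factory_worker = WorkerMonth(earnings=9000.0, bus_tickets=24, platform_logins=20)
home_based_worker = WorkerMonth(earnings=9000.0, bus_tickets=0, platform_logins=20)

print(commute_proxy_score(factory_worker))     # True  -> deemed creditworthy
print(commute_proxy_score(home_based_worker))  # False -> same earnings, "not working"
```

The exclusion is invisible unless the proxy itself can be inspected, which is what the transparency mandates discussed below are meant to enable.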

Relevant Context & Actionable Implications

For Technologists & Researchers

  • Involve affected communities early in problem definition, not as post-hoc audits
  • Recognize that women hold infrastructure-level knowledge (language subtleties, community norms) that code and data scraping cannot capture
  • Design multilingual frameworks; don't assume English taxonomies of bias translate
  • Test systems with diverse users; listen when they reject your solutions

For Policymakers & Governance Bodies

  • Establish labor protections for platform workers (currently absent in most Global South contexts)
  • Mandate transparency in algorithmic decision-making, especially for creditworthiness and employment algorithms
  • Create data governance frameworks that ensure Indigenous and marginalized communities control their own knowledge
  • Measure progress on women's power, agency, and decision-making—not just participation numbers

For Funders & Institutions

  • Invest in community-based models (self-help groups, federations, unions) that provide collective bargaining power
  • Support long-term, participatory research (not extractive studies)
  • Fund local solutions defined by communities ("minimum viable intelligence"), not one-size-fits-all tools
  • Track outcome metrics aligned with women's own definitions of success

End of Summary