Fair AI Supply Chains: Building Safe and Trusted AI Systems
Executive Summary
This panel discussion examines the hidden human labor underpinning AI systems, particularly in Global South data annotation and moderation roles. The Fairwork Project presents a certification framework using five universal principles (fair pay, fair conditions, fair contracts, fair management, fair representation) to evaluate and improve working conditions across 826 companies in 41 countries. Panelists emphasize that while standards and certification are valuable, they must complement—not substitute for—government regulation and collective action to address structural exploitation in planetary labor markets.
Key Takeaways
- AI is not immaterial—it is built on real, often exploited human labor, concentrated in the Global South, that remains largely invisible to users and policymakers. This labor must be centered in any serious discussion of responsible AI.
- Standards and certification are not silver bullets but strategic tools that work best when embedded within a broader ecosystem combining regulation, market mechanisms, judicial precedent, and advocacy—each with distinct leverage points and limitations.
- The structural problem is a planetary labor market where jobs are easily movable: worker organizing and local regulation are undermined by capital flight. Solutions must involve lead firms in the Global North, who have the power and leverage to enforce standards across their supply chains.
- Political will and customer demand are prerequisites: governments must make labor standards a political priority and mandate compliance. Companies will not voluntarily improve conditions without external pressure (regulation, market perception, or competitive advantage). Individual consumer choices matter but are insufficient alone.
- Measurable does not equal material: certification systems risk invisibilizing critical dimensions of worker well-being (dignity, agency, voice) that resist quantification. Standards must remain humble about what they can capture and must be paired with qualitative research, community engagement, and direct worker input.
Key Topics Covered
- Hidden labor in AI supply chains: Data annotation, content moderation, and task labeling as invisible, exploitative work
- Planetary labor markets: How global supply chains enable downward pressure on wages and working conditions across borders
- The Fairwork certification scheme: Methodology, framework, and measurable outcomes
- Limitations of certification: What standards can and cannot capture
- Policy and regulatory gaps: Why governments prioritize job quantity over quality
- Corporate resistance and gaming: How platforms circumvent standards and blame technology
- Leverage mechanisms: How standards, regulation, certification, and advocacy must work together
- Regional context (India): State incentives favoring investment and job creation over worker protections
- Pathways to institutional change: Regulatory mandates, judicial precedent, market demand, and research-policy linkages
Key Points & Insights
- The scale and nature of exploitation: 826+ companies assessed across 41 countries; workers in data labeling perform repetitive tasks under extreme time pressure (15-second task intervals) with algorithmic monitoring and dismissal of underperformers, while companies face negligible incentive to improve conditions.
- Replaceable labor drives standards downward: Because data work is deskilled, commodified, and highly surveilled, workers are easily replaceable. Companies have little incentive to improve conditions; governments in Global South countries face "bad jobs or no jobs" dilemmas, preventing local regulation.
- Supply chain leverage is asymmetrical: Lead firms (predominantly in the Global North) developing AI have the power to enforce labor standards upstream through procurement and supply agreements. Data work providers themselves have minimal bargaining power and cannot unilaterally improve conditions.
- Certification creates measurable accountability and benchmarking: Fairwork's scorecards (0–10 scale) have catalyzed 400+ documented policy changes benefiting 16 million workers by making performance visible, highlighting best practices, and giving companies clear targets for improvement.
- Standards measure only what is quantifiable: Certification systems capture binary/categorical data (wages, safety, contracts) but invisibilize dimensions such as dignity, agency, worker voice, and subjective well-being—critical elements of decent work that resist quantification.
- Platforms strategically blame technology and exploit subcontracting: Companies invoke "AI black boxes" to deny responsibility for algorithmic decisions; they disown subcontractors entirely, game voluntary standards (e.g., offering benefits as performance rewards rather than baseline rights), and erode trust.
- Political economy of the state matters: The Indian state prioritizes attracting venture capital and job creation over job quality. Platforms remain unprofitable; venture capital (often foreign) substitutes for sustainable business models, incentivizing state tolerance of poor labor standards.
- Division of labor across mechanisms is essential: No single actor—government, certification body, union, or market—can solve this alone. Effective change requires coordinated action: governments mandate due diligence (e.g., EU Corporate Sustainability Due Diligence Directive), standards guide implementation, transparency enables accountability, and civil society mobilizes.
- Voluntary adoption requires external pressure: Platforms will not accept standards unless mandated or unless standards affect broader public/community perception. Market-driven adoption (e.g., living wage certification) works only when tied to visible consumer or competitive advantage.
- Research-to-policy feedback loops are underdeveloped: Contemporary empirical evidence on platform work conditions is insufficient; judicial pathways and tighter links between consumer/production conditions and policy action remain underexplored in most jurisdictions.
Notable Quotes or Statements
- Mark Graham: "Most people who use AI don't really have a sense of the enormous amount of human labor that went into making that thing... What we're here to talk about is how we might move towards a fairer, more equitable, more just future of work."
- Mark Graham: "These jobs are very simply not good jobs... The structure of the global supply chains that all of this work is traded in, it pushes standards down. It creates a huge number of harms for workers."
- Mark Graham (on regulatory gaps): "If all of us are serious about responsible AI, then we can't forget about labor standards. Labor standards can't just be an afterthought. They need to be built into the very system itself."
- Siru Prakash: "Certification systems only measure what can be measured... what cannot be quantified, what cannot be seen... are things that do not necessarily make it to certification systems."
- Balaji Parthasarathy: "Platforms will not accept standards unless they're mandated. There's absolutely no two ways about that... They're operating a race to the bottom."
- Balaji Parthasarathy: "A lot of the responsibility lies with people like us in the room as customers. What kind of demands are we making on platforms?... The buck stops with us ultimately."
- Patrick Dugan: "Standards can help shift the whole debate. It's sort of set a standard, a target that is certified. Companies can work to it... It changes the whole debate about what is possible."
Speakers & Organizations Mentioned
Primary Speakers:
- Mark Graham – Professor of Internet Geography, University of Oxford; Director, Fairwork Project
- Patrick Dugan – Postdoctoral Researcher, Berlin Social Science Center; leads Fairwork certification scheme
- Siru Prakash – Founder, AP Institute (technology, equity, governance in India)
- Balaji Parthasarathy – Professor, IIIT Bangalore; Co-founder, Center for Information Technology and Public Policy; Principal Investigator, Fairwork India
Organizations:
- Fairwork Project – Joint initiative of Oxford Internet Institute (University of Oxford) and Wissenschaftszentrum Berlin für Sozialforschung (Berlin Social Science Center)
- AP Institute – Technology equity and governance research
- IIIT Bangalore (International Institute of Information Technology, Bangalore)
- Center for Information Technology and Public Policy
- Fairwork India – Regional implementation of Fairwork framework
Funding & Government Bodies:
- Germany's Federal Ministry for Economic Cooperation and Development (BMZ)
- GIZ (Deutsche Gesellschaft für Internationale Zusammenarbeit)
- European Union (Corporate Sustainability Due Diligence Directive cited as policy model)
- Karnataka and Rajasthan state governments (India)
Companies & Platforms Referenced:
- Large online shopping platforms (unnamed)
- Sama, Humans in the Loop, Appen (data work providers)
- Generic "platforms" in India (delivery, food, ride-sharing sectors)
Technical Concepts & Resources
Fairwork Framework & Methodology:
- Five Universal Principles: Fair Pay, Fair Conditions, Fair Contracts, Fair Management, Fair Representation
- Two-threshold operationalization: Each principle is operationalized through key indicators measured against lower and higher thresholds
- Scorecard system: 0–10 scale quantifying company performance on decent work principles
- Evaluation scope: 826+ companies assessed across 41 countries over 8 years; 400+ documented policy changes; 16+ million workers impacted
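The two-threshold scorecard lends itself to a short illustration. The sketch below is a hypothetical reading, not Fairwork's actual methodology or code: it assumes each of the five principles is worth up to 2 points (1 for meeting the lower threshold, 1 more for the higher threshold, conditional on the lower), which sums to the 0–10 scale cited above; all names are invented for the example.

```python
# Illustrative sketch of a two-threshold, five-principle scorecard.
# ASSUMPTION (not confirmed by the source): each principle contributes up to
# 2 points, and the higher-threshold point is only awarded if the lower
# threshold is also met, so totals range over the 0-10 scale.

PRINCIPLES = [
    "fair_pay",
    "fair_conditions",
    "fair_contracts",
    "fair_management",
    "fair_representation",
]

def score_company(evidence: dict[str, tuple[bool, bool]]) -> int:
    """Map per-principle (meets_lower, meets_higher) evidence to a 0-10 score."""
    total = 0
    for principle in PRINCIPLES:
        lower, higher = evidence.get(principle, (False, False))
        if lower:
            total += 1
            if higher:  # higher-threshold point only on top of the lower one
                total += 1
    return total

example = {
    "fair_pay": (True, False),
    "fair_conditions": (True, True),
    "fair_contracts": (False, True),   # higher without lower earns no points
    "fair_management": (True, False),
    "fair_representation": (False, False),
}
print(score_company(example))  # prints 4
```

Making the higher point conditional on the lower prevents a company from scoring well on aspirational measures while failing basic ones; whether Fairwork applies exactly this rule is an assumption here.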
Due Diligence Framework:
- OECD-aligned due diligence cycle: Risk assessment → Risk mitigation planning → Implementation monitoring → Documentation/reporting
- Fairwork's role in mapping supply chains, benchmarking suppliers, identifying risks, recommending mitigation, and monitoring progress
Related Regulatory/Market Models Cited:
- EU Corporate Sustainability Due Diligence Directive – Mandates companies to exercise supply chain due diligence; sets direction of travel for global standards
- UK Living Wage Foundation – Certification scheme that shifted debate on minimum wages by demonstrating viability of higher standards; created market differentiation without legal mandate
Limitations & Gaps:
- Quantifiable vs. non-quantifiable labor dimensions (dignity, agency, voice, subjective well-being remain largely invisible in certification systems)
- Subcontracting and platform disownership strategies
- Algorithmic decision-making framed as "blackbox" to evade accountability
- Gaming of voluntary standards (e.g., performance-conditional benefits)
Research Domains Mentioned (without specific citations):
- Data annotation labor conditions
- Impact sourcing research
- Platform labor and digital economy
- Political economy of technological change
- Global supply chain governance
Methodological Notes
- Presentation style: Mixed academic (data-driven) and narrative (on-the-ground anecdotes, e.g., the chicken squawk alarm story) to illustrate systemic problems
- Geographic scope: Emphasis on Global South (Southeast Asia, Kenya, Ghana, India) as primary site of labor exploitation; Global North as location of lead firms and policy influence
- Timeframe: 8 years of Fairwork assessments; ~20 years of Mark Graham's research on outsourcing and digital labor
- Evidence base: Company scorecards, policy change documentation, qualitative field research, interviews with workers and policy actors
