AI for Good: Technology That Empowers People
Executive Summary
This session focused on edge AI deployment in underconnected regions, emphasizing how moving computation closer to data sources enables faster, more private, and more cost-effective AI solutions. Multiple speakers demonstrated that edge AI is not just a technical optimization but a critical tool for achieving inclusive, equitable AI development in the Global South, where cloud infrastructure and reliable connectivity are limited.
Key Takeaways
- Edge AI is not optional for the Global South—it is foundational. Limited cloud infrastructure, connectivity gaps, and latency-sensitive applications make edge computation the pathway to inclusive AI development.
- Start with the problem, not the model. Successful deployments begin by defining the specific task (crop disease detection, traffic prediction, patient monitoring), then designing or distilling appropriately sized models—not the reverse.
- Privacy and efficiency are co-benefits of edge architecture. Federated learning and on-device inference simultaneously reduce privacy risks, lower bandwidth costs, decrease latency, and enable real-time personalization.
- Standardization across regions drives adoption at scale. Without common standards for MEC, RAN intelligence, quality of experience, and data formats, edge AI solutions remain siloed; ITU, 3GPP, and regional SDOs are actively closing this gap.
- Open-source, sandbox-based validation before deployment reduces risk. Testing AI models in the AI for Good sandbox with real datasets before full standardization and deployment builds confidence and enables rapid iteration across geographies.
Key Topics Covered
- AI for Good framework and its three pillars: Solutions, Skills, and Standards
- Edge AI architecture and its role in 5G/6G networks
- Haptics and tactile feedback systems requiring sub-10ms latency
- Federated learning as a privacy-preserving alternative to centralized data collection
- Real-world XR and medical AI applications enabled by 5G/private networks
- Model optimization techniques (quantization, pruning, distillation) for resource-constrained devices
- Global AI governance dialogue and standards development
- Use cases across multiple geographies: agriculture (Portugal, Sri Lanka), healthcare, emergency response, traffic management
- Hardware accessibility and deployment challenges in underconnected regions
Key Points & Insights
- Edge AI is essential for latency-critical applications: Tasks involving haptics, autonomous vehicles, and real-time medical response require sub-10ms latency, which can only be achieved by processing at the edge rather than in distant cloud data centers.
- Privacy-by-architecture through federated learning: Rather than centralizing sensitive data (traffic patterns, health records, location history), federated learning trains models locally and shares only aggregated insights, preserving individual privacy while enabling global intelligence.
- Task-driven model design, not model-first deployment: Successful edge AI requires starting with the specific problem to solve, then backward-engineering an appropriately sized model—not forcing large models onto constrained devices. Example: an agriculture AI should optimize for crop-related queries, not general knowledge.
- Connectivity gaps are not disappearing: Even in developed regions (e.g., rural USA), reliable internet connectivity is sparse. Edge computing addresses this by allowing offline inference on devices or through periodic mobile processing centers (e.g., the "compute-on-wheels" tuktuk concept).
- Standardization is critical for scalability and interoperability: ITU and regional SDOs (3GPP, TSDSI, O-RAN Alliance) are actively developing over 400 AI-related standards covering MEC, RAN intelligence, tactile applications, and 6G architectures—essential for reproducible, globally deployable solutions.
- Open-source and sandbox approaches accelerate adoption: The AI for Good sandbox allows researchers and developers to test solutions with real data before standardization, leading to replicable deployments at scale.
- Split control and intent-based signaling enable interoperability: Rather than tightly coupling systems to device-specific haptic signals, converting those signals to intent-level abstractions allows different manufacturers' systems to interoperate seamlessly.
- Infrastructure-agnostic solutions work globally: The same AI agricultural advisory system deployed in Portugal with internet connectivity was successfully adapted for offline Sri Lankan villages using Raspberry Pi edge devices—demonstrating universal applicability.
- AI availability on consumer devices is accelerating: Modern smartphones already run 10B+ parameter models on-device; AI is expanding to cars, IoT devices, and smart glasses, making edge inference a commodity feature.
- Inclusive governance requires multistakeholder participation: The UN Global Dialogue on AI Governance emphasizes practical outcomes, human rights alignment, capacity building, and equal participation—not theoretical frameworks alone.
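The intent-based signaling point above can be sketched in code. This is a minimal, hypothetical illustration: the intent vocabulary, pressure thresholds, and `to_intent` mapping are assumptions for demonstration, not details from the session.

```python
from dataclasses import dataclass
from enum import Enum

class HapticIntent(Enum):
    """Vendor-neutral intent vocabulary (illustrative labels)."""
    LIGHT_TOUCH = "light_touch"
    FIRM_GRIP = "firm_grip"
    SLIP_WARNING = "slip_warning"

@dataclass
class IntentMessage:
    intent: HapticIntent
    confidence: float  # 0.0 - 1.0

def to_intent(vendor_pressure_kpa: float) -> IntentMessage:
    """Map one vendor's raw pressure reading to an intent-level message.

    Thresholds are purely illustrative; a real profile would be
    calibrated per device and standardized across manufacturers.
    """
    if vendor_pressure_kpa < 5.0:
        return IntentMessage(HapticIntent.LIGHT_TOUCH, 0.9)
    if vendor_pressure_kpa < 50.0:
        return IntentMessage(HapticIntent.FIRM_GRIP, 0.9)
    return IntentMessage(HapticIntent.SLIP_WARNING, 0.8)

# A receiving device from a different manufacturer renders the intent
# with its own actuators, never needing vendor A's raw signal format.
msg = to_intent(12.0)
```

The design point: the interoperability boundary is the `IntentMessage`, so device-specific signal formats never cross between vendors.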
Notable Quotes or Statements
"What if the last thing that humans ever invent is invention itself?" — Fred Vana (ITU), posing the philosophical question of whether AI will become humanity's final invention, framing the urgency of ensuring AI serves humanity.
"Edge is needed especially in the global south...and sometimes when we think about edge we think it's needed only in places that are not really connected, but we have built solutions for America. When you go out of the city, you realize some parts are very disconnected." — Alagan Mahaling (Root Code), contextualizing edge AI as a global necessity, not just a developing-world patch.
"You don't want to use an LLM for everything...don't want the AI to tell why two famous CEOs didn't want to hold hands. You want it to answer about plants and agriculture." — Alagan Mahaling, emphasizing task-specific model design over generalist approaches.
"Edge AI means simply using AI closer to where things happen. That means closer to people, closer to services, communities, rather than depending only on faraway systems." — H.E. Claudia López (El Salvador, UN Permanent Representative), reframing edge AI as an equity issue.
"Capacity building was absolutely crucial element...the dialogue needs to be inclusive." — Ambassador Rein Tamsar (UN Global Dialogue on AI Governance), highlighting that AI governance must include underrepresented stakeholders.
Speakers & Organizations Mentioned
Primary Speakers
- Fred Vana — Chief, Strategic Engagement, ITU (UN specialized agency for digital technologies)
- Prof. Vijay Bhaskar (referenced as "Lal Brides") — Bharti School chairman, IIT Delhi; active in AI research and 6G initiatives
- Ranjida (full name not fully transcribed) — Federated Learning researcher, IntelliomLab, IIT Delhi; PI on security and federated learning projects
- Mala — Technologist, Center of Excellence, Wired and Wireless Technologies, ART Park (AI and Robotics Technological Park), India; researcher in 6G, millimeter-wave communications
- Alagan Mahaling — Founder & CEO, Root Code; ICT Entrepreneur of the Year (2021), Young Entrepreneur of the Year (2024); Estonia e-residency envoy
- Sakshi Gupta — Global Government Affairs, Qualcomm; AI and emerging technology policy professional
- H.E. Claudia López — Permanent Representative, Republic of El Salvador to UN; Co-chair, UN Global Dialogue on AI Governance
- Ambassador Rein Tamsar — Co-chair, UN Global Dialogue on AI Governance
Organizations & Institutions
- ITU (International Telecommunication Union) — Organizer of AI for Good; 50+ UN sister agencies as partners
- IIT Delhi — Research on haptics, MEC, federated learning
- TSDSI (Telecommunications Standards Development Society, India) — SDO developing edge-centric technical reports and standards
- 3GPP — Standards for mobile networks
- O-RAN Alliance — Open RAN with RAN Intelligent Controller (RIC)
- ART Park (AI and Robotics Technological Park) — India DST innovation hub
- Root Code — AI solutions for agriculture, healthcare; operates in 27 countries, 92M+ users
- Qualcomm — Hardware manufacturer; Tech for Good program; on-device AI at scale
- UN Global Dialogue on AI Governance — Multi-stakeholder initiative on AI policy and cooperation
- Draxa Health — Qualcomm Tech for Good partner; on-device AI healthcare assistant (India)
Technical Concepts & Resources
Architectures & Frameworks
- Edge Computing / Multi-access Edge Computing (MEC) — Computing at the network edge, closer to users and data sources
- Split Control — Distributing AI processing between cloud and edge; enables haptic feedback, real-time control
- Intent-Based Signaling — Abstraction of device-specific sensor signals into intent-level messages for interoperability
- Federated Learning — Distributed training where models remain on local devices; only aggregated updates sent to central server
- AI-Native Networks — 5G/6G networks with AI embedded in RAN (Radio Access Network) layer, not as a peripheral add-on
- O-RAN (Open RAN) with RIC (RAN Intelligent Controller) — Open architecture for programmable radio access networks
- Hierarchical Architecture — Core network at top; UEs (user equipment) and MEC at edge; enables personalized model training locally
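The federated learning entry above can be illustrated with a minimal FedAvg-style sketch. This toy example is an assumption-laden simplification (a single linear model, synthetic client data, and the `local_update`/`fed_avg` helpers are all illustrative), but it shows the key property: only model weights leave the device, never raw samples.

```python
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One client's local gradient-descent steps on its private data."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)  # MSE gradient
        w -= lr * grad
    return w

def fed_avg(global_w, client_data):
    """Server aggregates client models, weighted by local dataset size."""
    updates, sizes = [], []
    for X, y in client_data:
        updates.append(local_update(global_w, X, y))  # weights only
        sizes.append(len(y))
    return np.average(updates, axis=0, weights=np.array(sizes, float))

# Three clients, each holding private (synthetic) data locally.
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
clients = []
for _ in range(3):
    X = rng.normal(size=(50, 2))
    clients.append((X, X @ true_w))

w = np.zeros(2)
for _ in range(20):       # 20 federation rounds
    w = fed_avg(w, clients)
# w approaches true_w, yet no client ever shared its raw samples
```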
Standards & Organizations
- ITU AI for Good — Three pillars: Solutions, Skills, Standards
- 400+ AI standards (published/in development) covering:
- Future networks (5G, 6G)
- AI-native networks
- MEC and quality of experience
- Tactile/haptic applications
- V2X (Vehicle-to-Everything) communications
- Digital twins
- Security in distributed AI
- IMT-2030 Framework — ITU-R framework for ubiquitous intelligence and AI in next-generation networks
- oneM2M Standards — Machine-to-machine communications
- 3GPP Standards — Mobile network specifications including AI-in-RAN
AI Models & Techniques
- Gemma — LLM referenced as testable on edge devices (Raspberry Pi)
- Quantization — Reducing model precision to shrink size and memory footprint
- Pruning — Removing non-essential weights/connections from neural networks
- Distillation — Training smaller models to mimic larger models
- Convolutional Neural Networks (CNNs) — For image-based tasks (e.g., plant disease detection)
- 10B+ Parameter On-Device Models — Modern smartphones running ~10 billion parameter models natively
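Quantization, the first optimization technique listed above, can be sketched as simple post-training symmetric int8 quantization. This is a simplified illustration under stated assumptions (one global scale, no calibration); real toolchains add per-channel scales and quantized compute kernels.

```python
import numpy as np

def quantize_int8(w):
    """Map float32 weights to int8 with a single symmetric scale."""
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights for inference."""
    return q.astype(np.float32) * scale

w = np.random.default_rng(1).normal(size=(256, 256)).astype(np.float32)
q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)

# 4x memory reduction (float32 -> int8) at a small accuracy cost:
# the worst-case rounding error per weight is half the scale step.
max_err = np.abs(w - w_hat).max()
```

Pruning and distillation follow the same spirit: trade a bounded amount of accuracy for a model that fits resource-constrained edge hardware.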
Applications & Use Cases
- XR (Extended Reality) for Medical Emergency Care — AR-guided CPR with real-time vital signs; IoT sensors integrated; public 5G
- XR Assisted Facility Tours — Multilingual immersive tours in regional languages; private 5G; inclusive experience design
- Agricultural Advisory AI — Soil nutrient sensing + image-based plant health diagnosis; offline-capable on Raspberry Pi; deployed Portugal, Sri Lanka
- Traffic Prediction (V2X) — Predicting spikes for stadium events; dynamic resource allocation; federated learning across base stations
- Vehicle-to-Everything (V2X) — Sharing road conditions, accident data; federated learning with cloud-trained global models
- Remote Patient Monitoring (Rural) — High-risk patient monitoring with edge processing; offline-capable; deployed in rural USA
- Security Incident Detection (Federated) — Distributed network detecting novel security threats; knowledge shared globally without exposing raw logs
- Haptic Feedback for Telesurgery & Remote Manipulation — Sub-10ms latency requirement; requires edge-local processing and intent abstraction
Hardware Targets
- Smartphones — 10B+ parameter models running natively
- Raspberry Pi — Low-cost edge device for agriculture and IoT
- IoT Devices — Sensors, wearables, embedded systems
- Smart Glasses / XR Headsets — AR/VR inference on-device
- Vehicles (Autonomous & Connected) — On-car AI processing
- Mobile Processing Centers ("Compute-on-wheels") — Tuktuk/mobile van with local data center for periodic village processing
Quality of Experience (QoE) Metrics
- Latency — Sub-10ms requirement for mission-critical applications (haptics, telemanipulation)
- Bandwidth — Reduced by local processing; federated learning avoids centralizing raw data
- Privacy — Preserved through on-device inference and federated learning (no data exfiltration)
- Personalization — Real-time model adaptation at edge; localized fine-tuning without cloud round-trips
- Power Efficiency — Edge inference reduces transmission energy; critical in battery-constrained devices
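A back-of-envelope calculation shows why the sub-10 ms latency budget above forces edge processing. The distances and processing time below are illustrative assumptions, not session data; fiber propagation is roughly 200 km per millisecond one way (about two-thirds the speed of light).

```python
FIBER_KM_PER_MS = 200.0  # ~200 km of fiber per millisecond, one way

def round_trip_ms(distance_km, processing_ms=2.0):
    """Propagation there and back, plus server-side processing."""
    return 2 * distance_km / FIBER_KM_PER_MS + processing_ms

cloud = round_trip_ms(2000)  # hypothetical distant regional data center
edge = round_trip_ms(20)     # hypothetical MEC node near the base station

# cloud = 22.0 ms: blows the 10 ms haptics budget on propagation alone
# edge  =  2.2 ms: leaves headroom for radio access and inference
```

Physics, not bandwidth, is the binding constraint: no protocol optimization can beat propagation delay, so mission-critical loops must close at the edge.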
Additional Context
Temporal Framing:
- AI for Good launched in 2017; evolved from hype-driven PowerPoints to substantive deployments
- 2023: Generative AI era (GPT-style models)
- 2024 (recent): AI agents, embodied AI, robotics, brain-computer interfaces, space AI computing
- UN Global Dialogue on AI Governance: First of its kind; scheduled July 2024, Geneva (back-to-back with ITU AI for Good Summit)
Policy & Governance Themes:
- Human-centered AI; protection and empowerment of people
- Closing AI capability gaps (Global South access)
- Cooperation over fragmentation across national/regional approaches
- Actionable, practical outcomes preferred over theoretical frameworks
- Inclusive participation of governments, industry, civil society, academia
