Responsible AI in India: Leadership, Ethics & Global Impact
Executive Summary
This AI summit panel discussion examines the transition from responsible AI principles to concrete practice across Indian enterprises, with leaders from aviation, payments, manufacturing, and creative technology industries sharing their implementation strategies. The overarching argument is that responsible AI—rooted in transparency, accountability, and fairness—is no longer optional but foundational to competitive enterprise strategy, compliance, and societal trust, especially as global regulations (EU AI Act, India's Digital Personal Data Protection Act) take effect in 2026.
Key Takeaways
- 2026 is the inflection point: Responsible AI governance transitions from voluntary best practice to regulatory mandate. Enterprises must move from principles on websites to provable, auditable practices embedded in systems.
- Transparency requires open standards, not proprietary solutions: C2PA, UPI frameworks, and other cross-industry initiatives demonstrate that responsible AI infrastructure must be standardized, interoperable, and accessible to all enterprises—not locked behind proprietary tools.
- In safety-critical and scale-critical systems, humans must retain control and visibility: Whether in aviation, payments, or creative tools, responsible AI architecture requires human override capabilities, explainability, and continuous monitoring—not full autonomy.
- Responsible AI is a competitive advantage, not just a cost: Enterprises that implement responsible AI early (Air India, NPCI, Adobe) report higher customer trust, lower operational risk, and regulatory readiness. It accelerates, rather than constrains, innovation.
- Small enterprises cannot achieve responsible AI alone: Industry bodies, larger technology companies, and service providers must actively disseminate frameworks, methodologies, and tools to prevent a two-tiered system where only Fortune 500 companies have access to responsible AI governance.
Key Topics Covered
- Content authenticity and provenance: C2PA standard, content credentials, and transparent tracking of AI-generated media
- Responsible AI implementation in large enterprises: Governance frameworks, orchestration across business units, and balancing innovation with accountability
- AI in safety-critical systems: Aviation case study (generative AI virtual assistants, 13.5M+ customer queries handled)
- AI in digital payments infrastructure: Fraud detection, anomaly detection, and balancing security with customer experience at massive scale (UPI)
- Transparency and explainability: Why and how systems decline transactions, content provenance labels ("nutrition labels" for digital content)
- Regulatory landscape: EU AI Act, California legislation, India's Digital Personal Data Protection Act, and industry governance
- Skills, culture, and organizational change: Retraining legal and compliance teams, building awareness across value chains, cascading frameworks to MSMEs
- Global standards and interoperability: Open standards-based approaches, cross-industry coalitions, and avoiding proprietary lock-in
- Inclusivity and fairness: Ensuring responsible AI frameworks are accessible to smaller enterprises, not just Fortune 500 companies
Key Points & Insights
- Regulatory enforcement is imminent (2026): Responsible AI governance is shifting from a voluntary corporate practice to a compliance mandate, with EU AI Act enforcement in August 2026 and California's first US state AI law taking effect alongside new IT rules in India.
- Content transparency requires standardization, not theory: Adobe's C2PA (Coalition for Content Provenance and Authenticity) initiative demonstrates that responsible AI principles must be embedded in open standards and working products, not just corporate mission statements. Content credentials should function as "nutrition labels" for digital media.
- Orchestration across five AI layers is essential: Responsible AI governance cannot be a single compliance checkbox. It must span data sourcing, model training, product design, deployment, and monitoring—and differ by industry (aviation ≠ payments ≠ manufacturing).
- Human-in-the-loop design is critical in safety systems: Air India's generative AI virtual assistant (handling 40,000 queries/day with 97% autonomous resolution) remains under constant monitoring and allows customers to flag inappropriate responses. Aviation's "red button override" principle—humans can always take control—is a model for responsible autonomy.
- Fraud detection in payments requires balancing security with usability: NPCI's approach prioritizes a low false-positive rate (genuine transactions wrongly declined) over perfect fraud detection. They've achieved this by starting with conservative models, gathering domain knowledge, and collaborating across the ecosystem to improve accuracy while maintaining customer trust.
- Transparency and explainability build trust: NPCI's new small language model lets customers understand why transactions were declined. This shifts AI from a "black box" to an explainable system, reducing customer friction and building confidence in the payments infrastructure.
- Large enterprises must share frameworks to prevent a "responsible AI divide": Prativa (Adobe) and Amal (RPG) emphasize that creators of AI technology and large enterprises cannot reserve responsible AI practices for themselves. Frameworks, templates, and methodologies must be disseminated through industry bodies (e.g., NASSCOM's Vicki initiative) to help MSMEs and smaller organizations.
- People, process, and technology must evolve together: Legal, compliance, and governance teams must be expanded and retrained. One-size-fits-all solutions fail; instead, "bring your own AI" frameworks with guardrails allow flexibility at scale while maintaining accountability.
- Regulatory intervention is inevitable and beneficial: Rather than framing regulation as a constraint, panelists view it as a "catalyst for good practices" and a necessary safeguard to prevent system-wide failures and build ecosystem-wide trust.
- Choice and shared human values are central: As Sarika Gulani (NASSCOM/Vicki) concludes, the decisions made now about responsible AI define future technological capability. Responsible AI is not a compliance checkbox but a commitment to develop technology aligned with shared human values.
Notable Quotes or Statements
"It stops being a slide in a deck and it will now be sort of a piece of our compliance strategy but also as I said an important opportunity."
— Andy Parsons, Adobe, on the shift from voluntary to mandatory responsible AI practice
"Can your systems actually prove that you have been responsible with AI and how do you go about doing that?"
— Andy Parsons, framing the central practical challenge
"It's more of a bring your own AI kind of a scenario in every function; you cannot provide one solution; one size doesn't fit all."
— Amal Desh Pande, RPG Group, on enterprise-scale AI governance
"If you dial the safety knob too much, it is an inconvenience to the customer. We practically cannot answer any question."
— Dr. Satya Ramaswamy, Air India, on balancing safety and usability in chatbots
"Responsible AI is not anymore a compliance check; it's a commitment of the technology that we should develop it with shared human values."
— Sarika Gulani, NASSCOM/Vicki
"The business case for Provenance has been challenging. Doing something that helps preserve democracy and democratic discourse is maybe not a good way to make money, but it is critically important."
— Andy Parsons, on the difficulty of monetizing trust and transparency
Speakers & Organizations Mentioned
| Role/Title | Name | Organization |
|---|---|---|
| Global Head, Content Authenticity | Andy Parsons | Adobe |
| Vice President & Managing Director | Prativa Mohapatra | Adobe India |
| Chief Digital & Technology Officer | Dr. Satya Ramaswamy | Air India Limited |
| Chief Technology Officer | Vishal Anand Kanvaty | National Payments Corporation of India (NPCI) |
| Group Chief Digital Officer & Head of Innovation | Amal Desh Pande | RPG Group |
| Editor | Shanti Mallaya | The Economic Times (moderator) |
| Senior Director, AI Technology & Industry 4.0 | Sarika Gulani | NASSCOM/Vicki |
| — | — | Adobe, FIG (organizers) |
Institutions & Bodies Referenced:
- C2PA (Coalition for Content Provenance and Authenticity)
- EU (European Union)
- DGCA (Directorate General of Civil Aviation, India)
- UNESCO
- OECD
- NASSCOM Vicki (industry body)
- RBI (Reserve Bank of India, implied)
Technical Concepts & Resources
Standards & Frameworks
- C2PA (Coalition for Content Provenance and Authenticity): Open standard for embedding content credentials (provenance metadata) in digital media; includes participating entities like Microsoft, BBC, OpenAI, Sony, Meta, Qualcomm, Nikon
- Content Credentials: Machine-readable metadata attached to images, video, and audio indicating origin, creation tools, and modification history
- Responsible AI Framework: Principles-based approach covering accountability, responsibility, transparency (ART), fairness, privacy, and inclusivity
- Prompt Firewalls: Centralized control mechanisms for managing and blocking malicious prompts in generative AI systems
- RBI Responsible AI Framework: Comprehensive responsible AI governance framework from the Reserve Bank of India, referenced by NPCI
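The content-credential idea above can be illustrated with a toy provenance check. This is a deliberately simplified sketch: real C2PA manifests are CBOR-encoded, cryptographically signed with X.509 certificates, and embedded in the asset file itself, whereas here a plain Python dict and a SHA-256 "hard binding" stand in for the full standard (all names such as `make_manifest` are illustrative, not part of any C2PA SDK).

```python
import hashlib

def make_manifest(asset_bytes: bytes, tool: str, actions: list) -> dict:
    """Build a simplified provenance manifest for an asset.

    Illustrative stand-in for a C2PA manifest: records which tool made
    the asset, what actions were taken, and a hash binding the record
    to these exact bytes.
    """
    return {
        "claim_generator": tool,
        "assertions": [{"label": "c2pa.actions", "actions": actions}],
        # Hard binding: the hash ties the manifest to the asset bytes.
        "asset_hash": hashlib.sha256(asset_bytes).hexdigest(),
    }

def verify_manifest(asset_bytes: bytes, manifest: dict) -> bool:
    """Check that the asset bytes still match the recorded hash."""
    return hashlib.sha256(asset_bytes).hexdigest() == manifest["asset_hash"]

image = b"\x89PNG...stand-in image bytes"
manifest = make_manifest(image, tool="ExampleGenAI/1.0", actions=["c2pa.created"])
assert verify_manifest(image, manifest)             # untouched asset verifies
assert not verify_manifest(image + b"!", manifest)  # any edit breaks the binding
```

The point of the binding is the "nutrition label" behaviour described in the panel: the label travels with the asset, and any undisclosed modification is detectable because the hash no longer matches.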
AI Models & Tools
- Firefly: Adobe's generative AI tool for image and content creation; embeds content credentials and uses only licensed training data
- Acrobat Assistant: Agentic AI assistant in Adobe Acrobat that maintains provenance and prevents hallucinations (fictional legal cases) by processing user-supplied documents
- Air India Virtual Assistant (AI.g): Generative AI chatbot handling 40,000 customer queries/day with a 97% autonomous resolution rate; launched May 2023
- Small Language Models (SLM): Deployed by NPCI to provide explainable responses to customers regarding declined transactions
- Anomaly Detection Models: Used in UPI fraud detection; trained iteratively with low false-positive thresholds
Governance & Compliance Concepts
- Digital Personal Data Protection (DPDP) Act: India's regulatory framework
- EU AI Act Enforcement: August 2026 compliance deadline
- California AI Legislation: First U.S. state-level AI regulation taking effect 2026
- India's SGI Rules: Proposed amendments to India's IT Rules covering synthetically generated information (SGI) and responsible AI deployment
- Human-in-the-Loop (HITL): Design principle ensuring humans retain override and decision-making capability
- Indemnity: Full liability coverage provided by vendors (e.g., Adobe to Air India) for AI-related failures
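The human-in-the-loop principle listed above can be sketched as a small gating pattern. This is a hypothetical illustration of the panel's "red button override" idea, not any production system; `confidence_floor` and all class and method names are assumptions for the sketch.

```python
from dataclasses import dataclass, field

@dataclass
class HITLGate:
    """Route low-confidence AI decisions to a human reviewer.

    Sketch of the 'red button' principle: the model acts autonomously
    only above a confidence floor, everything else queues for a person,
    and a human decision always wins.
    """
    confidence_floor: float = 0.9
    review_queue: list = field(default_factory=list)

    def decide(self, item: str, ai_decision: str, confidence: float) -> str:
        if confidence < self.confidence_floor:
            # Not confident enough to act autonomously: escalate.
            self.review_queue.append((item, ai_decision))
            return "pending_human_review"
        return ai_decision

    def human_override(self, decision: str) -> str:
        # The human decision is final, regardless of model confidence.
        return decision

gate = HITLGate()
assert gate.decide("txn-1", "approve", 0.97) == "approve"
assert gate.decide("txn-2", "decline", 0.55) == "pending_human_review"
assert gate.human_override("approve") == "approve"
```

The key design choice is that autonomy is bounded, not absent: the 97%-autonomous assistant described earlier still leaves the remaining cases, and any flagged response, to people.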
Concepts Referenced
- Jailbreaking: Attempting to bypass safety guardrails in AI systems
- Prompt Injection: Malicious input designed to manipulate AI behavior
- False Positive Rate: The share of genuine transactions incorrectly flagged as fraudulent; minimizing this rate is a priority in NPCI's fraud detection
- Explainability/Interpretability: Making AI decision-making transparent and understandable to users and stakeholders
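The false-positive-rate definition above is worth making concrete, since it is the metric the payments discussion keeps returning to. A minimal worked example with made-up numbers:

```python
def false_positive_rate(flags, is_fraud):
    """FPR = genuine transactions flagged as fraud / all genuine transactions."""
    genuine_flags = [f for f, fraud in zip(flags, is_fraud) if not fraud]
    return sum(genuine_flags) / len(genuine_flags)

# 10 transactions: 8 genuine (2 of them wrongly flagged), 2 fraudulent.
flags    = [True, True, False, False, False, False, False, False, True, True]
is_fraud = [False, False, False, False, False, False, False, False, True, True]
assert false_positive_rate(flags, is_fraud) == 0.25  # 2 of 8 genuine flagged
```

At UPI scale even a small FPR means millions of genuine payments declined, which is why the panel frames driving it down as a trust problem, not just an accuracy problem.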
Policy & Regulatory Context
- Timeline: 2026 is identified as the inflection point when responsible AI governance becomes legally mandated across major markets (EU, US, India)
- Multi-jurisdictional Compliance: Enterprises like Air India must comply with regulations in all operating jurisdictions (U.S. FAA, European EASA, Indian DGCA, etc.)
- Self-Regulation vs. Regulatory Intervention: Panel consensus is that regulation is inevitable and necessary to prevent systemic failures; industry-led governance alone is insufficient at scale
- Inclusive Regulatory Design: India is positioned as charting a unique path for responsible AI policy that balances innovation with inclusivity (avoiding a divide where only large enterprises can comply)
Document Context: AI Impact Summit, India (2024) | Adobe & NASSCOM FIG partnership
