
The AI Regulatory Landscape: Making Sense of Safety & Compliance


Executive Summary

This AI summit session addressed the critical challenge of developing AI safety and governance frameworks that work across jurisdictions, particularly between India and the European Union. Speakers emphasized that trustworthy AI requires balancing innovation with rigorous safety standards, and that effective governance demands collaboration across multiple disciplines, countries, and stakeholder groups rather than a one-size-fits-all regulatory approach.

Key Takeaways

  1. Safe AI is not compliance theater — It requires continuous, multi-dimensional validation across risk, data quality, transparency, oversight, accuracy, cybersecurity, and market monitoring. Single-point compliance (passing EU standards or DPDP audit) does not guarantee safe deployment in other contexts.

  2. The liability gap is the business reality today — Indian organizations deploying foreign AI models face full DPDP liability for black-box systems they cannot audit. Policy and contracts must address this practical nightmare scenario, not assume it away.

  3. Regulation must balance four competing demands — Policymakers must simultaneously foster innovation, protect citizens, keep pace with technology, and ensure practical implementation. There is no perfect recipe; the answer is "defense in depth" (layered, multi-stakeholder approaches).

  4. Data and tools are the missing links — Effective AI safety policy requires investment in three things policymakers lack: accessible data on currently underreported incidents, AI safety detection tools, and forensic access mechanisms. Without these, laws remain speculative.

  5. India's lightweight approach may offer a model — Rather than prescriptive rules, India's focus on voluntary governance, sector-specific regulation, and innovation sandboxes may better serve diverse populations and avoid stifling early-stage AI development while still protecting citizens — though this requires robust post-market monitoring and enforcement.

Summit Talk Summary


Key Topics Covered

  • EU AI Act vs. Indian regulatory landscape — comparing the strict compliance requirements of Europe with India's lighter-touch innovation-focused approach
  • Safe AI parameters and risk classification — defining what constitutes "safe AI" through technical, ethical, and compliance frameworks
  • Cross-border compliance challenges — navigating regulatory differences when AI systems operate across multiple jurisdictions
  • Healthcare AI governance — specific requirements for medical device regulations and trustworthiness validation in healthcare contexts
  • Data privacy and GDPR/DPDP Act alignment — reconciling EU GDPR with India's Digital Personal Data Protection (DPDP) Act
  • AI safety tools and detection mechanisms — the gap between rapidly advancing AI capabilities and available detection/safety infrastructure
  • Liability and accountability frameworks — determining who is responsible when AI systems fail or cause harm
  • International cooperation initiatives — multilateral partnerships (INPACE, trade agreements, startup accelerator programs)
  • Policy-making challenges — balancing innovation, citizen protection, technological pace, and practical implementation
  • Systemic risks vs. malfunction vs. misuse — categorizing different types of AI failures and harms

Key Points & Insights

  1. Regulatory divergence creates compliance burden: India lacks a dedicated AI act and instead relies on sectoral regulations (IT Act 2000, Section 66D, DPDP Act), while the EU AI Act is one of the world's strictest frameworks. Companies operating across both regions face contradictory or non-overlapping requirements, making "cross-compliance" essential but complex.

  2. Safe AI requires multi-parameter validation: Safe AI isn't binary. It requires continuous assessment across multiple dimensions — risk classification, data quality/bias detection, technical documentation, transparency to users, human oversight, accuracy/robustness, cybersecurity, and post-market monitoring — not just compliance checkbox completion.

  3. The "black box liability gap": When Indian organizations deploy EU-compliant AI models, they inherit full liability under the DPDP Act for any failures, yet they cannot technically audit the underlying black-box model. This creates an unresolvable accountability problem in the current framework.

  4. Information asymmetry between models and users is fundamental: Users cannot realistically assess or audit AI systems they deploy or use. Policy must account for this structural imbalance rather than assuming user agency or "transparency" alone solves safety.

  5. Underreporting and data gaps prevent effective policy-making: Policymakers lack reliable data on deepfakes, non-consensual imagery, children affected by AI, and systemic harms. Without this evidence, laws remain theoretical rather than evidence-based and enforceable.

  6. Detection tools lag behind AI capability advances: While AI models advance rapidly (improving persuasiveness, capability, scope), investment in AI safety tools, detection mechanisms, and forensic capabilities significantly lags behind, creating a growing safety gap.

  7. Persuasiveness scales with model capability: More powerful models (higher compute, training, parameters) don't just solve problems better — they become more persuasive to users, reducing user skepticism. This means regulation cannot rely on user behavior change alone.

  8. Healthcare AI faces compounded regulatory burden: Healthcare AI systems must satisfy both medical device regulations (MDR) and AI Act compliance, requiring four-layer validation frameworks: pre-market due diligence, process workflow tracking, temporal validation, and human-centered validation.

  9. India's policy approach differs fundamentally from EU/US: India is pursuing innovation-first governance with light-touch voluntary guidelines (seven sutras) and sandbox frameworks rather than prescriptive rules, reflecting different constitutional values and the need to serve 1.4 billion citizens with diverse languages and access levels.

  10. Multi-stakeholder international collaboration is necessary: No single country or regulatory approach is sufficient. INPACE (involving 30+ countries), trade agreements, startup accelerator programs, and transdisciplinary work (engineering + science + arts + humanities + psychology) are all needed to create frameworks acceptable across borders.


Notable Quotes or Statements

  • "A blessing in disguise" — (Lalit Chawla) — Describing India's lack of a dedicated AI act as an opportunity to focus on innovation while learning from EU's stricter approach.

  • "We are failing because technologies sometimes are in front of us and we cannot fail" — (Vid Duchal) — On the paradox that advancing technologies still require human oversight and responsibility.

  • "The finish line is unclear, but there is a race" — (Arti Sadhanandanda) — On the global AI development competition and the challenge of governing something whose destination is undefined.

  • "Policy makers will step up on how governance and safety need to work" — (Arti Sadhanandanda) — Affirming that regulatory frameworks will evolve alongside AI, not precede it.

  • "Welfare and happiness happen when we trust the system" — (Dr. S.D. Sudson, CEDAK) — Core principle underlying all governance discussions.

  • "AI is a public good now" — (Arti Sadhanandanda) — Reframing AI as infrastructure, not just a competitive technology.

  • "Natural intelligence must be in action to make artificial intelligence a reality that is legally and morally acceptable" — (Dr. S.D. Sudson) — On the irreducibility of human judgment in AI governance.


Speakers & Organizations Mentioned

| Speaker | Role / Organization | Key Focus |
| --- | --- | --- |
| Lalit Chawla | Compliance/Policy speaker | Cross-compliance (India-EU), safe AI parameters, risk classification |
| Dr. Vid Duchal | Legal background; CLARION project | Healthcare AI, medical device regulation (MDR), EU-India cooperation, Czech Republic startup accelerator |
| Dr. S.D. Sudson | Executive Director, CEDAK Bangalore | National AI governance frameworks, multi-stakeholder coordination, standards bodies (ISO, IEC, ITA) |
| Arti Sadhanandanda | Head of Taiwan desk, national law firm (ACB Partners) | AI safety, policy challenges, DPDP Act, liability frameworks, India's policy strategy |
| Romesh | Session moderator | (Moderating panelists) |
| Vit / Vid | Czech Republic representative | Zelen Impact Accelerator, India-EU trade agreement implementation |

Key Organizations:

  • CLARION — EU-funded center of excellence (€43M+), Czech Republic & EU
  • CEDAK Bangalore — National supercomputing/AI research body
  • INPACE — 30+ country consortium (5 Asian + 27 EU countries) for digital governance
  • Zelen Impact Accelerator — Czech Republic–India partnership for startup scaling
  • EU Medical Device Regulation (MDR) — Governance body for healthcare AI
  • RBI (Reserve Bank of India) — Banking regulator with recent AI compliance guidelines
  • International standards bodies — ISO, IEC, ITA cited for ongoing coordination

Technical Concepts & Resources

AI Safety & Compliance Parameters

  • Risk classification — High-risk, limited-risk, minimal-risk AI systems (EU AI Act framework)
  • Data authentication & bias detection — Continuous validation of datasets to ensure they are not poisoned, skewed, or unrepresentative
  • Technical documentation — System design specifications, testing methodologies, and risk management frameworks
  • Explainability/Interpretability — Ability to prove to regulators and users how and why an AI system makes decisions
  • Human-centered validation — Incorporation of human oversight, accountability, and decision-making at critical points
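The parameters above can be sketched as a per-dimension report rather than a single pass/fail flag, which is the point of "safe AI isn't binary." The following is a minimal, hypothetical Python illustration; the names (`SafetyAssessment`, `failing_dimensions`) are illustrative and not part of any real compliance toolkit mentioned in the session.

```python
from dataclasses import dataclass, field

# Illustrative dimensions, following the EU AI Act parameters listed above.
DIMENSIONS = [
    "risk_classification",
    "data_quality_bias",
    "technical_documentation",
    "transparency",
    "human_oversight",
    "accuracy_robustness",
    "cybersecurity",
    "post_market_monitoring",
]

@dataclass
class SafetyAssessment:
    # Maps each dimension to its latest validation result.
    results: dict = field(default_factory=dict)

    def record(self, dimension: str, passed: bool) -> None:
        if dimension not in DIMENSIONS:
            raise ValueError(f"unknown dimension: {dimension}")
        self.results[dimension] = passed

    def failing_dimensions(self) -> list:
        # Report every gap, including dimensions never assessed at all:
        # an unassessed dimension is treated as failing, not as passing.
        return [d for d in DIMENSIONS if not self.results.get(d, False)]

a = SafetyAssessment()
a.record("risk_classification", True)
a.record("cybersecurity", False)
print(a.failing_dimensions())  # every dimension except risk_classification
```

Treating unassessed dimensions as failures mirrors the continuous-validation argument: a system that passed one audit but was never checked for, say, post-market monitoring still shows a gap.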

Healthcare-Specific Frameworks

  • Medical Device Regulation (MDR) — EU framework requiring approval and post-market surveillance
  • Four-layer validation framework:
    1. Pre-market due diligence & risk assessment
    2. Workflow process tracking
    3. Temporal validation (across time periods)
    4. Human-centered validation
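The four layers above are sequential, so they can be sketched as a short pipeline in which each layer must pass before the next runs. This is a hypothetical illustration under assumed inputs; the per-layer check functions are stand-ins, not the actual MDR or AI Act criteria.

```python
# Stand-in checks for the four validation layers listed above.
def pre_market_due_diligence(system):  return system.get("risk_assessed", False)
def workflow_process_tracking(system): return system.get("workflow_tracked", False)
def temporal_validation(system):       return system.get("stable_over_time", False)
def human_centered_validation(system): return system.get("human_oversight", False)

LAYERS = [
    pre_market_due_diligence,
    workflow_process_tracking,
    temporal_validation,
    human_centered_validation,
]

def validate(system: dict) -> tuple:
    """Return (approved, name_of_first_failing_layer_or_None)."""
    for layer in LAYERS:
        if not layer(system):
            return False, layer.__name__
    return True, None

candidate = {"risk_assessed": True, "workflow_tracked": True,
             "stable_over_time": False, "human_oversight": True}
print(validate(candidate))  # (False, 'temporal_validation')
```

Reporting the first failing layer by name reflects the compounded-burden point: a healthcare AI system can clear pre-market review and still be rejected later, e.g. when temporal validation shows drift across time periods.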

Policy & Governance Concepts

  • Digital humanism principle — Ethical framework (signed 5 years ago in Poisdorf, Austria; acknowledged by EU Commission and UN)
  • Defense in depth strategy — Layered, multi-dimensional approach combining technological, organizational, and societal measures
  • Evidence dilemma — Gap between rapid AI advancement and available data/evidence for regulation
  • Systemic risk vs. malfunction vs. misuse — Three categories of AI failure requiring different governance approaches

Data Protection & Privacy

  • EU GDPR (General Data Protection Regulation) — Strict personal data protection framework
  • India's DPDP Act (Digital Personal Data Protection Act) — Lighter-touch privacy framework with data fiduciary liability
  • Liability mismatch — DPDP places full liability on data fiduciaries even when they deploy black-box third-party AI systems

AI Models & Tools Mentioned

  • ChatGPT, Gemini — General-purpose LLMs (noted for 70M+ Indian users)
  • Grok, Claude — Proprietary LLMs with increasing persuasiveness/capability across versions
  • Lumo (Proton) — Privacy-focused chatbot (Swiss-based example)
  • General Purpose AI (GPAI) — Broader category of models vs. task-specific systems

Emerging Frameworks & Initiatives

  • EU AI Act — Tiered risk-based regulation (in effect; strictest globally)
  • Seven Sutras (India) — Voluntary governance guidelines for AI development
  • India Stack model — Foundational infrastructure approach (payment system analogy for AI)
  • Sandbox frameworks — Regulatory safe zones for testing without full compliance burden
  • Horizon Call — Upcoming EU research/funding initiative mentioned
  • Indo-EU Trade Agreement — Signed 3 weeks before summit; implementation underway

Specific AI Safety Challenges Identified

  • Hallucination — AI generating false or misleading information
  • Data poisoning — Deliberate corruption of training datasets
  • Deepfakes & non-consensual imagery — Misuse of generative AI for harm
  • Bias in healthcare data — Race/ethnicity-dependent datasets producing biased medical recommendations
  • Prompt injection & user manipulation — Users unknowingly influenced by AI persuasiveness
  • Multi-agent liability — Challenges in assigning responsibility in distributed AI architectures
  • Liability cascades — Black-box model providers claiming EU compliance; deploying organizations claiming DPDP compliance; no clear fault assignment

Operational/Monitoring Concepts

  • Post-market monitoring — Continuous tracking of real-world AI system performance, user feedback, and correction procedures
  • Continuous risk management — Ongoing assessment and mitigation of risks throughout deployment lifecycle
  • Forensic access — Ability for regulators/auditors to examine and debug AI system decisions (currently unavailable for many proprietary models)

Summary Table: Regulatory Approaches by Region

| Aspect | EU | India | US | China | BRICS/Others |
| --- | --- | --- | --- | --- | --- |
| Approach | Prescriptive, citizen-rights-focused | Light-touch, innovation-first, sector-specific | Light-touch, market-driven | Unclear (no public framework) | Developing independent charters |
| Key Act/Framework | EU AI Act (strictest globally) | DPDP Act, IT Act 2000, sectoral rules | Sectoral regulation, no AI-specific act | (Not disclosed) | Regional/national approaches |
| Compliance Burden | Very high | Moderate (evolving) | Low | Unknown | Moderate-to-high |
| Innovation Priority | Balanced with safety | Higher priority | Highest | Unknown | Balanced |
| Healthcare AI | Requires MDR + AI Act compliance | Requires DPDP compliance; RBI guidelines for banking | Sectoral (FDA for devices) | Unknown | Emerging |
| Cross-border Challenge | Strict enforcement of EU standards globally | Liability on deployers, not model providers | Limited enforcement outside US | Unknown | Coordination challenges |

Actionable Recommendations (Implicit in Discussion)

  1. For AI companies operating cross-border: Design compliance as intersection (EU + India + target jurisdiction requirements), not just EU compliance + optional DPDP.

  2. For policymakers: Invest in three missing links — underreported incident data collection systems, AI safety detection tools R&D, and forensic access standards/contracts.

  3. For Indian regulators: Develop explicit liability allocation contracts/standards for black-box model deployment (who audits? who's liable for failures?).

  4. For boards & data fiduciaries: Cannot assume third-party EU compliance absolves DPDP liability; must audit, contract, and monitor AI systems deployed in-house.

  5. For researchers & startups: Leverage India's sandbox approach and Czech Republic's Zelen Impact Accelerator for EU scaling; early compliance with both EU and India standards will reduce refactoring costs.

  6. For the international community: Continue INPACE and multilateral dialogue to develop converging (not uniform) AI safety standards, particularly on liability, data, and transparency.
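Recommendation 1 can be made concrete with set operations: the obligations a cross-border deployer must satisfy are the union of every target jurisdiction's requirements, while the requirements shared by all markets (cheapest to design for first) are the intersection. The obligation labels below are illustrative shorthand, not citations of actual legal provisions.

```python
# Hypothetical obligation sets for two target jurisdictions.
OBLIGATIONS = {
    "eu_ai_act": {"risk_classification", "technical_documentation",
                  "human_oversight", "post_market_monitoring"},
    "india_dpdp": {"data_fiduciary_liability", "consent_management",
                   "post_market_monitoring"},
}

def combined_obligations(jurisdictions):
    """Everything a deployer must satisfy across all target markets."""
    combined = set()
    for j in jurisdictions:
        combined |= OBLIGATIONS[j]
    return combined

def shared_obligations(jurisdictions):
    """Requirements common to every target market."""
    sets = [OBLIGATIONS[j] for j in jurisdictions]
    return set.intersection(*sets)

targets = ["eu_ai_act", "india_dpdp"]
print(sorted(combined_obligations(targets)))
print(shared_obligations(targets))  # {'post_market_monitoring'}
```

The sketch makes the panel's warning visible: EU compliance alone covers only one subset of the combined set, leaving DPDP-specific obligations such as data fiduciary liability entirely unaddressed.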


End of Summary