Secure Finance: Risk-Based AI Policy for the Banking Sector

Executive Summary

India's financial system stands at a critical juncture where AI integration must be governed through embedded, design-based controls rather than post-deployment compliance overlays. The keynote and panel discussions emphasize that effective AI governance in finance requires a balanced approach combining transparency, accountability, compartmentalization, and adaptive oversight—avoiding both the pitfalls of overregulation that stifles innovation and underregulation that accumulates systemic risk. India's unique position as a global digital infrastructure leader creates an opportunity to establish a distinctive governance model that prioritizes inclusion, sovereign resilience, and responsible scaling.

Key Takeaways

  1. Governance Must Be Embedded by Design, Not Applied After
    The single most important insight is that AI governance cannot be a compliance layer added post-deployment. Controls, audit mechanisms, explainability requirements, and accountability structures must be integrated into system architecture from inception—treating AI as systemically relevant financial infrastructure equivalent to payment systems or credit platforms.

  2. Bounded Problems, Not Everything-Connected Systems
    AI excels at solving well-defined, compartmentalized problems (fraud detection, credit scoring within specific segments) but fails or hallucinates on unbounded challenges (career planning, systemic optimization). India should deliberately architect regulatory sandboxes and deployment frameworks around bounded use cases to minimize emergent system failures.

  3. Skin in the Game Prevents Regulatory Capture
    Explicitly assign liability ex-ante (not ex-post). Whoever deploys an algorithm is responsible; they cannot blame data quality or downstream actors. This creates incentives for rigorous testing, transparency, and accountability—mirroring how financial directors are held responsible for audited accounts.

  4. India's Distinctive Path: Data Ownership + Domestic Infrastructure + Adaptive Supervision
    Rather than copying EU compliance-heaviness, China's state control, or US tort-based responses, India can leverage its digital infrastructure advantage (UPI, identity, payment scale) to build sovereign AI capability. This requires: (a) asserting ownership rights over population-scale data assets, (b) investing in domestic data centers and ML platform providers, and (c) using interoperable regulatory sandboxes (IFSC, RBI, SEBI, IRDAI) to test innovations before scaled deployment.

  5. Trust is Built Through Transparency and Predictable Accountability, Not Elimination of Risk
    Perfect AI-based systems are impossible. Instead, build trust by making AI systems explainable ("glass boxes" not "black boxes"), establishing clear incident reporting, enabling human override, and rewarding organizations that proactively maintain controls. This shifts from "zero risk" to "managed risk with visible governance."
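The bounded-problem principle in takeaways 2 and 5 can be made concrete with a minimal sketch: a fraud check that operates only on a fixed, auditable feature set and escalates to a human rather than deciding opaquely. All feature names, thresholds, and weights below are hypothetical illustrations, not rules from the sessions.

```python
# Minimal sketch of a bounded, "glass box" fraud check: fixed inputs,
# small auditable rule set, explicit escalation to a human reviewer.
# All thresholds and weights are hypothetical.
from dataclasses import dataclass

@dataclass
class Txn:
    amount: float          # transaction value in INR
    hour: int              # hour of day, 0-23
    new_beneficiary: bool  # first payment to this payee?

def fraud_score(t: Txn) -> float:
    """Score in [0, 1] from a small, fully inspectable rule set."""
    score = 0.0
    if t.amount > 100_000:   # large-value transfer
        score += 0.4
    if t.hour < 5:           # unusual hour
        score += 0.3
    if t.new_beneficiary:    # unknown payee
        score += 0.3
    return score

def decide(t: Txn) -> str:
    """Bounded decision space: approve, or escalate for human override."""
    return "review" if fraud_score(t) >= 0.7 else "approve"
```

Because every feature and threshold is visible, the same structure supports the audit trails and human-override requirements discussed above; an emergent, everything-connected model would not.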

Key Topics Covered

  • Embedded Governance Framework: Integration of governance into every stage of AI lifecycle (design, data acquisition, deployment, monitoring) rather than applying compliance retrospectively
  • Risk-Based vs. Adaptive Supervision: Debate over whether traditional risk classification is feasible for emergent AI systems versus dynamic, compartmentalized oversight approaches
  • Financial System Vulnerabilities: Model integrity, operational concentration risk, data governance, and adversarial cyber threats amplified by generative AI
  • Inclusion & Equity: Using AI to expand formal financial access while preventing algorithmic bias and perpetuation of structural inequalities
  • Sovereign AI Infrastructure: Diversifying supply chains and reducing dependency on concentrated chip manufacturing, cloud capacity, and foundational model providers
  • Institutional Capacity Building: AI literacy requirements at board and senior management levels; need for multidisciplinary governance structures
  • Safe Harbor & Regulatory Incentives: Rewarding proactive compliance, robust controls, and first-time incident resolution; distinguishing between negligence and probabilistic system failures
  • India's Governance Model: Positioning India between the EU (compliance-led), China (state-controlled), and US (tort-based, post-hoc) approaches
  • Judicial & Legal Frameworks: Addressing emerging questions of liability, copyright, and algorithmic accountability through strengthened judicial systems
  • Sandbox & Experimentation: Gift City IFSCA and interoperable regulatory sandboxes as controlled environments for AI innovation testing
  • Data Sovereignty & Processing: National ownership rights over data assets, data center infrastructure, and refineries for processing ("the new oil")

Key Points & Insights

  1. Emergence Cannot Be Fully Predicted: AI systems exhibit emergent, evolving behaviors fundamentally different from traditional financial infrastructure like UPI. Determining all risks ex-ante is impossible; governance must therefore prioritize dynamic adaptability, audit trails, and shutdown mechanisms rather than fixed risk categories.

  2. Compartmentalization Over Interconnection: The "Internet of Everything" and "AI of Everything" approaches pose systemic dangers (evidenced by the 2024 CrowdStrike update outage that disabled Windows systems worldwide). Bounded, compartmentalized AI applications are safer, more energy-efficient, and better at solving specific problems.

  3. Embedded Governance Requires Accountability Clarity Ex-Ante: Rather than determining blame after failure, institutions must pre-assign responsibility along the algorithm-to-deployment chain. Whoever deploys the algorithm to end users bears primary accountability—data quality issues cannot be used to deflect liability.

  4. Safe Harbor Incentivizes Transparency: RBI's framework proposes regulatory leniency for entities with robust controls, bias testing, continuous monitoring, and transparent disclosure. This rewards proactive governance over punitive post-incident responses, encouraging candid incident reporting.

  5. Sovereign Data Assets Are Strategic Infrastructure: India's population-scale data creates competitive advantage for model training and AI application. However, ownership of processing infrastructure ("oil rigs") and refineries (ML platforms) is equally critical; tax holidays for data centers and domestic LLM development are foundational investments.

  6. Inclusion Cannot Be Assumed—It Must Be Designed: AI can reduce reliance on collateral-heavy credit models and static credit histories by analyzing transaction data, cash flows, and behavior signals, expanding MSME access. However, models trained on historically skewed datasets risk perpetuating bias; impact audits and community feedback mechanisms are essential.

  7. Institutional Capacity Building Is Non-Negotiable: Board-level AI literacy, multidisciplinary governance committees, and third-party audits (analogous to chartered accountant reviews) are critical. Firms cannot outsource responsibility for outcomes to technology providers.

  8. Operational Concentration Risk Is an Emerging Systemic Threat: With over 90% of advanced chip production, significant cloud capacity, and a handful of foundation models concentrated among a few providers, single points of failure emerge. Diversification through domestic innovation and international collaboration is strategically vital.

  9. Cyber Security Fundamentals Persist: While generative AI provides both defensive acceleration and new attack vectors, foundational principles—multi-factor authentication, strong passwords, regular updates, and verification of controls via ISO/NIST standards—remain essential. AI amplifies existing threats but doesn't fundamentally change the attack surface.

  10. Judicial System Capacity Is a Critical Bottleneck: New questions (copyright ownership in AI-generated work, liability allocation, algorithmic fairness disputes) will emerge faster than regulations can address them. India's clogged judicial system requires reform to handle these novel, "philosophical problems" emerging from AI use cases.


Notable Quotes or Statements

"Mano"—Humility—captures the essence of responsible AI governance better than the phrase "responsible AI" itself, because it encompasses moral, ethical, and accountable governance while respecting sovereign national interests, accessibility, inclusivity, and legitimacy.
—AK Churri, Non-Executive Chairman, NPCI

"Management is doing things right. Leadership is doing right things. In the context of AI in finance, governance is not merely about tech correctness. It is about doing the right things at the right time in ways that preserve trust, resilience, and inclusion."
—AK Churri

"In the future, we should use AI, but we certainly should not trust it. Its governance should be based on healthy skepticism about its capabilities."
—Sanjiv Sanyal, Economic Adviser to the Prime Minister's Office

"You cannot put AI into any real risk bucket because this is an emergent, evolving thing. Even if something is innocuous, it might blow up the whole system because these things are emerging, evolving, and interconnecting."
—Sanjiv Sanyal

"The Europeans will either strangle the system by being too stringent or open things up for progress but ultimately won't control it. The Chinese system loses control (as seen with COVID). The American system relies on post-hoc tort law. None are perfect—we must design something different."
—Sanjiv Sanyal

"Over-regulation repels innovation. Under-regulation repels serious long-term capital. The question is where to draw the equilibrium line."
—Mr. Kamat, IFSCA

"If a company has embedded robust controls, model inventories, bias testing, and continuous monitoring, regulators should take a lenient supervisory approach rather than treat incidents as systemic risk."
—RBI Panelist (Mura)

"The fundamental nature is: there is no zero-risk. The question is how do you equip yourself to handle risks and remain nimble enough to adapt as technology adapts?"
—Vikram, Cloud Service Provider

"Data is the new oil. We must ensure India owns the rights to its data and has the oil rigs (data centers) and refineries (ML platforms) to process it—not just the raw material."
—Sanjiv Sanyal


Speakers & Organizations Mentioned

  • AK Churri: Non-Executive Chairman, NPCI (National Payments Corporation of India)
  • Sanjiv Sanyal: Economic Adviser to the Prime Minister's Office; macroeconomist, historian of structural cycles
  • Mr. Kamat: Regulator, IFSCA (International Financial Services Centers Authority), Gift City
  • Mura (RBI Panelist): RBI official; discussed the RBI Framework for Ethical AI in Finance
  • Vikram: Cloud service provider representative (implied AWS or similar); expertise in cyber security and generative AI
  • Priyanka: Panel moderator
  • Adita: Founder, First Hive (customer data platform company); raised questions on sovereign data assets
  • NPCI: National Payments Corporation of India (manages UPI and payment systems)
  • RBI: Reserve Bank of India (central bank; released the "Framework for Enablement of Ethical AI in Finance")
  • IFSCA: International Financial Services Centers Authority (regulates Gift City)
  • SEBI: Securities and Exchange Board of India (capital markets regulator)
  • IRDAI: Insurance Regulatory and Development Authority of India
  • Gift City: Global International Financial Services Centre, Ahmedabad (founded 2015; 11 years old at the time of the talk)

Technical Concepts & Resources

  • Embedded Governance: Integration of compliance, risk management, transparency, and accountability into every stage of the AI lifecycle (design → deployment → monitoring) rather than applied post-hoc.
  • Risk-Based Supervision: Regulatory framework allocating oversight intensity proportional to systemic impact. Critiqued in the talk as impossible to execute ex-ante for emergent AI systems.
  • Bounded vs. Unbounded Problems: AI excels at bounded problems (fraud detection with defined features) but fails on unbounded ones (career planning). Recommendation to design systems around bounded scopes.
  • Operational Concentration Risk: Systemic vulnerability from dependency on limited suppliers (90%+ of advanced chips, few cloud providers, a handful of foundation models).
  • Model Drift: Unintended bias and feedback loops emerging as data patterns evolve and models retrain over economic cycles. Continuous oversight required.
  • Adversarial AI: Use of AI by malicious actors (phishing, credential attacks, malicious code generation via generative AI). Organizations must anticipate and strengthen detection.
  • Safe Harbor: Regulatory leniency for first-time incidents when entities demonstrate robust controls, bias testing, continuous monitoring, and proactive incident reporting.
  • Interoperable Sandbox: Mechanism allowing solutions to be tested across multiple regulators (RBI, SEBI, IFSCA, IRDAI) simultaneously. Currently on-demand (any type of product) rather than theme-based.
  • Glass Box vs. Black Box: Explainable AI systems vs. opaque ones. The RBI framework advocates "glass box" systems with transparent disclosure and understandable outputs.
  • Algorithmic Trading: Capital markets precedent: deployed unregulated from 2004–2010; reached critical mass before SEBI issued safeguarding guidelines in 2010, and continued exponential growth post-regulation. Cited as a model for measured AI governance.
  • Chinese Walls / Compartmentalization: Regulatory concept (from finance) preventing conflicts of interest. Proposed for AI: separating algorithmic domains to limit cascade failures.
  • Foundation Models (LLMs): Large language models identified as only one slice of AI application; many more bounded-problem uses exist beyond LLMs.
  • AI Stack (5 Layers): (1) semiconductor chips (concentrated); (2) cloud/data infrastructure (concentrated); (3) data sets (public/proprietary); (4) foundation models (handful of providers); (5) applications (finance, daily life). Concentration at the base layers creates systemic vulnerability.
  • Bias Testing & Impact Audits: Periodic evaluation of AI systems for gender-based, demographic, or historical-skew bias; particularly critical for inclusion-focused use cases (MSME credit, informal sector).
  • Data Governance Pillars: Data integrity, consent management, purpose limitation, and minimization principles. Financial data reflects livelihood and behavioral choices, not just transactions.
  • Cyber Security Standards: ISO and NIST standards for independent third-party validation of organizational controls. Fundamental principles (MFA, strong passwords, regular updates) persist even as generative AI evolves the threat landscape.
  • Consent-Based Data Sharing: Proposed mechanism for regulatory definition and monitoring; referenced in the context of data processors' participation in governance frameworks.
  • Synthetic Data: RBI AI course initiative beginning with synthetic datasets; these may later be combined with regulated-entity data, with proper consent, for financial sector applications.
  • Chartered AI Audit: Proposed institutional mechanism: analogous to chartered accountant reviews, entities deploying AI at scale would undergo formal audit of algorithm explainability, bias, and governance.
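The bias-testing and impact-audit concept above can be grounded in a simple, widely used fairness metric: demographic parity difference, the gap in approval rates between groups. The function names and the 0.1 disparity threshold below are illustrative assumptions, not part of the RBI framework.

```python
# Sketch of a periodic bias audit: compare loan-approval rates across
# demographic groups. The 0.1 threshold is a hypothetical policy choice.
from collections import defaultdict

def approval_rates(decisions: list[tuple[str, bool]]) -> dict[str, float]:
    """decisions: (group_label, approved) pairs -> per-group approval rate."""
    totals: dict[str, list[int]] = defaultdict(lambda: [0, 0])  # [approved, seen]
    for group, approved in decisions:
        totals[group][0] += int(approved)
        totals[group][1] += 1
    return {g: a / n for g, (a, n) in totals.items()}

def parity_gap(decisions: list[tuple[str, bool]]) -> float:
    """Demographic parity difference: max minus min group approval rate."""
    rates = approval_rates(decisions)
    return max(rates.values()) - min(rates.values())

def audit(decisions: list[tuple[str, bool]], threshold: float = 0.1) -> bool:
    """True if the model passes the (hypothetical) disparity threshold."""
    return parity_gap(decisions) <= threshold
```

Run periodically on live decisions, such a check gives a board-level governance committee a single auditable number, the kind of evidence the safe-harbor and chartered-audit proposals above would ask regulated entities to produce.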

Summary of India's Strategic Position

India's governance approach should:

  1. Assert data sovereignty while building domestic infrastructure (data centers, LLM providers)
  2. Prioritize bounded AI applications over interconnected systems to minimize cascade risk
  3. Use regulatory sandboxes (Gift City, interoperable IFSC-RBI-SEBI-IRDAI) to enable experimentation without scaled systemic exposure
  4. Embed accountability ex-ante (clear liability assignment) rather than determining blame ex-post
  5. Invest in judicial capacity to handle novel copyright, liability, and fairness disputes emerging from AI
  6. Reward proactive compliance over punitive approaches, building institutional AI literacy and multidisciplinary governance capability
  7. Leverage population-scale inclusion advantage to demonstrate AI can expand financial access while preventing bias

This approach positions India as a distinctive middle path between EU compliance-heaviness, Chinese state control, and US tort-based leniency.