AI for Financial Inclusion: Fraud Prevention in BFSI

Executive Summary

This panel discussion examines AI's transformative role in building trust infrastructure within banking, financial services, and insurance (BFSI), moving beyond traditional rule-based fraud detection toward real-time, intelligent risk assessment at scale. Speakers from major financial institutions and investment firms argue that AI can simultaneously reduce fraud, lower customer friction, and expand financial inclusion—but only if deployed with strong governance frameworks (justifiability, contestability, traceability) and a cultural shift in how institutions and regulators evaluate algorithmic decision-making. The India-Singapore AI hub emerges as a strategic vehicle for cross-border intelligence sharing and scaling proven models across diverse digital ecosystems.

Key Takeaways

  1. AI is not a tool but infrastructure for trust: Institutions must stop treating AI as a tactical fraud-detection layer and architect it as foundational digital public infrastructure—continuously learning, adapting at scale, and supporting both domestic and cross-border fraud intelligence.

  2. Governance frameworks (JCT/PURE) are prerequisites for scale: Explainability and human accountability are non-negotiable in regulated environments. Institutions that embed justifiability, contestability, and traceability into model governance can scale faster with regulator confidence and lower reputational risk.

  3. Data readiness and cross-border sharing unlock systemic impact: AI at any individual institution is inherently limited; systemic fraud prevention requires anonymized, shared registries (mule accounts, behavioral signals) across countries and institutions. The India-Singapore partnership model offers a template for trust-building infrastructure at regional scale.

  4. Inclusion and reduced friction are achievable together, but require intentional design: Tiered onboarding, context-aware decisioning, assisted digital models, and bias auditing ensure AI reduces false positives and exclusion. Conversely, AI systems trained on biased historical data risk automating discriminatory patterns at scale.

  5. Adoption velocity must match organizational capacity: Leaders face pressure to deploy "shiny object" AI; best practice is "less is more"—fewer, deeply executed pilots that scale rather than many pilots that remain siloed. Mindset shifts and change management (the "70") determine success more than algorithms.

Key Topics Covered

  • Fraud prevention at scale: Limitations of rule-based systems; transition from post-facto detection to in-flight, real-time risk intelligence
  • Mule networks and cross-border intelligence: The challenge of detecting sophisticated fraud (e.g., money movement across multiple accounts/institutions in milliseconds)
  • Trust infrastructure as foundational public good: AI framed as infrastructure, not merely a tool; analogies to India's digital public infrastructure (UPI, Aadhaar)
  • Explainability, accountability, and governance: JCT framework (Justifiable, Contestable, Traceable); PURE framework (Purposeful, Unsurprising, Respectful, Explainable); role of human-in-the-loop systems
  • Inclusion vs. exclusion: Tiered onboarding, assisted digital models, and risk of unintended exclusion of underserved populations (MSME, first-time users)
  • From pilots to scale: Organizational mindset shifts, data readiness, technology adoption velocity vs. adoption capacity
  • Capital allocation and financial markets: Long-term investor perspective on risk mitigation, cost reduction, and deployment of capital through trusted infrastructure
  • India-Singapore partnership: Shared registries (e.g., mule account registry), anonymized data sharing, best practice exchange, and cross-border financial corridor development
  • Broader AI adoption and workforce impacts: Job security, productivity gains, upskilling, and human agency in an AI-augmented workplace
  • Biases in historical data: Risk of excluding populations through discriminatory historical patterns; importance of diversity in training data

Key Points & Insights

  1. Scale demands intelligent systems: UPI processes 20+ billion transactions monthly, with projected losses exceeding ₹1 lakh crore if unmitigated. Static, rule-based limits (transaction caps, daily thresholds) cannot keep pace with fraud velocity (milliseconds); AI enables millisecond-level adaptive response.

  2. Mule networks require cross-institutional intelligence: A single institution sees only partial money flows; AI analyzing behavioral anomalies across multiple accounts and institutions can detect network-level fraud patterns humans and rule engines cannot identify in real time.
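The network-level detection described above can be sketched in miniature: given pooled, anonymized transfer edges, a graph walk flags chains of accounts that forward funds onward within milliseconds of receipt—a pattern no single institution's partial view reveals. The account IDs, thresholds, and `find_fast_chains` helper below are illustrative, not from the talk.

```python
from collections import defaultdict

# Hypothetical transfer records: (src_account, dst_account, amount, ts_ms).
# A mule chain moves funds through several accounts within a tight window.
transfers = [
    ("A1", "A2", 9_900, 0),
    ("A2", "A3", 9_800, 120),
    ("A3", "A4", 9_700, 250),
    ("B1", "B2", 500, 0),        # ordinary one-hop payment
]

def find_fast_chains(transfers, max_gap_ms=500, min_hops=3):
    """Flag account chains where funds hop onward within max_gap_ms.

    A single institution sees only its own edges; pooled (anonymized)
    edges across institutions make the full chain visible.
    """
    outgoing = defaultdict(list)
    for src, dst, amt, ts in transfers:
        outgoing[src].append((dst, amt, ts))

    chains = []
    def walk(account, ts, path):
        extended = False
        for dst, amt, next_ts in outgoing.get(account, []):
            if 0 <= next_ts - ts <= max_gap_ms:
                walk(dst, next_ts, path + [dst])
                extended = True
        # Record only chains long enough to look like layering, not payments.
        if not extended and len(path) >= min_hops + 1:
            chains.append(path)

    for src, dst, amt, ts in transfers:
        walk(dst, ts, [src, dst])
    return chains

print(find_fast_chains(transfers))  # → [['A1', 'A2', 'A3', 'A4']]
```

In a real deployment the pooled edges would come from an anonymized cross-institutional feed rather than raw account IDs, and the traversal would run on a streaming graph engine.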

  3. Mindset shift is the bottleneck, not technology: Humans are forgiven for errors; AI systems face disproportionate scrutiny. Scaling requires accepting that AI systems, even at mere parity with human performance, create new capacity (doing in 2 minutes what took 2 hours) before generating new capabilities. Leaders must move beyond 1/0 rule-based thinking.

  4. JCT framework operationalizes accountability for regulated environments:

    • Justifiable: Institutions must explain every decision (loan denial, transaction block) to customers and regulators
    • Contestable: Affected customers have recourse to challenge decisions
    • Traceable: Auditability of model lineage, versioning, bias sources, and decision pathways
    • Human-in-the-loop remains essential (e.g., video KYC verification in India requires human review despite algorithm checks)
  5. Tiered onboarding balances friction and risk: Low-risk products enable instant, frictionless onboarding; higher-risk/higher-value products require video KYC or physical presence. Context-aware real-time decisioning (e.g., "street mode" requiring face authentication for payments outside safe locations) reduces false positives and exclusion.
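The "street mode" pattern above lends itself to a small sketch: a decision function that applies step-up authentication only where context signals warrant it, rather than blanket blocks. The thresholds, safe-location set, and `decide` function are hypothetical, not from the talk.

```python
# Hypothetical context-aware payment check, modeled on the "street mode"
# example: payments initiated outside pre-registered safe locations get
# step-up authentication instead of a blanket block. All thresholds are
# illustrative.
SAFE_LOCATIONS = {"home", "office"}

def decide(amount: int, location: str, risk_score: float) -> str:
    """Return an action for a payment given context signals."""
    if risk_score > 0.9:
        return "block"                  # clear fraud signal: stop outright
    if location not in SAFE_LOCATIONS:
        return "step_up_face_auth"      # friction only where risk warrants it
    if amount > 100_000:
        return "step_up_face_auth"      # high value, even in a safe place
    return "approve"                    # frictionless default

print(decide(500, "home", 0.1))    # → approve
print(decide(500, "street", 0.1))  # → step_up_face_auth
```

The design goal mirrors the talk's point: false positives fall because friction is targeted at risky contexts, so low-risk users in safe contexts are never excluded.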

  6. Data is foundational but fragmented: AI systems cannot scale without data readiness—anonymization, standardization, single source of truth. Cross-border data sharing (with anonymization) to build shared registries (e.g., mule account registries) is critical to preventing fraud at network level.
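One plausible mechanic for such an anonymized shared registry, sketched below, is keyed hashing: each institution contributes HMAC pseudonyms of flagged account IDs, so peers can match suspects without ever exchanging raw identifiers. The shared key and account IDs are invented for illustration; a production design would need key management, rotation, and privacy review.

```python
import hashlib
import hmac

# Hypothetical sketch of an anonymized shared registry: institutions
# contribute keyed hashes of flagged account IDs, never raw identifiers,
# so peers can check suspects without exchanging customer PII.
SHARED_KEY = b"corridor-demo-key"  # assumed pre-agreed between participants

def pseudonym(account_id: str) -> str:
    """Deterministic keyed hash: same ID always maps to the same token."""
    return hmac.new(SHARED_KEY, account_id.encode(), hashlib.sha256).hexdigest()

registry: set[str] = set()

def report_mule(account_id: str) -> None:
    """Called by the institution that flagged the account."""
    registry.add(pseudonym(account_id))

def is_flagged(account_id: str) -> bool:
    """Called by any peer institution before onboarding or paying out."""
    return pseudonym(account_id) in registry

report_mule("IN-ACC-001")
print(is_flagged("IN-ACC-001"))  # → True
print(is_flagged("SG-ACC-777"))  # → False
```

This only demonstrates the matching mechanic; the talk's gaps section notes that stronger privacy-preserving techniques (federated learning, differential privacy) were not specified.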

  7. Trusted infrastructure creates a flywheel: Trusted data → trusted models → trusted capital allocation → trusted inclusion. Capital allocators deploy more capital (equity + credit) when risk is reduced through better data governance and fraud prevention, lowering cost of capital and enabling broader financial inclusion.

  8. Convergence happens in infrastructure, not applications: While fintech apps are visible, the strategic impact of AI for inclusion lies in embedding trusted financial infrastructure in supply chains and production networks (agriculture, renewables, labor markets), enabling financing to become a byproduct of economic growth rather than a friction point.

  9. Regulatory frameworks are enablers: Singapore's FEAT framework and India's RBI concept paper align with operational governance best practices (JCT, PURE). Regulators facilitating data-sharing between institutions—not just policing—is critical for systemic fraud prevention.

  10. The "10-20-70" rule: Technology (10) and algorithms (20) are necessary but insufficient without addressing organizational culture, change management, and people adoption (70). Inclusion/exclusion outcomes hinge on whether humans are genuinely empowered (increased agency) or displaced (job loss, unequal access).


Notable Quotes or Statements

"AI must be reframed not as a tool but as an infrastructure."
— Opening speaker (Suresh, inferred from context)

"Speed and reactiveness" and "network intelligence" are the two key things. Rule-based mechanisms work at a transactional level only; they're not good enough at this velocity.
— Suresh (discussing fraud prevention at UPI scale)

"Three key stakeholders—customer, auditor, regulator—each with different needs. JCT (Justifiable, Contestable, Traceable) framework addresses all three."
— Manish (11:81 / bank-within-bank operator)

"AI has a huge future, but we need guardrails. We want to be an AI-enabled bank with a heart—ensuring empathy doesn't disappear as AI becomes pervasive."
— Speaker from DBS (Singapore-based institution, inferred)

"When financing is embedded in production networks, inclusion is a byproduct of economic growth."
— SJ / TESCO representative (capital allocation perspective)

"Pilots are designed to be successful; scale-up is less a tech problem and more a mindset shift. We need to move from AI as rule-based system to AI that learns daily from data and patterns."
— Suresh (on the psychology of AI adoption)

"Less is more. Do three projects well versus five projects poorly. Scale depth before breadth."
— Moderator / BCG speaker (on best practices for pilot-to-scale)

"Technology (10), algorithms (20)—but people and organizational change (70) determine success. We talk far less about the 70 than the 10 and 20."
— Moderator / BCG speaker (closing framework)

"What worries me is dilution of human accountability. Machines will make many decisions, but human accountability must stay in the loop."
— SJ / TESCO (on risks)

"Two-thirds of our employees actively use AI in daily work. What I'm excited about: moving up the value chain as humanity. What worries me: we've still only touched the tip of the iceberg—and is our education system ready?"
— DBS speaker (on workforce impact)


Speakers & Organizations Mentioned

  • Suresh: Appears to lead fraud/risk strategy; discussed UPI scale, rule-based limitations, network intelligence. Likely from NPCI or an RBI-adjacent organization.
  • Manish: Head of 11:81 (a "bank within a bank" acquiring engine in India); emphasized explainability, the JCT framework, onboarding, video KYC.
  • DBS (Singapore): Large APAC banking institution; speaker emphasized an AI-enabled bank with heart, employee AI adoption (two-thirds of workforce), governance.
  • SJ: Capital allocator, TESCO (or "long-term capital" vehicle in SE Asia); focused on capital allocation, the audit layer, supply chain financing, cross-border corridors.
  • Moderator (assumed BCG): Strategy consultant; introduced the 10-20-70 framework, wrapped up the discussion with an emphasis on organizational change.
  • NPCI: National Payments Corporation of India; referenced for UPI governance and fraud control efforts.
  • RBI: Reserve Bank of India; referenced for its AI governance concept paper.
  • TESCO: Major capital deployer in SE Asia (inferred from "one of the largest deployers of capital").
  • Singapore Financial Regulatory Authority (MAS, inferred): FEAT framework referenced; Singapore's regulatory posture on AI.
  • CapGemini: Mentioned in Q&A by an audience member (Muskan, financial analyst).
  • UK fintech (unnamed): "Street mode" example cited by Manish (context-aware authentication, safe places, face-authentication lag).
  • India Post Payments Bank: Mentioned by Suresh as an example of assisted digital literacy challenges and successes.
  • Kenya Mobile Money Ecosystem: Referenced as a precedent for numerate-but-non-literate populations; a contrast to India's literacy-heavy approach.

Technical Concepts & Resources

AI Governance & Accountability Frameworks

  • JCT Framework (Justifiable, Contestable, Traceable)

    • Justifiable: Ability to explain every decision
    • Contestable: Customer recourse to challenge decisions
    • Traceable: Model lineage, versioning, bias auditing, decision provenance
  • PURE Framework (Purposeful, Unsurprising, Respectful, Explainable)

    • Purposeful: AI deployment aligned with clear business/regulatory purpose
    • Unsurprising: Transparent to customers about where AI is in use
    • Respectful: Mitigation of bias; avoidance of discriminatory historical patterns
    • Explainable: Model interpretability and decision reasoning
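As a rough illustration of how JCT could be operationalized in code, the record below attaches justification, recourse, and lineage metadata to every automated decision, so each of the three stakeholders (customer, auditor, regulator) has what they need. The `DecisionRecord` class and its field names are assumptions for illustration, not a schema from the talk.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical audit record sketching the JCT framework in practice:
# every automated decision carries the data needed to justify it to a
# customer, contest it through a recourse channel, and trace it back to
# an exact model version and input snapshot.
@dataclass
class DecisionRecord:
    decision_id: str
    outcome: str           # e.g. "loan_denied", "txn_blocked"
    reasons: list          # Justifiable: human-readable decision factors
    appeal_channel: str    # Contestable: where the customer can challenge it
    model_version: str     # Traceable: model lineage and versioning
    input_snapshot: dict   # Traceable: features exactly as scored
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

rec = DecisionRecord(
    decision_id="D-001",
    outcome="txn_blocked",
    reasons=["velocity anomaly", "new beneficiary"],
    appeal_channel="customer-recourse-portal",
    model_version="fraud-model-2.3.1",
    input_snapshot={"amount": 95_000, "hour_of_day": 3},
)
print(rec.outcome, rec.model_version)  # → txn_blocked fraud-model-2.3.1
```

Persisting such records per decision is what makes post-hoc audit, bias review, and customer appeals tractable at scale.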

Regulatory References

  • FEAT Framework (Singapore MAS's Fairness, Ethics, Accountability, and Transparency principles for AI in financial services)
  • RBI Concept Paper on AI (India's AI governance for banking/insurance)
  • Video KYC Mandate (India, launched May 2020; requires human review despite algorithmic checks)

Data & Intelligence Infrastructure

  • Mule Account Registry: Cross-institutional, anonymized registry of fraudulent account networks
  • Programmable Registries: Large-scale data ecosystems (agriculture, education, individual identity)
  • Data Anonymization: Behavioral patterns, signals shared across institutions without customer PII
  • Model Drift, Bias Auditing, Lineage Tracking: Operational governance practices

UPI & Digital Payment Scale

  • UPI Monthly Transaction Volume: 20+ billion transactions/month (as of talk date)
  • Projected Fraud Losses: Exceeding ₹1 lakh crore (₹100,000 crore) if unmitigated
  • Transaction Velocity: Fraud happens in milliseconds; post-facto rule engines inadequate
  • Context-Aware Decisioning: Real-time risk assessment with transaction, behavioral, network, and geolocation signals

Operational Concepts

  • Tiered Onboarding: Low-friction (instant), medium-friction (video KYC), high-friction (physical presence) based on risk profile
  • "Street Mode" (fintech product example): Designates safe locations (home, office); requires face authentication + time lag for transactions outside safe zones
  • Human-in-the-Loop (HITL): Algorithmic decisions flagged for human review (e.g., loan approvals, high-risk transactions)
  • Assisted Digital Model: Human support for non-digitally-savvy users (e.g., postal workers in India Post Payments Bank)
  • 10-20-70 Framework: 10% technology, 20% algorithms, 70% organizational change/people adoption

AI Model Concepts (Implied)

  • Behavioral Anomaly Detection: Identifying deviations from typical user patterns across single and multiple accounts
  • Network Intelligence: Graph-based fraud detection across accounts and institutions
  • Real-Time Risk Scoring: Sub-second decision-making vs. batch processing
  • Model Governance: Versioning, retraining frequency, bias testing, production monitoring
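Behavioral anomaly detection of the kind listed above can be sketched with per-account running statistics (Welford's online algorithm) and a z-score per transaction; real systems fuse many more signals (device, network, geolocation) in sub-second scoring. The `AccountProfile` class and sample amounts below are illustrative only.

```python
import math

# Minimal sketch of per-account behavioral anomaly scoring: maintain
# running mean/variance with Welford's algorithm (constant memory per
# account) and score each new amount by its deviation from that
# account's own history. Thresholds and data are invented.
class AccountProfile:
    def __init__(self):
        self.n = 0
        self.mean = 0.0
        self.m2 = 0.0  # sum of squared deviations from the running mean

    def score_and_update(self, amount: float) -> float:
        """Return a z-score for `amount`, then fold it into the profile."""
        std = math.sqrt(self.m2 / self.n) if self.n > 1 else 0.0
        z = abs(amount - self.mean) / std if std > 0 else 0.0
        # Welford update: numerically stable, O(1) per transaction.
        self.n += 1
        delta = amount - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (amount - self.mean)
        return z

p = AccountProfile()
for amt in [100, 110, 95, 105, 98]:   # typical spend for this account
    p.score_and_update(amt)
print(p.score_and_update(5_000) > 10)  # large deviation → anomalous
```

A production scorer would combine such per-account signals with the network-level (graph) view, since mule accounts often look individually unremarkable.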

Gaps & Limitations in Discussion

  1. Specific AI techniques not detailed: No discussion of neural networks, gradient boosting, graph algorithms, or other specific ML methods
  2. Privacy-preserving techniques underspecified: Federated learning, differential privacy, and synthetic data generation mentioned only indirectly (via "anonymization")
  3. Capital allocation metrics vague: Returns, cost of capital, and deployment volume mentioned but not quantified
  4. Cross-border corridor timeline unclear: India-Singapore hub framed as strategic vision but no concrete launch date or phased roadmap provided
  5. Regulatory harmonization challenges not explored: How FEAT and RBI frameworks will reconcile on data-sharing, model standards, and cross-border liability
  6. Workforce impact underexplored: Job displacement vs. upskilling trade-offs mentioned but not rigorously analyzed
  7. Bias auditing practices not detailed: How "respectful" and "non-discriminatory" AI is actually operationalized in production

Relevance & Significance

This talk is highly relevant to:

  • Policy makers & regulators designing AI governance frameworks for fintech and financial inclusion
  • Financial institution leaders building fraud prevention, KYC, and credit decisioning systems
  • Capital allocators evaluating fintech and digital infrastructure investments in India and SE Asia
  • Technologists & data scientists implementing explainable AI in regulated environments
  • Nonprofits & development organizations focused on financial inclusion for underserved populations

The talk articulates a maturing narrative in AI for financial inclusion: moving beyond "AI solves fraud" hype toward acknowledging the organizational, regulatory, and ethical challenges required to scale trustworthy systems. The India-Singapore partnership framing positions the discussion as a template for other cross-border, digital-native AI infrastructure initiatives (e.g., ASEAN fintech corridors, African payment rails).