Securing the Future: AI Security in the Age of Autonomous Agents
Executive Summary
This panel discussion brought together representatives from financial regulation (Reserve Bank of India), international policy bodies (UNESCO), and government technology agencies (Australian government) to address the intersection of AI innovation and trustworthy governance. The conversation emphasized that effective AI governance requires moving from abstract principles to practical implementation tools, with a focus on human oversight, transparency, and inclusive design rather than rushing toward regulation.
Key Takeaways
- Trustworthy AI is a System Property: It emerges from transparency, accountability, bias mitigation, post-deployment monitoring, and human oversight working together, not from any single technical feature.
- Principles-to-Practice Translation is Urgent: The AI community has largely agreed on ethical principles (UNESCO, Australian guidance, RBI framework). The implementation gap is the critical challenge; frameworks like UNESCO's RAM and ethical impact assessment tools are essential infrastructure.
- Financial Inclusion is an AI Opportunity and Risk: AI can extend banking services to 500+ million underserved people via alternative credit models, but only with robust safeguards against agent-driven risks, data bias (especially against women), and hidden automated decision-making.
- Human Oversight Remains Non-Negotiable: Even with autonomous agents, humans must remain in command, understanding what decisions are being made on their behalf and retaining veto authority.
- Implementation Over Proclamation: The real work begins now. Voluntary frameworks must transition to adoption, safety institutes must embed monitoring capabilities, and regulators must develop practical enforcement mechanisms rather than resting on written guidelines.
Key Topics Covered
- Autonomous AI Agents: Legal and regulatory frameworks for agents operating in financial systems
- Trustworthy AI & Trust: Defining trust as transparency, explainability, bias mitigation, and post-deployment accountability
- UNESCO's Governance Framework: The UNESCO Recommendation on AI Ethics (adopted by 193 countries) and implementation tools
- AI Safety & Regulation: Voluntary vs. mandatory governance approaches; the Australian AI Safety Institute
- AI Washing & Consumer Protection: Disclosure requirements and verification of actual AI deployment
- Inclusive AI Design: Language accessibility, representation in training data, and avoiding bias
- Post-Deployment Oversight: The challenge of monitoring and correcting AI systems after they enter "the wild"
- International Coordination: Balancing national contexts with global interoperability of AI frameworks
- Human-in-the-Loop: The necessity of human oversight and decision-making authority even with autonomous systems
- Financial Sector Applications: AI's role in expanding financial inclusion while maintaining system integrity
Key Points & Insights
- Agents Are Not Sentient Decision-Makers: AI agents operating in regulated sectors like banking should always have an identifiable human responsible for their actions. As one panelist stated, regulatory bodies will assess the creditworthiness of the human entity, not the agent itself; agents remain tools, not independent actors.
- Trust Requires Multiple Components: A single feature (e.g., explainability) is insufficient. Trustworthy AI requires transparency, explainability, bias detection, accuracy quantification, post-deployment monitoring, and accountability mechanisms working together.
- Hallucination & Inaccuracy Are Core Risks: An AI system can be "trustworthy" while only 90% accurate if it clearly discloses this margin of error. The danger lies in systems that hallucinate or provide false information without signaling uncertainty; this breaks trust at a fundamental level.
- Disclosure & AI Washing: Many organizations claim to use "AI" without genuine implementation. Regulatory solutions should mandate clear disclosure of which AI systems are deployed and for what purposes, and should require auditable model repositories, avoiding regulation-by-buzzword.
- Inclusive Design Improves AI Performance: Including diverse stakeholders (doctors, women, underrepresented populations) in AI design doesn't only benefit those groups; it produces better algorithms, reduces bias, and expands market reach. Exclusion creates data gaps that degrade system performance.
- Post-Deployment Monitoring Is Often Neglected: AI systems are carefully designed but frequently lack post-sale accountability. Unlike traditional industries (e.g., automotive), many AI deployments have no systematic follow-up monitoring or correction mechanisms after going "live."
- Regulation Should Define Boundaries, Not Prescribe Innovation: Rather than dictating what AI must do, frameworks should establish what AI must not do, protecting human rights, dignity, and fundamental freedoms while leaving creativity to innovators.
- Language Barriers Limit Agent Adoption: Speech-to-text systems fail for non-English speakers and non-standard accents. As agents scale globally, linguistic inclusion is not just ethical; it is a technical requirement for market viability.
- Multilateral Coordination Without Fragmentation: Countries need frameworks that reflect local contexts while maintaining interoperability. Divergent approaches increase compliance costs and undermine innovation; harmonized principles with flexible implementation are preferred.
- Using AI to Supervise AI: Regulatory bodies and institutions can deploy AI systems to monitor other AI deployments, setting checkpoints and parameters. The human remains the supervisor of supervisory AI; human oversight cannot be automated away.
Notable Quotes or Statements
- On trust and accuracy (Ankur Singh, RBI): "An inaccurate AI can be trustworthy because you know that 10% of the time it'll fail. You're aware about that. But if an AI starts claiming that I am 100% accurate AI and is lying to you...that is where the trust goes."
- On inclusive design (Maria Garcia, UNESCO): "Inclusiveness does not only benefit the included...it enables better AI to be developed. The algorithm performs better. The algorithm can also seize a greater market share and overperform others that are inferior."
- On post-deployment gaps (Maria Garcia): "If you think about a car and you think that you won't have any post-sales service you wouldn't buy that car right...[but] in majority of AI systems they are created with all care...by the time they go deployed in the wild there is very often lack of post-deployment follow-up."
- On regulation vs. innovation (Maria Garcia): "We don't have to decide what do we want the technology to do...We nevertheless need to agree on what we surely don't want the technology to do because that's about ensuring our rights."
- On the human-in-the-loop principle (Ankur Singh): "Human has to come into the picture somewhere...that human in command is going to be key when it comes to agent AI."
- On the awareness shift (Maria Garcia): "If we were to have this conversation three years ago, the room will be half empty. It wasn't perceived to be the kind of challenge and opportunity that we are aware it is today."
- On dual-use of AI for supervision (Ankur Singh): "Using AI itself to supervise AI isn't it? I mean that has to be the way going ahead...if people have their own agents we should have ours."
Speakers & Organizations Mentioned
| Speaker | Organization/Role |
|---|---|
| Syad Ahmed | Global Head, Responsible AI Office, Infosys (Moderator) |
| Ankur Singh | Fintech Department, Reserve Bank of India |
| Maria Garcia Cucherini | Social and Human Sciences Sector, UNESCO |
| Caitlyn (Full name incomplete in transcript) | Australian Government (Technology and Science Engagement) |
Supporting Organizations/Initiatives Referenced:
- UNESCO (Recommendation on AI Ethics, 193 countries; RAM—Readiness Assessment Methodology; Observatory for AI Ethics and Governance)
- Reserve Bank of India (FREE-AI Framework)
- Australian Government (National AI Plan; AI Safety Institute; Voluntary AI Safety Standards, 2019; Six Essential Practices Guidance, October 2023)
- G7 (Public Sector AI Toolkit collaboration with UNESCO)
- G20 South Africa Presidency 2025 (AI Inequality Toolkit)
- Infosys (Responsible AI Toolkit)
Technical Concepts & Resources
Tools & Methodologies
- UNESCO Readiness Assessment Methodology (RAM): Evaluates a country's infrastructure, human capital, institutional settings, and legislation for AI governance across 11 policy areas (education, privacy, data, culture, environment, gender, labor, health, justice, etc.)
- Ethical Impact Assessment: Tool ensuring AI systems comply with human rights, dignity, and fundamental freedoms regardless of development stage
- Global AI Ethics and Governance Observatory: Knowledge-sharing platform documenting implementations across countries
- AI Without Borders (Expert Initiative): Multidisciplinary expert network (economists, IT specialists, lawyers, regulators) deployed to countries needing policy support
Frameworks & Governance Models
- UNESCO Recommendation on AI Ethics (2021): Adopted by 193 countries; defines AI via key components (not rigid definitions) with 11 principle-based policy areas and enforcement mechanisms (redress, human oversight)
- RBI's FREE-AI Framework (Framework for Responsible and Ethical Enablement of AI): Principles-based approach emphasizing trust, disclosure, model auditing, and human oversight; sector-specific for financial services
- Australian National AI Plan (2023): Three pillars: innovation, keeping Australians safe, international engagement; includes voluntary adoption guidance and Safety Institute
- Seoul Declaration: International agreement on AI safety; Australia is a signatory
Key Concepts
- AI Washing: False or exaggerated claims of AI use; combated via mandatory disclosure in reports and auditable model repositories
- Hallucination: AI generating false or nonsensical outputs while appearing confident; identified as core trust breach
- Thin File / No File: Individuals with limited credit history; AI with alternate data can extend financial inclusion
- Agent-Human Accountability Loop: Ensuring agents operate only within human-defined parameters; humans remain responsible for agent actions
- Regulatory Interlink: Coordination of regulatory frameworks across jurisdictions to reduce compliance fragmentation
- Post-Deployment Monitoring Gap: Systemic failure to monitor AI systems after launch; UNESCO calls for "post-sale service" equivalent
Policy Areas (UNESCO Framework)
Education, Privacy, Data, Culture, Environment, Gender, Labor, Health, Justice, and others requiring AI-specific governance.
Context & Relevance
This panel was part of what appears to be an international AI summit (venue context suggests India, with Australian, Italian/UNESCO, and Indian representation). The discussion reflects a post-hype maturation phase in AI governance—moving from "what principles should we adopt?" (2019–2021) to "how do we implement them?" (2023–2025). Key tensions remain unresolved: balancing innovation with safety, national contexts with global standards, and voluntary compliance with regulatory enforcement.
