Artificial intelligence is moving from passive analysis to autonomous agents. Agentic AI describes systems capable of making decisions, initiating actions, and pursuing goals with minimal human input. While this offers huge efficiency gains, it also introduces complex cyber fraud and security risks.

What is Agentic AI?

Traditional AI models classify data or generate responses when prompted. Agentic AI goes further: it plans, adapts and interacts with systems and people to complete tasks. Examples include:

  • Automated trading agents
  • Customer service bots with payment access
  • Shopping agents that find the best deals, complete purchases, or book holidays on a user's behalf

Potential benefits

  • More efficient trading across multiple platforms.
  • Reduction in customer service response times and wait times.
  • Faster bookings at lower prices.

Fraud and Security Risks

  1. Autonomy Without Oversight: Agents may act beyond intended parameters, causing financial detriment.
  2. Adversarial Manipulation: Fraudsters could feed false data to misdirect autonomous agents.
  3. Phishing Websites, Data Breaches and Fraud: Agents could encounter fake websites set up by cybercriminals and, while attempting to buy non-existent products, hand over the user's sensitive payment and personal data to fraudsters, leading to fraud losses and data breaches.

Our Fraud Risk Perspective on Agentic AI

We believe autonomous Agentic AI functions should ideally operate alongside Human-in-the-Loop (HITL) systems or a parallel agentic anti-fraud failsafe system – whether from the model provider or from the enterprise. Each agent should have built-in verification, guardrails and audit logging, with particular attention to the risks of fraud and phishing.
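To make the idea concrete, the built-in verification, guardrails and audit logging described above could look something like the following minimal sketch. All names, thresholds and the payment function are hypothetical illustrations, not a reference implementation:

```python
import logging
from dataclasses import dataclass

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("agent.audit")  # every decision leaves a trail

@dataclass
class Guardrails:
    max_payment: float          # hard spending cap: the agent may never exceed this
    require_human_above: float  # Human-in-the-Loop (HITL) escalation threshold

def execute_payment(amount: float, payee: str, guardrails: Guardrails) -> str:
    """Run a requested payment through guardrails before the agent may act."""
    audit_log.info("payment requested: %.2f to %s", amount, payee)
    if amount > guardrails.max_payment:
        audit_log.warning("blocked: amount exceeds hard cap")
        return "blocked"
    if amount > guardrails.require_human_above:
        audit_log.info("escalated to human reviewer")
        return "pending_human_review"   # HITL failsafe takes over
    audit_log.info("auto-approved within guardrails")
    return "approved"
```

The design point is that the cap and the escalation threshold sit outside the agent's own reasoning, so a manipulated or malfunctioning agent still cannot spend past them unreviewed.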

Just as humans learn to recognise deception, AI agents should be capable of detecting and avoiding fraudulent websites, or at minimum performing a risk assessment before interacting with a site or completing a purchase. Fraud self-reporting capabilities would also be valuable, enabling agents to proactively report near misses and situations where a potential fraud risk was encountered but averted.
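A pre-purchase risk assessment with self-reporting might be sketched as below. The signal lists, domains and thresholds are invented for illustration; a real agent would draw on live threat-intelligence feeds and far richer signals:

```python
from urllib.parse import urlparse

# Hypothetical signals an agent might consult before transacting with a site.
KNOWN_BAD_DOMAINS = {"examp1e-deals.shop"}   # e.g. sourced from a threat-intel feed
SUSPICIOUS_TLDS = {".zip", ".top"}           # TLDs warranting a second look

incident_reports: list[dict] = []            # self-reported near misses

def assess_site_risk(url: str) -> str:
    """Return 'block', 'review', or 'proceed' for a candidate shopping site."""
    parsed = urlparse(url)
    host = parsed.hostname or ""
    if parsed.scheme != "https" or host in KNOWN_BAD_DOMAINS:
        # Averted fraud attempt: log it so humans and other agents can learn
        incident_reports.append({"url": url, "outcome": "averted"})
        return "block"
    if any(host.endswith(tld) for tld in SUSPICIOUS_TLDS):
        return "review"   # defer to a human or a secondary anti-fraud check
    return "proceed"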

Key Takeaway

Agentic AI holds great promise for increasing efficiency and delivering value – but autonomy without governance or fraud guardrails creates the risk of financial fraud losses. The future lies in semi-supervised intelligence, where transparency, accountability, agentic anti-fraud controls and ethics are embedded by design, and agents and humans collaborate in a continuous, symbiotic loop to strengthen digital trust and deliver real value safely.