
What is Agentic AI and What Are the Fraud Risks?
Artificial intelligence is moving from passive analysis to autonomous agents. Agentic AI describes systems capable of making decisions, initiating actions, and pursuing goals with minimal human input. While this offers huge efficiency gains, it also introduces complex cyber fraud and security risks.

What is Agentic AI? Traditional AI models classify data or generate responses when prompted. Agentic AI goes further: it plans, adapts, and interacts with systems and people to complete tasks. Examples include:

- Automated trading agents
- Customer service bots with payment access
- Shopping agents that find the best deals, complete purchases, or book a holiday on the user's behalf

Potential benefits:

- More efficient trading across multiple platforms
- Shorter customer service response and wait times
- Faster bookings at lower prices

Fraud and security risks:

- Autonomy without oversight: agents may act beyond intended parameters, causing financial detriment.
- Adversarial manipulation…
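The "autonomy without oversight" risk above is often mitigated by policy guardrails that sit between an agent's proposed action and its execution. A minimal sketch of that idea follows; the limits, payee list, and function names are hypothetical illustrations, not any specific product's API.

```python
from dataclasses import dataclass

@dataclass
class AgentAction:
    """A proposed payment action emitted by an autonomous agent."""
    payee: str
    amount: float

# Hypothetical policy values, for illustration only.
PER_TRANSACTION_LIMIT = 500.00
APPROVED_PAYEES = {"acme-travel", "utility-co"}

def guardrail_check(action: AgentAction) -> tuple[bool, str]:
    """Return (allowed, reason). Out-of-policy actions are blocked
    so a human can review them before any money moves."""
    if action.amount > PER_TRANSACTION_LIMIT:
        return False, "amount exceeds per-transaction limit"
    if action.payee not in APPROVED_PAYEES:
        return False, "payee not on approved list"
    return True, "within policy"

# An in-policy booking passes; an oversized transfer to an unknown payee is held.
print(guardrail_check(AgentAction("acme-travel", 120.00)))
print(guardrail_check(AgentAction("unknown-acct", 900.00)))
```

Real deployments would layer further controls (velocity limits, anomaly scoring, human-in-the-loop approval), but the principle is the same: the agent proposes, the policy disposes.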
7 November 2025

The Human Factor: Behavioural Insights from the Smishing Report
Why do consumers respond to smishing messages despite knowing the threat exists? The Smishing Report 2022 dedicated an entire section to this paradox, revealing that the issue lies less in awareness and more in behaviour under pressure.

The awareness gap: according to wider analysis, 95% of consumers could not reliably detect fraudulent SMS messages. This reflects what psychologists call overconfidence bias: people believe they can spot scams, yet fail to apply that confidence under stress.

The fraud moment: PORGiESOFT Security’s victim research and OSINT analysis revealed a pattern called the fraud moment, a short window between receiving a message and deciding to act. During that interval, emotional response overrides rational thought. The report identified three high-risk triggers:

- Financial anxiety – messages about refunds or fines.
- Social pressure – fake job or delivery updates.
- Authority bias – impersonations of government departments or banks.

In each case, the victim’s emotional state det…
8 February 2024

APP Fraud: Understanding the UK’s Fastest-Growing Financial Threat
Authorised Push Payment (APP) fraud has emerged as one of the most damaging forms of financial crime in the UK. Unlike traditional scams, APP fraud relies on deception rather than hacking: victims are persuaded to transfer money themselves, to a criminal account they believe is safe.

How APP fraud works: a typical case begins with a convincing impersonation - a phone call from “the bank’s fraud team”, an SMS alert, or even a WhatsApp message appearing to come from a family member. The victim is told their account has been compromised and that they must transfer funds “for protection”. Once the transfer occurs, the funds are often dispersed through a web of mule accounts within minutes.

The emotional dimension: fraudsters no longer rely solely on technical skill. They exploit emotion - fear, trust, love, urgency - to manipulate and confuse victims. PORGiESOFT Security’s behavioural analysis shows that victims generally report “feeling pressured by authority” during the scam.

Why detection…
14 November 2022

Smishing in the UK: How SMS Fraud Evolved into a National-Scale Threat
When PORGiESOFT Security first released the Smishing Report 2022, it was one of the first threat intelligence studies to classify smishing using both linguistic and organisational taxonomies. The findings revealed a sophisticated and fast-evolving threat landscape. At the time, 45 million UK adults (around 71% of the population) had received a smishing text. More than 3,000 attacks were analysed and classified into nine attack classes and thirteen levels, revealing how fraudsters weaponised SMS as a psychological and technical tool.

The scale of the problem: the report found that smishing was not random; it followed discernible trends and emotional triggers. The top three impersonated sectors were:

- Banks (Level B) - 39.4% of analysed messages
- Parcel Delivery Companies (Level P) - 26.3%
- Government Departments (Level G) - 16.3%

Together, these categories represented over 80% of all smishing activity in the UK at the time. Since then, smishing has only grown more complex. Threat actors no…
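Sector-level classification of this kind can be sketched in a few lines. The snippet below is illustrative only: it assigns a message to the B, P, or G levels named above using hypothetical keyword lists, and does not reproduce the report’s actual nine-class, thirteen-level taxonomy or methodology.

```python
# Hypothetical keyword heuristics per impersonated sector (illustrative only).
SECTOR_KEYWORDS = {
    "B": ("bank", "account locked", "card", "suspicious payment"),  # Banks
    "P": ("parcel", "delivery", "redelivery fee", "tracking"),      # Parcel delivery
    "G": ("hmrc", "tax refund", "fine", "gov.uk"),                  # Government
}

def classify_sector(message: str) -> str:
    """Return the level (B, P, or G) whose keywords the message
    matches, or 'U' for unclassified."""
    text = message.lower()
    for level, keywords in SECTOR_KEYWORDS.items():
        if any(kw in text for kw in keywords):
            return level
    return "U"

print(classify_sector("Your parcel could not be delivered. Pay the redelivery fee here."))
print(classify_sector("HMRC: you are owed a tax refund, claim now."))
```

A production classifier would combine linguistic features, sender metadata, and URL analysis rather than bare keywords, but the sketch shows why taxonomy levels make large-scale measurement tractable.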
5 November 2022
