
What is Agentic AI and What Are the Fraud Risks?
Artificial intelligence is moving from passive analysis to autonomous agents. Agentic AI describes systems capable of making decisions, initiating actions and pursuing goals with minimal human input. While this offers huge efficiency gains, it also introduces complex cyber fraud and security risks.

What is Agentic AI?
Traditional AI models classify data or generate responses when prompted. Agentic AI goes further: it plans, adapts and interacts with systems and people to complete tasks. Examples include:
- Automated trading agents
- Customer service bots with payment access
- Shopping agents that find the best deals and help the user shop or book a holiday

Potential benefits
- More efficient trading across multiple platforms
- Shorter customer service response and wait times
- Faster bookings at lower prices

Fraud and Security Risks
- Autonomy Without Oversight: agents may act beyond intended parameters, causing financial detriment.
- Adversarial Manipulation: …
7 November 2025
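For readers who want something concrete, here is a minimal Python sketch (illustrative only, not taken from the post above) of one way a shopping-style agent could be fenced in by hard spending guardrails, the kind of control that mitigates the autonomy-without-oversight risk. Every name, limit and threshold is an assumption.

```python
# Minimal sketch (assumption-laden, not from the post): hard spending
# guardrails enforced outside the agent's own decision loop, so a
# misbehaving or manipulated agent cannot exceed intended parameters.
from dataclasses import dataclass, field

SPEND_LIMIT_GBP = 200.0       # absolute cap the agent can never exceed (assumed)
REVIEW_THRESHOLD_GBP = 50.0   # above this, a human must approve (assumed)

@dataclass
class PurchaseRequest:
    item: str
    price_gbp: float

@dataclass
class ShoppingAgent:
    spent_gbp: float = 0.0
    log: list = field(default_factory=list)

    def approve_payment(self, req: PurchaseRequest, human_ok: bool = False) -> bool:
        """Approve a purchase only if it stays inside the guardrails."""
        if self.spent_gbp + req.price_gbp > SPEND_LIMIT_GBP:
            self.log.append(f"BLOCKED {req.item}: would exceed spend limit")
            return False
        if req.price_gbp > REVIEW_THRESHOLD_GBP and not human_ok:
            self.log.append(f"HELD {req.item}: awaiting human approval")
            return False
        self.spent_gbp += req.price_gbp
        self.log.append(f"PAID {req.item}: £{req.price_gbp:.2f}")
        return True

agent = ShoppingAgent()
agent.approve_payment(PurchaseRequest("train ticket", 35.0))                 # auto-approved
agent.approve_payment(PurchaseRequest("hotel night", 120.0))                 # held for human review
agent.approve_payment(PurchaseRequest("hotel night", 120.0), human_ok=True)  # approved with sign-off
print("\n".join(agent.log))
```

The design point is that the caps sit outside the agent's planning loop: even an agent steered by adversarial input cannot spend past them.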

QR Code Scams: The Hidden Threat Behind Everyday Convenience
QR codes have become a normal part of daily life. We scan them to pay bills, view menus, check into buildings, even access government services. As adoption grows, so too does the opportunity for fraud. Criminals are now weaponising QR codes to deceive consumers, employees and even entire organisations. At PORGiESOFT Security, our Threat Intelligence Function has been monitoring the sharp rise in “quishing” (QR-code phishing) across both public and private sectors.

A new entry point for fraud
A QR code is simply a digital bridge: it connects a physical environment to a web destination in seconds. Fraudsters exploit this by swapping or overlaying genuine codes with malicious ones that redirect to cloned websites or install malware. In 2024, we detected fraudulent QR codes targeting car-park payment machines, event tickets and council notices. Some even mimicked NHS vaccine booking links during the pandemic’s later stages. The subtlety of the attack – and public familiarity with scanning…
1 April 2025
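On the defensive side, the following stdlib-only Python sketch (illustrative, not PORGiESOFT's detection logic) shows a few heuristics for vetting a URL that has already been decoded from a QR code before anyone follows it; the allowlisted domains are assumptions.

```python
# Minimal sketch: heuristics for a URL decoded from a QR code.
# Assumes the QR payload has already been decoded to a string;
# the EXPECTED_DOMAINS allowlist is purely illustrative.
from urllib.parse import urlparse
import ipaddress

EXPECTED_DOMAINS = {"www.gov.uk", "www.nhs.uk"}  # domains the genuine code should use (assumed)

def quishing_flags(decoded_url: str) -> list[str]:
    """Return the reasons a decoded QR URL looks suspicious (empty list = no flags)."""
    flags = []
    parsed = urlparse(decoded_url)
    host = parsed.hostname or ""

    if parsed.scheme != "https":
        flags.append("not served over HTTPS")
    try:
        ipaddress.ip_address(host)               # bare IP instead of a domain name
        flags.append("links to a raw IP address")
    except ValueError:
        pass
    if host.startswith("xn--"):
        flags.append("punycode / lookalike domain")
    if "@" in parsed.netloc:
        flags.append("credentials embedded in the URL")
    if host and host not in EXPECTED_DOMAINS:
        flags.append(f"unexpected domain: {host}")
    return flags

print(quishing_flags("http://198.51.100.7/parking-pay"))      # several flags
print(quishing_flags("https://www.gov.uk/pay-parking-fine"))  # no flags
```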

AI and the Future of Financial Crime Detection
Artificial intelligence is transforming the financial sector’s ability to detect and prevent fraud. With over £1 trillion lost globally to scams each year, AI offers both a challenge and an opportunity. Used responsibly, it can empower banks, regulators and fraud teams to identify suspicious activity faster and more accurately than ever before.

The scale of the problem
Fraud now accounts for over 40% of all crime in the UK. Financial institutions face relentless attacks ranging from phishing and smishing to insider threats and synthetic identity fraud. Traditional systems often rely on static rules or manual reviews, which cannot keep pace with evolving threats.

Why AI is a game-changer
AI can analyse millions of data points in real time, spotting subtle anomalies that human analysts might miss. Natural language processing (NLP) helps systems understand communication patterns, while machine learning models learn from historical fraud data to predict new threats. At PORGiESOFT Security, …
9 December 2024
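To make the anomaly-detection idea tangible, here is a minimal sketch, assuming scikit-learn is available and using synthetic data; it is not the production system described above, just an unsupervised IsolationForest flagging transactions that deviate from a customer's usual amounts and times.

```python
# Minimal sketch (synthetic data, assumed features): unsupervised anomaly
# detection over transaction amount and hour-of-day using IsolationForest.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Synthetic "historical" behaviour: modest daytime card payments.
history = np.column_stack([
    rng.normal(40, 15, 500).clip(1, None),   # amount in GBP
    rng.normal(14, 3, 500).clip(0, 23),      # hour of day
])

model = IsolationForest(contamination=0.01, random_state=0).fit(history)

# New activity: one routine payment and one large 3 a.m. transfer.
new_activity = np.array([
    [38.0, 15.0],
    [950.0, 3.0],
])
for row, score in zip(new_activity, model.predict(new_activity)):  # 1 = normal, -1 = anomaly
    label = "ALERT" if score == -1 else "ok"
    print(f"£{row[0]:7.2f} at {int(row[1]):02d}:00 -> {label}")
```

In practice a model like this would feed a case queue rather than block payments outright, keeping an analyst in the loop.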

The Human Factor: Behavioural Insights from the Smishing Report
Why do consumers respond to smishing messages despite knowing the threat exists? The Smishing Report 2022 dedicated an entire section to this paradox, revealing that the issue lies less in awareness and more in behaviour under pressure.

The awareness gap
According to wider analysis, 95% of consumers could not reliably detect fraudulent SMS messages. This reflects what psychologists call overconfidence bias: people believe they can spot scams, yet fail to apply that confidence under stress.

The fraud moment
PORGiESOFT Security’s victim research and OSINT analysis revealed a pattern called the fraud moment: a short window between receiving a message and deciding to act. During that interval, emotional response overrides rational thought. The report identified three high-risk triggers:
- Financial anxiety – messages about refunds or fines.
- Social pressure – fake job or delivery updates.
- Authority bias – impersonations of government or banks.
In each case, the victim’s emotional state determined…
8 February 2024

Mapping the Smishing Threat Ecosystem: Insights and Tactical Analysis from UK Smishing Attacks
PORGiESOFT Security produced a quantitative map of the UK smishing ecosystem, detailing how threat actors, infrastructure and victims intersect.

What did we learn?

1. Attack infrastructure
Nearly 99% of all messages were written in English, confirming that UK consumers remain a primary focus for global smishing campaigns. The study identified nine distinct classes of smishing messages, from Class A (URL only, 58%) to Class M (multiple fraud data points, 8.2%) and smaller reply-based classes (Y and Z) that asked users to text “Y”, “YES” or “STOP”. Each class revealed a different operational intent – whether to capture clicks, phone calls or conversation engagement. On the organisational side, 13 impersonation levels were mapped. The top three were:
- Banks (Level B) – 39.4% of attacks
- Parcel Delivery Companies (Level P) – 26.3%
- Government Departments (Level G) – 16.3%
Together, these sectors accounted for over 80% of all UK smishing incidents analysed.
13 September 2023
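The class taxonomy above lends itself to simple automated triage. The sketch below is an illustrative rule-based approximation (not the report's methodology), covering only the URL-only, multiple-data-point and reply-based classes mentioned in the summary; the regular expressions and labels are assumptions.

```python
# Minimal sketch: coarse rule-based triage of SMS texts into a subset of
# the message classes described above. Class definitions here are
# simplified assumptions based on the summary, not the report itself.
import re

URL_RE = re.compile(r"https?://\S+|\b\w+\.\w{2,}/\S*", re.IGNORECASE)
PHONE_RE = re.compile(r"(?:\+44|\b0)\d{9,10}\b")
REPLY_RE = re.compile(r"\b(?:reply|text)\s+(?:y|yes|stop)\b", re.IGNORECASE)

def classify_sms(text: str) -> str:
    """Assign a coarse class label to a single SMS message."""
    has_url = bool(URL_RE.search(text))
    has_phone = bool(PHONE_RE.search(text))
    asks_reply = bool(REPLY_RE.search(text))

    if asks_reply:
        return "reply-based (Y/Z-style)"
    if has_url and has_phone:
        return "multiple fraud data points (M-style)"
    if has_url:
        return "URL only (A-style)"
    return "unclassified"

samples = [
    "Your parcel is held. Pay the fee at royal-mail.example/redeliver",
    "HMRC: tax refund due. Call 07123456789 or visit hmrc.example/claim",
    "Did you attempt a payment of £300? Reply YES or STOP to cancel.",
]
for msg in samples:
    print(f"{classify_sms(msg):35s} <- {msg}")
```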

Unmasking Phishing Scams: Protecting Your Personal Data
Explore the world of phishing scams and learn how to protect your personal data with effective strategies and insights.
1 July 2023
