Blog


Welcome to the PORGiESOFT Group Blog - your source for insight, research and analysis on the evolving world of digital fraud, scams, financial education and cybercrime. We explore the latest fraud intelligence, AI security innovations, and practical strategies helping people, businesses and governments stay protected. From smishing and phishing trends to fraud awareness, threat intelligence and AI-powered prevention, our mission is to make the digital world safer for everyone.

Report Fraud: A New Chapter in UK Cyber-Fraud Response
The landscape for reporting cybercrime and fraud in the UK has shifted - and it's a change worth paying attention to. On 4 December 2025, City of London Police launched Report Fraud, a new national service replacing Action Fraud as the central reporting platform for cybercrime and fraud in England, Wales and Northern Ireland.

This is more than a rename. Report Fraud introduces a modernised reporting infrastructure, a support system for victims, and a revamped intelligence backbone. For organisations, councils and citizens, this service change marks a pivotal opportunity to rethink how fraud is reported, tracked and acted upon.

What is Report Fraud - and why now?

Report Fraud has been designed by City of London Police to streamline and improve the experience for victims of cybercrime and fraud. According to the launch announcement, the service includes:

- A new Contact Centre and online reporting tool — providing a user-friendly way for the public to report fraud or cybercrime
- A national
5 December 2025
What is Agentic AI and What Are the Fraud Risks?
Artificial intelligence is moving from passive analysis to autonomous agents. Agentic AI describes systems capable of making decisions, initiating actions, and pursuing goals with minimal human input. While this offers huge efficiency gains, it also introduces complex cyber-fraud and security risks.

What is Agentic AI?

Traditional AI models classify data or generate responses when prompted. Agentic AI goes further: it plans, adapts, and interacts with systems and people to complete tasks. Examples include:

- Automated trading agents
- Customer service bots with payment access
- Shopping agents that find the best deals and help the user shop or book a holiday

Potential benefits

- More efficient trading across multiple platforms
- Shorter customer service response and wait times
- Faster bookings at relatively lower prices

Fraud and Security Risks

- Autonomy Without Oversight: agents may act beyond intended parameters, causing financial detriment.
- Adversarial Manipulat
7 November 2025
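The "autonomy without oversight" risk described above is often mitigated with pre-action guardrails that sit between an agent and any payment it tries to make. The following is a minimal sketch of that idea; the `Action` type, the `approve` method, and the cap values are all illustrative assumptions, not any real agent framework's API.

```python
"""Sketch of a pre-action guardrail for an agent with payment access.
All names (Action, Guardrail, approve) and limits are hypothetical."""

from dataclasses import dataclass


@dataclass
class Action:
    kind: str          # e.g. "payment", "message"
    amount: float = 0.0
    recipient: str = ""


PER_ACTION_CAP = 100.0   # assumed per-payment policy limit
SESSION_CAP = 250.0      # assumed cumulative limit per agent session


class Guardrail:
    def __init__(self) -> None:
        self.spent = 0.0

    def approve(self, action: Action) -> bool:
        """Allow non-payment actions; block payments breaching either cap."""
        if action.kind != "payment":
            return True
        if action.amount > PER_ACTION_CAP:
            return False
        if self.spent + action.amount > SESSION_CAP:
            return False
        self.spent += action.amount
        return True


g = Guardrail()
print(g.approve(Action("payment", 80.0, "merchant-a")))   # within caps -> True
print(g.approve(Action("payment", 500.0, "merchant-b")))  # over per-action cap -> False
print(g.approve(Action("payment", 200.0, "merchant-c")))  # would breach session cap -> False
```

Real deployments would also log refused actions and escalate to a human reviewer, but even this simple cap illustrates why an agent should never hold unconditional payment authority.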
The Human Factor: Behavioural Insights from the Smishing Report
Why do consumers respond to smishing messages despite knowing the threat exists? The Smishing Report 2022 dedicated an entire section to this paradox - revealing that the issue lies less in awareness and more in behaviour under pressure.

The awareness gap

According to wider analysis, 95% of consumers could not reliably detect fraudulent SMS messages. This reflects what psychologists call overconfidence bias: people believe they can spot scams, yet fail to apply that confidence under stress.

The fraud moment

PORGiESOFT Security’s victim research and OSINT analysis revealed a pattern called the fraud moment - a short window between receiving a message and deciding to act. During that short interval, emotional response overrides rational thought. The report identified three high-risk triggers:

- Financial anxiety – messages about refunds or fines.
- Social pressure – fake job or delivery updates.
- Authority bias – impersonations of government departments or banks.

In each case, the victim’s emotional state det
8 February 2024
APP Fraud: Understanding the UK’s Fastest-Growing Financial Threat
Authorised Push Payment (APP) fraud has emerged as one of the most damaging forms of financial crime in the UK. Unlike traditional scams, APP fraud relies on deception rather than hacking. Victims are persuaded to transfer money themselves - to a criminal account they believe is safe.

How APP fraud works

A typical case begins with a convincing impersonation: a phone call from “the bank’s fraud team”, an SMS alert, or even a WhatsApp message appearing to come from a family member. The victim is told their account has been compromised and that they must transfer funds “for protection”. Once the transfer occurs, the funds are often dispersed through a web of mule accounts within minutes.

The emotional dimension

Fraudsters no longer rely solely on technical skill. They exploit emotion - fear, trust, love, urgency - to manipulate and confuse victims. PORGiESOFT Security’s behavioural analysis shows that victims generally report “feeling pressured by authority” during the scam.

Why detection
14 November 2022
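The dispersal pattern described above - stolen funds fanning out to many mule accounts within minutes - is one signal banks can look for. Below is a toy sketch of a fan-out heuristic; the thresholds, field layout, and function name are assumptions for illustration, not a description of any bank's actual detection logic.

```python
"""Illustrative fan-out heuristic: flag a sender who pays several
distinct recipients within a short window, a pattern associated with
mule-account dispersal. All thresholds and names are hypothetical."""

from collections import defaultdict

FANOUT_THRESHOLD = 3   # distinct recipients that trigger a flag
WINDOW_SECONDS = 600   # 10-minute observation window


def flag_fanout(transfers):
    """transfers: iterable of (timestamp_seconds, sender, recipient).
    Returns the set of senders who paid >= FANOUT_THRESHOLD distinct
    recipients within any WINDOW_SECONDS window."""
    by_sender = defaultdict(list)
    for ts, sender, recipient in sorted(transfers):
        by_sender[sender].append((ts, recipient))

    flagged = set()
    for sender, events in by_sender.items():
        for t0, _ in events:
            # Count distinct recipients inside the window starting at t0.
            recipients = {r for t, r in events if t0 <= t < t0 + WINDOW_SECONDS}
            if len(recipients) >= FANOUT_THRESHOLD:
                flagged.add(sender)
                break
    return flagged


txns = [
    (0,   "victim", "mule-1"),
    (120, "victim", "mule-2"),
    (300, "victim", "mule-3"),
    (0,   "normal", "shop-1"),
]
print(flag_fanout(txns))  # {'victim'}
```

A production system would combine many such signals (new payees, device changes, session behaviour), but the sliding-window count shows why speed of dispersal is itself evidence.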
Smishing in the UK: How SMS Fraud Evolved into a National-Scale Threat
When PORGiESOFT Security first released the Smishing Report 2022, it was one of the first threat intelligence studies to classify smishing using both linguistic and organisational taxonomies. The findings revealed a sophisticated and fast-evolving threat landscape. At the time, 45 million UK adults (around 71% of the population) had received a smishing text. More than 3,000 attacks were analysed and classified into nine attack classes and thirteen levels, revealing how fraudsters weaponised SMS as a psychological and technical tool.

The scale of the problem

The report found that smishing was not random; it followed discernible trends and emotional triggers. The top three impersonated sectors were:

- Banks (Level B) - 39.4% of analysed messages
- Parcel Delivery Companies (Level P) - 26.3%
- Government Departments (Level G) - 16.3%

Together, these categories represented over 80% of all smishing activity in the UK at the time. Since then, smishing has only grown more complex. Threat actors no
5 November 2022
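To make the idea of an organisational taxonomy concrete, here is a deliberately simple keyword-based sketch that sorts an SMS into one of the three top impersonated sectors named above. The keyword lists and scoring are illustrative assumptions; the report's actual classification (nine attack classes, thirteen levels) is far richer than keyword matching.

```python
"""Toy keyword classifier assigning an SMS to an impersonated sector.
Keyword lists are hypothetical examples, not the report's taxonomy."""

SECTOR_KEYWORDS = {
    "Bank (Level B)": ["account", "card", "suspended", "verify"],
    "Parcel Delivery (Level P)": ["parcel", "package", "redelivery fee", "courier"],
    "Government (Level G)": ["hmrc", "tax refund", "fine", "gov.uk"],
}


def classify_sms(text: str) -> str:
    """Return the sector with the most keyword hits, else 'Unclassified'."""
    lowered = text.lower()
    scores = {
        sector: sum(kw in lowered for kw in keywords)
        for sector, keywords in SECTOR_KEYWORDS.items()
    }
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "Unclassified"


print(classify_sms("Your parcel is held by the courier. Pay the redelivery fee."))
print(classify_sms("HMRC: you are owed a tax refund. Claim via gov.uk now."))
```

Even this crude approach shows why sector-level labels are useful: once a message is tagged as, say, parcel-themed, defenders can correlate it with known delivery-scam campaigns.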