AI-supported spear phishing fools more than 50% of targets
In a controlled study comparing AI and human cybercriminal success rates, 54% of users were tricked by AI-supported spear phishing emails.
2025-01-09 13:08:44 - Ravi Jordan
When artificial intelligence (AI) became more commonplace, everyone predicted that it would assist cybercriminals in making their phishing campaigns more effective.
Researchers have now conducted a scientific study into the effectiveness of AI-supported spear phishing, and the results align with those expectations: AI is making it easier for cybercriminals to run effective campaigns.
The study, titled Evaluating Large Language Models' Capability to Launch Fully Automated Spear Phishing Campaigns: Validated on Human Subjects, compares the performance of large language models with that of human experts and with AI models from last year.
To accomplish this, the researchers developed and tested an AI-powered tool to automate spear phishing campaigns. AI agents based on GPT-4o and Claude 3.5 Sonnet search the web for information about a target and use this to send phishing messages.
These tools achieved a click-through rate (CTR) of 54%, compared to a CTR of only 12% for arbitrary, non-targeted phishing emails.
Another group was tested against emails written by human experts, which proved just as effective as the fully automated AI emails, also achieving a CTR of 54%. However, the human experts did this at 30 times the cost of the AI-automated tools.
The AI tools with human assistance outperformed both groups with a CTR of 56%, at four times the cost of the fully automated AI tools. So some help from experts can improve the CTR, but is it worth the time? We don't expect cybercriminals to consider the extra 2% worth the investment, because they often exhibit a preference for efficiency and minimal effort in their operations.
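To see why the extra 2% is a poor deal, it helps to compare cost per successful click. The sketch below uses the CTRs and relative cost multipliers reported above, with the fully automated AI tools normalized to a cost of 1 unit; the absolute unit is arbitrary and not taken from the paper.

```python
# Back-of-the-envelope cost-effectiveness comparison.
# Costs are in arbitrary units relative to the fully automated AI tools (= 1.0);
# the 4x and 30x multipliers and the CTRs are as reported in the study.
campaigns = {
    "AI automated":       {"cost": 1.0,  "ctr": 0.54},
    "AI + human expert":  {"cost": 4.0,  "ctr": 0.56},
    "Human experts only": {"cost": 30.0, "ctr": 0.54},
}

for name, c in campaigns.items():
    cost_per_click = c["cost"] / c["ctr"]  # units spent per successful click
    print(f"{name}: {cost_per_click:.2f} units per successful click")
```

Fully automated AI comes out at roughly 1.85 units per click, human-assisted AI at about 7.14, and human experts alone at about 55.56, which explains why attackers optimizing for volume would stick with full automation.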
The study also found that AI models have become better at deceiving people than they were last year, when AI models still needed human assistance to match the performance of human experts.
The success of a phishing email depends on the level of personalization an AI-assisted method can achieve, and on the quality of the information that personalization is based on.
Example from the paper showing how collected information is used to write a spear phishing email
Based on information found online about the target, the email invites them to participate in a project that matches their interests and provides a link to a site where they can supposedly find more information.
In 88% of cases, the AI-collected information about the targets was accurate and useful, while only 4% of the generated profiles were inaccurate.
The researchers also found that the guardrails that are supposed to stop AI models from helping cybercriminals are not an effective barrier to creating phishing emails with any of the tested models.
On the bright side, LLMs are also getting better at recognizing phishing emails. Claude 3.5 Sonnet scored well above 90% with only a few false alarms, and identified several emails that had eluded human inspection. However, it still struggles with some phishing emails that are clearly suspicious to most humans.
If you are looking for some guidance on how to recognize AI-generated phishing emails, we recommend reading: How to recognize AI-generated phishing emails. The most important rule: never click on links in unsolicited emails.