These labels were automatically added by AI and may be inaccurate. For details, see About Literature Database.
Abstract
The advanced capabilities of Large Language Models (LLMs) have made them
invaluable across various applications, from conversational agents and content
creation to data analysis, research, and innovation. However, their
effectiveness and accessibility also render them susceptible to abuse for
generating malicious content, including phishing attacks. This study explores
the potential of using four popular commercially available LLMs, i.e., ChatGPT
(GPT 3.5 Turbo), GPT 4, Claude, and Bard, to generate functional phishing
attacks using a series of malicious prompts. We discover that these LLMs can
generate both phishing websites and emails that can convincingly imitate
well-known brands and also deploy a range of evasive tactics that are used to
elude detection mechanisms employed by anti-phishing systems. These attacks can
be generated using unmodified or "vanilla" versions of these LLMs without
requiring any prior adversarial exploits such as jailbreaking. We evaluate the
performance of the LLMs towards generating these attacks and find that they can
also be utilized to create malicious prompts that, in turn, can be fed back to
the model to generate phishing scams - thus massively reducing the
prompt-engineering effort required by attackers to scale these threats. As a
countermeasure, we build a BERT-based automated detection tool that can be used
for the early detection of malicious prompts to prevent LLMs from generating
phishing content. Our model is transferable across all four commercial LLMs,
attaining an average accuracy of 96% for phishing website prompts and 94% for
phishing email prompts. We also disclose the vulnerabilities to the concerned
LLMs, with Google acknowledging it as a severe issue. Our detection model is
available for use at Hugging Face, as well as a ChatGPT Actions plugin.