Large Language Models (LLMs) have gained prominence in various applications,
including security. This paper explores the utility of LLMs in scam detection,
a critical aspect of cybersecurity. Departing from traditional applications, we propose
a novel use case for LLMs: identifying scams such as phishing, advance-fee
fraud, and romance scams. We survey notable security applications of LLMs and
discuss the unique challenges posed by scams. Specifically, we outline the key
steps involved in building an effective scam detector using LLMs, emphasizing
data collection, preprocessing, model selection, training, and integration into
target systems. Additionally, we conduct a preliminary evaluation of GPT-3.5
and GPT-4 on a duplicated scam email, highlighting their proficiency in
identifying common signs of phishing or scam emails. The results demonstrate the models'
effectiveness in recognizing suspicious elements, but we emphasize the need for
a comprehensive assessment across various language tasks. The paper concludes
by underlining the importance of ongoing refinement and collaboration with
cybersecurity experts to adapt to evolving threats.
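
To make the "common signs of phishing" concrete, the sketch below shows a minimal heuristic pre-filter for a few such signs (urgency language, credential requests, generic greetings, raw-IP links). The sign list, pattern choices, and function name are illustrative assumptions, not the paper's method; in the pipeline outlined in the abstract, an LLM would perform the actual classification, with heuristics like these serving only as a cheap first pass.

```python
import re

# Illustrative heuristic only: regexes for a few common phishing signs.
# These patterns and category names are assumptions for demonstration,
# not taken from the paper.
PHISHING_SIGNS = {
    "urgency": re.compile(
        r"\b(urgent|immediately|within 24 hours|act now)\b", re.I),
    "credential_request": re.compile(
        r"\bverify your (account|password)\b", re.I),
    "generic_greeting": re.compile(
        r"\bdear (customer|user|sir/madam)\b", re.I),
    # Links whose host is a raw IP address are a classic phishing tell.
    "suspicious_link": re.compile(r"https?://\d{1,3}(\.\d{1,3}){3}"),
}

def flag_scam_signs(email_text: str) -> list[str]:
    """Return the names of phishing signs detected in the email text."""
    return [name for name, pattern in PHISHING_SIGNS.items()
            if pattern.search(email_text)]

sample = ("Dear customer, your account will be locked within 24 hours. "
          "Verify your password at http://192.168.4.7/login now.")
print(flag_scam_signs(sample))
# → ['urgency', 'credential_request', 'generic_greeting', 'suspicious_link']
```

A rule-based filter like this is brittle against the evolving threats the abstract mentions, which is precisely the motivation for delegating the judgment to an LLM.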