Abstract
The proliferation of Large Language Models (LLMs) poses challenges in
detecting and mitigating digital deception, as these models can emulate human
conversational patterns and facilitate chat-based social engineering (CSE)
attacks. This study investigates the dual capabilities of LLMs as both
facilitators and defenders against CSE threats. We develop SEConvo, a novel
dataset simulating CSE scenarios in academic and recruitment contexts,
designed to examine how LLMs can be exploited in these situations. Our findings
reveal that, while off-the-shelf LLMs generate high-quality CSE content, their
detection capabilities are suboptimal, leading to increased operational costs
for defense. In response, we propose ConvoSentinel, a modular defense pipeline
that improves detection at both the message and the conversation levels,
offering enhanced adaptability and cost-effectiveness. The retrieval-augmented
module in ConvoSentinel identifies malicious intent by comparing messages to a
database of similar conversations, enhancing CSE detection at all stages. Our
study highlights the need for advanced strategies to leverage LLMs in
cybersecurity.
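The retrieval-augmented idea described above can be illustrated with a minimal sketch: embed an incoming message, retrieve the most similar snippets from a labeled conversation database, and vote on their labels. Everything here is an assumption for illustration, not the paper's implementation: the mini-database, the bag-of-words cosine similarity (a stand-in for a learned embedding model), and the `flag_message` helper are all hypothetical.

```python
from collections import Counter
import math

def bow(text: str) -> Counter:
    """Hypothetical stand-in for an embedding model: bag-of-words vector."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse bag-of-words vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Hypothetical mini-database of labeled conversation snippets.
DB = [
    ("could you share your university login so I can add you to the grant portal", "malicious"),
    ("please send your bank details to process the recruitment signing bonus", "malicious"),
    ("happy to discuss your paper at the conference next week", "benign"),
    ("the interview is scheduled for monday at 10am via zoom", "benign"),
]

def flag_message(msg: str, k: int = 2) -> str:
    """Retrieve the k most similar snippets and majority-vote on their labels."""
    v = bow(msg)
    ranked = sorted(DB, key=lambda entry: cosine(v, bow(entry[0])), reverse=True)
    top_labels = [label for _, label in ranked[:k]]
    return max(set(top_labels), key=top_labels.count)
```

For example, `flag_message("can you share your login details for the portal")` retrieves the two credential-harvesting snippets and returns `"malicious"`. A real pipeline would swap the bag-of-words vectors for dense sentence embeddings and the vote for an LLM or classifier conditioned on the retrieved conversations.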