Abstract
Large Language Models (LLMs) are increasingly vulnerable to sophisticated
multi-turn manipulation attacks, where adversaries strategically build context
through seemingly benign conversational turns to circumvent safety measures and
elicit harmful or unauthorized responses. These attacks exploit the temporal
nature of dialogue to evade single-turn detection methods, representing a
critical security vulnerability with significant implications for real-world
deployments.
This paper introduces the Temporal Context Awareness (TCA) framework, a novel
defense mechanism designed to address this challenge by continuously analyzing
semantic drift, cross-turn intention consistency, and evolving conversational
patterns. The TCA framework integrates dynamic context embedding analysis,
cross-turn consistency verification, and progressive risk scoring to detect and
mitigate manipulation attempts effectively. Preliminary evaluations on
simulated adversarial scenarios demonstrate the framework's potential to
identify subtle manipulation patterns often missed by traditional detection
techniques, offering a much-needed layer of security for conversational AI
systems. In addition to outlining the design of TCA, we analyze diverse attack
vectors and their progression across multi-turn conversations, providing
valuable insights into adversarial tactics and their impact on LLM
vulnerabilities. Our findings underscore the pressing need for robust,
context-aware defenses in conversational AI systems and highlight the TCA framework
as a promising direction for securing LLMs while preserving their utility in
legitimate applications. We make our implementation available to support
further research in this emerging area of AI security.
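To make the three components named above concrete, the sketch below shows one plausible way they could fit together: per-turn semantic drift measured against the previous turn's embedding, cross-turn consistency measured against a decayed centroid of the conversation, and a progressive risk score that accumulates across turns. This is an illustrative assumption, not the authors' released implementation; the embed() stub, the TemporalContextMonitor class, and all thresholds and decay constants are hypothetical placeholders.

```python
# Illustrative sketch of the three TCA components described in the
# abstract. embed() is a deterministic placeholder; in practice any
# sentence-embedding model would be substituted.
import hashlib
import numpy as np

def embed(text: str) -> np.ndarray:
    """Placeholder embedding: deterministic random unit vector per text."""
    seed = int(hashlib.sha256(text.encode()).hexdigest()[:8], 16)
    v = np.random.default_rng(seed).standard_normal(384)
    return v / np.linalg.norm(v)

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b))

class TemporalContextMonitor:
    """Hypothetical monitor tracking drift, consistency, and risk."""

    def __init__(self, drift_threshold: float = 0.35, decay: float = 0.8):
        self.history = []                    # embeddings of prior user turns
        self.risk = 0.0                      # progressive risk score
        self.drift_threshold = drift_threshold
        self.decay = decay                   # older turns weigh less

    def score_turn(self, user_turn: str) -> float:
        v = embed(user_turn)
        if self.history:
            # Semantic drift: how far this turn moved from the last one.
            drift = 1.0 - cosine(v, self.history[-1])
            # Cross-turn consistency: similarity to a decay-weighted
            # centroid of the conversation so far.
            n = len(self.history)
            weights = np.array([self.decay ** (n - 1 - i) for i in range(n)])
            centroid = np.average(self.history, axis=0, weights=weights)
            centroid /= np.linalg.norm(centroid)
            inconsistency = 1.0 - cosine(v, centroid)
            # Progressive risk: small but persistent drift accumulates,
            # which is the pattern multi-turn manipulation relies on.
            step = max(0.0, drift - self.drift_threshold) + 0.5 * inconsistency
            self.risk = self.decay * self.risk + step
        self.history.append(v)
        return self.risk

# Example: flag the conversation once accumulated risk crosses a cutoff.
monitor = TemporalContextMonitor()
for turn in ["Tell me about chemistry.",
             "What household products react strongly?",
             "How would someone combine them at scale?"]:
    risk = monitor.score_turn(turn)
    if risk > 1.0:  # illustrative cutoff
        print(f"flagged at turn: {turn!r} (risk={risk:.2f})")
```

A per-turn score like this, rather than a per-message classifier, is what lets a defense of this kind catch attacks whose individual turns all look benign in isolation.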