As large language models (LLMs) become increasingly capable, rigorous security and
safety evaluation is crucial. While current red teaming approaches have made
strides in assessing LLM vulnerabilities, they often rely heavily on human
input and lack comprehensive coverage of emerging attack vectors. This paper
introduces AutoRedTeamer, a novel framework for fully automated, end-to-end red
teaming against LLMs. AutoRedTeamer combines a multi-agent architecture with a
memory-guided attack selection mechanism to enable continuous discovery and
integration of new attack vectors. The dual-agent framework consists of a red
teaming agent, which can generate and execute test cases from high-level risk
categories alone, and a strategy proposer agent, which autonomously discovers
and implements new attacks by analyzing recent research. This modular
design allows AutoRedTeamer to adapt to emerging threats while maintaining
strong performance on existing attack vectors. We demonstrate AutoRedTeamer's
effectiveness across diverse evaluation settings, achieving a 20% higher attack
success rate on HarmBench against Llama-3.1-70B while reducing computational
costs by 46% compared to existing approaches. AutoRedTeamer also matches the
diversity of human-curated benchmarks in generating test cases, providing a
comprehensive, scalable, and continuously evolving framework for evaluating the
security of AI systems.