This paper analyzes Large Language Model (LLM) security vulnerabilities based
on data from Crucible, encompassing 214,271 attack attempts by 1,674 users
across 30 LLM challenges. Our findings reveal that automated approaches
significantly outperform manual techniques (69.5% vs. 47.6% success rate),
despite only 5.2% of users employing automation. We demonstrate that automated
approaches excel at systematic exploration and pattern-matching challenges,
while manual approaches retain a speed advantage in certain creative-reasoning
scenarios, often solving problems 5x faster when successful. Challenge
categories that reward systematic exploration are most effectively targeted
through automation, whereas intuition-driven challenges sometimes favor manual
techniques on time-to-solve metrics. These results illuminate how algorithmic
testing is transforming AI red-teaming practices, with implications for both
offensive security research and defensive measures. Our analysis suggests that
optimal security testing combines human creativity for strategy development
with programmatic execution for thorough exploration.