Prompt validation

Attacking interpretable NLP systems

Authors: Eldor Abdukhamidov, Tamer Abuhmed, Joanna C. S. Santos, Mohammed Abuhamad | Published: 2025-07-22
Prompt Injection
Prompt validation
Adversarial Attack Methods

GuardVal: Dynamic Large Language Model Jailbreak Evaluation for Comprehensive Safety Testing

Authors: Peiyan Zhang, Haibo Jin, Liying Kang, Haohan Wang | Published: 2025-07-10
Prompt validation
Large Language Model
Performance Evaluation Metrics

PenTest2.0: Towards Autonomous Privilege Escalation Using GenAI

Authors: Haitham S. Al-Sinani, Chris J. Mitchell | Published: 2025-07-09
Indirect Prompt Injection
Prompt validation
Prompt leaking

A Survey of LLM-Driven AI Agent Communication: Protocols, Security Risks, and Defense Countermeasures

Authors: Dezhang Kong, Shi Lin, Zhenhua Xu, Zhebo Wang, Minghao Li, Yufeng Li, Yilun Zhang, Zeyang Sha, Yuyuan Li, Changting Lin, Xun Wang, Xuan Liu, Muhammad Khurram Khan, Ningyu Zhang, Chaochao Chen, Meng Han | Published: 2025-06-24
AI Agent Communication
Poisoning attack on RAG
Prompt validation

Adversarial Suffix Filtering: a Defense Pipeline for LLMs

Authors: David Khachaturov, Robert Mullins | Published: 2025-05-14
Prompt validation
Ethical Standards Compliance
Attack Detection Method

Robustness via Referencing: Defending against Prompt Injection Attacks by Referencing the Executed Instruction

Authors: Yulin Chen, Haoran Li, Yuan Sui, Yue Liu, Yufei He, Yangqiu Song, Bryan Hooi | Published: 2025-04-29
Indirect Prompt Injection
Prompt validation
Attack Method

Watermarking Needs Input Repetition Masking

Authors: David Khachaturov, Robert Mullins, Ilia Shumailov, Sumanth Dathathri | Published: 2025-04-16
LLM Performance Evaluation
Prompt validation
Watermark Design

Benchmarking Practices in LLM-driven Offensive Security: Testbeds, Metrics, and Experiment Design

Authors: Andreas Happe, Jürgen Cito | Published: 2025-04-14
Testbed
Prompt validation
Progress Tracking

Can Indirect Prompt Injection Attacks Be Detected and Removed?

Authors: Yulin Chen, Haoran Li, Yuan Sui, Yufei He, Yue Liu, Yangqiu Song, Bryan Hooi | Published: 2025-02-23
Prompt validation
Malicious Prompt
Attack Method

Feint and Attack: Attention-Based Strategies for Jailbreaking and Protecting LLMs

Authors: Rui Pu, Chaozhuo Li, Rui Ha, Zejian Chen, Litian Zhang, Zheng Liu, Lirong Qiu, Zaisheng Ye | Published: 2024-10-18 | Updated: 2025-07-08
Disabling Safety Mechanisms of LLM
Prompt Injection
Prompt validation