Malicious Prompt

Adversarial Attack-Defense Co-Evolution for LLM Safety Alignment via Tree-Group Dual-Aware Search and Optimization

Authors: Xurui Li, Kaisong Song, Rui Zhu, Pin-Yu Chen, Haixu Tang | Published: 2025-11-24
Prompt Injection
Large Language Model
Malicious Prompt

Understanding and Mitigating Over-refusal for Large Language Models via Safety Representation

Authors: Junbo Zhang, Ran Chen, Qianli Zhou, Xinyang Deng, Wen Jiang | Published: 2025-11-24
Disabling Safety Mechanisms of LLM
Prompt Injection
Malicious Prompt

Defending Large Language Models Against Jailbreak Exploits with Responsible AI Considerations

Authors: Ryan Wong, Hosea David Yu Fei Ng, Dhananjai Sharma, Glenn Jun Jie Ng, Kavishvaran Srinivasan | Published: 2025-11-24
Ethical Considerations
Large Language Model
Malicious Prompt

RoguePrompt: Dual-Layer Ciphering for Self-Reconstruction to Circumvent LLM Moderation

Authors: Benyamin Tafreshian | Published: 2025-11-24
Indirect Prompt Injection
Prompt Leaking
Malicious Prompt

PSM: Prompt Sensitivity Minimization via LLM-Guided Black-Box Optimization

Authors: Hussein Jawad, Nicolas Brunel | Published: 2025-11-20
Privacy-Preserving Data Mining
Prompt Leaking
Malicious Prompt

Beyond Fixed and Dynamic Prompts: Embedded Jailbreak Templates for Advancing LLM Security

Authors: Hajun Kim, Hyunsik Na, Daeseon Choi | Published: 2025-11-18
Prompt Engineering
Large Language Model
Malicious Prompt

SGuard-v1: Safety Guardrail for Large Language Models

Authors: JoonHo Lee, HyeonMin Cho, Jaewoong Yun, Hyunjae Lee, JunKyu Lee, Juree Seok | Published: 2025-11-16
Prompt Injection
Malicious Prompt
Adaptive Misuse Detection

Better Privilege Separation for Agents by Restricting Data Types

Authors: Dennis Jacob, Emad Alghamdi, Zhanhao Hu, Basel Alomair, David Wagner | Published: 2025-09-30
Indirect Prompt Injection
Security Strategy Generation
Malicious Prompt

QGuard: Question-based Zero-shot Guard for Multi-modal LLM Safety

Authors: Taegyeong Lee, Jeonghwa Yoo, Hyoungseo Cho, Soo Yong Kim, Yunho Maeng | Published: 2025-06-14 | Updated: 2025-09-30
Alignment
Ethical Statement
Malicious Prompt

STShield: Single-Token Sentinel for Real-Time Jailbreak Detection in Large Language Models

Authors: Xunguang Wang, Wenxuan Wang, Zhenlan Ji, Zongjie Li, Pingchuan Ma, Daoyuan Wu, Shuai Wang | Published: 2025-03-23
Prompt Injection
Malicious Prompt
Effectiveness Analysis of Defense Methods