LLM Security

Dataset and Lessons Learned from the 2024 SaTML LLM Capture-the-Flag Competition

Authors: Edoardo Debenedetti, Javier Rando, Daniel Paleka, Silaghi Fineas Florin, Dragos Albastroiu, Niv Cohen, Yuval Lemberg, Reshmi Ghosh, Rui Wen, Ahmed Salem, Giovanni Cherubin, Santiago Zanella-Beguelin, Robin Schmid, Victor Klemm, Takahiro Miki, Chenhao Li, Stefan Kraft, Mario Fritz, Florian Tramèr, Sahar Abdelnabi, Lea Schönherr | Published: 2024-06-12
LLM Security
Prompt Injection
Defense Methods

A Study of Backdoors in Instruction Fine-tuned Language Models

Authors: Jayaram Raghuram, George Kesidis, David J. Miller | Published: 2024-06-12 | Updated: 2024-08-21
LLM Security
Backdoor Attacks
Defense Methods

Knowledge Return Oriented Prompting (KROP)

Authors: Jason Martin, Kenneth Yeung | Published: 2024-06-11
LLM Security
Prompt Injection
Attack Methods

A Survey of Recent Backdoor Attacks and Defenses in Large Language Models

Authors: Shuai Zhao, Meihuizi Jia, Zhongliang Guo, Leilei Gan, Xiaoyu Xu, Xiaobao Wu, Jie Fu, Yichao Feng, Fengjun Pan, Luu Anh Tuan | Published: 2024-06-10 | Updated: 2025-01-04
LLM Security
Backdoor Attacks

An LLM-Assisted Easy-to-Trigger Backdoor Attack on Code Completion Models: Injecting Disguised Vulnerabilities against Strong Detection

Authors: Shenao Yan, Shen Wang, Yue Duan, Hanbin Hong, Kiho Lee, Doowon Kim, Yuan Hong | Published: 2024-06-10
LLM Security
Backdoor Attacks
Prompt Injection

LLM Dataset Inference: Did you train on my dataset?

Authors: Pratyush Maini, Hengrui Jia, Nicolas Papernot, Adam Dziedzic | Published: 2024-06-10
LLM Security
Data Privacy Evaluation
Membership Inference

Safety Alignment Should Be Made More Than Just a Few Tokens Deep

Authors: Xiangyu Qi, Ashwinee Panda, Kaifeng Lyu, Xiao Ma, Subhrajit Roy, Ahmad Beirami, Prateek Mittal, Peter Henderson | Published: 2024-06-10
LLM Security
Prompt Injection
Safety Alignment

How Alignment and Jailbreak Work: Explain LLM Safety through Intermediate Hidden States

Authors: Zhenhong Zhou, Haiyang Yu, Xinghua Zhang, Rongwu Xu, Fei Huang, Yongbin Li | Published: 2024-06-09 | Updated: 2024-06-13
LLM Security
Prompt Injection
Ethical Guideline Compliance

Adversarial Tuning: Defending Against Jailbreak Attacks for LLMs

Authors: Fan Liu, Zhao Xu, Hao Liu | Published: 2024-06-07
LLM Security
Prompt Injection
Adversarial Training

BadAgent: Inserting and Activating Backdoor Attacks in LLM Agents

Authors: Yifei Wang, Dizhan Xue, Shengjie Zhang, Shengsheng Qian | Published: 2024-06-05
LLM Security
Backdoor Attacks
Prompt Injection