Prompt Injection

Vulnerabilities of Foundation Model Integrated Federated Learning Under Adversarial Threats

Authors: Chen Wu, Xi Li, Jiaqi Wang | Published: 2024-01-18 | Updated: 2024-04-02
Prompt Injection
Poisoning
Federated Learning

Excuse me, sir? Your language model is leaking (information)

Authors: Or Zamir | Published: 2024-01-18
Watermarking
Prompt Injection
Dynamic Error-Correcting Codes

Large Language Model Lateral Spear Phishing: A Comparative Study in Large-Scale Organizational Settings

Authors: Mazal Bethany, Athanasios Galiopoulos, Emet Bethany, Mohammad Bahrami Karkevandi, Nishant Vishwamitra, Peyman Najafirad | Published: 2024-01-18
Phishing Attacks
Prompt Injection

Signed-Prompt: A New Approach to Prevent Prompt Injection Attacks Against LLM-Integrated Applications

Authors: Xuchen Suo | Published: 2024-01-15
LLM Security
Prompt Injection

Universal Vulnerabilities in Large Language Models: Backdoor Attacks for In-context Learning

Authors: Shuai Zhao, Meihuizi Jia, Luu Anh Tuan, Fengjun Pan, Jinming Wen | Published: 2024-01-11 | Updated: 2024-10-09
Backdoor Attacks
Prompt Injection

Sleeper Agents: Training Deceptive LLMs that Persist Through Safety Training

Authors: Evan Hubinger, Carson Denison, Jesse Mu, Mike Lambert, Meg Tong, Monte MacDiarmid, Tamera Lanham, Daniel M. Ziegler, Tim Maxwell, Newton Cheng, Adam Jermyn, Amanda Askell, Ansh Radhakrishnan, Cem Anil, David Duvenaud, Deep Ganguli, Fazl Barez, Jack Clark, Kamal Ndousse, Kshitij Sachan, Michael Sellitto, Mrinank Sharma, Nova DasSarma, Roger Grosse, Shauna Kravec, Yuntao Bai, Zachary Witten, Marina Favaro, Jan Brauner, Holden Karnofsky, Paul Christiano, Samuel R. Bowman, Logan Graham, Jared Kaplan, Sören Mindermann, Ryan Greenblatt, Buck Shlegeris, Nicholas Schiefer, Ethan Perez | Published: 2024-01-10 | Updated: 2024-01-17
Backdoor Attacks
Prompt Injection
Reinforcement Learning

Malla: Demystifying Real-world Large Language Model Integrated Malicious Services

Authors: Zilong Lin, Jian Cui, Xiaojing Liao, XiaoFeng Wang | Published: 2024-01-06 | Updated: 2024-08-19
Phishing Attacks
Prompt Injection
Malicious Content Generation

LLbezpeky: Leveraging Large Language Models for Vulnerability Detection

Authors: Noble Saji Mathews, Yelizaveta Brus, Yousra Aafer, Meiyappan Nagappan, Shane McIntosh | Published: 2024-01-02 | Updated: 2024-02-13
LLM Performance Evaluation
Prompt Injection
Vulnerability Management

A Novel Evaluation Framework for Assessing Resilience Against Prompt Injection Attacks in Large Language Models

Authors: Daniel Wankit Yip, Aysan Esmradi, Chun Fai Chan | Published: 2024-01-02
LLM Security
Prompt Injection
Attack Evaluation

Digger: Detecting Copyright Content Mis-usage in Large Language Model Training

Authors: Haodong Li, Gelei Deng, Yi Liu, Kailong Wang, Yuekang Li, Tianwei Zhang, Yang Liu, Guoai Xu, Guosheng Xu, Haoyu Wang | Published: 2024-01-01
LLM Performance Evaluation
Dataset Generation
Prompt Injection