LLM Security

Enhancing Source Code Security with LLMs: Demystifying The Challenges and Generating Reliable Repairs

Authors: Nafis Tanveer Islam, Joseph Khoury, Andrew Seong, Elias Bou-Harb, Peyman Najafirad | Published: 2024-09-01
LLM Security
Vulnerability Management
Automated Vulnerability Repair

LLM-PBE: Assessing Data Privacy in Large Language Models

Authors: Qinbin Li, Junyuan Hong, Chulin Xie, Jeffrey Tan, Rachel Xin, Junyi Hou, Xavier Yin, Zhun Wang, Dan Hendrycks, Zhangyang Wang, Bo Li, Bingsheng He, Dawn Song | Published: 2024-08-23 | Updated: 2024-09-06
LLM Security
Privacy Protection Methods
Prompt Injection

EEG-Defender: Defending against Jailbreak through Early Exit Generation of Large Language Models

Authors: Chongwen Zhao, Zhihao Dou, Kaizhu Huang | Published: 2024-08-21
LLM Security
Prompt Injection
Defense Methods

Security Attacks on LLM-based Code Completion Tools

Authors: Wen Cheng, Ke Sun, Xinyu Zhang, Wei Wang | Published: 2024-08-20 | Updated: 2025-01-02
LLM Security
Prompt Injection
Attack Methods

Transferring Backdoors between Large Language Models by Knowledge Distillation

Authors: Pengzhou Cheng, Zongru Wu, Tianjie Ju, Wei Du, Zhuosheng Zhang, Gongshen Liu | Published: 2024-08-19
LLM Security
Backdoor Attack
Poisoning

Antidote: Post-fine-tuning Safety Alignment for Large Language Models against Harmful Fine-tuning

Authors: Tiansheng Huang, Gautam Bhattacharya, Pratik Joshi, Josh Kimball, Ling Liu | Published: 2024-08-18 | Updated: 2024-09-03
LLM Security
Prompt Injection
Safety Alignment

BaThe: Defense against the Jailbreak Attack in Multimodal Large Language Models by Treating Harmful Instruction as Backdoor Trigger

Authors: Yulin Chen, Haoran Li, Yirui Zhang, Zihao Zheng, Yangqiu Song, Bryan Hooi | Published: 2024-08-17 | Updated: 2025-01-10
AI Compliance
LLM Security
Content Moderation

MIA-Tuner: Adapting Large Language Models as Pre-training Text Detector

Authors: Wenjie Fu, Huandong Wang, Chen Gao, Guanghua Liu, Yong Li, Tao Jiang | Published: 2024-08-16
LLM Security
Prompt Injection
Membership Inference

DePrompt: Desensitization and Evaluation of Personal Identifiable Information in Large Language Model Prompts

Authors: Xiongtao Sun, Gan Liu, Zhipeng He, Hui Li, Xiaoguang Li | Published: 2024-08-16
LLM Security
Privacy Protection Methods
Prompt Injection

Prefix Guidance: A Steering Wheel for Large Language Models to Defend Against Jailbreak Attacks

Authors: Jiawei Zhao, Kejiang Chen, Xiaojian Yuan, Weiming Zhang | Published: 2024-08-15 | Updated: 2024-08-22
LLM Security
Prompt Injection
Defense Methods