LLM Performance Evaluation

LLbezpeky: Leveraging Large Language Models for Vulnerability Detection

Authors: Noble Saji Mathews, Yelizaveta Brus, Yousra Aafer, Meiyappan Nagappan, Shane McIntosh | Published: 2024-01-02 | Updated: 2024-02-13
LLM Performance Evaluation
Prompt Injection
Vulnerability Management

Digger: Detecting Copyright Content Mis-usage in Large Language Model Training

Authors: Haodong Li, Gelei Deng, Yi Liu, Kailong Wang, Yuekang Li, Tianwei Zhang, Yang Liu, Guoai Xu, Guosheng Xu, Haoyu Wang | Published: 2024-01-01
LLM Performance Evaluation
Dataset Generation
Prompt Injection

SecQA: A Concise Question-Answering Dataset for Evaluating Large Language Models in Computer Security

Authors: Zefang Liu | Published: 2023-12-26
LLM Performance Evaluation
Cybersecurity
Prompt Injection

Binary Code Summarization: Benchmarking ChatGPT/GPT-4 and Other Large Language Models

Authors: Xin Jin, Jonathan Larson, Weiwei Yang, Zhiqiang Lin | Published: 2023-12-15
LLM Performance Evaluation
Program Analysis
Prompt Injection

LLMs Perform Poorly at Concept Extraction in Cyber-security Research Literature

Authors: Maxime Würsch, Andrei Kucharavy, Dimitri Percia David, Alain Mermoud | Published: 2023-12-12
LLM Performance Evaluation
Data Preprocessing
Knowledge Extraction Methods

SmoothLLM: Defending Large Language Models Against Jailbreaking Attacks

Authors: Alexander Robey, Eric Wong, Hamed Hassani, George J. Pappas | Published: 2023-10-05 | Updated: 2024-06-11
LLM Performance Evaluation
Prompt Injection
Defense Methods

Misusing Tools in Large Language Models With Visual Adversarial Examples

Authors: Xiaohan Fu, Zihan Wang, Shuheng Li, Rajesh K. Gupta, Niloofar Mireshghallah, Taylor Berg-Kirkpatrick, Earlence Fernandes | Published: 2023-10-04
LLM Performance Evaluation
Prompt Injection
Adversarial Examples

Jailbreaker in Jail: Moving Target Defense for Large Language Models

Authors: Bocheng Chen, Advait Paliwal, Qiben Yan | Published: 2023-10-03
LLM Performance Evaluation
Prompt Injection
Evaluation Metrics

On the Safety of Open-Sourced Large Language Models: Does Alignment Really Prevent Them From Being Misused?

Authors: Hangfan Zhang, Zhimeng Guo, Huaisheng Zhu, Bochuan Cao, Lu Lin, Jinyuan Jia, Jinghui Chen, Dinghao Wu | Published: 2023-10-02
LLM Performance Evaluation
Prompt Injection
Classification of Malicious Actors

Watch Your Language: Investigating Content Moderation with Large Language Models

Authors: Deepak Kumar, Yousef AbuHashem, Zakir Durumeric | Published: 2023-09-25 | Updated: 2024-01-17
LLM Performance Evaluation
Prompt Injection
Inappropriate Content Generation