Prompt Injection

Sandwich attack: Multi-language Mixture Adaptive Attack on LLMs

Authors: Bibek Upadhayay, Vahid Behzadan | Published: 2024-04-09
LLM Security
Prompt Injection
Attack Method

Rethinking How to Evaluate Language Model Jailbreak

Authors: Hongyu Cai, Arjun Arunasalam, Leo Y. Lin, Antonio Bianchi, Z. Berkay Celik | Published: 2024-04-09 | Updated: 2024-05-07
Prompt Injection
Malicious Actor Classification
Evaluation Method

Unbridled Icarus: A Survey of the Potential Perils of Image Inputs in Multimodal Large Language Model Security

Authors: Yihe Fan, Yuxin Cao, Ziyu Zhao, Ziyao Liu, Shaofeng Li | Published: 2024-04-08 | Updated: 2024-08-11
LLM Security
Prompt Injection
Threat Modeling

Initial Exploration of Zero-Shot Privacy Utility Tradeoffs in Tabular Data Using GPT-4

Authors: Bishwas Mandal, George Amariucai, Shuangqing Wei | Published: 2024-04-07
Data Privacy Assessment
Privacy Protection Method
Prompt Injection

Fine-Tuning, Quantization, and LLMs: Navigating Unintended Outcomes

Authors: Divyanshu Kumar, Anurakt Kumar, Sahil Agarwal, Prashanth Harshangi | Published: 2024-04-05 | Updated: 2024-09-09
LLM Security
Prompt Injection
Safety Alignment

AuditGPT: Auditing Smart Contracts with ChatGPT

Authors: Shihao Xia, Shuai Shao, Mengting He, Tingting Yu, Linhai Song, Yiying Zhang | Published: 2024-04-05
ERC Rules
ERC Compliance Evaluation
Prompt Injection

An Investigation into Misuse of Java Security APIs by Large Language Models

Authors: Zahra Mousavi, Chadni Islam, Kristen Moore, Alsharif Abuadbba, Muhammad Ali Babar | Published: 2024-04-04
Security API Misuse
Security Analysis
Prompt Injection

Exploring Backdoor Vulnerabilities of Chat Models

Authors: Yunzhuo Hao, Wenkai Yang, Yankai Lin | Published: 2024-04-03
Backdoor Attack
Prompt Injection

Obfuscated Malware Detection: Investigating Real-world Scenarios through Memory Analysis

Authors: S M Rakib Hasan, Aakar Dhakal | Published: 2024-04-03
Cybersecurity
Prompt Injection
Malware Classification

Jailbreaking Leading Safety-Aligned LLMs with Simple Adaptive Attacks

Authors: Maksym Andriushchenko, Francesco Croce, Nicolas Flammarion | Published: 2024-04-02 | Updated: 2024-10-07
LLM Security
Prompt Injection
Attack Method