Attack Method

F2A: An Innovative Approach for Prompt Injection by Utilizing Feign Security Detection Agents

Authors: Yupeng Ren | Published: 2024-10-11 | Updated: 2024-10-14
Prompt Injection
Attack Evaluation
Attack Method

Time Traveling to Defend Against Adversarial Example Attacks in Image Classification

Authors: Anthony Etim, Jakub Szefer | Published: 2024-10-10
Attack Method
Adversarial Example
Defense Method

Study of Attacks on the HHL Quantum Algorithm

Authors: Yizhuo Tan, Hrvoje Kukina, Jakub Szefer | Published: 2024-10-10
Cybersecurity
Attack Evaluation
Attack Method

Prompt Infection: LLM-to-LLM Prompt Injection within Multi-Agent Systems

Authors: Donghyun Lee, Mo Tiwari | Published: 2024-10-09
Prompt Injection
Attack Method
Defense Method

Hallucinating AI Hijacking Attack: Large Language Models and Malicious Code Recommenders

Authors: David Noever, Forrest McKee | Published: 2024-10-09
Cybersecurity
Prompt Injection
Attack Method

Harnessing Task Overload for Scalable Jailbreak Attacks on Large Language Models

Authors: Yiting Dong, Guobin Shen, Dongcheng Zhao, Xiang He, Yi Zeng | Published: 2024-10-05
LLM Security
Prompt Injection
Attack Method

Impact of White-Box Adversarial Attacks on Convolutional Neural Networks

Authors: Rakesh Podder, Sudipto Ghosh | Published: 2024-10-02
Model Performance Evaluation
Attack Method
Adversarial Example

Hard-Label Cryptanalytic Extraction of Neural Network Models

Authors: Yi Chen, Xiaoyang Dong, Jian Guo, Yantian Shen, Anyu Wang, Xiaoyun Wang | Published: 2024-09-18
Model Extraction Attack
Attack Method
Computational Complexity

Context-Aware Membership Inference Attacks against Pre-trained Large Language Models

Authors: Hongyan Chang, Ali Shahin Shamsabadi, Kleomenis Katevas, Hamed Haddadi, Reza Shokri | Published: 2024-09-11
LLM Security
Membership Inference
Attack Method

AdaPPA: Adaptive Position Pre-Fill Jailbreak Attack Approach Targeting LLMs

Authors: Lijia Lv, Weigang Zhang, Xuehai Tang, Jie Wen, Feng Liu, Jizhong Han, Songlin Hu | Published: 2024-09-11
LLM Security
Prompt Injection
Attack Method