Attack Method

Breaking XOR Arbiter PUFs without Reliability Information

Authors: Niloufar Sayadi, Phuong Ha Nguyen, Marten van Dijk, Chenglu Jin | Published: 2023-12-03
Evaluation Methods for PUF
Watermarking
Attack Method

FedTruth: Byzantine-Robust and Backdoor-Resilient Federated Learning Framework

Authors: Sheldon C. Ebron Jr., Kan Yang | Published: 2023-11-17
Model Architecture
Attack Method
Evaluation Method

You Cannot Escape Me: Detecting Evasions of SIEM Rules in Enterprise Networks

Authors: Rafael Uetz, Marco Herzog, Louis Hackländer, Simon Schwarz, Martin Henze | Published: 2023-11-16 | Updated: 2023-12-19
Rule Attribution
Attack Method
Adaptive Misuse Detection

Trojan Activation Attack: Red-Teaming Large Language Models using Activation Steering for Safety-Alignment

Authors: Haoran Wang, Kai Shu | Published: 2023-11-15 | Updated: 2024-08-15
Prompt Injection
Attack Method
Natural Language Processing

Jailbreaking GPT-4V via Self-Adversarial Attacks with System Prompts

Authors: Yuanwei Wu, Xiang Li, Yixin Liu, Pan Zhou, Lichao Sun | Published: 2023-11-15 | Updated: 2024-01-20
Prompt Injection
Attack Method
Face Recognition

Identifying and Mitigating Vulnerabilities in LLM-Integrated Applications

Authors: Fengqing Jiang, Zhangchen Xu, Luyao Niu, Boxin Wang, Jinyuan Jia, Bo Li, Radha Poovendran | Published: 2023-11-07 | Updated: 2023-11-29
Prompt Injection
Experimental Validation
Attack Method

Ignore This Title and HackAPrompt: Exposing Systemic Vulnerabilities of LLMs through a Global Scale Prompt Hacking Competition

Authors: Sander Schulhoff, Jeremy Pinto, Anaum Khan, Louis-François Bouchard, Chenglei Si, Svetlina Anati, Valen Tagliabue, Anson Liu Kost, Christopher Carnahan, Jordan Boyd-Graber | Published: 2023-10-24 | Updated: 2024-03-03
Text Generation Method
Prompt Injection
Attack Method

Deceptive Fairness Attacks on Graphs via Meta Learning

Authors: Jian Kang, Yinglong Xia, Ross Maciejewski, Jiebo Luo, Hanghang Tong | Published: 2023-10-24
Graph Neural Network
Attack Method
Evaluation Metrics

AutoDAN: Interpretable Gradient-Based Adversarial Attacks on Large Language Models

Authors: Sicheng Zhu, Ruiyi Zhang, Bang An, Gang Wu, Joe Barrow, Zichao Wang, Furong Huang, Ani Nenkova, Tong Sun | Published: 2023-10-23 | Updated: 2023-12-14
Prompt Injection
Safety Alignment
Attack Method

A Comprehensive Study of Privacy Risks in Curriculum Learning

Authors: Joann Qiongna Chen, Xinlei He, Zheng Li, Yang Zhang, Zhou Li | Published: 2023-10-16
Membership Inference
Model Performance Evaluation
Attack Method