Literature Database

Fundamental Limits of Membership Inference Attacks on Machine Learning Models

Authors: Eric Aubinais, Elisabeth Gassiat, Pablo Piantanida | Published: 2023-10-20 | Updated: 2025-05-12
Membership Inference
Adversarial Attack
Machine Learning Method

An LLM can Fool Itself: A Prompt-Based Adversarial Attack

Authors: Xilie Xu, Keyi Kong, Ning Liu, Lizhen Cui, Di Wang, Jingfeng Zhang, Mohan Kankanhalli | Published: 2023-10-20
Prompt Injection
Malicious Prompt
Adversarial Attack

Critical Path Prioritization Dashboard for Alert-driven Attack Graphs

Authors: Sònia Leal Díaz, Sergio Pastrana, Azqa Nadeem | Published: 2023-10-19
Security Analysis
User Experience Evaluation
Attack Graph Generation

Network-Aware AutoML Framework for Software-Defined Sensor Networks

Authors: Emre Horsanali, Yagmur Yigit, Gokhan Secinti, Aytac Karameseoglu, Berk Canberk | Published: 2023-10-19 | Updated: 2023-10-25
DDoS Attack
DDoS Attack Detection
SDN Architecture

Blind quantum machine learning with quantum bipartite correlator

Authors: Changhao Li, Boning Li, Omar Amer, Ruslan Shaydulin, Shouvanik Chakrabarti, Guoqing Wang, Haowei Xu, Hao Tang, Isidor Schoch, Niraj Kumar, Charles Lim, Ju Li, Paola Cappellaro, Marco Pistoia | Published: 2023-10-19
Privacy Protection Method
Malicious Client
Quantum Cryptography Technology

SecurityNet: Assessing Machine Learning Vulnerabilities on Public Models

Authors: Boyang Zhang, Zheng Li, Ziqing Yang, Xinlei He, Michael Backes, Mario Fritz, Yang Zhang | Published: 2023-10-19
Membership Inference
Model Extraction Attack
Attack Evaluation

On existence, uniqueness and scalability of adversarial robustness measures for AI classifiers

Authors: Illia Horenko | Published: 2023-10-19 | Updated: 2023-11-15
Adversarial Attack
Optimization Methods
Machine Learning Method

Privacy Preserving Large Language Models: ChatGPT Case Study Based Vision and Framework

Authors: Imdad Ullah, Najm Hassan, Sukhpal Singh Gill, Basem Suleiman, Tariq Ahamed Ahanger, Zawar Shah, Junaid Qadir, Salil S. Kanhere | Published: 2023-10-19
Privacy Protection Method
Privacy Technique
Prompt Injection

Attack Prompt Generation for Red Teaming and Defending Large Language Models

Authors: Boyi Deng, Wenjie Wang, Fuli Feng, Yang Deng, Qifan Wang, Xiangnan He | Published: 2023-10-19
Prompt Injection
Attack Evaluation
Adversarial Example

REMARK-LLM: A Robust and Efficient Watermarking Framework for Generative Large Language Models

Authors: Ruisi Zhang, Shehzeen Samarah Hussain, Paarth Neekhara, Farinaz Koushanfar | Published: 2023-10-18 | Updated: 2024-04-08
Data Generation
Model Design
Malicious Content Generation