Membership Inference Attacks against Language Models via Neighbourhood Comparison

Authors: Justus Mattern, Fatemehsadat Mireshghallah, Zhijing Jin, Bernhard Schölkopf, Mrinmaya Sachan, Taylor Berg-Kirkpatrick | Published: 2023-05-29 | Updated: 2023-08-07
LLM Performance Evaluation
Privacy Protection Method
Defense Method

LLMs Can Understand Encrypted Prompt: Towards Privacy-Computing Friendly Transformers

Authors: Xuanqi Liu, Zhuotao Liu | Published: 2023-05-28 | Updated: 2023-12-15
DNN IP Protection Method
LLM Performance Evaluation
Privacy Protection Method

The Curse of Recursion: Training on Generated Data Makes Models Forget

Authors: Ilia Shumailov, Zakhar Shumaylov, Yiren Zhao, Yarin Gal, Nicolas Papernot, Ross Anderson | Published: 2023-05-27 | Updated: 2024-04-14
LLM Performance Evaluation
Sampling Method
Model Interpretability

On Evaluating Adversarial Robustness of Large Vision-Language Models

Authors: Yunqing Zhao, Tianyu Pang, Chao Du, Xiao Yang, Chongxuan Li, Ngai-Man Cheung, Min Lin | Published: 2023-05-26 | Updated: 2023-10-29
LLM Performance Evaluation
Prompt Injection
Adversarial Attack

Quantifying Association Capabilities of Large Language Models and Its Implications on Privacy Leakage

Authors: Hanyin Shao, Jie Huang, Shen Zheng, Kevin Chen-Chuan Chang | Published: 2023-05-22 | Updated: 2024-02-09
LLM Performance Evaluation
Privacy Violation
Privacy Protection Method