Literature Database

Generating Privacy-Preserving Personalized Advice with Zero-Knowledge Proofs and LLMs

Authors: Hiroki Watanabe, Motonobu Uchikoshi | Published: 2025-02-10 | Updated: 2025-04-24
Alignment
Privacy-Preserving Data Mining
Watermark

From Counterfactuals to Trees: Competitive Analysis of Model Extraction Attacks

Authors: Awa Khouna, Julien Ferry, Thibaut Vidal | Published: 2025-02-07 | Updated: 2025-07-08
Model Extraction Attack
Detection of Model Extraction Attacks
Reconstruction Algorithm

Training Set Reconstruction from Differentially Private Forests: How Effective is DP?

Authors: Alice Gorgé, Julien Ferry, Sébastien Gambs, Thibaut Vidal | Published: 2025-02-07 | Updated: 2025-07-08
Privacy Risk Management
Reconstruction Algorithm
Differential Privacy

“Short-length” Adversarial Training Helps LLMs Defend “Long-length” Jailbreak Attacks: Theoretical and Empirical Evidence

Authors: Shaopeng Fu, Liang Ding, Di Wang | Published: 2025-02-06
Prompt Injection
Large Language Model
Adversarial Training

ExpProof: Operationalizing Explanations for Confidential Models with ZKPs

Authors: Chhavi Yadav, Evan Monroe Laufer, Dan Boneh, Kamalika Chaudhuri | Published: 2025-02-06 | Updated: 2025-05-27
XAI (Explainable AI)
Model Evaluation Methods
Interpretability

Privacy Amplification by Structured Subsampling for Deep Differentially Private Time Series Forecasting

Authors: Jan Schuchardt, Mina Dalirrooyfard, Jed Guzelkabaagac, Anderson Schneider, Yuriy Nevmyvaka, Stephan Günnemann | Published: 2025-02-04 | Updated: 2025-05-29
Privacy Analysis
Differential Privacy
Information-Theoretic Evaluation

Online Gradient Boosting Decision Tree: In-Place Updates for Efficient Adding/Deleting Data

Authors: Huawei Lin, Jun Woo Chung, Yingjie Lao, Weijie Zhao | Published: 2025-02-03
Online Learning

Adversarial Robustness in Two-Stage Learning-to-Defer: Algorithms and Guarantees

Authors: Yannis Montreuil, Axel Carlier, Lai Xing Ng, Wei Tsang Ooi | Published: 2025-02-03
Learning-to-Defer
Adversarial Example
Adversarial Training

AgentBreeder: Mitigating the AI Safety Impact of Multi-Agent Scaffolds via Self-Improvement

Authors: J Rosser, Jakob Nicolaus Foerster | Published: 2025-02-02 | Updated: 2025-04-14
LLM Performance Evaluation
Multi-Objective Optimization
Safety Alignment

Safety at Scale: A Comprehensive Survey of Large Model Safety

Authors: Xingjun Ma, Yifeng Gao, Yixu Wang, Ruofan Wang, Xin Wang, Ye Sun, Yifan Ding, Hengyuan Xu, Yunhao Chen, Yunhan Zhao, Hanxun Huang, Yige Li, Jiaming Zhang, Xiang Zheng, Yang Bai, Zuxuan Wu, Xipeng Qiu, Jingfeng Zhang, Yiming Li, Xudong Han, Haonan Li, Jun Sun, Cong Wang, Jindong Gu, Baoyuan Wu, Siheng Chen, Tianwei Zhang, Yang Liu, Mingming Gong, Tongliang Liu, Shirui Pan, Cihang Xie, Tianyu Pang, Yinpeng Dong, Ruoxi Jia, Yang Zhang, Shiqing Ma, Xiangyu Zhang, Neil Gong, Chaowei Xiao, Sarah Erfani, Tim Baldwin, Bo Li, Masashi Sugiyama, Dacheng Tao, James Bailey, Yu-Gang Jiang | Published: 2025-02-02 | Updated: 2025-03-19
Indirect Prompt Injection
Prompt Injection
Attack Method