Literature Database

RESIST: Resilient Decentralized Learning Using Consensus Gradient Descent

Authors: Cheng Fang, Rishabh Dixit, Waheed U. Bajwa, Mert Gurbuzbalaban | Published: 2025-02-11
MITM Attack
Convergence Analysis

Trustworthy AI: Safety, Bias, and Privacy — A Survey

Authors: Xingli Fang, Jianwei Li, Varun Mulchandani, Jung-Eun Kim | Published: 2025-02-11 | Updated: 2025-06-11
Bias
Prompt Leaking
Differential Privacy

Scalable and Ethical Insider Threat Detection through Data Synthesis and Analysis by LLMs

Authors: Haywood Gelman, John D. Hastings | Published: 2025-02-10 | Updated: 2025-04-07
LLM Application
Risk Analysis Method
Information Security

Membership Inference Risks in Quantized Models: A Theoretical and Empirical Study

Authors: Eric Aubinais, Philippe Formont, Pablo Piantanida, Elisabeth Gassiat | Published: 2025-02-10
Membership Inference
Quantization and Privacy

Generating Privacy-Preserving Personalized Advice with Zero-Knowledge Proofs and LLMs

Authors: Hiroki Watanabe, Motonobu Uchikoshi | Published: 2025-02-10 | Updated: 2025-04-24
Alignment
Privacy-Preserving Data Mining
Watermark

From Counterfactuals to Trees: Competitive Analysis of Model Extraction Attacks

Authors: Awa Khouna, Julien Ferry, Thibaut Vidal | Published: 2025-02-07 | Updated: 2025-07-08
Model Extraction Attack
Detection of Model Extraction Attacks
Reconstruction Algorithm

Training Set Reconstruction from Differentially Private Forests: How Effective is DP?

Authors: Alice Gorgé, Julien Ferry, Sébastien Gambs, Thibaut Vidal | Published: 2025-02-07 | Updated: 2025-07-08
Privacy Risk Management
Reconstruction Algorithm
Differential Privacy

Can LLMs Hack Enterprise Networks? Autonomous Assumed Breach Penetration-Testing Active Directory Networks

Authors: Andreas Happe, Jürgen Cito | Published: 2025-02-06 | Updated: 2025-09-11
Indirect Prompt Injection
Prompt Injection
Attack Strategy Analysis

“Short-length” Adversarial Training Helps LLMs Defend “Long-length” Jailbreak Attacks: Theoretical and Empirical Evidence

Authors: Shaopeng Fu, Liang Ding, Di Wang | Published: 2025-02-06
Prompt Injection
Large Language Model
Adversarial Training

ExpProof: Operationalizing Explanations for Confidential Models with ZKPs

Authors: Chhavi Yadav, Evan Monroe Laufer, Dan Boneh, Kamalika Chaudhuri | Published: 2025-02-06 | Updated: 2025-05-27
XAI (Explainable AI)
Model Evaluation Methods
Interpretability