Literature Database

The Literature Database categorizes and aggregates literature related to AI security. For more details, please see About Literature Database. Statistics on the collected literature are available on the Statistics page.

Unknown Attack Detection in IoT Networks using Large Language Models: A Robust, Data-efficient Approach

Authors: Shan Ali, Feifei Niu, Paria Shirani, Lionel C. Briand | Published: 2026-02-12
IoT Security Framework
Data Collection Method
Adversarial Learning

BlackCATT: Black-box Collusion Aware Traitor Tracing in Federated Learning

Authors: Elena Rodríguez-Lois, Fabio Brau, Maura Pintor, Battista Biggio, Fernando Pérez-González | Published: 2026-02-12
Analysis Methods for Data Leaks and Model Issues
Trigger Detection
Watermark Robustness

DeepSight: An All-in-One LM Safety Toolkit

Authors: Bo Zhang, Jiaxuan Guo, Lijun Li, Dongrui Liu, Sujin Chen, Guanxu Chen, Zhijie Zheng, Qihao Lin, Lewen Yan, Chen Qian, Yijin Zhou, Yuyao Wu, Shaoxiong Guo, Tianyi Du, Jingyi Yang, Xuhao Hu, Ziqi Miao, Xiaoya Lu, Jing Shao, Xia Hu | Published: 2026-02-12
Prompt Injection
Large Language Model
Evaluation Method

PAC to the Future: Zero-Knowledge Proofs of PAC Private Systems

Authors: Guilhem Repetto, Nojan Sheybani, Gabrielle De Micheli, Farinaz Koushanfar | Published: 2026-02-12
Algorithm
Privacy Assurance
Computational Consistency

More Haste, Less Speed: Weaker Single-Layer Watermark Improves Distortion-Free Watermark Ensembles

Authors: Ruibo Chen, Yihan Wu, Xuehao Cui, Jingqi Zhang, Heng Huang | Published: 2026-02-12
Author Attribution Method
Watermark Robustness
Watermark Attack

LoRA-based Parameter-Efficient LLMs for Continuous Learning in Edge-based Malware Detection

Authors: Christian Rondanini, Barbara Carminati, Elena Ferrari, Niccolò Lardo, Ashish Kundu | Published: 2026-02-12
Edge Computing
Experimental Validation
Federated Learning

Stop Tracking Me! Proactive Defense Against Attribute Inference Attack in LLMs

Authors: Dong Yan, Jian Liang, Ran He, Tieniu Tan | Published: 2026-02-12
Disabling Safety Mechanisms of LLM
Privacy Assurance
Explanation Method

Differentially Private and Communication Efficient Large Language Model Split Inference via Stochastic Quantization and Soft Prompt

Authors: Yujie Gu, Richeng Jin, Xiaoyu Ji, Yier Jin, Wenyuan Xu | Published: 2026-02-12
Privacy Assurance
Prompt Injection
Prompt leaking

Jailbreaking Leaves a Trace: Understanding and Detecting Jailbreak Attacks from Internal Representations of Large Language Models

Authors: Sri Durga Sai Sowmya Kadali, Evangelos E. Papalexakis | Published: 2026-02-12
Prompt Injection
Experimental Validation
Evaluation Method

Cachemir: Fully Homomorphic Encrypted Inference of Generative Large Language Model with KV Cache

Authors: Ye Yu, Yifan Zhou, Yi Chen, Pedro Soto, Wenjie Xiong, Meng Li | Published: 2026-02-12
Algorithm
Model DoS
Differential Privacy