Literature Database

Security through the Eyes of AI: How Visualization is Shaping Malware Detection

Authors: Asmitha K. A., Matteo Brosolo, Serena Nicolazzo, Antonino Nocera, Vinod P., Rafidha Rehiman K. A., Muhammed Shafi K. P. | Published: 2025-05-12
Prompt Injection
Malware Classification
Adversarial Example Detection

Private LoRA Fine-tuning of Open-Source LLMs with Homomorphic Encryption

Authors: Jordan Frery, Roman Bredehoft, Jakub Klemsa, Arthur Meyre, Andrei Stoian | Published: 2025-05-12
LLM Security
Cryptography
Machine Learning Technology

Comet: Accelerating Private Inference for Large Language Model by Predicting Activation Sparsity

Authors: Guang Yan, Yuhui Zhang, Zimu Guo, Lutan Zhao, Xiaojun Chen, Chen Wang, Wenhao Wang, Dan Meng, Rui Hou | Published: 2025-05-12
Sparsity Optimization
Sparse Representation
Privacy Design Principles

Securing Genomic Data Against Inference Attacks in Federated Learning Environments

Authors: Chetan Pathade, Shubham Patil | Published: 2025-05-12
Privacy Design Principles
Attribute Disclosure Risk
Differential Privacy

One Trigger Token Is Enough: A Defense Strategy for Balancing Safety and Usability in Large Language Models

Authors: Haoran Gu, Handing Wang, Yi Mei, Mengjie Zhang, Yaochu Jin | Published: 2025-05-12
LLM Security
Disabling Safety Mechanisms of LLM
Prompt Injection

I Know What You Said: Unveiling Hardware Cache Side-Channels in Local Large Language Model Inference

Authors: Zibo Gao, Junjie Hu, Feng Guo, Yixin Zhang, Yinglong Han, Siyuan Liu, Haiyang Li, Zhiqiang Lv | Published: 2025-05-10 | Updated: 2025-05-14
Disabling Safety Mechanisms of LLM
Prompt Leaking
Attack Detection Method

Cape: Context-Aware Prompt Perturbation Mechanism with Differential Privacy

Authors: Haoqi Wu, Wei Dai, Li Wang, Qiang Yan | Published: 2025-05-09 | Updated: 2025-05-15
Token Identification Method
Privacy Design Principles
Evaluation Method

AGENTFUZZER: Generic Black-Box Fuzzing for Indirect Prompt Injection against LLM Agents

Authors: Zhun Wang, Vincent Siu, Zhe Ye, Tianneng Shi, Yuzhou Nie, Xuandong Zhao, Chenguang Wang, Wenbo Guo, Dawn Song | Published: 2025-05-09 | Updated: 2025-05-21
Indirect Prompt Injection
Fuzzing
Attack Type

LLM-Text Watermarking based on Lagrange Interpolation

Authors: Jarosław Janas, Paweł Morawiecki, Josef Pieprzyk | Published: 2025-05-09 | Updated: 2025-05-13
LLM Security
Prompt Leaking
Digital Watermarking for Generative AI

Revealing Weaknesses in Text Watermarking Through Self-Information Rewrite Attacks

Authors: Yixin Cheng, Hongcheng Guo, Yangming Li, Leonid Sigal | Published: 2025-05-08
Prompt Leaking
Attack Method
Watermarking Technology