Literature Database

MCP Safety Training: Learning to Refuse Falsely Benign MCP Exploits using Improved Preference Alignment

Authors: John Halloran | Published: 2025-05-29
Poisoning Attack on RAG
Alignment
Cooking Ingredients

Merge Hijacking: Backdoor Attacks to Model Merging of Large Language Models

Authors: Zenghui Yuan, Yangming Xu, Jiawen Shi, Pan Zhou, Lichao Sun | Published: 2025-05-29
LLM Security
Poisoning Attack
Model Protection Methods

Disrupting Vision-Language Model-Driven Navigation Services via Adversarial Object Fusion

Authors: Chunlong Xie, Jialing He, Shangwei Guo, Jiacheng Wang, Shudong Zhang, Tianwei Zhang, Tao Xiang | Published: 2025-05-29
Alignment
Adversarial Object Generation
Optimization Methods

SimProcess: High Fidelity Simulation of Noisy ICS Physical Processes

Authors: Denis Donadel, Gabriele Crestanello, Giulio Morandini, Daniele Antonioli, Mauro Conti, Massimo Merro | Published: 2025-05-28
Data Origins and Evolution
Model Design
Dynamic Analysis Method

Transformers for Secure Hardware Systems: Applications, Challenges, and Outlook

Authors: Banafsheh Saber Latibari, Najmeh Nazari, Avesta Sasan, Houman Homayoun, Pratik Satam, Soheil Salehi, Hossein Sayadi | Published: 2025-05-28
Security Analysis
Hardware Trojan Detection
Backdoor Detection

Does Johnny Get the Message? Evaluating Cybersecurity Notifications for Everyday Users

Authors: Victor Jüttner, Erik Buchmann | Published: 2025-05-28
Personalization
Prompt Injection
Explanation of Countermeasures

Test-Time Immunization: A Universal Defense Framework Against Jailbreaks for (Multimodal) Large Language Models

Authors: Yongcan Yu, Yanbo Wang, Ran He, Jian Liang | Published: 2025-05-28
LLM Security
Prompt Injection
Large Language Model

Jailbreak Distillation: Renewable Safety Benchmarking

Authors: Jingyu Zhang, Ahmed Elgohary, Xiawei Wang, A S M Iftekhar, Ahmed Magooda, Benjamin Van Durme, Daniel Khashabi, Kyle Jackson | Published: 2025-05-28
Prompt Injection
Model Evaluation
Attack Evaluation

VulBinLLM: LLM-powered Vulnerability Detection for Stripped Binaries

Authors: Nasir Hussain, Haohan Chen, Chanh Tran, Philip Huang, Zhuohao Li, Pravir Chugh, William Chen, Ashish Kundu, Yuan Tian | Published: 2025-05-28
LLM Security
Vulnerability Analysis
Disassembly

Breaking the Ceiling: Exploring the Potential of Jailbreak Attacks through Expanding Strategy Space

Authors: Yao Huang, Yitong Sun, Shouwei Ruan, Yichi Zhang, Yinpeng Dong, Xingxing Wei | Published: 2025-05-27
Disabling Safety Mechanisms of LLM
Prompt Injection
Attack Evaluation