MCP Safety Training: Learning to Refuse Falsely Benign MCP Exploits using Improved Preference Alignment | Authors: John Halloran | Published: 2025-05-29 | Tags: Poisoning Attack on RAG, Alignment | Literature Database
Merge Hijacking: Backdoor Attacks to Model Merging of Large Language Models | Authors: Zenghui Yuan, Yangming Xu, Jiawen Shi, Pan Zhou, Lichao Sun | Published: 2025-05-29 | Tags: LLM Security, Poisoning Attack, Model Protection Methods
Disrupting Vision-Language Model-Driven Navigation Services via Adversarial Object Fusion | Authors: Chunlong Xie, Jialing He, Shangwei Guo, Jiacheng Wang, Shudong Zhang, Tianwei Zhang, Tao Xiang | Published: 2025-05-29 | Tags: Alignment, Adversarial Object Generation, Optimization Methods
SimProcess: High Fidelity Simulation of Noisy ICS Physical Processes | Authors: Denis Donadel, Gabriele Crestanello, Giulio Morandini, Daniele Antonioli, Mauro Conti, Massimo Merro | Published: 2025-05-28 | Tags: Data Origins and Evolution, Model Design, Dynamic Analysis Method
Transformers for Secure Hardware Systems: Applications, Challenges, and Outlook | Authors: Banafsheh Saber Latibari, Najmeh Nazari, Avesta Sasan, Houman Homayoun, Pratik Satam, Soheil Salehi, Hossein Sayadi | Published: 2025-05-28 | Tags: Security Analysis, Hardware Trojan Detection, Backdoor Detection
Does Johnny Get the Message? Evaluating Cybersecurity Notifications for Everyday Users | Authors: Victor Jüttner, Erik Buchmann | Published: 2025-05-28 | Tags: Personalization, Prompt Injection, Explanation of Countermeasures
Test-Time Immunization: A Universal Defense Framework Against Jailbreaks for (Multimodal) Large Language Models | Authors: Yongcan Yu, Yanbo Wang, Ran He, Jian Liang | Published: 2025-05-28 | Tags: LLM Security, Prompt Injection, Large Language Model
Jailbreak Distillation: Renewable Safety Benchmarking | Authors: Jingyu Zhang, Ahmed Elgohary, Xiawei Wang, A S M Iftekhar, Ahmed Magooda, Benjamin Van Durme, Daniel Khashabi, Kyle Jackson | Published: 2025-05-28 | Tags: Prompt Injection, Model Evaluation, Attack Evaluation
VulBinLLM: LLM-powered Vulnerability Detection for Stripped Binaries | Authors: Nasir Hussain, Haohan Chen, Chanh Tran, Philip Huang, Zhuohao Li, Pravir Chugh, William Chen, Ashish Kundu, Yuan Tian | Published: 2025-05-28 | Tags: LLM Security, Vulnerability Analysis, Disassembly
Breaking the Ceiling: Exploring the Potential of Jailbreak Attacks through Expanding Strategy Space | Authors: Yao Huang, Yitong Sun, Shouwei Ruan, Yichi Zhang, Yinpeng Dong, Xingxing Wei | Published: 2025-05-27 | Tags: Disabling Safety Mechanisms of LLM, Prompt Injection, Attack Evaluation