Automated and Explainable Denial of Service Analysis for AI-Driven Intrusion Detection Systems
Authors: Paul Badu Yakubu, Lesther Santana, Mohamed Rahouti, Yufeng Xin, Abdellah Chehri, Mohammed Aledhari | Published: 2025-11-06
Tags: Traffic Characteristic Analysis, Model DoS, Feature Importance Analysis

Specification-Guided Vulnerability Detection with Large Language Models
Authors: Hao Zhu, Jia Li, Cuiyun Gao, Jiaru Qian, Yihong Dong, Huanyu Liu, Lecheng Wang, Ziliang Wang, Xiaolong Hu, Ge Li | Published: 2025-11-06
Tags: Prompt Injection, Large Language Model, Vulnerability Detection Method

Hybrid Fuzzing with LLM-Guided Input Mutation and Semantic Feedback
Authors: Shiyin Lin | Published: 2025-11-06
Tags: Prompt Injection, Dynamic Analysis, Information Security

Whisper Leak: a side-channel attack on Large Language Models
Authors: Geoff McDonald, Jonathan Bar Or | Published: 2025-11-05
Tags: Traffic Characteristic Analysis, Prompt Leaking, Large Language Model

Watermarking Large Language Models in Europe: Interpreting the AI Act in Light of Technology
Authors: Thomas Souverain | Published: 2025-11-05
Tags: Digital Watermarking for Generative AI, Generative Model Characteristics, Transparency and Verification

Let the Bees Find the Weak Spots: A Path Planning Perspective on Multi-Turn Jailbreak Attacks against LLMs
Authors: Yize Liu, Yunyun Hou, Aina Sui | Published: 2025-11-05
Tags: Automation of Cybersecurity, Prompt Injection, Multi-Turn Attack Analysis

Auditing M-LLMs for Privacy Risks: A Synthetic Benchmark and Evaluation Framework
Authors: Junhao Li, Jiahao Chen, Zhou Feng, Chunyi Zhou | Published: 2025-11-05
Tags: Hallucination, Privacy Violation, Privacy Protection

Death by a Thousand Prompts: Open Model Vulnerability Analysis
Authors: Amy Chang, Nicholas Conley, Harish Santhanalakshmi Ganesan, Adam Swanda | Published: 2025-11-05
Tags: Disabling Safety Mechanisms of LLM, Indirect Prompt Injection, Threat Modeling

Measuring the Security of Mobile LLM Agents under Adversarial Prompts from Untrusted Third-Party Channels
Authors: Chenghao Du, Quanfeng Huang, Tingxuan Tang, Zihao Wang, Adwait Nadkarni, Yue Xiao | Published: 2025-10-31 | Updated: 2025-11-06
Tags: Indirect Prompt Injection, Prompt Injection, Information Security

PVMark: Enabling Public Verifiability for LLM Watermarking Schemes
Authors: Haohua Duan, Liyao Xiang, Xin Zhang | Published: 2025-10-30
Tags: Model Extraction Attack, Public Verifiability, Watermarking Technology