When Good Sounds Go Adversarial: Jailbreaking Audio-Language Models with Benign Inputs | Authors: Bodam Kim, Hiskias Dingeto, Taeyoun Kwon, Dasol Choi, DongGeon Lee, Haon Park, JaeHoon Lee, Jongho Shin | Published: 2025-08-05 | Tags: Prompt Injection, Attack Evaluation, Vulnerability of Audio Models | Literature Database
VFLAIR-LLM: A Comprehensive Framework and Benchmark for Split Learning of LLMs | Authors: Zixuan Gu, Qiufeng Fan, Long Sun, Yang Liu, Xiaojun Ye | Published: 2025-08-05 | Tags: Prompt Injection, Prompt Leaking, Watermark
PhishParrot: LLM-Driven Adaptive Crawling to Unveil Cloaked Phishing Sites | Authors: Hiroki Nakano, Takashi Koide, Daiki Chiba | Published: 2025-08-04 | Tags: Indirect Prompt Injection, Prompt Injection, Malicious Website Detection
Breaking Obfuscation: Cluster-Aware Graph with LLM-Aided Recovery for Malicious JavaScript Detection | Authors: Zhihong Liang, Xin Wang, Zhenhuang Hu, Liangliang Song, Lin Chen, Jingjing Guo, Yanbin Wang, Ye Tian | Published: 2025-07-30 | Tags: Program Verification, Prompt Injection, Robustness of Watermarking Techniques
Can We End the Cat-and-Mouse Game? Simulating Self-Evolving Phishing Attacks with LLMs and Genetic Algorithms | Authors: Seiji Sato, Tetsushi Ohki, Masakatsu Nishigaki | Published: 2025-07-29 | Tags: Prompt Injection, Prompt Leaking, Psychological Theory
Repairing Vulnerabilities without Invisible Hands: A Differentiated Replication Study on LLMs | Authors: Maria Camporese, Fabio Massacci | Published: 2025-07-28 | Tags: Prompt Injection, Large Language Model, Vulnerability Management
Information Security Based on LLM Approaches: A Review | Authors: Chang Gong, Zhongwen Li, Xiaoqi Li | Published: 2025-07-24 | Tags: Network Traffic Analysis, Prompt Injection, Prompt Leaking
Tab-MIA: A Benchmark Dataset for Membership Inference Attacks on Tabular Data in LLMs | Authors: Eyal German, Sagiv Antebi, Daniel Samira, Asaf Shabtai, Yuval Elovici | Published: 2025-07-23 | Tags: Relationship of AI Systems, Property Inference Attack, Prompt Injection
Depth Gives a False Sense of Privacy: LLM Internal States Inversion | Authors: Tian Dong, Yan Meng, Shaofeng Li, Guoxing Chen, Zhen Liu, Haojin Zhu | Published: 2025-07-22 | Tags: Prompt Injection, Prompt Leaking, Attack Method
Attacking Interpretable NLP Systems | Authors: Eldor Abdukhamidov, Tamer Abuhmed, Joanna C. S. Santos, Mohammed Abuhamad | Published: 2025-07-22 | Tags: Prompt Injection, Prompt Validation, Adversarial Attack Methods