Literature Database

Jailbreaking Commercial Black-Box LLMs with Explicitly Harmful Prompts
Authors: Chiyu Zhang, Lu Zhou, Xiaogang Xu, Jiafei Wu, Liming Fang, Zhe Liu | Published: 2025-08-14
Tags: Social Engineering Attack, Prompt Injection, Large Language Model
Demystifying the Role of Rule-based Detection in AI Systems for Windows Malware Detection
Authors: Andrea Ponte, Luca Demetrio, Luca Oneto, Ivan Tesfai Ogbu, Battista Biggio, Fabio Roli | Published: 2025-08-13
Tags: Prompt Injection, Malware Detection Method, Imbalanced Dataset
Attacks and Defenses Against LLM Fingerprinting
Authors: Kevin Kurian, Ethan Holland, Sean Oesch | Published: 2025-08-12
Tags: Prompt Injection, Reinforcement Learning, Watermark Design
Oblivionis: A Lightweight Learning and Unlearning Framework for Federated Large Language Models
Authors: Fuyao Zhang, Xinyu Yan, Tiantong Wu, Wenjie Li, Tianxiang Chen, Yang Cao, Ran Yan, Longtao Huang, Wei Yang Bryan Lim, Qiang Yang | Published: 2025-08-12
Tags: Data Management System, Framework, Prompt Injection
Robust Anomaly Detection in O-RAN: Leveraging LLMs against Data Manipulation Attacks
Authors: Thusitha Dayaratne, Ngoc Duy Pham, Viet Vo, Shangqi Lai, Sharif Abuadbba, Hajime Suzuki, Xingliang Yuan, Carsten Rudolph | Published: 2025-08-11
Tags: Framework, Prompt Injection, Performance Evaluation Method
JPS: Jailbreak Multimodal Large Language Models with Collaborative Visual Perturbation and Textual Steering
Authors: Renmiao Chen, Shiyao Cui, Xuancheng Huang, Chengwei Pan, Victor Shea-Jay Huang, QingLin Zhang, Xuan Ouyang, Zhexin Zhang, Hongning Wang, Minlie Huang | Published: 2025-08-07
Tags: Prompt Injection, Inappropriate Content Generation, Attack Strategy Analysis
When Good Sounds Go Adversarial: Jailbreaking Audio-Language Models with Benign Inputs
Authors: Bodam Kim, Hiskias Dingeto, Taeyoun Kwon, Dasol Choi, DongGeon Lee, Haon Park, JaeHoon Lee, Jongho Shin | Published: 2025-08-05
Tags: Prompt Injection, Attack Evaluation, Vulnerability of Audio Models
VFLAIR-LLM: A Comprehensive Framework and Benchmark for Split Learning of LLMs
Authors: Zixuan Gu, Qiufeng Fan, Long Sun, Yang Liu, Xiaojun Ye | Published: 2025-08-05
Tags: Prompt Injection, Prompt Leaking, Watermark
PhishParrot: LLM-Driven Adaptive Crawling to Unveil Cloaked Phishing Sites
Authors: Hiroki Nakano, Takashi Koide, Daiki Chiba | Published: 2025-08-04
Tags: Indirect Prompt Injection, Prompt Injection, Malicious Website Detection
Breaking Obfuscation: Cluster-Aware Graph with LLM-Aided Recovery for Malicious JavaScript Detection
Authors: Zhihong Liang, Xin Wang, Zhenhuang Hu, Liangliang Song, Lin Chen, Jingjing Guo, Yanbin Wang, Ye Tian | Published: 2025-07-30
Tags: Program Verification, Prompt Injection, Robustness of Watermarking Techniques