Securing the AI Supply Chain: What Can We Learn From Developer-Reported Security Issues and Solutions of AI Projects? Authors: The Anh Nguyen, Triet Huynh Minh Le, M. Ali Babar | Published: 2025-12-29 | Tags: Security Analysis Method, Data-Driven Vulnerability Assessment, Prompt leaking
EquaCode: A Multi-Strategy Jailbreak Approach for Large Language Models via Equation Solving and Code Completion Authors: Zhen Liang, Hai Huang, Zhengkui Chen | Published: 2025-12-29 | Tags: Disabling Safety Mechanisms of LLM, LLM Utilization, Prompt Injection
Certifying the Right to Be Forgotten: Primal-Dual Optimization for Sample and Label Unlearning in Vertical Federated Learning Authors: Yu Jiang, Xindi Tong, Ziyao Liu, Xiaoxi Zhang, Kwok-Yan Lam, Chee Wei Tan | Published: 2025-12-29 | Tags: Data Selection Strategy, Machine Learning, Convergence Analysis
Multi-Agent Framework for Threat Mitigation and Resilience in AI-Based Systems Authors: Armstrong Foundjem, Lionel Nganyewou Tidjon, Leuson Da Silva, Foutse Khomh | Published: 2025-12-29 | Tags: RAG, Model DoS, Vulnerability Prioritization
Assessing the Software Security Comprehension of Large Language Models Authors: Mohammed Latif Siddiq, Natalie Sekerak, Antonio Karam, Maria Leal, Arvin Islam-Gomes, Joanna C. S. Santos | Published: 2025-12-24 | Tags: Indirect Prompt Injection, Security Analysis Method, Vulnerability Prioritization
Casting a SPELL: Sentence Pairing Exploration for LLM Limitation-breaking Authors: Yifan Huang, Xiaojun Jia, Wenbo Guo, Yuqiang Sun, Yihao Huang, Chong Wang, Yang Liu | Published: 2025-12-24 | Tags: Data Selection Strategy, Prompt Injection, Adversarial Attack Detection
Beyond Context: Large Language Models Failure to Grasp Users Intent Authors: Ahmed M. Hussain, Salahuddin Salahuddin, Panos Papadimitratos | Published: 2025-12-24 | Tags: Indirect Prompt Injection, Multimodal Safety, Vulnerability Prioritization
GateBreaker: Gate-Guided Attacks on Mixture-of-Expert LLMs Authors: Lichao Wu, Sasha Behrouzi, Mohamadreza Rostami, Stjepan Picek, Ahmad-Reza Sadeghi | Published: 2025-12-24 | Tags: Sparse Model, Prompt Leaking, Safety-Related Multimodal Approach
AegisAgent: An Autonomous Defense Agent Against Prompt Injection Attacks in LLM-HARs Authors: Yihan Wang, Huanqi Yang, Shantanu Pal, Weitao Xu | Published: 2025-12-24 | Tags: Indirect Prompt Injection, Prompt Injection, Adversarial Attack Assessment
Evasion-Resilient Detection of DNS-over-HTTPS Data Exfiltration: A Practical Evaluation and Toolkit Authors: Adam Elaoumari | Published: 2025-12-23 | Tags: Data Extraction and Analysis, Data Flow Analysis, Traffic Classification