The Literature Database categorizes and aggregates literature related to AI security. For more details, please see About Literature Database.
Amplifying Machine Learning Attacks Through Strategic Compositions Authors: Yugeng Liu, Zheng Li, Hai Huang, Michael Backes, Yang Zhang | Published: 2025-06-23 | Tags: Membership Disclosure Risk, Certified Robustness, Adversarial Attack
Security Assessment of DeepSeek and GPT Series Models against Jailbreak Attacks Authors: Xiaodong Wu, Xiangman Li, Jianbing Ni | Published: 2025-06-23 | Tags: Prompt Injection, Model Architecture, Large Language Model
DUMB and DUMBer: Is Adversarial Training Worth It in the Real World? Authors: Francesco Marchiori, Marco Alecci, Luca Pajola, Mauro Conti | Published: 2025-06-23 | Tags: Model Architecture, Certified Robustness, Adversarial Attack Analysis
Smart-LLaMA-DPO: Reinforced Large Language Model for Explainable Smart Contract Vulnerability Detection Authors: Lei Yu, Zhirong Huang, Hang Yuan, Shiqi Cheng, Li Yang, Fengjun Zhang, Chenjie Shen, Jiajia Ma, Jingyuan Zhang, Junyi Lu, Chun Zuo | Published: 2025-06-23 | Tags: Smart Contract Vulnerability, Prompt Leaking, Large Language Model
Evaluating Large Language Models for Phishing Detection, Self-Consistency, Faithfulness, and Explainability Authors: Shova Kuikel, Aritran Piplai, Palvi Aggarwal | Published: 2025-06-16 | Tags: Alignment, Prompt Injection, Large Language Model
Weakest Link in the Chain: Security Vulnerabilities in Advanced Reasoning Models Authors: Arjun Krishna, Aaditya Rastogi, Erick Galinkin | Published: 2025-06-16 | Tags: Prompt Injection, Large Language Model, Adversarial Attack Methods
Watermarking LLM-Generated Datasets in Downstream Tasks Authors: Yugeng Liu, Tianshuo Cong, Michael Backes, Zheng Li, Yang Zhang | Published: 2025-06-16 | Tags: Prompt Leaking, Model Protection Methods, Digital Watermarking for Generative AI
From Promise to Peril: Rethinking Cybersecurity Red and Blue Teaming in the Age of LLMs Authors: Alsharif Abuadbba, Chris Hicks, Kristen Moore, Vasilios Mavroudis, Burak Hasircioglu, Diksha Goel, Piers Jennings | Published: 2025-06-16 | Tags: Indirect Prompt Injection, Cybersecurity, Education and Follow-up
Using LLMs for Security Advisory Investigations: How Far Are We? Authors: Bayu Fedra Abdullah, Yusuf Sulistyo Nugroho, Brittany Reid, Raula Gaikovina Kula, Kazumasa Shimari, Kenichi Matsumoto | Published: 2025-06-16 | Tags: Advice Provision, Hallucination, Prompt Leaking
Detecting Hard-Coded Credentials in Software Repositories via LLMs Authors: Chidera Biringa, Gokhan Kul | Published: 2025-06-16 | Tags: Software Security, Performance Evaluation, Prompt Leaking