PRISON: Unmasking the Criminal Potential of Large Language Models | Authors: Xinyi Wu, Geng Hong, Pei Chen, Yueyue Chen, Xudong Pan, Min Yang | Published: 2025-06-19 | Updated: 2025-08-04 | Tags: Disabling Safety Mechanisms of LLM, Law Enforcement Evasion, Research Methodology | Literature Database
Privacy-Preserving LLM Interaction with Socratic Chain-of-Thought Reasoning and Homomorphically Encrypted Vector Databases | Authors: Yubeen Bae, Minchan Kim, Jaejin Lee, Sangbum Kim, Jaehyung Kim, Yejin Choi, Niloofar Mireshghallah | Published: 2025-06-19 | Updated: 2025-07-01 | Tags: Privacy Protection, Prompt Injection, Large Language Model
ETrace: Event-Driven Vulnerability Detection in Smart Contracts via LLM-Based Trace Analysis | Authors: Chenyang Peng, Haijun Wang, Yin Wu, Hao Wu, Ming Fan, Yitao Zhao, Ting Liu | Published: 2025-06-18 | Updated: 2025-07-08 | Tags: Event Identification, Information Security, Vulnerability Attack Method
Evaluating Large Language Models for Phishing Detection, Self-Consistency, Faithfulness, and Explainability | Authors: Shova Kuikel, Aritran Piplai, Palvi Aggarwal | Published: 2025-06-16 | Tags: Alignment, Prompt Injection, Large Language Model
Weakest Link in the Chain: Security Vulnerabilities in Advanced Reasoning Models | Authors: Arjun Krishna, Aaditya Rastogi, Erick Galinkin | Published: 2025-06-16 | Tags: Prompt Injection, Large Language Model, Adversarial Attack Methods
Watermarking LLM-Generated Datasets in Downstream Tasks | Authors: Yugeng Liu, Tianshuo Cong, Michael Backes, Zheng Li, Yang Zhang | Published: 2025-06-16 | Tags: Prompt Leaking, Model Protection Methods, Digital Watermarking for Generative AI
From Promise to Peril: Rethinking Cybersecurity Red and Blue Teaming in the Age of LLMs | Authors: Alsharif Abuadbba, Chris Hicks, Kristen Moore, Vasilios Mavroudis, Burak Hasircioglu, Diksha Goel, Piers Jennings | Published: 2025-06-16 | Tags: Indirect Prompt Injection, Cybersecurity, Education and Follow-up
Using LLMs for Security Advisory Investigations: How Far Are We? | Authors: Bayu Fedra Abdullah, Yusuf Sulistyo Nugroho, Brittany Reid, Raula Gaikovina Kula, Kazumasa Shimari, Kenichi Matsumoto | Published: 2025-06-16 | Tags: Advice Provision, Hallucination, Prompt Leaking
Detecting Hard-Coded Credentials in Software Repositories via LLMs | Authors: Chidera Biringa, Gokhan Kul | Published: 2025-06-16 | Tags: Software Security, Performance Evaluation, Prompt Leaking
Exploring the Secondary Risks of Large Language Models | Authors: Jiawei Chen, Zhengwei Fang, Xiao Yang, Chao Yu, Zhaoxia Yin, Hang Su | Published: 2025-06-14 | Updated: 2025-09-25 | Tags: Indirect Prompt Injection, Prompt Leaking, Generative Model