Literature Database

PRISON: Unmasking the Criminal Potential of Large Language Models

Authors: Xinyi Wu, Geng Hong, Pei Chen, Yueyue Chen, Xudong Pan, Min Yang | Published: 2025-06-19 | Updated: 2025-08-04
Disabling Safety Mechanisms of LLMs
Law Enforcement Evasion
Research Methodology

Privacy-Preserving LLM Interaction with Socratic Chain-of-Thought Reasoning and Homomorphically Encrypted Vector Databases

Authors: Yubeen Bae, Minchan Kim, Jaejin Lee, Sangbum Kim, Jaehyung Kim, Yejin Choi, Niloofar Mireshghallah | Published: 2025-06-19 | Updated: 2025-07-01
Privacy Protection
Prompt Injection
Large Language Model

ETrace: Event-Driven Vulnerability Detection in Smart Contracts via LLM-Based Trace Analysis

Authors: Chenyang Peng, Haijun Wang, Yin Wu, Hao Wu, Ming Fan, Yitao Zhao, Ting Liu | Published: 2025-06-18 | Updated: 2025-07-08
Event Identification
Information Security
Vulnerability Attack Method

Evaluating Large Language Models for Phishing Detection, Self-Consistency, Faithfulness, and Explainability

Authors: Shova Kuikel, Aritran Piplai, Palvi Aggarwal | Published: 2025-06-16
Alignment
Prompt Injection
Large Language Model

Weakest Link in the Chain: Security Vulnerabilities in Advanced Reasoning Models

Authors: Arjun Krishna, Aaditya Rastogi, Erick Galinkin | Published: 2025-06-16
Prompt Injection
Large Language Model
Adversarial Attack Methods

Watermarking LLM-Generated Datasets in Downstream Tasks

Authors: Yugeng Liu, Tianshuo Cong, Michael Backes, Zheng Li, Yang Zhang | Published: 2025-06-16
Prompt Leaking
Model Protection Methods
Digital Watermarking for Generative AI

From Promise to Peril: Rethinking Cybersecurity Red and Blue Teaming in the Age of LLMs

Authors: Alsharif Abuadbba, Chris Hicks, Kristen Moore, Vasilios Mavroudis, Burak Hasircioglu, Diksha Goel, Piers Jennings | Published: 2025-06-16
Indirect Prompt Injection
Cybersecurity
Education and Follow-up

Using LLMs for Security Advisory Investigations: How Far Are We?

Authors: Bayu Fedra Abdullah, Yusuf Sulistyo Nugroho, Brittany Reid, Raula Gaikovina Kula, Kazumasa Shimari, Kenichi Matsumoto | Published: 2025-06-16
Advice Provision
Hallucination
Prompt Leaking

Detecting Hard-Coded Credentials in Software Repositories via LLMs

Authors: Chidera Biringa, Gokhan Kul | Published: 2025-06-16
Software Security
Performance Evaluation
Prompt Leaking

Exploring the Secondary Risks of Large Language Models

Authors: Jiawei Chen, Zhengwei Fang, Xiao Yang, Chao Yu, Zhaoxia Yin, Hang Su | Published: 2025-06-14 | Updated: 2025-09-25
Indirect Prompt Injection
Prompt Leaking
Generative Model