Large Language Model

ZKPROV: A Zero-Knowledge Approach to Dataset Provenance for Large Language Models

Authors: Mina Namazi, Alexander Nemecek, Erman Ayday | Published: 2025-06-26
Privacy Protection
Large Language Model
Watermarking Technology

SV-LLM: An Agentic Approach for SoC Security Verification using Large Language Models

Authors: Dipayan Saha, Shams Tarek, Hasan Al Shaikh, Khan Thamid Hasan, Pavan Sai Nalluri, Md. Ajoad Hasan, Nashmin Alam, Jingbo Zhou, Sujan Kumar Saha, Mark Tehranipoor, Farimah Farahmandi | Published: 2025-06-25
Security Verification Method
Prompt Injection
Large Language Model

FuncVul: An Effective Function Level Vulnerability Detection Model using LLM and Code Chunk

Authors: Sajal Halder, Muhammad Ejaz Ahmed, Seyit Camtepe | Published: 2025-06-24
Prompt Injection
Large Language Model
Vulnerability Research

Security Assessment of DeepSeek and GPT Series Models against Jailbreak Attacks

Authors: Xiaodong Wu, Xiangman Li, Jianbing Ni | Published: 2025-06-23
Prompt Injection
Model Architecture
Large Language Model

Smart-LLaMA-DPO: Reinforced Large Language Model for Explainable Smart Contract Vulnerability Detection

Authors: Lei Yu, Zhirong Huang, Hang Yuan, Shiqi Cheng, Li Yang, Fengjun Zhang, Chenjie Shen, Jiajia Ma, Jingyuan Zhang, Junyi Lu, Chun Zuo | Published: 2025-06-23
Smart Contract Vulnerability
Prompt Leaking
Large Language Model

Evaluating Large Language Models for Phishing Detection, Self-Consistency, Faithfulness, and Explainability

Authors: Shova Kuikel, Aritran Piplai, Palvi Aggarwal | Published: 2025-06-16
Alignment
Prompt Injection
Large Language Model

Weakest Link in the Chain: Security Vulnerabilities in Advanced Reasoning Models

Authors: Arjun Krishna, Aaditya Rastogi, Erick Galinkin | Published: 2025-06-16
Prompt Injection
Large Language Model
Adversarial Attack Methods

Can We Infer Confidential Properties of Training Data from LLMs?

Authors: Penguin Huang, Chhavi Yadav, Ruihan Wu, Kamalika Chaudhuri | Published: 2025-06-12
Privacy Enhancing Technology
Medical Diagnosis Attributes
Large Language Model

Beyond Jailbreaks: Revealing Stealthier and Broader LLM Security Risks Stemming from Alignment Failures

Authors: Yukai Zhou, Sibei Yang, Wenjie Wang | Published: 2025-06-09
Cooperative Effects with LLM
Cyber Threat
Large Language Model

A Red Teaming Roadmap Towards System-Level Safety

Authors: Zifan Wang, Christina Q. Knight, Jeremy Kritz, Willow E. Primack, Julian Michael | Published: 2025-05-30 | Updated: 2025-06-09
Model DoS
Large Language Model
Product Safety