FuncVul: An Effective Function Level Vulnerability Detection Model using LLM and Code Chunk | Authors: Sajal Halder, Muhammad Ejaz Ahmed, Seyit Camtepe | Published: 2025-06-24 | Tags: Prompt Injection, Large Language Model, Vulnerability Research
Security Assessment of DeepSeek and GPT Series Models against Jailbreak Attacks | Authors: Xiaodong Wu, Xiangman Li, Jianbing Ni | Published: 2025-06-23 | Tags: Prompt Injection, Model Architecture, Large Language Model
Smart-LLaMA-DPO: Reinforced Large Language Model for Explainable Smart Contract Vulnerability Detection | Authors: Lei Yu, Zhirong Huang, Hang Yuan, Shiqi Cheng, Li Yang, Fengjun Zhang, Chenjie Shen, Jiajia Ma, Jingyuan Zhang, Junyi Lu, Chun Zuo | Published: 2025-06-23 | Tags: Smart Contract Vulnerability, Prompt Leaking, Large Language Model
Privacy-Preserving LLM Interaction with Socratic Chain-of-Thought Reasoning and Homomorphically Encrypted Vector Databases | Authors: Yubeen Bae, Minchan Kim, Jaejin Lee, Sangbum Kim, Jaehyung Kim, Yejin Choi, Niloofar Mireshghallah | Published: 2025-06-19 | Updated: 2025-07-01 | Tags: Privacy Protection, Prompt Injection, Large Language Model
Evaluating Large Language Models for Phishing Detection, Self-Consistency, Faithfulness, and Explainability | Authors: Shova Kuikel, Aritran Piplai, Palvi Aggarwal | Published: 2025-06-16 | Tags: Alignment, Prompt Injection, Large Language Model
Weakest Link in the Chain: Security Vulnerabilities in Advanced Reasoning Models | Authors: Arjun Krishna, Aaditya Rastogi, Erick Galinkin | Published: 2025-06-16 | Tags: Prompt Injection, Large Language Model, Adversarial Attack Methods
Can We Infer Confidential Properties of Training Data from LLMs? | Authors: Penguin Huang, Chhavi Yadav, Ruihan Wu, Kamalika Chaudhuri | Published: 2025-06-12 | Tags: Privacy Enhancing Technology, Medical Diagnosis Attributes, Large Language Model
Beyond Jailbreaks: Revealing Stealthier and Broader LLM Security Risks Stemming from Alignment Failures | Authors: Yukai Zhou, Sibei Yang, Wenjie Wang | Published: 2025-06-09 | Tags: Cooperative Effects with LLM, Cyber Threat, Large Language Model
The Scales of Justitia: A Comprehensive Survey on Safety Evaluation of LLMs | Authors: Songyang Liu, Chaozhuo Li, Jiameng Qiu, Xi Zhang, Feiran Huang, Litian Zhang, Yiming Hei, Philip S. Yu | Published: 2025-06-06 | Updated: 2025-10-30 | Tags: Alignment, Large Language Model, Safety Evaluation
A Red Teaming Roadmap Towards System-Level Safety | Authors: Zifan Wang, Christina Q. Knight, Jeremy Kritz, Willow E. Primack, Julian Michael | Published: 2025-05-30 | Updated: 2025-06-09 | Tags: Model DoS, Large Language Model, Product Safety