CAIN: Hijacking LLM-Humans Conversations via a Two-Stage Malicious System Prompt Generation and Refining Framework | Authors: Viet Pham, Thai Le | Published: 2025-05-22 | LLM Security, Prompt Injection, Adversarial Learning
Unlearning Isn’t Deletion: Investigating Reversibility of Machine Unlearning in LLMs | Authors: Xiaoyu Xu, Xiang Yue, Yang Liu, Qingqing Ye, Haibo Hu, Minxin Du | Published: 2025-05-22 | Bias Detection in AI Output, Privacy Management, Machine Learning
CoTSRF: Utilize Chain of Thought as Stealthy and Robust Fingerprint of Large Language Models | Authors: Zhenzhen Ren, GuoBiao Li, Sheng Li, Zhenxing Qian, Xinpeng Zhang | Published: 2025-05-22 | LLM Security, Fingerprinting Method, Model Identification
When Safety Detectors Aren’t Enough: A Stealthy and Effective Jailbreak Attack on LLMs via Steganographic Techniques | Authors: Jianing Geng, Biao Yi, Zekun Fei, Tongxi Wu, Lihai Nie, Zheli Liu | Published: 2025-05-22 | Disabling Safety Mechanisms of LLM, Prompt Injection, Watermark Removal Technology
Mitigating Fine-tuning Risks in LLMs via Safety-Aware Probing Optimization | Authors: Chengcan Wu, Zhixin Zhang, Zeming Wei, Yihao Zhang, Meng Sun | Published: 2025-05-22 | LLM Security, Alignment, Adversarial Learning
BitHydra: Towards Bit-flip Inference Cost Attack against Large Language Models | Authors: Xiaobei Yan, Yiming Li, Zhaoxin Fan, Han Qiu, Tianwei Zhang | Published: 2025-05-22 | LLM Security, Text Generation Method, Prompt Injection
Finetuning-Activated Backdoors in LLMs | Authors: Thibaud Gloaguen, Mark Vero, Robin Staab, Martin Vechev | Published: 2025-05-22 | LLM Security, Backdoor Attack, Prompt Injection
CTRAP: Embedding Collapse Trap to Safeguard Large Language Models from Harmful Fine-Tuning | Authors: Biao Yi, Tiansheng Huang, Baolei Zhang, Tong Li, Lihai Nie, Zheli Liu, Li Shen | Published: 2025-05-22 | Alignment, Indirect Prompt Injection, Calculation of Output Harmfulness
DuFFin: A Dual-Level Fingerprinting Framework for LLMs IP Protection | Authors: Yuliang Yan, Haochun Tang, Shuo Yan, Enyan Dai | Published: 2025-05-22 | Fingerprinting Method, Prompt Injection, Model Identification
Password Strength Detection via Machine Learning: Analysis, Modeling, and Evaluation | Authors: Jiazhi Mo, Hailu Kuang, Xiaoqi Li | Published: 2025-05-22 | Data Origins and Evolution, Password Security, Machine Learning