Invisible Prompts, Visible Threats: Malicious Font Injection in External Resources for Large Language Models | Authors: Junjie Xiong, Changjia Zhu, Shuhang Lin, Chong Zhang, Yongfeng Zhang, Yao Liu, Lingyao Li | Published: 2025-05-22
Backdoor Cleaning without External Guidance in MLLM Fine-tuning | Authors: Xuankun Rong, Wenke Huang, Jian Liang, Jinhe Bi, Xun Xiao, Yiming Li, Bo Du, Mang Ye | Published: 2025-05-22
CAIN: Hijacking LLM-Humans Conversations via a Two-Stage Malicious System Prompt Generation and Refining Framework | Authors: Viet Pham, Thai Le | Published: 2025-05-22
Unlearning Isn’t Deletion: Investigating Reversibility of Machine Unlearning in LLMs | Authors: Xiaoyu Xu, Xiang Yue, Yang Liu, Qingqing Ye, Haibo Hu, Minxin Du | Published: 2025-05-22
CoTSRF: Utilize Chain of Thought as Stealthy and Robust Fingerprint of Large Language Models | Authors: Zhenzhen Ren, GuoBiao Li, Sheng Li, Zhenxing Qian, Xinpeng Zhang | Published: 2025-05-22
When Safety Detectors Aren’t Enough: A Stealthy and Effective Jailbreak Attack on LLMs via Steganographic Techniques | Authors: Jianing Geng, Biao Yi, Zekun Fei, Tongxi Wu, Lihai Nie, Zheli Liu | Published: 2025-05-22
Mitigating Fine-tuning Risks in LLMs via Safety-Aware Probing Optimization | Authors: Chengcan Wu, Zhixin Zhang, Zeming Wei, Yihao Zhang, Meng Sun | Published: 2025-05-22
BitHydra: Towards Bit-flip Inference Cost Attack against Large Language Models | Authors: Xiaobei Yan, Yiming Li, Zhaoxin Fan, Han Qiu, Tianwei Zhang | Published: 2025-05-22
Finetuning-Activated Backdoors in LLMs | Authors: Thibaud Gloaguen, Mark Vero, Robin Staab, Martin Vechev | Published: 2025-05-22
CTRAP: Embedding Collapse Trap to Safeguard Large Language Models from Harmful Fine-Tuning | Authors: Biao Yi, Tiansheng Huang, Baolei Zhang, Tong Li, Lihai Nie, Zheli Liu, Li Shen | Published: 2025-05-22