Towards Privacy-Preserving LLM Inference via Collaborative Obfuscation (Technical Report) | Authors: Yu Lin, Qizhi Zhang, Wenqiang Ruan, Daode Zhang, Jue Hong, Ye Wu, Hanning Xia, Yunlong Mao, Sheng Zhong | Published: 2026-03-02 | Tags: Disabling Safety Mechanisms of LLM, LLM Performance Evaluation, Differential Privacy
Assessing Deanonymization Risks with Stylometry-Assisted LLM Agent | Authors: Boyang Zhang, Yang Zhang | Published: 2026-02-26 | Tags: Disabling Safety Mechanisms of LLM, Data Privacy Assessment, Prompt Leaking
Stop Tracking Me! Proactive Defense Against Attribute Inference Attack in LLMs | Authors: Dong Yan, Jian Liang, Ran He, Tieniu Tan | Published: 2026-02-12 | Tags: Disabling Safety Mechanisms of LLM, Privacy Assurance, Explanation Method
A Behavioral Fingerprint for Large Language Models: Provenance Tracking via Refusal Vectors | Authors: Zhenyu Xu, Victor S. Sheng | Published: 2026-02-10 | Tags: Disabling Safety Mechanisms of LLM, LLM Performance Evaluation, Evaluation Metrics
Know Thy Enemy: Securing LLMs Against Prompt Injection via Diverse Data Synthesis and Instruction-Level Chain-of-Thought Learning | Authors: Zhiyuan Chang, Mingyang Li, Yuekai Huang, Ziyou Jiang, Xiaojun Jia, Qian Xiong, Junjie Wang, Zhaoyang Li, Qing Wang | Published: 2026-01-08 | Tags: Disabling Safety Mechanisms of LLM, Indirect Prompt Injection, Privacy Protection Method
Adversarial Contrastive Learning for LLM Quantization Attacks | Authors: Dinghong Song, Zhiwei Xu, Hai Wan, Xibin Zhao, Pengfei Su, Dong Li | Published: 2026-01-06 | Tags: Disabling Safety Mechanisms of LLM, Model Extraction Attack, Quantization and Privacy
EquaCode: A Multi-Strategy Jailbreak Approach for Large Language Models via Equation Solving and Code Completion | Authors: Zhen Liang, Hai Huang, Zhengkui Chen | Published: 2025-12-29 | Tags: Disabling Safety Mechanisms of LLM, LLM Application, Prompt Injection
Odysseus: Jailbreaking Commercial Multimodal LLM-integrated Systems via Dual Steganography | Authors: Songze Li, Jiameng Cheng, Yiming Li, Xiaojun Jia, Dacheng Tao | Published: 2025-12-23 | Tags: Disabling Safety Mechanisms of LLM, Prompt Injection, Multimodal Safety
Can LLMs Make (Personalized) Access Control Decisions? | Authors: Friederike Groschupp, Daniele Lain, Aritra Dhar, Lara Magdalena Lazier, Srdjan Čapkun | Published: 2025-11-25 | Tags: Disabling Safety Mechanisms of LLM, Privacy Assessment, Prompt Injection
Understanding and Mitigating Over-refusal for Large Language Models via Safety Representation | Authors: Junbo Zhang, Ran Chen, Qianli Zhou, Xinyang Deng, Wen Jiang | Published: 2025-11-24 | Tags: Disabling Safety Mechanisms of LLM, Prompt Injection, Malicious Prompt