A Semantic Invariant Robust Watermark for Large Language Models
Authors: Aiwei Liu, Leyi Pan, Xuming Hu, Shiao Meng, Lijie Wen | Published: 2023-10-10 | Updated: 2024-05-19
Tags: Watermarking, Prompt Injection, Performance Evaluation

SCAR: Power Side-Channel Analysis at RTL-Level
Authors: Amisha Srivastava, Sanjay Das, Navnil Choudhury, Rafail Psiakis, Pedro Henrique Silva, Debjit Pal, Kanad Basu | Published: 2023-10-10
Tags: Prompt Injection, Cryptography, Vulnerability Prediction

LLM for SoC Security: A Paradigm Shift
Authors: Dipayan Saha, Shams Tarek, Katayoon Yahyaei, Sujan Kumar Saha, Jingbo Zhou, Mark Tehranipoor, Farimah Farahmandi | Published: 2023-10-09
Tags: LLM Application, Prompt Injection, Vulnerability Detection

Fine-tuning Aligned Language Models Compromises Safety, Even When Users Do Not Intend To!
Authors: Xiangyu Qi, Yi Zeng, Tinghao Xie, Pin-Yu Chen, Ruoxi Jia, Prateek Mittal, Peter Henderson | Published: 2023-10-05
Tags: Data Collection, Prompt Injection, Information Gathering Methods

SmoothLLM: Defending Large Language Models Against Jailbreaking Attacks
Authors: Alexander Robey, Eric Wong, Hamed Hassani, George J. Pappas | Published: 2023-10-05 | Updated: 2024-06-11
Tags: LLM Performance Evaluation, Prompt Injection, Defense Method

Misusing Tools in Large Language Models With Visual Adversarial Examples
Authors: Xiaohan Fu, Zihan Wang, Shuheng Li, Rajesh K. Gupta, Niloofar Mireshghallah, Taylor Berg-Kirkpatrick, Earlence Fernandes | Published: 2023-10-04
Tags: LLM Performance Evaluation, Prompt Injection, Adversarial Example

Shadow Alignment: The Ease of Subverting Safely-Aligned Language Models
Authors: Xianjun Yang, Xiao Wang, Qi Zhang, Linda Petzold, William Yang Wang, Xun Zhao, Dahua Lin | Published: 2023-10-04
Tags: Prompt Injection, Safety Alignment, Malicious Content Generation

Low-Resource Languages Jailbreak GPT-4
Authors: Zheng-Xin Yong, Cristina Menghini, Stephen H. Bach | Published: 2023-10-03 | Updated: 2024-01-27
Tags: Prompt Injection, Safety Alignment, Vulnerability Detection

Jailbreaker in Jail: Moving Target Defense for Large Language Models
Authors: Bocheng Chen, Advait Paliwal, Qiben Yan | Published: 2023-10-03
Tags: LLM Performance Evaluation, Prompt Injection, Evaluation Metrics

On the Safety of Open-Sourced Large Language Models: Does Alignment Really Prevent Them From Being Misused?
Authors: Hangfan Zhang, Zhimeng Guo, Huaisheng Zhu, Bochuan Cao, Lu Lin, Jinyuan Jia, Jinghui Chen, Dinghao Wu | Published: 2023-10-02
Tags: LLM Performance Evaluation, Prompt Injection, Classification of Malicious Actors