Low-Resource Languages Jailbreak GPT-4 | Authors: Zheng-Xin Yong, Cristina Menghini, Stephen H. Bach | Published: 2023-10-03 | Updated: 2024-01-27 | Tags: Prompt Injection, Safety Alignment, Vulnerability Detection
Defending Against Alignment-Breaking Attacks via Robustly Aligned LLM | Authors: Bochuan Cao, Yuanpu Cao, Lu Lin, Jinghui Chen | Published: 2023-09-18 | Updated: 2024-06-12 | Tags: Prompt Injection, Safety Alignment, Defense Method
Censoring chemical data to mitigate dual use risk | Authors: Quintina L. Campbell, Jonathan Herington, Andrew D. White | Published: 2023-04-20 | Tags: Data Generation, Privacy Technique, Safety Alignment
Alignment with human representations supports robust few-shot learning | Authors: Ilia Sucholutsky, Thomas L. Griffiths | Published: 2023-01-27 | Updated: 2023-10-29 | Tags: Few-Shot Learning, Watermarking, Safety Alignment