Safety Alignment

Low-Resource Languages Jailbreak GPT-4

Authors: Zheng-Xin Yong, Cristina Menghini, Stephen H. Bach | Published: 2023-10-03 | Updated: 2024-01-27
Tags: Prompt Injection, Safety Alignment, Vulnerability Detection

Defending Against Alignment-Breaking Attacks via Robustly Aligned LLM

Authors: Bochuan Cao, Yuanpu Cao, Lu Lin, Jinghui Chen | Published: 2023-09-18 | Updated: 2024-06-12
Tags: Prompt Injection, Safety Alignment, Defense Method

Censoring chemical data to mitigate dual use risk

Authors: Quintina L. Campbell, Jonathan Herington, Andrew D. White | Published: 2023-04-20
Tags: Data Generation, Privacy Technique, Safety Alignment

Alignment with human representations supports robust few-shot learning

Authors: Ilia Sucholutsky, Thomas L. Griffiths | Published: 2023-01-27 | Updated: 2023-10-29
Tags: Few-Shot Learning, Watermarking, Safety Alignment