Safety Alignment

Do We Really Need Curated Malicious Data for Safety Alignment in Multi-modal Large Language Models?

Authors: Yanbo Wang, Jiyang Guan, Jian Liang, Ran He | Published: 2025-04-14
Prompt Injection
Bias in Training Data
Safety Alignment

Representation Bending for Large Language Model Safety

Authors: Ashkan Yousefpour, Taeheon Kim, Ryan S. Kwon, Seungbeen Lee, Wonje Jeung, Seungju Han, Alvin Wan, Harrison Ngan, Youngjae Yu, Jonghyun Choi | Published: 2025-04-02
Prompt Injection
Prompt Leaking
Safety Alignment

AgentBreeder: Mitigating the AI Safety Impact of Multi-Agent Scaffolds via Self-Improvement

Authors: J Rosser, Jakob Nicolaus Foerster | Published: 2025-02-02 | Updated: 2025-04-14
LLM Performance Evaluation
Multi-Objective Optimization
Safety Alignment

LLM Safety Alignment is Divergence Estimation in Disguise

Authors: Rajdeep Haldar, Ziyi Wang, Qifan Song, Guang Lin, Yue Xing | Published: 2025-02-02
Prompt Injection
Convergence Analysis
Large Language Model
Safety Alignment

LegalGuardian: A Privacy-Preserving Framework for Secure Integration of Large Language Models in Legal Practice

Authors: M. Mikail Demir, Hakan T. Otal, M. Abdullah Canbaz | Published: 2025-01-19
Privacy Protection
Learning Improvement
Safety Alignment

Safeguarding Large Language Models in Real-time with Tunable Safety-Performance Trade-offs

Authors: Joao Fonseca, Andrew Bell, Julia Stoyanovich | Published: 2025-01-02
Framework
Prompt Injection
Safety Alignment

VLSBench: Unveiling Visual Leakage in Multimodal Safety

Authors: Xuhao Hu, Dongrui Liu, Hao Li, Xuanjing Huang, Jing Shao | Published: 2024-11-29 | Updated: 2025-01-17
Prompt Injection
Safety Alignment

Immune: Improving Safety Against Jailbreaks in Multi-modal LLMs via Inference-Time Alignment

Authors: Soumya Suvra Ghosal, Souradip Chakraborty, Vaibhav Singh, Tianrui Guan, Mengdi Wang, Ahmad Beirami, Furong Huang, Alvaro Velasquez, Dinesh Manocha, Amrit Singh Bedi | Published: 2024-11-27 | Updated: 2025-03-20
Prompt Injection
Safety Alignment
Adversarial Attack

Can a large language model be a gaslighter?

Authors: Wei Li, Luyao Zhu, Yang Song, Ruixi Lin, Rui Mao, Yang You | Published: 2024-10-11
Prompt Injection
Safety Alignment
Attack Method

Superficial Safety Alignment Hypothesis

Authors: Jianwei Li, Jung-Eun Kim | Published: 2024-10-07
LLM Performance Evaluation
Safety Alignment