Alignment

Client-Side Zero-Shot LLM Inference for Comprehensive In-Browser URL Analysis

Authors: Avihay Cohen | Published: 2025-06-04
Alignment
Prompt Injection
Dynamic Analysis

MCP Safety Training: Learning to Refuse Falsely Benign MCP Exploits using Improved Preference Alignment

Authors: John Halloran | Published: 2025-05-29
Poisoning attack on RAG
Alignment
Cooking Ingredients

Disrupting Vision-Language Model-Driven Navigation Services via Adversarial Object Fusion

Authors: Chunlong Xie, Jialing He, Shangwei Guo, Jiacheng Wang, Shudong Zhang, Tianwei Zhang, Tao Xiang | Published: 2025-05-29
Alignment
Adversarial Object Generation
Optimization Methods

Mitigating Fine-tuning Risks in LLMs via Safety-Aware Probing Optimization

Authors: Chengcan Wu, Zhixin Zhang, Zeming Wei, Yihao Zhang, Meng Sun | Published: 2025-05-22
LLM Security
Alignment
Adversarial Learning

CTRAP: Embedding Collapse Trap to Safeguard Large Language Models from Harmful Fine-Tuning

Authors: Biao Yi, Tiansheng Huang, Baolei Zhang, Tong Li, Lihai Nie, Zheli Liu, Li Shen | Published: 2025-05-22
Alignment
Indirect Prompt Injection
Calculation of Output Harmfulness

ReCopilot: Reverse Engineering Copilot in Binary Analysis

Authors: Guoqiang Chen, Huiqi Sun, Daguang Liu, Zhiqi Wang, Qiang Wang, Bin Yin, Lu Liu, Lingyun Ying | Published: 2025-05-22
Alignment
Binary Analysis
Dynamic Analysis

Alignment Under Pressure: The Case for Informed Adversaries When Evaluating LLM Defenses

Authors: Xiaoxue Yang, Bozhidar Stevanoski, Matthieu Meeus, Yves-Alexandre de Montjoye | Published: 2025-05-21
Alignment
Prompt Injection
Defense Mechanism

sudoLLM : On Multi-role Alignment of Language Models

Authors: Soumadeep Saha, Akshay Chaturvedi, Joy Mahapatra, Utpal Garain | Published: 2025-05-20
Alignment
Prompt Injection
Large Language Model

LlamaFirewall: An open source guardrail system for building secure AI agents

Authors: Sahana Chennabasappa, Cyrus Nikolaidis, Daniel Song, David Molnar, Stephanie Ding, Shengye Wan, Spencer Whitman, Lauren Deason, Nicholas Doucette, Abraham Montilla, Alekhya Gampa, Beto de Paola, Dominik Gabi, James Crnkovich, Jean-Christophe Testud, Kat He, Rashnil Chaturvedi, Wu Zhou, Joshua Saxe | Published: 2025-05-06
LLM Security
Alignment
Prompt Injection

Bridging Expertise Gaps: The Role of LLMs in Human-AI Collaboration for Cybersecurity

Authors: Shahroz Tariq, Ronal Singh, Mohan Baruwal Chhetri, Surya Nepal, Cecile Paris | Published: 2025-05-06
Cooperative Effects with LLM
Alignment
Participant Question Analysis