AI Security Portal bot

Evading Toxicity Detection with ASCII-art: A Benchmark of Spatial Attacks on Moderation Systems

Authors: Sergey Berezin, Reza Farahbakhsh, Noel Crespi | Published: 2024-09-27 | Updated: 2025-09-24
Token Compression Framework
Prompt Leaking
Natural Language Processing

Code Vulnerability Repair with Large Language Model using Context-Aware Prompt Tuning

Authors: Arshiya Khan, Guannan Liu, Xing Gao | Published: 2024-09-27 | Updated: 2025-06-11
Code Vulnerability Repair
Security Context Integration
Large Language Model

An Adversarial Perspective on Machine Unlearning for AI Safety

Authors: Jakub Łucki, Boyi Wei, Yangsibo Huang, Peter Henderson, Florian Tramèr, Javier Rando | Published: 2024-09-26 | Updated: 2025-04-10
Prompt Injection
Safety Alignment
Machine Unlearning

Weak-to-Strong Backdoor Attack for Large Language Models

Authors: Shuai Zhao, Leilei Gan, Zhongliang Guo, Xiaobao Wu, Luwei Xiao, Xiaoyu Xu, Cong-Duy Nguyen, Luu Anh Tuan | Published: 2024-09-26 | Updated: 2024-10-13
Backdoor Attack
Prompt Injection

MoJE: Mixture of Jailbreak Experts, Naive Tabular Classifiers as Guard for Prompt Attacks

Authors: Giandomenico Cornacchia, Giulio Zizzo, Kieran Fraser, Muhammad Zaid Hameed, Ambrish Rawat, Mark Purcell | Published: 2024-09-26 | Updated: 2024-10-04
Guardrail Method
Content Moderation
Prompt Injection

A novel application of Shapley values for large multidimensional time-series data: Applying explainable AI to a DNA profile classification neural network

Authors: Lauren Elborough, Duncan Taylor, Melissa Humphries | Published: 2024-09-26
Algorithm
Watermarking
Evaluation Method

Multi-Designated Detector Watermarking for Language Models

Authors: Zhengan Huang, Gongxian Zeng, Xin Mu, Yu Wang, Yue Yu | Published: 2024-09-26 | Updated: 2024-10-01
LLM Security
Watermarking
Watermark Evaluation

The poison of dimensionality

Authors: Lê-Nguyên Hoang | Published: 2024-09-25
Poisoning
Model Performance Evaluation
Loss Function

SDBA: A Stealthy and Long-Lasting Durable Backdoor Attack in Federated Learning

Authors: Minyeong Choe, Cheolhee Park, Changho Seo, Hyunil Kim | Published: 2024-09-23 | Updated: 2025-07-30
Backdoor Attack
Poisoning
Watermark Robustness

Pretraining Data Detection for Large Language Models: A Divergence-based Calibration Method

Authors: Weichao Zhang, Ruqing Zhang, Jiafeng Guo, Maarten de Rijke, Yixing Fan, Xueqi Cheng | Published: 2024-09-23 | Updated: 2025-05-21
Disabling Safety Mechanisms of LLM
Model Performance Evaluation
Information Extraction