AI Security Portal bot

Embedding Poisoning: Bypassing Safety Alignment via Embedding Semantic Shift

Authors: Shuai Yuan, Zhibo Zhang, Yuxi Li, Guangdong Bai, Kailong Wang | Published: 2025-09-08
Disabling Safety Mechanisms of LLM
Calculation of Output Harmfulness
Attack Detection Method

AttestLLM: Efficient Attestation Framework for Billion-scale On-device LLMs

Authors: Ruisi Zhang, Yifei Zhao, Neusha Javidnia, Mengxin Zheng, Farinaz Koushanfar | Published: 2025-09-08
Security Strategy Generation
Efficiency Evaluation
Large Language Model

Exploit Tool Invocation Prompt for Tool Behavior Hijacking in LLM-Based Agentic System

Authors: Yu Liu, Yuchong Xie, Mingyu Luo, Zesen Liu, Zhixiang Zhang, Kaikai Zhang, Zongjie Li, Ping Chen, Shuai Wang, Dongdong She | Published: 2025-09-06 | Updated: 2025-09-15
Prompt Injection
Model DoS
Attack Evaluation

Self-adaptive Dataset Construction for Real-World Multimodal Safety Scenarios

Authors: Jingen Qu, Lijun Li, Bo Zhang, Yichen Yan, Jing Shao | Published: 2025-09-04
Prompt Injection
Risk Analysis Method
Safety Evaluation Method

An Automated, Scalable Machine Learning Model Inversion Assessment Pipeline

Authors: Tyler Shumaker, Jessica Carpenter, David Saranchak, Nathaniel D. Bastian | Published: 2025-09-04
Model Inversion
Model Extraction Attack
Risk Analysis Method

KubeGuard: LLM-Assisted Kubernetes Hardening via Configuration Files and Runtime Logs Analysis

Authors: Omri Sgan Cohen, Ehud Malul, Yair Meidan, Dudu Mimran, Yuval Elovici, Asaf Shabtai | Published: 2025-09-04
Security Strategy Generation
Network Forensics
Audit Log Analysis

NeuroBreak: Unveil Internal Jailbreak Mechanisms in Large Language Models

Authors: Chuhan Zhang, Ye Zhang, Bowen Shi, Yuyou Gan, Tianyu Du, Shouling Ji, Dazhan Deng, Yingcai Wu | Published: 2025-09-04
Prompt Injection
Neurons and Safety
Defense Mechanism

Federated Learning: An approach with Hybrid Homomorphic Encryption

Authors: Pedro Correia, Ivan Silva, Ivone Amorim, Eva Maia, Isabel Praça | Published: 2025-09-03
Integration of FL and HE
Privacy Design Principles
Federated Learning

VulnRepairEval: An Exploit-Based Evaluation Framework for Assessing Large Language Model Vulnerability Repair Capabilities

Authors: Weizhe Wang, Wei Ma, Qiang Hu, Yao Zhang, Jianfei Sun, Bin Wu, Yang Liu, Guangquan Xu, Lingxiao Jiang | Published: 2025-09-03
Prompt Injection
Large Language Model
Vulnerability Analysis

A Comprehensive Guide to Differential Privacy: From Theory to User Expectations

Authors: Napsu Karmitsa, Antti Airola, Tapio Pahikkala, Tinja Pitkämäki | Published: 2025-09-03
Detection of Poisoned Data for Backdoor Attacks
Privacy Design Principles
Differential Privacy