Embedding Poisoning: Bypassing Safety Alignment via Embedding Semantic Shift

Authors: Shuai Yuan, Zhibo Zhang, Yuxi Li, Guangdong Bai, Wang Kailong | Published: 2025-09-08

AttestLLM: Efficient Attestation Framework for Billion-scale On-device LLMs

Authors: Ruisi Zhang, Yifei Zhao, Neusha Javidnia, Mengxin Zheng, Farinaz Koushanfar | Published: 2025-09-08

Self-adaptive Dataset Construction for Real-World Multimodal Safety Scenarios

Authors: Jingen Qu, Lijun Li, Bo Zhang, Yichen Yan, Jing Shao | Published: 2025-09-04

An Automated, Scalable Machine Learning Model Inversion Assessment Pipeline

Authors: Tyler Shumaker, Jessica Carpenter, David Saranchak, Nathaniel D. Bastian | Published: 2025-09-04

KubeGuard: LLM-Assisted Kubernetes Hardening via Configuration Files and Runtime Logs Analysis

Authors: Omri Sgan Cohen, Ehud Malul, Yair Meidan, Dudu Mimran, Yuval Elovici, Asaf Shabtai | Published: 2025-09-04

NeuroBreak: Unveil Internal Jailbreak Mechanisms in Large Language Models

Authors: Chuhan Zhang, Ye Zhang, Bowen Shi, Yuyou Gan, Tianyu Du, Shouling Ji, Dazhan Deng, Yingcai Wu | Published: 2025-09-04

Federated Learning: An approach with Hybrid Homomorphic Encryption

Authors: Pedro Correia, Ivan Silva, Ivone Amorim, Eva Maia, Isabel Praça | Published: 2025-09-03

VulnRepairEval: An Exploit-Based Evaluation Framework for Assessing Large Language Model Vulnerability Repair Capabilities

Authors: Weizhe Wang, Wei Ma, Qiang Hu, Yao Zhang, Jianfei Sun, Bin Wu, Yang Liu, Guangquan Xu, Lingxiao Jiang | Published: 2025-09-03

A Comprehensive Guide to Differential Privacy: From Theory to User Expectations

Authors: Napsu Karmitsa, Antti Airola, Tapio Pahikkala, Tinja Pitkämäki | Published: 2025-09-03

Can AI Be Fooled? Traps Set for AI and How to Defend Against Them

AI (artificial intelligence) is now used throughout our daily lives: face recognition on smartphones, self-driving cars, and product recommendations in online stores, to name a few. But did you know that even these convenient AI systems can be fooled? A prime example is a trick known as an "adversarial example." This article explains, as plainly as possible, how adversarial examples work and how to defend against them.
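To make the idea of an adversarial example concrete, here is a minimal sketch using a toy linear classifier. The weights, input, and perturbation size are invented for illustration; real attacks such as FGSM apply the same sign-of-gradient step to deep networks.

```python
import numpy as np

# Toy linear classifier: predicts class 1 when w . x + b > 0.
# The weights and bias are made-up values for illustration.
w = np.array([1.0, -2.0, 0.5])
b = 0.1

def predict(x):
    return int(w @ x + b > 0)

x = np.array([2.0, 0.5, 1.0])   # clean input; score = 1.6, so class 1
print(predict(x))                # 1

# FGSM-style perturbation: for a linear model, the gradient of the
# score with respect to x is just w, so stepping against sign(w)
# pushes the score down while changing each feature by at most eps.
eps = 0.8
x_adv = x - eps * np.sign(w)     # [1.2, 1.3, 0.2]; score = -1.2
print(predict(x_adv))            # 0: the small perturbation flips the label
```

The key point the article's examples build on: the perturbation is small in each coordinate (at most `eps`), yet it is chosen in exactly the direction the model is most sensitive to, which is why the prediction flips.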