Amplifying Machine Learning Attacks Through Strategic Compositions

Authors: Yugeng Liu, Zheng Li, Hai Huang, Michael Backes, Yang Zhang | Published: 2025-06-23

Security Assessment of DeepSeek and GPT Series Models against Jailbreak Attacks

Authors: Xiaodong Wu, Xiangman Li, Jianbing Ni | Published: 2025-06-23

DUMB and DUMBer: Is Adversarial Training Worth It in the Real World?

Authors: Francesco Marchiori, Marco Alecci, Luca Pajola, Mauro Conti | Published: 2025-06-23

Smart-LLaMA-DPO: Reinforced Large Language Model for Explainable Smart Contract Vulnerability Detection

Authors: Lei Yu, Zhirong Huang, Hang Yuan, Shiqi Cheng, Li Yang, Fengjun Zhang, Chenjie Shen, Jiajia Ma, Jingyuan Zhang, Junyi Lu, Chun Zuo | Published: 2025-06-23

Safety Mechanisms for Preventing Harmful Responses from LLMs

This article explains the safety mechanisms that keep LLMs from producing harmful responses. Reading it will deepen your understanding of how these mechanisms work.

Evaluating Large Language Models for Phishing Detection, Self-Consistency, Faithfulness, and Explainability

Authors: Shova Kuikel, Aritran Piplai, Palvi Aggarwal | Published: 2025-06-16

Weakest Link in the Chain: Security Vulnerabilities in Advanced Reasoning Models

Authors: Arjun Krishna, Aaditya Rastogi, Erick Galinkin | Published: 2025-06-16

Watermarking LLM-Generated Datasets in Downstream Tasks

Authors: Yugeng Liu, Tianshuo Cong, Michael Backes, Zheng Li, Yang Zhang | Published: 2025-06-16

From Promise to Peril: Rethinking Cybersecurity Red and Blue Teaming in the Age of LLMs

Authors: Alsharif Abuadbba, Chris Hicks, Kristen Moore, Vasilios Mavroudis, Burak Hasircioglu, Diksha Goel, Piers Jennings | Published: 2025-06-16

English Version of the "AI Security Portal" Released

We have released the English version of the "AI Security Portal". Please take a look.