AI Security Portal Bot

KnowML: Improving Generalization of ML-NIDS with Attack Knowledge Graphs

Authors: Xin Fan Guo, Albert Merono Penuela, Sergio Maffeis, Fabio Pierazzi | Published: 2025-06-24
Model Inversion
Attack Strategy Analysis
Feature Extraction

A Survey of LLM-Driven AI Agent Communication: Protocols, Security Risks, and Defense Countermeasures

Authors: Dezhang Kong, Shi Lin, Zhenhua Xu, Zhebo Wang, Minghao Li, Yufeng Li, Yilun Zhang, Zeyang Sha, Yuyuan Li, Changting Lin, Xun Wang, Xuan Liu, Muhammad Khurram Khan, Ningyu Zhang, Chaochao Chen, Meng Han | Published: 2025-06-24
AI Agent Communication
Poisoning Attack on RAG
Prompt Validation

Decompiling Smart Contracts with a Large Language Model

Authors: Isaac David, Liyi Zhou, Dawn Song, Arthur Gervais, Kaihua Qin | Published: 2025-06-24
Decompilation Challenges
Binary Analysis
Vulnerability Research

PrivacyXray: Detecting Privacy Breaches in LLMs through Semantic Consistency and Probability Certainty

Authors: Jinwen He, Yiyang Lu, Zijin Lin, Kai Chen, Yue Zhao | Published: 2025-06-24
Backdoor Detection
Privacy Protection
Privacy Protection Framework

PhishingHook: Catching Phishing Ethereum Smart Contracts leveraging EVM Opcodes

Authors: Pasquale De Rosa, Simon Queyrut, Yérom-David Bromberg, Pascal Felber, Valerio Schiavoni | Published: 2025-06-24
Binary Analysis
Detection Rate of Phishing Attacks
Vulnerability Research

FuncVul: An Effective Function Level Vulnerability Detection Model using LLM and Code Chunk

Authors: Sajal Halder, Muhammad Ejaz Ahmed, Seyit Camtepe | Published: 2025-06-24
Prompt Injection
Large Language Model
Vulnerability Research

Amplifying Machine Learning Attacks Through Strategic Compositions

Authors: Yugeng Liu, Zheng Li, Hai Huang, Michael Backes, Yang Zhang | Published: 2025-06-23
Membership Disclosure Risk
Certified Robustness
Adversarial Attack

Robust Anomaly Detection in Network Traffic: Evaluating Machine Learning Models on CICIDS2017

Authors: Zhaoyang Xu, Yunbo Liu | Published: 2025-06-23 | Updated: 2025-08-11
Certified Robustness
Performance Evaluation Method
Anomaly Detection Method

Security Assessment of DeepSeek and GPT Series Models against Jailbreak Attacks

Authors: Xiaodong Wu, Xiangman Li, Jianbing Ni | Published: 2025-06-23
Prompt Injection
Model Architecture
Large Language Model

DUMB and DUMBer: Is Adversarial Training Worth It in the Real World?

Authors: Francesco Marchiori, Marco Alecci, Luca Pajola, Mauro Conti | Published: 2025-06-23
Model Architecture
Certified Robustness
Adversarial Attack Analysis