AI Security Portal Bot

PhishingHook: Catching Phishing Ethereum Smart Contracts leveraging EVM Opcodes

Authors: Pasquale De Rosa, Simon Queyrut, Yérom-David Bromberg, Pascal Felber, Valerio Schiavoni | Published: 2025-06-24
Binary Analysis
Detection Rate of Phishing Attacks
Vulnerability Research

FuncVul: An Effective Function Level Vulnerability Detection Model using LLM and Code Chunk

Authors: Sajal Halder, Muhammad Ejaz Ahmed, Seyit Camtepe | Published: 2025-06-24
Prompt Injection
Large Language Model
Vulnerability Research

Amplifying Machine Learning Attacks Through Strategic Compositions

Authors: Yugeng Liu, Zheng Li, Hai Huang, Michael Backes, Yang Zhang | Published: 2025-06-23
Membership Disclosure Risk
Certified Robustness
Adversarial Attack

Security Assessment of DeepSeek and GPT Series Models against Jailbreak Attacks

Authors: Xiaodong Wu, Xiangman Li, Jianbing Ni | Published: 2025-06-23
Prompt Injection
Model Architecture
Large Language Model

DUMB and DUMBer: Is Adversarial Training Worth It in the Real World?

Authors: Francesco Marchiori, Marco Alecci, Luca Pajola, Mauro Conti | Published: 2025-06-23
Model Architecture
Certified Robustness
Adversarial Attack Analysis

Smart-LLaMA-DPO: Reinforced Large Language Model for Explainable Smart Contract Vulnerability Detection

Authors: Lei Yu, Zhirong Huang, Hang Yuan, Shiqi Cheng, Li Yang, Fengjun Zhang, Chenjie Shen, Jiajia Ma, Jingyuan Zhang, Junyi Lu, Chun Zuo | Published: 2025-06-23
Smart Contract Vulnerability
Prompt Leaking
Large Language Model

VReaves: Eavesdropping on Virtual Reality App Identity and Activity via Electromagnetic Side Channels

Authors: Wei Sun, Minghong Fang, Mengyuan Li | Published: 2025-06-21 | Updated: 2025-06-24
Signal Processing Techniques
Experimental Setup
Environmental Interference Suppression

Evaluating Large Language Models for Phishing Detection, Self-Consistency, Faithfulness, and Explainability

Authors: Shova Kuikel, Aritran Piplai, Palvi Aggarwal | Published: 2025-06-16
Alignment
Prompt Injection
Large Language Model

Weakest Link in the Chain: Security Vulnerabilities in Advanced Reasoning Models

Authors: Arjun Krishna, Aaditya Rastogi, Erick Galinkin | Published: 2025-06-16
Prompt Injection
Large Language Model
Adversarial Attack Methods

Watermarking LLM-Generated Datasets in Downstream Tasks

Authors: Yugeng Liu, Tianshuo Cong, Michael Backes, Zheng Li, Yang Zhang | Published: 2025-06-16
Prompt Leaking
Model Protection Methods
Digital Watermarking for Generative AI