AI Security Portal Bot

Post-Hoc Robustness Enhancement in Graph Neural Networks with Conditional Random Fields

Authors: Yassine Abbahaddou, Sofiane Ennadir, Johannes F. Lutzeyer, Fragkiskos D. Malliaros, Michalis Vazirgiannis | Published: 2024-11-08
Experimental Validation

MRJ-Agent: An Effective Jailbreak Agent for Multi-Round Dialogue

Authors: Fengxiang Wang, Ranjie Duan, Peng Xiao, Xiaojun Jia, Shiji Zhao, Cheng Wei, YueFeng Chen, Chongwen Wang, Jialing Tao, Hang Su, Jun Zhu, Hui Xue | Published: 2024-11-06 | Updated: 2025-01-07
Prompt Injection
Multi-Round Dialogue

Optimal Defenses Against Gradient Reconstruction Attacks

Authors: Yuxiao Chen, Gamze Gürsoy, Qi Lei | Published: 2024-11-06
Poisoning
Defense Method

FEDLAD: Federated Evaluation of Deep Leakage Attacks and Defenses

Authors: Isaac Baglin, Xiatian Zhu, Simon Hadfield | Published: 2024-11-05 | Updated: 2025-01-05
Poisoning
Attack Evaluation
Evaluation Method

A General Recipe for Contractive Graph Neural Networks — Technical Report

Authors: Maya Bechler-Speicher, Moshe Eliasof | Published: 2024-11-04
Algorithm
Convergence Analysis
Regularization

SQL Injection Jailbreak: A Structural Disaster of Large Language Models

Authors: Jiawei Zhao, Kejiang Chen, Weiming Zhang, Nenghai Yu | Published: 2024-11-03 | Updated: 2025-05-21
Prompt Injection
Prompt Leaking
Attack Type

What Features in Prompts Jailbreak LLMs? Investigating the Mechanisms Behind Attacks

Authors: Nathalie Kirch, Constantin Weisser, Severin Field, Helen Yannakoudakis, Stephen Casper | Published: 2024-11-02 | Updated: 2025-05-14
Disabling Safety Mechanisms of LLMs
Prompt Injection
Exploratory Attack

Privacy-Preserving Federated Learning with Differentially Private Hyperdimensional Computing

Authors: Fardin Jalil Piran, Zhiling Chen, Mohsen Imani, Farhad Imani | Published: 2024-11-02 | Updated: 2025-03-22
Privacy Protection
Framework

Defense Against Prompt Injection Attack by Leveraging Attack Techniques

Authors: Yulin Chen, Haoran Li, Zihao Zheng, Yangqiu Song, Dekai Wu, Bryan Hooi | Published: 2024-11-01 | Updated: 2025-07-22
Indirect Prompt Injection
Prompt Injection
Attack Method

Attention Tracker: Detecting Prompt Injection Attacks in LLMs

Authors: Kuo-Han Hung, Ching-Yun Ko, Ambrish Rawat, I-Hsin Chung, Winston H. Hsu, Pin-Yu Chen | Published: 2024-11-01 | Updated: 2025-04-23
Indirect Prompt Injection
Large Language Model
Attention Mechanism