AI Security Portal bot

CantorNet: A Sandbox for Testing Geometrical and Topological Complexity Measures

Authors: Michal Lewandowski, Hamid Eghbalzadeh, Bernhard A. Moser | Published: 2024-11-29 | Updated: 2025-01-28
Framework

Immune: Improving Safety Against Jailbreaks in Multi-modal LLMs via Inference-Time Alignment

Authors: Soumya Suvra Ghosal, Souradip Chakraborty, Vaibhav Singh, Tianrui Guan, Mengdi Wang, Ahmad Beirami, Furong Huang, Alvaro Velasquez, Dinesh Manocha, Amrit Singh Bedi | Published: 2024-11-27 | Updated: 2025-03-20
Prompt Injection
Safety Alignment
Adversarial Attack

Evaluating and Improving the Robustness of Security Attack Detectors Generated by LLMs

Authors: Samuele Pasini, Jinhan Kim, Tommaso Aiello, Rocio Cabrera Lozoya, Antonino Sabetta, Paolo Tonella | Published: 2024-11-27 | Updated: 2025-09-17
RAG
Poisoning Attack on RAG
Evaluation Method

SoK: Decentralized AI (DeAI)

Authors: Zhipeng Wang, Rui Sun, Elizabeth Lui, Vatsal Shah, Xihan Xiong, Jiahao Sun, Davide Crapis, William Knottenbelt | Published: 2024-11-26 | Updated: 2025-04-16
Blockchain Integration
Distributed Learning
Watermark Design

CleanVul: Automatic Function-Level Vulnerability Detection in Code Commits Using LLM Heuristics

Authors: Yikun Li, Ting Zhang, Ratnadira Widyasari, Yan Naing Tun, Huu Hung Nguyen, Tan Bui, Ivana Clairine Irsan, Yiran Cheng, Xiang Lan, Han Wei Ang, Frank Liauw, Martin Weyssow, Hong Jin Kang, Eng Lieh Ouh, Lwin Khin Shar, David Lo | Published: 2024-11-26 | Updated: 2025-04-14
LLM Performance Evaluation
Code Change Analysis
Vulnerability Management

ThreatModeling-LLM: Automating Threat Modeling using Large Language Models for Banking System

Authors: Tingmin Wu, Shuiqiao Yang, Shigang Liu, David Nguyen, Seung Jang, Alsharif Abuadbba | Published: 2024-11-26 | Updated: 2025-05-14
Bias Detection in AI Output
Prompt Leaking
Threat Modeling Automation

CS-Eval: A Comprehensive Large Language Model Benchmark for CyberSecurity

Authors: Zhengmin Yu, Jiutian Zeng, Siyi Chen, Wenhan Xu, Dandan Xu, Xiangyu Liu, Zonghao Ying, Nan Wang, Yuan Zhang, Min Yang | Published: 2024-11-25 | Updated: 2025-01-17
LLM Performance Evaluation
Cybersecurity

“Moralized” Multi-Step Jailbreak Prompts: Black-Box Testing of Guardrails in Large Language Models for Verbal Attacks

Authors: Libo Wang | Published: 2024-11-23 | Updated: 2025-03-20
Prompt Injection
Large Language Model

Indiscriminate Disruption of Conditional Inference on Multivariate Gaussians

Authors: William N. Caballero, Matthew LaRosa, Alexander Fisher, Vahid Tarokh | Published: 2024-11-21
Attack Method
Optimization Problem

Attribute Inference Attacks for Federated Regression Tasks

Authors: Francesco Diana, Othmane Marfoq, Chuan Xu, Giovanni Neglia, Frédéric Giroire, Eoin Thomas | Published: 2024-11-19 | Updated: 2025-04-16
Privacy Enhancing Protocol
Label Inference Attack
Federated Learning