AI Security Portal Bot

Can Federated Learning Safeguard Private Data in LLM Training? Vulnerabilities, Attacks, and Defense Evaluation

Authors: Wenkai Guo, Xuefeng Liu, Haolin Wang, Jianwei Niu, Shaojie Tang, Jing Yuan | Published: 2025-09-25
Privacy Protection Method
Prompt Injection
Poisoning

A Framework for Rapidly Developing and Deploying Protection Against Large Language Model Attacks

Authors: Adam Swanda, Amy Chang, Alexander Chen, Fraser Burch, Paul Kassianik, Konstantin Berlin | Published: 2025-09-25
Indirect Prompt Injection
Security Metric
Prompt Injection

RAG Security and Privacy: Formalizing the Threat Model and Attack Surface

Authors: Atousa Arzanipour, Rouzbeh Behnia, Reza Ebrahimi, Kaushik Dutta | Published: 2025-09-24
RAG
Poisoning attack on RAG
Privacy Protection Method

Investigating Security Implications of Automatically Generated Code on the Software Supply Chain

Authors: Xiaofan Li, Xing Gao | Published: 2025-09-24
Alignment
Indirect Prompt Injection
Vulnerability Research

STAF: Leveraging LLMs for Automated Attack Tree-Based Security Test Generation

Authors: Tanmay Khule, Stefan Marksteiner, Jose Alguindigue, Hannes Fuchs, Sebastian Fischmeister, Apurva Narayan | Published: 2025-09-24
Security Verification Method
Test Case Generation
Model DoS

CyberSOCEval: Benchmarking LLMs Capabilities for Malware Analysis and Threat Intelligence Reasoning

Authors: Lauren Deason, Adam Bali, Ciprian Bejean, Diana Bolocan, James Crnkovich, Ioana Croitoru, Krishna Durai, Chase Midler, Calin Miron, David Molnar, Brad Moon, Bruno Ostarcevic, Alberto Peltea, Matt Rosenberg, Catalin Sandu, Arthur Saputkin, Sagar Shah, Daniel Stan, Ernest Szocs, Shengye Wan, Spencer Whitman, Sven Krasser, Joshua Saxe | Published: 2025-09-24
Security Metric
Dataset for Malware Classification
Information Leakage Analysis

Adversarial Defense in Cybersecurity: A Systematic Review of GANs for Threat Detection and Mitigation

Authors: Tharcisse Ndayipfukamiye, Jianguo Ding, Doreen Sebastian Sarwatt, Adamu Gaston Philipo, Huansheng Ning | Published: 2025-09-24 | Updated: 2025-09-30
Prompt Injection
Certified Robustness
Defense Mechanism

bi-GRPO: Bidirectional Optimization for Jailbreak Backdoor Injection on LLMs

Authors: Wence Ji, Jiancan Wu, Aiying Li, Shuyi Zhang, Junkang Wu, An Zhang, Xiang Wang, Xiangnan He | Published: 2025-09-24
Disabling Safety Mechanisms of LLM
Prompt Injection
Generative Model

Unmasking Fake Careers: Detecting Machine-Generated Career Trajectories via Multi-layer Heterogeneous Graphs

Authors: Michiharu Yamashita, Thanh Tran, Delvin Ce Zhang, Dongwon Lee | Published: 2025-09-24
Career Data Generation
Structural Pattern Detection
Generative Model Characteristics

Defending against Stegomalware in Deep Neural Networks with Permutation Symmetry

Authors: Birk Torpmann-Hagen, Michael A. Riegler, Pål Halvorsen, Dag Johansen | Published: 2025-09-23 | Updated: 2025-10-15
Security Analysis Method
Certified Robustness
Information Hiding Techniques