AI Security Portal Bot

$(ε, δ)$-Differentially Private Partial Least Squares Regression

Authors: Ramin Nikzad-Langerodi, Mohit Kumar, Du Nguyen Duy, Mahtab Alghasi | Published: 2024-12-12
Privacy Protection

Protecting Confidentiality, Privacy and Integrity in Collaborative Learning

Authors: Dong Chen, Alice Dethise, Istemi Ekin Akkus, Ivica Rimac, Klaus Satzke, Antti Koskela, Marco Canini, Wei Wang, Ruichuan Chen | Published: 2024-12-11 | Updated: 2025-04-17
Privacy Protection Framework
Differential Privacy
Adversarial Learning

GLL: A Differentiable Graph Learning Layer for Neural Networks

Authors: Jason Brown, Bohan Chen, Harris Hardiman-Mostow, Jeff Calder, Andrea L. Bertozzi | Published: 2024-12-11
Poisoning
Adversarial Training

Heuristic-Induced Multimodal Risk Distribution Jailbreak Attack for Multimodal Large Language Models

Authors: Ma Teng, Jia Xiaojun, Duan Ranjie, Li Xinfeng, Huang Yihao, Chu Zhixuan, Liu Yang, Ren Wenqi | Published: 2024-12-08 | Updated: 2025-01-03
Content Moderation
Prompt Injection
Attack Method

ChatNVD: Advancing Cybersecurity Vulnerability Assessment with Large Language Models

Authors: Shivansh Chopra, Hussain Ahmad, Diksha Goel, Claudia Szabo | Published: 2024-12-06 | Updated: 2025-05-20
Text Generation Method
Prompt Injection
Computational Efficiency

On the Lack of Robustness of Binary Function Similarity Systems

Authors: Gianluca Capozzi, Tong Tang, Jie Wan, Ziqi Yang, Daniele Cono D'Elia, Giuseppe Antonio Di Luna, Lorenzo Cavallaro, Leonardo Querzoni | Published: 2024-12-05 | Updated: 2025-05-22
Binary Analysis
Adversarial Learning

DP-2Stage: Adapting Language Models as Differentially Private Tabular Data Generators

Authors: Tejumade Afonja, Hui-Po Wang, Raouf Kerkouche, Mario Fritz | Published: 2024-12-03 | Updated: 2025-04-29
Privacy Violation
Synthetic Data Generation
Differential Privacy

Intermediate Outputs Are More Sensitive Than You Think

Authors: Tao Huang, Qingyu Huang, Jiayang Meng | Published: 2024-12-01
Privacy Protection
Membership Inference

VLSBench: Unveiling Visual Leakage in Multimodal Safety

Authors: Xuhao Hu, Dongrui Liu, Hao Li, Xuanjing Huang, Jing Shao | Published: 2024-11-29 | Updated: 2025-01-17
Prompt Injection
Safety Alignment

LUMIA: Linear probing for Unimodal and MultiModal Membership Inference Attacks leveraging internal LLM states

Authors: Luis Ibanez-Lissen, Lorena Gonzalez-Manzano, Jose Maria de Fuentes, Nicolas Anciaux, Joaquin Garcia-Alfaro | Published: 2024-11-29 | Updated: 2025-01-10
LLM Performance Evaluation
Membership Inference