AI Security Portal bot

Treatment of Statistical Estimation Problems in Randomized Smoothing for Adversarial Robustness

Authors: Vaclav Voracek | Published: 2024-06-25 | Updated: 2025-01-20
Trust Evaluation Module
Evaluation Method
Watermark Evaluation
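
As background for this entry: in the standard Cohen et al.-style randomized smoothing setup, the statistical estimation problem is to bound, from Monte Carlo samples, the probability p_A that the base classifier predicts the top class under Gaussian noise; a one-sided lower confidence bound on p_A then gives the certified L2 radius sigma * Phi^{-1}(p_A). The sketch below illustrates only that standard estimation step; base_classifier, x, and the parameter defaults are illustrative assumptions, not details taken from this paper.

    import numpy as np
    from scipy.stats import beta, norm

    def certify(base_classifier, x, sigma=0.25, n=1000, alpha=0.001):
        """Monte Carlo certification step of Gaussian randomized smoothing."""
        # Draw n Gaussian-perturbed copies of x and collect the base
        # classifier's predictions (integer class indices).
        noise = sigma * np.random.randn(n, *np.shape(x))
        votes = base_classifier(np.asarray(x)[None, ...] + noise)
        counts = np.bincount(votes)
        top = int(np.argmax(counts))
        k = int(counts[top])

        # One-sided Clopper-Pearson lower confidence bound on p_A, the
        # probability that the base classifier returns `top` under noise.
        p_lower = beta.ppf(alpha, k, n - k + 1)

        if p_lower <= 0.5:
            return top, 0.0                    # abstain: no certificate at this alpha
        return top, sigma * norm.ppf(p_lower)  # certified L2 radius around x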

The Effect of Similarity Measures on Accurate Stability Estimates for Local Surrogate Models in Text-based Explainable AI

Authors: Christopher Burger, Charles Walter, Thai Le | Published: 2024-06-22 | Updated: 2025-01-17
Adversarial Examples
Evaluation Method
Similarity Measurement

Mind the Privacy Unit! User-Level Differential Privacy for Language Model Fine-Tuning

Authors: Lynn Chua, Badih Ghazi, Yangsibo Huang, Pritish Kamath, Ravi Kumar, Daogao Liu, Pasin Manurangsi, Amer Sinha, Chiyuan Zhang | Published: 2024-06-20 | Updated: 2024-08-16
Watermarking
Data Selection Strategy
Privacy Protection Method

Let the Noise Speak: Harnessing Noise for a Unified Defense Against Adversarial and Backdoor Attacks

Authors: Md Hasan Shahriar, Ning Wang, Naren Ramakrishnan, Y. Thomas Hou, Wenjing Lou | Published: 2024-06-18 | Updated: 2025-04-14
Model Robustness Guarantee
Reconstruction Attack
Adversarial Attack Detection

Data Plagiarism Index: Characterizing the Privacy Risk of Data-Copying in Tabular Generative Models

Authors: Joshua Ward, Chi-Hua Wang, Guang Cheng | Published: 2024-06-18
Data Privacy Assessment
Privacy Protection Method
Membership Inference

Can Go AIs be adversarially robust?

Authors: Tom Tseng, Euan McLean, Kellin Pelrine, Tony T. Wang, Adam Gleave | Published: 2024-06-18 | Updated: 2025-01-14
Model Performance Evaluation
Attack Method
Watermark Evaluation

UIFV: Data Reconstruction Attack in Vertical Federated Learning

Authors: Jirui Yang, Peng Chen, Zhihui Lu, Qiang Duan, Yubing Bao | Published: 2024-06-18 | Updated: 2025-01-14
Data Privacy Assessment
Framework
Attack Method

Defending Against Social Engineering Attacks in the Age of LLMs

Authors: Lin Ai, Tharindu Kumarage, Amrita Bhattacharjee, Zizhou Liu, Zheng Hui, Michael Davinroy, James Cook, Laura Cassani, Kirill Trapeznikov, Matthias Kirchner, Arslan Basharat, Anthony Hoogs, Joshua Garland, Huan Liu, Julia Hirschberg | Published: 2024-06-18 | Updated: 2024-10-11
Indirect Prompt Injection
Cyber Threat
Social Engineering Attack

CleanGen: Mitigating Backdoor Attacks for Generation Tasks in Large Language Models

Authors: Yuetai Li, Zhangchen Xu, Fengqing Jiang, Luyao Niu, Dinuka Sahabandu, Bhaskar Ramasubramanian, Radha Poovendran | Published: 2024-06-18 | Updated: 2025-03-27
LLM Security
Backdoor Attack
Prompt Injection

FullCert: Deterministic End-to-End Certification for Training and Inference of Neural Networks

Authors: Tobias Lorenz, Marta Kwiatkowska, Mario Fritz | Published: 2024-06-17 | Updated: 2024-09-11
Security Guarantee
Convergence Analysis
Optimization Problem