Model Robustness Guarantees

Amplifying Machine Learning Attacks Through Strategic Compositions

Authors: Yugeng Liu, Zheng Li, Hai Huang, Michael Backes, Yang Zhang | Published: 2025-06-23
Membership Disclosure Risk
Model Robustness Guarantees
Adversarial Attack

DUMB and DUMBer: Is Adversarial Training Worth It in the Real World?

Authors: Francesco Marchiori, Marco Alecci, Luca Pajola, Mauro Conti | Published: 2025-06-23
Model Architecture
Model Robustness Guarantees
Adversarial Attack Analysis

Unsourced Adversarial CAPTCHA: A Bi-Phase Adversarial CAPTCHA Framework

Authors: Xia Du, Xiaoyuan Liu, Jizhe Zhou, Zheng Lin, Chi-man Pun, Zhe Chen, Wei Ni, Jun Luo | Published: 2025-06-12
Model Robustness Guarantees
Adversarial Training
Adversarial Attack Detection

Adversarial Surrogate Risk Bounds for Binary Classification

Authors: Natalie S. Frank | Published: 2025-06-11
Model Robustness Guarantees
Convergence Analysis
Function-Boundary Pair Formation

Enhancing Adversarial Robustness with Conformal Prediction: A Framework for Guaranteed Model Reliability

Authors: Jie Bao, Chuangyin Dang, Rui Luo, Hanwei Zhang, Zhixin Zhou | Published: 2025-06-09
Model Robustness Guarantees
Robust Optimization
Adversarial Attack Methods

LLM Unlearning Should Be Form-Independent

Authors: Xiaotian Ye, Mengqi Zhang, Shu Wu | Published: 2025-06-09
Training Methods
Model Robustness Guarantees
Non-Semantic Redirection

Adversarially Pretrained Transformers may be Universally Robust In-Context Learners

Authors: Soichiro Kumano, Hiroshi Kera, Toshihiko Yamasaki | Published: 2025-05-20
Model Robustness Guarantees
Relationship Between Robustness and Privacy
Adversarial Training

Quantum Support Vector Regression for Robust Anomaly Detection

Authors: Kilian Tscharke, Maximilian Wendlinger, Sebastian Issel, Pascal Debus | Published: 2025-05-02 | Updated: 2025-05-13
Model Robustness Guarantees
Anomaly Detection Methods
Role of Quantum Machine Learning

A Cryptographic Perspective on Mitigation vs. Detection in Machine Learning

Authors: Greg Gluch, Shafi Goldwasser | Published: 2025-04-28 | Updated: 2025-07-10
Model Robustness Guarantees
Adversarial Attack
Computational Problems

Evaluating the Vulnerability of ML-Based Ethereum Phishing Detectors to Single-Feature Adversarial Perturbations

Authors: Ahod Alghuried, Ali Alkinoon, Abdulaziz Alghamdi, Soohyeon Choi, Manar Mohaisen, David Mohaisen | Published: 2025-04-24
Phishing Attack Detection Rate
Model Robustness Guarantees
Adversarial Sample Detection