Convergence Properties

Non-Asymptotic Bounds for Adversarial Excess Risk under Misspecified Models

Authors: Changyu Liu, Yuling Jiao, Junhui Wang, Jian Huang | Published: 2023-09-02
Convergence Properties
Loss Terms
Adversarial Attacks

Large-Scale Public Data Improves Differentially Private Image Generation Quality

Authors: Ruihan Wu, Chuan Guo, Kamalika Chaudhuri | Published: 2023-08-04
Data Generation
Privacy Protection Methods
Convergence Properties

On Neural Network approximation of ideal adversarial attack and convergence of adversarial training

Authors: Rajdeep Haldar, Qifan Song | Published: 2023-07-30
Convergence Properties
Adversarial Attacks
Optimization Methods

Probing the Transition to Dataset-Level Privacy in ML Models Using an Output-Specific and Data-Resolved Privacy Profile

Authors: Tyler LeBlond, Joseph Munoz, Fred Lu, Maya Fuchs, Elliott Zaresky-Williams, Edward Raff, Brian Testa | Published: 2023-06-27
Privacy Evaluation
Convergence Guarantees
Convergence Properties

Byzantine-Robust Clustered Federated Learning

Authors: Zhixu Tao, Kun Yang, Sanjeev R. Kulkarni | Published: 2023-06-01
Byzantine Agreement Mechanisms
Convergence Properties
Loss Terms

Improved Privacy-Preserving PCA Using Optimized Homomorphic Matrix Multiplication

Authors: Xirong Ma | Published: 2023-05-27 | Updated: 2023-08-17
Privacy Protection Methods
Convergence Properties
Encryption Methods

On the Optimal Batch Size for Byzantine-Robust Distributed Learning

Authors: Yi-Rui Yang, Chang-Wei Shi, Wu-Jun Li | Published: 2023-05-23
Byzantine Agreement Mechanisms
Convergence Properties
Machine Learning Methods

Protecting Federated Learning from Extreme Model Poisoning Attacks via Multidimensional Time Series Anomaly Detection

Authors: Edoardo Gabrielli, Dimitri Belli, Zoe Matrullo, Vittorio Miori, Gabriele Tolomei | Published: 2023-03-29 | Updated: 2024-12-02
Data Contamination Detection
Poisoning
Convergence Properties

How many dimensions are required to find an adversarial example?

Authors: Charles Godfrey, Henry Kvinge, Elise Bishoff, Myles Mckay, Davis Brown, Tim Doster, Eleanor Byler | Published: 2023-03-24 | Updated: 2023-04-11
Convergence Properties
Adversarial Examples
Machine Learning Techniques

Score Attack: A Lower Bound Technique for Optimal Differentially Private Learning

Authors: T. Tony Cai, Yichen Wang, Linjun Zhang | Published: 2023-03-13
Privacy Protection Techniques
Risk Assessment Methods
Convergence Properties