Adversarial Examples

Generalization Bounds for Adversarial Contrastive Learning

Authors: Xin Zou, Weiwei Liu | Published: 2023-02-21
Watermarking
Model Performance Evaluation
Adversarial Examples

On the Discredibility of Membership Inference Attacks

Authors: Shahbaz Rezaei, Xin Liu | Published: 2022-12-06 | Updated: 2023-04-28
Subpopulation Characteristics
Membership Disclosure Risk
Adversarial Examples

Hijack Vertical Federated Learning Models As One Party

Authors: Pengyu Qiu, Xuhong Zhang, Shouling Ji, Changjiang Li, Yuwen Pu, Xing Yang, Ting Wang | Published: 2022-12-01 | Updated: 2024-02-16
Adversarial Examples
Optimization Problems
Untargeted Poisoning Attacks

Evolution of Neural Tangent Kernels under Benign and Adversarial Training

Authors: Noel Loo, Ramin Hasani, Alexander Amini, Daniela Rus | Published: 2022-10-21
Adversarial Examples
Adversarial Attack Methods
Deep Learning Methods

Scaling Adversarial Training to Large Perturbation Bounds

Authors: Sravanti Addepalli, Samyak Jain, Gaurang Sriramanan, R. Venkatesh Babu | Published: 2022-10-18
Adversarial Examples
Adversarial Attack Methods
Deep Learning Methods

Towards Generating Adversarial Examples on Mixed-type Data

Authors: Han Xu, Menghai Pan, Zhimeng Jiang, Huiyuan Chen, Xiaoting Li, Mahashweta Das, Hao Yang | Published: 2022-10-17
Adversarial Examples
Adversarial Attack Methods
Selection and Evaluation of Optimization Algorithms

Preprocessors Matter! Realistic Decision-Based Attacks on Machine Learning Systems

Authors: Chawin Sitawarin, Florian Tramèr, Nicholas Carlini | Published: 2022-10-07 | Updated: 2023-07-20
DNN IP Protection Methods
Model Extraction Attacks
Adversarial Examples

A Black-Box Attack on Optical Character Recognition Systems

Authors: Samet Bayram, Kenneth Barner | Published: 2022-08-30
Adversarial Examples
Adversarial Attacks
Optimization Methods

Customized Watermarking for Deep Neural Networks via Label Distribution Perturbation

Authors: Tzu-Yun Chien, Chih-Ya Shen | Published: 2022-08-10
Customization Methods
Adversarial Examples
Watermark Durability

Design of secure and robust cognitive system for malware detection

Authors: Sanket Shukla | Published: 2022-08-03
Malware Detection
Robustness
Adversarial Examples