Adversarial Attacks

Why Blocking Targeted Adversarial Perturbations Impairs the Ability to Learn

Authors: Ziv Katzir, Yuval Elovici | Published: 2019-07-11
Adversarial Examples
Adversarial Attack
Deep Learning Methods

Detecting and Diagnosing Adversarial Images with Class-Conditional Capsule Reconstructions

Authors: Yao Qin, Nicholas Frosst, Sara Sabour, Colin Raffel, Garrison Cottrell, Geoffrey Hinton | Published: 2019-07-05 | Updated: 2020-02-18
Adversarial Examples
Adversarial Attack
Deep Learning Methods

Adversarial Robustness through Local Linearization

Authors: Chongli Qin, James Martens, Sven Gowal, Dilip Krishnan, Krishnamurthy Dvijotham, Alhussein Fawzi, Soham De, Robert Stanforth, Pushmeet Kohli | Published: 2019-07-04 | Updated: 2019-10-10
Robustness Evaluation
Adversarial Attack
Deep Learning Methods

Minimally distorted Adversarial Examples with a Fast Adaptive Boundary Attack

Authors: Francesco Croce, Matthias Hein | Published: 2019-07-03 | Updated: 2020-07-20
Poisoning
Vulnerability of Adversarial Examples
Adversarial Attack

MimosaNet: An Unrobust Neural Network Preventing Model Stealing

Authors: Kálmán Szentannai, Jalal Al-Afandi, András Horváth | Published: 2019-07-02
DNN IP Protection Methods
Adversarial Attack
Deep Learning Methods

Treant: Training Evasion-Aware Decision Trees

Authors: Stefano Calzavara, Claudio Lucchese, Gabriele Tolomei, Seyum Assefa Abebe, Salvatore Orlando | Published: 2019-07-02 | Updated: 2019-07-03
Adversarial Attack
Optimization Strategy
Machine Learning Framework

Accurate, reliable and fast robustness evaluation

Authors: Wieland Brendel, Jonas Rauber, Matthias Kümmerer, Ivan Ustyuzhaninov, Matthias Bethge | Published: 2019-07-01 | Updated: 2019-12-12
Adversarial Attack
Optimization Strategy
Deep Learning Methods

Comment on “Adv-BNN: Improved Adversarial Defense through Robust Bayesian Neural Network”

Authors: Roland S. Zimmermann | Published: 2019-07-01
Poisoning
Adversarial Attack
Deep Learning Methods

On the Privacy Risks of Model Explanations

Authors: Reza Shokri, Martin Strobel, Yair Zick | Published: 2019-06-29 | Updated: 2021-02-05
Membership Inference
Adversarial Attack
Explanation Methods

Stolen Memories: Leveraging Model Memorization for Calibrated White-Box Membership Inference

Authors: Klas Leino, Matt Fredrikson | Published: 2019-06-27 | Updated: 2020-06-24
Privacy Protection
Membership Inference
Adversarial Attack