Literature Database

A Convex Relaxation Barrier to Tight Robustness Verification of Neural Networks

Authors: Hadi Salman, Greg Yang, Huan Zhang, Cho-Jui Hsieh, Pengchuan Zhang | Published: 2019-02-23 | Updated: 2020-01-10
Model Robustness Guarantees
Robustness Evaluation
Adversarial Training

Adversarial Neural Network Inversion via Auxiliary Knowledge Alignment

Authors: Ziqi Yang, Ee-Chien Chang, Zhenkai Liang | Published: 2019-02-22
Model Inversion
Adversarial Attack Methods
Optimization Methods

A Graph-Based Machine Learning Approach for Bot Detection

Authors: Abbas Abou Daya, Mohammad A. Salahuddin, Noura Limam, Raouf Boutaba | Published: 2019-02-22
Graph Construction
Data Preprocessing
Bot Detection Methods

Adversarial Attacks on Graph Neural Networks via Meta Learning

Authors: Daniel Zügner, Stephan Günnemann | Published: 2019-02-22 | Updated: 2024-01-28
Graph Construction
Adversarial Examples
Adversarial Attack Methods

Quantifying Perceptual Distortion of Adversarial Examples

Authors: Matt Jordan, Naren Manoj, Surbhi Goel, Alexandros G. Dimakis | Published: 2019-02-21
Model Robustness Guarantees
Adversarial Training
Adversarial Attack Methods

Wasserstein Adversarial Examples via Projected Sinkhorn Iterations

Authors: Eric Wong, Frank R. Schmidt, J. Zico Kolter | Published: 2019-02-21 | Updated: 2020-01-18
Wasserstein Distance
Model Robustness Guarantees
Adversarial Attack Methods

advertorch v0.1: An Adversarial Robustness Toolbox based on PyTorch

Authors: Gavin Weiguang Ding, Luyu Wang, Xiaomeng Jin | Published: 2019-02-20
Poisoning
Adversarial Training
Research Methodology

There are No Bit Parts for Sign Bits in Black-Box Attacks

Authors: Abdullah Al-Dujaili, Una-May O'Reilly | Published: 2019-02-19 | Updated: 2019-04-03
Model Robustness Guarantees
Adversarial Attack Methods
Optimization Strategies

On Evaluating Adversarial Robustness

Authors: Nicholas Carlini, Anish Athalye, Nicolas Papernot, Wieland Brendel, Jonas Rauber, Dimitris Tsipras, Ian Goodfellow, Aleksander Madry, Alexey Kurakin | Published: 2019-02-18 | Updated: 2019-02-20
Model Robustness Guarantees
Robustness Improvement Methods
Adversarial Attack Methods

Mockingbird: Defending Against Deep-Learning-Based Website Fingerprinting Attacks with Adversarial Traces

Authors: Mohammad Saidur Rahman, Mohsen Imani, Nate Mathews, Matthew Wright | Published: 2019-02-18 | Updated: 2020-10-28
Backdoor Model Detection
Adversarial Examples
Adversarial Attack Methods