Robustness

Potential adversarial samples for white-box attacks

Authors: Amir Nazemi, Paul Fieguth | Published: 2019-12-13
Robustness
Adversarial Spectral Attack Detection
Deep Learning Methods

Training Provably Robust Models by Polyhedral Envelope Regularization

Authors: Chen Liu, Mathieu Salzmann, Sabine Süsstrunk | Published: 2019-12-10 | Updated: 2021-09-20
Robustness
Optimization Problems
Deep Learning Methods

Hardening Random Forest Cyber Detectors Against Adversarial Attacks

Authors: Giovanni Apruzzese, Mauro Andreolini, Michele Colajanni, Mirco Marchetti | Published: 2019-12-09
Data Generation
Robustness
Adversarial Examples

An Empirical Study on the Relation between Network Interpretability and Adversarial Robustness

Authors: Adam Noack, Isaac Ahern, Dejing Dou, Boyang Li | Published: 2019-12-07 | Updated: 2020-12-04
Robustness
Loss Functions
Deep Learning Methods

Principal Component Properties of Adversarial Samples

Authors: Malhar Jere, Sandro Herbig, Christine Lind, Farinaz Koushanfar | Published: 2019-12-07
Robustness
Adversarial Examples
Adversarial Spectral Attack Detection

Data Poisoning Attacks on Neighborhood-based Recommender Systems

Authors: Liang Chen, Yangjun Xu, Fenfang Xie, Min Huang, Zibin Zheng | Published: 2019-12-01
Poisoning
Robustness
Optimization Problems

Adversarial Attack and Defense on Graph Data: A Survey

Authors: Lichao Sun, Yingtong Dou, Carl Yang, Ji Wang, Yixin Liu, Philip S. Yu, Lifang He, Bo Li | Published: 2018-12-26 | Updated: 2022-10-06
Poisoning
Robustness
Adversarial Examples

PPD: Permutation Phase Defense Against Adversarial Examples in Deep Learning

Authors: Mehdi Jafarnia-Jahromi, Tasmin Chowdhury, Hsin-Tai Wu, Sayandev Mukherjee | Published: 2018-12-25 | Updated: 2020-01-04
Robustness
Adversarial Example Detection
Adversarial Training

Guessing Smart: Biased Sampling for Efficient Black-Box Adversarial Attacks

Authors: Thomas Brunner, Frederik Diehl, Michael Truong Le, Alois Knoll | Published: 2018-12-24 | Updated: 2019-05-05
Model Robustness Guarantees
Robustness
Adversarial Example Detection

Increasing the adversarial robustness and explainability of capsule networks with $γ$-capsules

Authors: David Peer, Sebastian Stabinger, Antonio Rodriguez-Sanchez | Published: 2018-12-23 | Updated: 2019-12-05
Multiclass Classification
Robustness
Deep Learning