Backdoor Attack

Design of intentional backdoors in sequential models

Authors: Zhaoyuan Yang, Naresh Iyer, Johan Reimann, Nurali Virani | Published: 2019-02-26
Backdoor Attack
Reinforcement Learning Attack
Adversarial Learning

Adversarial attacks hidden in plain sight

Authors: Jan Philip Göpfert, André Artelt, Heiko Wersing, Barbara Hammer | Published: 2019-02-25 | Updated: 2020-04-26
Backdoor Attack
Robustness Evaluation
Adversarial Learning

Adversarial Reinforcement Learning under Partial Observability in Autonomous Computer Network Defence

Authors: Yi Han, David Hubczenko, Paul Montague, Olivier De Vel, Tamas Abraham, Benjamin I. P. Rubinstein, Christopher Leckie, Tansu Alpcan, Sarah Erfani | Published: 2019-02-25 | Updated: 2020-08-17
Backdoor Attack
Reinforcement Learning Attack
Adversarial Learning

Robust Audio Adversarial Example for a Physical Attack

Authors: Hiromu Yakura, Jun Sakuma | Published: 2018-10-28 | Updated: 2019-08-19
Backdoor Attack
Signal Processing Techniques
Adversarial Example

Have You Stolen My Model? Evasion Attacks Against Deep Neural Network Watermarking Techniques

Authors: Dorjan Hitaj, Luigi V. Mancini | Published: 2018-09-03
Backdoor Attack
Detection of Model Extraction Attacks
Transparency and Verification

Backdoor Embedding in Convolutional Neural Network Models via Invisible Perturbation

Authors: Cong Liao, Haoti Zhong, Anna Squicciarini, Sencun Zhu, David Miller | Published: 2018-08-30
Backdoor Attack
Backdoor Attack Mitigation
Robustness Analysis

Adversarial Robustness Toolbox v1.0.0

Authors: Maria-Irina Nicolae, Mathieu Sinn, Minh Ngoc Tran, Beat Buesser, Ambrish Rawat, Martin Wistuba, Valentina Zantedeschi, Nathalie Baracaldo, Bryant Chen, Heiko Ludwig, Ian M. Molloy, Ben Edwards | Published: 2018-07-03 | Updated: 2019-11-15
Backdoor Attack
Attack Evaluation
Adversarial Learning
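
The entry above refers to IBM's Adversarial Robustness Toolbox (ART), a Python library that wraps common ML frameworks behind a uniform estimator interface for running attacks and defenses. The snippet below is a minimal sketch of that interface, assuming ART v1.x and scikit-learn; the dataset, model, and eps value are illustrative placeholders, not taken from the paper.

```python
# Minimal sketch: wrap a scikit-learn model with ART and run an evasion attack.
# Assumes adversarial-robustness-toolbox v1.x and scikit-learn are installed;
# the model, dataset, and eps are illustrative choices only.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

from art.estimators.classification import SklearnClassifier
from art.attacks.evasion import FastGradientMethod

X, y = load_iris(return_X_y=True)
model = LogisticRegression(max_iter=1000).fit(X, y)

# ART exposes the fitted model through a common estimator API.
classifier = SklearnClassifier(model=model, clip_values=(X.min(), X.max()))

# Craft adversarial examples with the Fast Gradient Method.
attack = FastGradientMethod(estimator=classifier, eps=0.2)
X_adv = attack.generate(x=X)

clean_acc = np.mean(model.predict(X) == y)
adv_acc = np.mean(model.predict(X_adv) == y)
print(f"accuracy clean: {clean_acc:.2f}, adversarial: {adv_acc:.2f}")
```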

Adversarial Attack on Graph Structured Data

Authors: Hanjun Dai, Hui Li, Tian Tian, Xin Huang, Lin Wang, Jun Zhu, Le Song | Published: 2018-06-06
Graph Representation Learning
Backdoor Attack
Certified Robustness

Poison Frogs! Targeted Clean-Label Poisoning Attacks on Neural Networks

Authors: Ali Shafahi, W. Ronny Huang, Mahyar Najibi, Octavian Suciu, Christoph Studer, Tudor Dumitras, Tom Goldstein | Published: 2018-04-03 | Updated: 2018-11-10
Backdoor Attack
Poisoning
Detection of Poisonous Data

BEBP: A Poisoning Method Against Machine Learning Based IDSs

Authors: Pan Li, Qiang Liu, Wentao Zhao, Dongxu Wang, Siqi Wang | Published: 2018-03-11
Data Generation Method
Backdoor Attack
Detection of Poisonous Data