Literature Database

Towards Class-Oriented Poisoning Attacks Against Neural Networks

Authors: Bingyin Zhao, Yingjie Lao | Published: 2020-07-31 | Updated: 2021-10-11
Backdoor Attack
Poisoning
Attack Method

Adversarial Attacks with Multiple Antennas Against Deep Learning-Based Modulation Classifiers

Authors: Brian Kim, Yalin E. Sagduyu, Tugba Erpek, Kemal Davaslioglu, Sennur Ulukus | Published: 2020-07-31
Poisoning
Attack Method
Deep Learning

TEAM: We Need More Powerful Adversarial Examples for DNNs

Authors: Yaguan Qian, Ximin Zhang, Bin Wang, Wei Li, Zhaoquan Gu, Haijiang Wang, Wassim Swaileh | Published: 2020-07-31 | Updated: 2020-08-10
Attack Method
Adversarial Example
Computational Efficiency

Practical Detection of Trojan Neural Networks: Data-Limited and Data-Free Cases

Authors: Ren Wang, Gaoyuan Zhang, Sijia Liu, Pin-Yu Chen, Jinjun Xiong, Meng Wang | Published: 2020-07-31
Backdoor Attack
Poisoning
Attack Method

LDP-FL: Practical Private Aggregation in Federated Learning with Local Differential Privacy

Authors: Lichao Sun, Jianwei Qian, Xun Chen | Published: 2020-07-31 | Updated: 2021-05-21
Watermarking
Client-Side Component
Privacy Evaluation

Membership Leakage in Label-Only Exposures

Authors: Zheng Li, Yang Zhang | Published: 2020-07-30 | Updated: 2021-09-17
Membership Inference
Performance Evaluation
Attack Method

Black-box Adversarial Sample Generation Based on Differential Evolution

Authors: Junyu Lin, Lei Xu, Yingqi Liu, Xiangyu Zhang | Published: 2020-07-30
Attack Method
Deep Learning
Research Methodology

DeepPeep: Exploiting Design Ramifications to Decipher the Architecture of Compact DNNs

Authors: Nandan Kumar Jha, Sparsh Mittal, Binod Kumar, Govardhan Mattela | Published: 2020-07-30
Performance Evaluation
Deep Learning
Computational Efficiency

A General Framework For Detecting Anomalous Inputs to DNN Classifiers

Authors: Jayaram Raghuram, Varun Chandrasekaran, Somesh Jha, Suman Banerjee | Published: 2020-07-29 | Updated: 2021-06-17
Performance Evaluation
Attack Method
Deep Learning

Dynamic Defense Against Byzantine Poisoning Attacks in Federated Learning

Authors: Nuria Rodríguez-Barroso, Eugenio Martínez-Cámara, M. Victoria Luzón, Francisco Herrera | Published: 2020-07-29 | Updated: 2022-02-24
Byzantine Resilience
Poisoning
Defense Mechanism