Literature Database

Poisoning Attacks to Graph-Based Recommender Systems

Authors: Minghong Fang, Guolei Yang, Neil Zhenqiang Gong, Jia Liu | Published: 2018-09-11
Poisoning Attacks on RAG
Poisoning
Adversarial Attack

PUF-AES-PUF: a novel PUF architecture against non-invasive attacks

Authors: Weize Yu, Jia Chen | Published: 2018-09-11
IoT Security
Robustness Improvement Method
Encryption Techniques

Universal Multi-Party Poisoning Attacks

Authors: Saeed Mahloujifar, Mohammad Mahmoody, Ameer Mohammed | Published: 2018-09-10 | Updated: 2021-11-10
Poisoning
Multi-Party Attack
Adversarial Attack

Privacy-Preserving Deep Learning via Weight Transmission

Authors: Le Trieu Phong, Tran Thi Phuong | Published: 2018-09-10 | Updated: 2019-02-12
Model Extraction Attack
Distributed Learning Platform
Differential Privacy

Certified Adversarial Robustness with Additive Noise

Authors: Bai Li, Changyou Chen, Wenlin Wang, Lawrence Carin | Published: 2018-09-10 | Updated: 2019-11-10
Robustness Analysis
Robustness Improvement Method
Adversarial Training

The Curse of Concentration in Robust Learning: Evasion and Poisoning Attacks from Concentration of Measure

Authors: Saeed Mahloujifar, Dimitrios I. Diochnos, Mohammad Mahmoody | Published: 2018-09-09 | Updated: 2018-11-06
Model Robustness Guarantee
Robustness Analysis
Adversarial Transferability

Towards Query Efficient Black-box Attacks: An Input-free Perspective

Authors: Yali Du, Meng Fang, Jinfeng Yi, Jun Cheng, Dacheng Tao | Published: 2018-09-09
Query Generation Method
Trigger Detection
Poisoning

Structure-Preserving Transformation: Generating Diverse and Transferable Adversarial Examples

Authors: Dan Peng, Zizhan Zheng, Xiaofeng Zhang | Published: 2018-09-08 | Updated: 2018-12-22
Model Robustness Guarantee
Adversarial Example Detection
Adversarial Transferability

Detecting Potential Local Adversarial Examples for Human-Interpretable Defense

Authors: Xavier Renard, Thibault Laugel, Marie-Jeanne Lesot, Christophe Marsala, Marcin Detyniecki | Published: 2018-09-07
Model Robustness Guarantee
Adversarial Transferability
Loss of Interpretability

Are adversarial examples inevitable?

Authors: Ali Shafahi, W. Ronny Huang, Christoph Studer, Soheil Feizi, Tom Goldstein | Published: 2018-09-06 | Updated: 2020-02-03
Robustness Analysis
Adversarial Examples
Adversarial Example Detection