DNN IP Protection Methods

Towards Adversarial Purification using Denoising AutoEncoders

Authors: Dvij Kalaria, Aritra Hazra, Partha Pratim Chakrabarti | Published: 2022-08-29
DNN IP Protection Methods
Watermarking
Adaptive Retraining Mechanism

DNNShield: Dynamic Randomized Model Sparsification, A Defense Against Adversarial Machine Learning

Authors: Mohammad Hossein Samavatian, Saikat Majumdar, Kristin Barber, Radu Teodorescu | Published: 2022-07-31
DNN IP Protection Methods
Attack Detection
Adversarial Examples

DarKnight: An Accelerated Framework for Privacy and Integrity Preserving Deep Learning Using Trusted Hardware

Authors: Hanieh Hashemi, Yongqin Wang, Murali Annavaram | Published: 2022-06-30
DNN IP Protection Methods
Security Assurance
Privacy Risk Management

Matryoshka: Stealing Functionality of Private ML Data by Hiding Models in Model

Authors: Xudong Pan, Yifan Yan, Shengyao Zhang, Mi Zhang, Min Yang | Published: 2022-06-29
DNN IP Protection Methods
Algorithm Design
Membership Inference

ROSE: A RObust and SEcure DNN Watermarking

Authors: Kassem Kallas, Teddy Furon | Published: 2022-06-22
DNN IP Protection Methods
Adversarial Training
Evaluation Methods

Deep Quaternion Features for Privacy Protection

Authors: Hao Zhang, Yiting Chen, Liyao Xiang, Haotian Ma, Jie Shi, Quanshi Zhang | Published: 2020-03-18 | Updated: 2020-06-21
DNN IP Protection Methods
Privacy Protection Methods
Quantum Cryptography

Entangled Watermarks as a Defense against Model Extraction

Authors: Hengrui Jia, Christopher A. Choquette-Choo, Varun Chandrasekaran, Nicolas Papernot | Published: 2020-02-27 | Updated: 2021-02-19
DNN IP Protection Methods
Robustness Evaluation
Defense Methods

Stealing Knowledge from Protected Deep Neural Networks Using Composite Unlabeled Data

Authors: Itay Mosafi, Eli David, Nathan S. Netanyahu | Published: 2019-12-09
DNN IP Protection Methods
Adversarial Examples
Deep Learning Methods

MimosaNet: An Unrobust Neural Network Preventing Model Stealing

Authors: Kálmán Szentannai, Jalal Al-Afandi, András Horváth | Published: 2019-07-02
DNN IP Protection Methods
Adversarial Attacks
Deep Learning Methods

On the Robustness of the Backdoor-based Watermarking in Deep Neural Networks

Authors: Masoumeh Shafieinejad, Jiaqi Wang, Nils Lukas, Xinda Li, Florian Kerschbaum | Published: 2019-06-18 | Updated: 2019-11-26
DNN IP Protection Methods
Backdoor Attacks
Attack Methods