Algorithm Design

Approximate Data Deletion in Generative Models

Authors: Zhifeng Kong, Scott Alfeld | Published: 2022-06-29
Algorithm Design
Data Leakage
Hypothesis Testing

Matryoshka: Stealing Functionality of Private ML Data by Hiding Models in Model

Authors: Xudong Pan, Yifan Yan, Shengyao Zhang, Mi Zhang, Min Yang | Published: 2022-06-29
DNN IP Protection Methods
Algorithm Design
Membership Inference

A Deep Learning Approach to Create DNS Amplification Attacks

Authors: Jared Mathews, Prosenjit Chatterjee, Shankar Banik, Cory Nance | Published: 2022-06-29
Algorithm Design
Backdoor Attack
Adversarial Attack Detection

How to Steer Your Adversary: Targeted and Efficient Model Stealing Defenses with Gradient Redirection

Authors: Mantas Mazeika, Bo Li, David Forsyth | Published: 2022-06-28
Algorithm Design
Adversarial Examples
Computational Efficiency

Parallel Instance Filtering for Malware Detection

Authors: Martin Jureček, Olha Jurečková | Published: 2022-06-28
Algorithm Design
Computational Efficiency
Static Analysis

Multifamily Malware Models

Authors: Samanvitha Basole, Fabio Di Troia, Mark Stamp | Published: 2022-06-27
Algorithm Design
Malware Propagation Methods
Evaluation Methods

Adversarially Robust PAC Learnability of Real-Valued Functions

Authors: Idan Attias, Steve Hanneke | Published: 2022-06-26 | Updated: 2024-05-05
Algorithm Design
Sampling Methods
Training Improvements

Defense against adversarial attacks on deep convolutional neural networks through nonlocal denoising

Authors: Sandhya Aneja, Nagender Aneja, Pg Emeroylariffion Abas, Abdul Ghani Naim | Published: 2022-06-25
Algorithm Design
Training Improvements
Adversarial Attack Methods

Using Autoencoders on Differentially Private Federated Learning GANs

Authors: Gregor Schram, Rui Wang, Kaitai Liang | Published: 2022-06-24
Algorithm Design
Training Improvements
Challenges of Generative Models

Adversarial Robustness of Deep Neural Networks: A Survey from a Formal Verification Perspective

Authors: Mark Huasong Meng, Guangdong Bai, Sin Gee Teo, Zhe Hou, Yan Xiao, Yun Lin, Jin Song Dong | Published: 2022-06-24 | Updated: 2022-10-11
Algorithm Design
Formal Verification
Adversarial Examples