Experimental Validation

Absum: Simple Regularization Method for Reducing Structural Sensitivity of Convolutional Neural Networks

Authors: Sekitoshi Kanai, Yasutoshi Ida, Yasuhiro Fujiwara, Masanori Yamada, Shuichi Adachi | Published: 2019-09-19
Experimental Validation
Adversarial Examples
Adversarial Attack

AutoGAN: Robust Classifier Against Adversarial Attacks

Authors: Blerta Lindqvist, Shridatt Sugrim, Rauf Izmailov | Published: 2018-12-08
Model Robustness Guarantees
Robustness Improvement Methods
Experimental Validation

Deep-RBF Networks Revisited: Robust Classification with Rejection

Authors: Pourya Habib Zadeh, Reshad Hosseini, Suvrit Sra | Published: 2018-12-07
Model Robustness Guarantees
Experimental Validation
Adversarial Examples

Bypassing Feature Squeezing by Increasing Adversary Strength

Authors: Yash Sharma, Pin-Yu Chen | Published: 2018-03-27
Experimental Validation
Adversarial Training
Adversarial Attack

Learning from Pseudo-Randomness With an Artificial Neural Network – Does God Play Pseudo-Dice?

Authors: Fenglei Fan, Ge Wang | Published: 2018-01-05
Experimental Validation
Mathematical Analysis
Machine Learning Algorithms

Learning from Mutants: Using Code Mutation to Learn and Monitor Invariants of a Cyber-Physical System

Authors: Yuqi Chen, Christopher M. Poskitt, Jun Sun | Published: 2018-01-03 | Updated: 2018-06-13
Code Generation
Experimental Validation
Machine Learning Algorithms