Abstract
Machine learning and deep learning models are potential vectors for various
attack scenarios. For example, previous research has shown that malware can be
hidden in deep learning models. Hiding information in a learning model can be
viewed as a form of steganography. In this research, we consider the general
question of the steganographic capacity of learning models. Specifically, for a
wide range of models, we determine the number of low-order bits of the trained
parameters that can be overwritten, without adversely affecting model
performance. For each model considered, we graph the accuracy as a function of
the number of low-order bits that have been overwritten, and for selected
models, we also analyze the steganographic capacity of individual layers. The
models that we test include the classic machine learning techniques of Linear
Regression (LR) and Support Vector Machine (SVM); the popular general deep
learning models of Multilayer Perceptron (MLP) and Convolutional Neural Network
(CNN); the highly successful Recurrent Neural Network (RNN) architecture of
Long Short-Term Memory (LSTM); the pre-trained transfer learning models
VGG16, DenseNet121, InceptionV3, and Xception; and, finally, an Auxiliary
Classifier Generative Adversarial Network (ACGAN). In all cases, we find that a
majority of the bits of each trained parameter can be overwritten before the
accuracy degrades. Of the models tested, the steganographic capacity ranges
from 7.04 KB for our LR experiments, to 44.74 MB for InceptionV3. We discuss
the implications of our results and consider possible avenues for further
research.
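The core measurement described above — clearing or replacing the n low-order bits of each trained float32 parameter, then re-evaluating accuracy — can be sketched as follows. This is a minimal illustration using NumPy bit-level views, not the authors' actual code; the function name and payload format are assumptions for the example.

```python
import numpy as np

def overwrite_low_order_bits(weights, n_bits, payload=None):
    """Overwrite the n lowest-order bits of each float32 weight.

    If payload is None, the low-order bits are simply cleared
    (a capacity test); otherwise each entry of payload supplies
    n_bits of hidden data for the corresponding weight.
    """
    flat = weights.astype(np.float32).ravel().copy()
    ints = flat.view(np.uint32)           # reinterpret bits as integers
    mask = np.uint32((1 << n_bits) - 1)   # mask for the n low-order bits
    ints &= ~mask                         # clear the low-order bits
    if payload is not None:
        ints |= payload.astype(np.uint32) & mask  # embed hidden bits
    return ints.view(np.float32).reshape(weights.shape)
```

After embedding, the hidden payload is recovered by viewing the modified weights as integers and masking off the same n low-order bits; sweeping n from 0 upward and re-scoring the model yields the accuracy-versus-bits curves described in the abstract.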