The growing interdependence between the cyber and physical worlds generates an enormous amount of data that must be processed and stored efficiently. Computing paradigms are therefore evolving towards machine learning (ML)-based systems because of their ability to process such data efficiently and accurately. Although ML-based solutions address the efficient-computing requirements of big data, they introduce new security vulnerabilities into these systems that cannot be addressed by traditional monitoring-based security measures. This paper therefore first presents a brief overview of the main security threats in machine learning, their respective threat models, and the associated research challenges in developing robust security measures. To illustrate the security vulnerabilities of ML during training, inference, and hardware implementation, we demonstrate several key security threats using LeNet and VGGNet on the MNIST and German Traffic Sign Recognition Benchmark (GTSRB) datasets, respectively. Moreover, based on our security analysis of ML training, we propose an attack that has minimal impact on inference accuracy. Finally, we highlight the associated research challenges in developing security measures and provide a brief overview of the techniques used to mitigate such security threats.
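As a minimal illustration of the class of training-time threat discussed above, the sketch below poisons a small fraction of MNIST labels before training a LeNet-style classifier in PyTorch. The label-flipping routine, poisoning rate, and hyperparameters are illustrative assumptions for this sketch and are not the specific attack proposed in this paper.

# Minimal sketch of a training-time (data-poisoning) threat on a LeNet-style
# MNIST classifier. The poisoning routine and hyperparameters are illustrative
# assumptions, not the attack proposed in this paper.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

class LeNet(nn.Module):
    """LeNet-5-style CNN for 28x28 grayscale inputs."""
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(1, 6, 5, padding=2)
        self.conv2 = nn.Conv2d(6, 16, 5)
        self.fc1 = nn.Linear(16 * 5 * 5, 120)
        self.fc2 = nn.Linear(120, 84)
        self.fc3 = nn.Linear(84, 10)

    def forward(self, x):
        x = F.max_pool2d(F.relu(self.conv1(x)), 2)
        x = F.max_pool2d(F.relu(self.conv2(x)), 2)
        x = x.flatten(1)
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        return self.fc3(x)

def poison_labels(dataset, fraction=0.05, target_class=0, seed=0):
    """Flip the labels of a small fraction of training samples to target_class."""
    g = torch.Generator().manual_seed(seed)
    n = len(dataset.targets)
    idx = torch.randperm(n, generator=g)[: int(fraction * n)]
    dataset.targets[idx] = target_class
    return dataset

def evaluate(model, loader, device):
    """Return clean test accuracy of the (possibly poisoned) model."""
    model.eval()
    correct = 0
    with torch.no_grad():
        for x, y in loader:
            x, y = x.to(device), y.to(device)
            correct += (model(x).argmax(1) == y).sum().item()
    return correct / len(loader.dataset)

if __name__ == "__main__":
    device = "cuda" if torch.cuda.is_available() else "cpu"
    tfm = transforms.ToTensor()
    train_set = datasets.MNIST("data", train=True, download=True, transform=tfm)
    test_set = datasets.MNIST("data", train=False, download=True, transform=tfm)
    train_set = poison_labels(train_set)          # low-rate label flipping
    train_loader = DataLoader(train_set, batch_size=128, shuffle=True)
    test_loader = DataLoader(test_set, batch_size=256)

    model = LeNet().to(device)
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    for epoch in range(2):                        # short training run for illustration
        model.train()
        for x, y in train_loader:
            x, y = x.to(device), y.to(device)
            opt.zero_grad()
            loss = F.cross_entropy(model(x), y)
            loss.backward()
            opt.step()

    # A low poisoning rate typically leaves clean test accuracy nearly unchanged,
    # which is what makes such training-time attacks difficult to detect.
    acc = evaluate(model, test_loader, device)
    print(f"Clean test accuracy after poisoned training: {acc:.4f}")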