To promote secure and private artificial intelligence (SPAI), we review
studies on the model security and data privacy of deep neural networks (DNNs).
Model security ensures that a system behaves as intended without being
affected by malicious external influences that can compromise its integrity
and efficiency. Security attacks can be divided by when they occur: an attack
that occurs during training is known as a poisoning attack, and one that
occurs during inference (after training) is termed an evasion attack.
Poisoning attacks compromise the training process by corrupting the training
data with malicious examples, while evasion attacks use adversarial examples
to disrupt the entire classification process.
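As a minimal illustration of an evasion attack, the sketch below applies the
fast gradient sign method (FGSM) of Goodfellow et al. to a toy
logistic-regression model; the fixed weights, input, and perturbation budget
are illustrative assumptions, not values from any surveyed work.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy logistic-regression "model" with fixed, illustrative weights.
w = np.array([1.5, -2.0, 0.5])
b = 0.1

def predict(x):
    return sigmoid(w @ x + b)  # P(y = 1 | x)

def fgsm(x, y, eps):
    """Perturb x in the direction that increases the loss,
    bounded by eps per coordinate (fast gradient sign method)."""
    p = predict(x)
    # For logistic regression with cross-entropy loss, the
    # gradient of the loss w.r.t. the input is (p - y) * w.
    grad_x = (p - y) * w
    return x + eps * np.sign(grad_x)

x = np.array([1.0, 1.0, 1.0])  # clean input with true label y = 1
x_adv = fgsm(x, y=1.0, eps=0.5)

print(f"clean prediction:       {predict(x):.3f}")     # ~0.52 -> class 1
print(f"adversarial prediction: {predict(x_adv):.3f}") # ~0.13 -> class 0
```

On this toy model, a perturbation of at most 0.5 per coordinate flips the
predicted label, which is the hallmark of an adversarial example.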
Defenses proposed against such attacks include techniques to recognize and
remove malicious data, to train a model to be insensitive to such data, and
to mask the model's structure and parameters to render attacks more
challenging to implement.
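One common way to realize the second of these defenses, training a model to
be insensitive to malicious inputs, is adversarial training: each update step
fits the model on adversarially perturbed inputs rather than clean ones. The
sketch below continues the toy logistic-regression setup; the data, learning
rate, and perturbation budget are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Illustrative, linearly separable toy data: label is 1 iff x0 > x1.
X = rng.normal(size=(200, 2))
y = (X[:, 0] > X[:, 1]).astype(float)

w, b = np.zeros(2), 0.0
lr, eps = 0.1, 0.2  # illustrative hyperparameters

for _ in range(500):
    p = sigmoid(X @ w + b)
    # Inner step: perturb every training point with FGSM.
    X_adv = X + eps * np.sign((p - y)[:, None] * w)
    # Outer step: ordinary gradient descent on the perturbed batch.
    p_adv = sigmoid(X_adv @ w + b)
    w -= lr * X_adv.T @ (p_adv - y) / len(y)
    b -= lr * np.mean(p_adv - y)

acc = np.mean((sigmoid(X @ w + b) > 0.5) == (y > 0.5))
print(f"accuracy on clean data after adversarial training: {acc:.2f}")
```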
Furthermore, the privacy of the data involved in model training is also
threatened by attacks such as the model-inversion attack, as well as by
dishonest service providers of AI applications. To maintain data privacy,
several solutions have been proposed that combine existing data-privacy
techniques, including differential privacy and modern cryptography.
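To give a flavor of differential privacy, the sketch below releases the mean
of a bounded dataset under eps-differential privacy using the Laplace
mechanism, calibrating the noise to the query's sensitivity; the dataset and
clipping bounds are illustrative assumptions.

```python
import numpy as np

def dp_mean(data, lower, upper, eps, rng):
    """Release the mean of `data` with eps-differential privacy
    via the Laplace mechanism. Values are clipped to [lower, upper]
    so the query has bounded sensitivity."""
    clipped = np.clip(data, lower, upper)
    # Changing one of n records moves the clipped mean by at most
    # (upper - lower) / n, which is the query's sensitivity.
    sensitivity = (upper - lower) / len(data)
    noise = rng.laplace(loc=0.0, scale=sensitivity / eps)
    return clipped.mean() + noise

rng = np.random.default_rng(0)
salaries = rng.uniform(30_000, 120_000, size=1_000)  # illustrative data

print("true mean:   ", salaries.mean())
print("private mean:", dp_mean(salaries, 30_000, 120_000, eps=0.5, rng=rng))
```

A smaller eps forces larger noise, trading accuracy of the released statistic
for stronger privacy.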
In this paper, we describe the notions behind some of these methods, e.g.,
homomorphic encryption, and review their advantages and the challenges they
face when implemented in deep-learning models.
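As a taste of the cryptographic side, the following sketch implements the
textbook Paillier cryptosystem, which is additively homomorphic: multiplying
two ciphertexts yields an encryption of the sum of their plaintexts, so a
server can aggregate encrypted values without ever seeing them. The tiny
hardcoded primes are insecure and purely illustrative.

```python
import math

# Textbook Paillier with tiny, INSECURE demo primes (illustration only).
p, q = 293, 433
n = p * q
n_sq = n * n
g = n + 1                      # standard choice of generator
lam = math.lcm(p - 1, q - 1)   # lambda = lcm(p - 1, q - 1)

def L(u):
    return (u - 1) // n

mu = pow(L(pow(g, lam, n_sq)), -1, n)  # modular inverse of L(g^lam)

def encrypt(m, r):
    # c = g^m * r^n mod n^2 ; r must be coprime with n.
    return (pow(g, m, n_sq) * pow(r, n, n_sq)) % n_sq

def decrypt(c):
    return (L(pow(c, lam, n_sq)) * mu) % n

c1 = encrypt(17, r=5)
c2 = encrypt(25, r=7)

# Additive homomorphism: multiplying ciphertexts adds plaintexts.
c_sum = (c1 * c2) % n_sq
print(decrypt(c_sum))  # -> 42, computed without decrypting c1 or c2
```

Encrypted deep-learning inference additionally requires multiplication on
ciphertexts, which is where fully (or leveled) homomorphic encryption and its
performance challenges come into play.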