Abstract
The recent evolution of deep learning has promoted the application of
machine learning (ML) to a wide variety of systems. However, some ML systems,
such as autonomous vehicles, can cause critical damage when they misclassify.
Moreover, ML systems face ML-specific attacks, called adversarial attacks, that
exploit the characteristics of ML. One such attack is the evasion attack, which
uses minutely perturbed inputs called "adversarial examples" to intentionally
cause a classifier to misclassify. It is therefore necessary to analyze the risk
of ML-specific attacks when introducing ML-based systems. In
this study, we propose a quantitative evaluation method for analyzing the risk
of evasion attacks using attack trees. The proposed method consists of an
extension of the conventional attack tree for analyzing evasion attacks and a
systematic procedure for constructing the extended tree. In the extension, we
introduce ML attack nodes and conventional attack nodes to represent the
various characteristics of evasion attacks. The construction procedure consists
of three steps: (1) organizing information about attack methods reported in the
literature into a matrix, (2) identifying evasion attack scenarios from the
methods in the matrix, and (3) constructing the attack tree from the identified
scenarios using a pattern. Finally, we conducted experiments on
three ML image recognition systems to demonstrate the versatility and
effectiveness of our proposed method.
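
The abstract characterizes an evasion attack as the use of minutely perturbed inputs ("adversarial examples") that cause a classifier to misclassify. As a concrete illustration only, the sketch below uses the well-known fast gradient sign method (FGSM), a standard evasion attack that is not necessarily among the methods the paper organizes; the model, labels, and epsilon value are placeholders.

```python
# Minimal FGSM-style evasion attack sketch: a small, bounded perturbation of the
# input is chosen to increase the classifier's loss, often flipping its prediction.
import torch
import torch.nn.functional as F

def fgsm_adversarial_example(model, x, true_label, epsilon=0.03):
    """Return x perturbed in the direction that increases the classifier's loss."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), true_label)
    loss.backward()
    # Each pixel moves by at most epsilon, so the change stays visually minute.
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()
```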
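The abstract also describes extending the conventional attack tree with ML attack nodes and conventional attack nodes and evaluating risk quantitatively, but it does not spell out the formalism. The sketch below is only an assumption of how such an extended tree could be represented: leaf nodes carry illustrative success probabilities, and AND/OR gates combine them with a common attack-tree convention (product for AND, complement rule for OR), which is not necessarily the paper's metric.

```python
# Hypothetical representation of an extended attack tree with ML-specific and
# conventional attack steps; node names and probabilities are illustrative only.
from dataclasses import dataclass, field
from enum import Enum
from typing import List

class NodeKind(Enum):
    ML_ATTACK = "ml"        # ML-specific step, e.g. crafting an adversarial example
    CONVENTIONAL = "conv"   # conventional step, e.g. obtaining model access
    AND = "and"
    OR = "or"

@dataclass
class AttackNode:
    name: str
    kind: NodeKind
    probability: float = 0.0                      # success probability for leaf nodes
    children: List["AttackNode"] = field(default_factory=list)

    def evaluate(self) -> float:
        if self.kind in (NodeKind.ML_ATTACK, NodeKind.CONVENTIONAL):
            return self.probability
        child_vals = [c.evaluate() for c in self.children]
        if self.kind is NodeKind.AND:             # all sub-attacks must succeed
            p = 1.0
            for v in child_vals:
                p *= v
            return p
        p_fail = 1.0                              # OR: succeeds if any sub-attack succeeds
        for v in child_vals:
            p_fail *= 1.0 - v
        return 1.0 - p_fail

# Illustrative scenario: evade an image classifier either by a white-box attack
# (model access AND adversarial example) or by a black-box transfer attack.
root = AttackNode("evade classifier", NodeKind.OR, children=[
    AttackNode("white-box attack", NodeKind.AND, children=[
        AttackNode("obtain model parameters", NodeKind.CONVENTIONAL, 0.2),
        AttackNode("craft adversarial example", NodeKind.ML_ATTACK, 0.9),
    ]),
    AttackNode("black-box transfer attack", NodeKind.ML_ATTACK, 0.4),
])
print(round(root.evaluate(), 3))  # combined risk estimate under these assumptions
```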