Abstract
The existence of adversarial attacks (or adversarial examples) raises serious
concerns about the safety of machine learning (ML) models. In many
safety-critical ML tasks, such as financial forecasting, fraud detection,
and anomaly detection, the data samples are typically mixed-type, containing
both numerical and categorical features. However, generating adversarial
examples for mixed-type data has seldom been studied. In this paper, we
propose a novel attack algorithm, M-Attack, which can effectively generate
adversarial examples for mixed-type data. With M-Attack, an attacker can
mislead the target classification model's prediction by only slightly
perturbing both the numerical and categorical features of a given sample.
More importantly, by adding carefully designed regularization terms, the
generated adversarial examples can evade potential detection models, making
the attack truly insidious. Through extensive empirical studies, we validate
the effectiveness and efficiency of our attack method and evaluate the
robustness of existing classification models against it. The experimental
results highlight the feasibility of generating adversarial examples against
machine learning models in real-world applications.
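The abstract does not specify M-Attack's mechanics, but the core difficulty it names, perturbing numerical and categorical features jointly, can be illustrated with a minimal sketch: a signed-gradient step on the continuous features combined with a greedy value flip on the discrete ones. This is a hypothetical baseline, not the authors' M-Attack; the model signature `model(x_num, x_cat)`, the budget `eps`, and `cat_vocab_sizes` are all illustrative assumptions.

```python
# Hypothetical mixed-type adversarial perturbation (NOT the paper's M-Attack):
# an FGSM-style step on numerical features plus a greedy flip per categorical
# feature, each flip kept only if it increases the classification loss.
import torch
import torch.nn.functional as F


def mixed_type_attack(model, x_num, x_cat, y, eps=0.05, cat_vocab_sizes=()):
    """Perturb a single mixed-type sample to raise the model's loss.

    model: assumed callable model(x_num, x_cat) -> logits of shape (C,)
    x_num: float tensor (d_num,), assumed scaled to [0, 1]
    x_cat: long tensor (d_cat,) of category indices
    y:     scalar long tensor, the true label
    """
    # Numerical features: one signed-gradient ascent step inside an
    # L-infinity ball of radius eps, clipped back to the valid range.
    x_adv = x_num.clone().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv, x_cat).unsqueeze(0), y.unsqueeze(0))
    loss.backward()
    x_num_adv = (x_adv + eps * x_adv.grad.sign()).clamp(0.0, 1.0).detach()

    # Categorical features: greedy search, one value flip per feature,
    # accepted only when it increases the loss further.
    x_cat_adv = x_cat.clone()
    with torch.no_grad():
        best_loss = F.cross_entropy(
            model(x_num_adv, x_cat_adv).unsqueeze(0), y.unsqueeze(0)
        ).item()
    for j, vocab in enumerate(cat_vocab_sizes):
        for v in range(vocab):
            if v == x_cat_adv[j].item():
                continue
            trial = x_cat_adv.clone()
            trial[j] = v
            with torch.no_grad():
                trial_loss = F.cross_entropy(
                    model(x_num_adv, trial).unsqueeze(0), y.unsqueeze(0)
                ).item()
            if trial_loss > best_loss:  # keep only the most damaging flip
                best_loss = trial_loss
                x_cat_adv = trial
    return x_num_adv, x_cat_adv
```

The split treatment reflects the underlying design choice: numerical features admit gradient-based continuous updates, while categorical features require discrete search, and any detection-evasion regularizer of the kind the abstract mentions would be added to the loss before both steps.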