The security of machine learning models is a concern, as deployed models may face adversarial attacks crafted to obtain unwarranted advantageous decisions. While research on the topic has mainly focused on the image domain, numerous industrial applications, in particular in finance, rely on standard tabular data. In this paper, we discuss the notion of adversarial examples in the tabular domain. We propose a formalization based on the imperceptibility of attacks on tabular data, which leads to an approach for generating imperceptible adversarial examples. Experiments show that our approach generates imperceptible adversarial examples while achieving a high fooling rate.