Deep learning-based facial recognition (FR) models have demonstrated
state-of-the-art performance in recent years, even as the wearing of
protective medical face masks became commonplace during the COVID-19 pandemic.
Given the outstanding performance of these models, the machine learning
research community has shown increasing interest in challenging their
robustness. Initially, researchers presented adversarial attacks in the digital
domain; later, these attacks were transferred to the physical domain. However,
in many cases, attacks in the physical domain are conspicuous, and thus may
raise suspicion in real-world environments (e.g., airports). In this paper, we
propose Adversarial Mask, a physical universal adversarial perturbation (UAP)
against state-of-the-art FR models that is applied to face masks in the form of
a carefully crafted pattern. In our experiments, we examined the
transferability of our adversarial mask to a wide range of FR model
architectures and datasets. In addition, we validated our adversarial mask's
effectiveness in real-world experiments (CCTV use case) by printing the
adversarial pattern on a fabric face mask. In these experiments, the FR system
was able to identify only 3.34% of the participants wearing the mask (compared
to a minimum of 83.34% with other evaluated masks). A demo of our experiments
can be found at: https://youtu.be/_TXkDO5z11w.
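
To illustrate the general idea behind such an attack, below is a minimal sketch (not the authors' implementation) of optimizing a universal adversarial mask pattern in PyTorch. The face-embedding model `embed_model`, the renderer `apply_mask` that places the pattern on the mask region of each face, and the data loader are hypothetical placeholders:

```python
import torch
import torch.nn.functional as F

def optimize_adversarial_mask(embed_model, apply_mask, loader,
                              pattern_shape=(3, 112, 112),
                              epochs=10, lr=0.01):
    """Optimize one pattern (shared across all identities, hence 'universal')
    that pushes masked-face embeddings away from clean-face embeddings."""
    pattern = torch.rand(pattern_shape, requires_grad=True)
    opt = torch.optim.Adam([pattern], lr=lr)
    embed_model.eval()
    for _ in range(epochs):
        for faces in loader:  # batch of aligned face images, shape (B, 3, H, W)
            # Render the (clamped, printable-range) pattern onto the mask region.
            masked = apply_mask(faces, pattern.clamp(0, 1))
            clean_emb = embed_model(faces).detach()
            adv_emb = embed_model(masked)
            # Dodging objective: minimize cosine similarity so the FR system
            # no longer matches the wearer to their enrolled embedding.
            loss = F.cosine_similarity(adv_emb, clean_emb).mean()
            opt.zero_grad()
            loss.backward()
            opt.step()
    return pattern.detach().clamp(0, 1)
```

A physically deployable pattern would additionally need to survive printing and real-world capture conditions, e.g., by optimizing over random geometric and color transformations and adding smoothness/printability losses; the sketch above omits these steps.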