It is well known that most existing machine learning (ML)-based
safety-critical applications are vulnerable to carefully crafted input
instances called adversarial examples (AXs). An adversary can conveniently
attack these target systems in both the digital and physical worlds. This paper
addresses the generation of robust physical AXs against face recognition systems.
We present a novel smoothness loss function and a patch-noise combo attack for
realizing powerful physical AXs. The smoothness loss introduces
delayed constraints into the attack generation process, which eases the
optimization and yields smoother AXs better suited to the physical
domain. The patch-noise combo attack combines patch noise and imperceptibly
small noise drawn from different distributions to generate powerful
registration-based physical AXs. Extensive experimental analysis shows that
our smoothness loss yields more robust and transferable digital and
physical AXs than conventional techniques. Notably, our smoothness loss
achieves 1.17 and 1.97 times higher mean attack success rates (ASR) in
physical white-box and black-box attacks, respectively. Our patch-noise combo
attack furthers these gains, achieving 2.39 and 4.74 times higher
mean ASR than the conventional technique in physical-world white-box and black-box
attacks, respectively.
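The abstract does not specify the form of the smoothness loss or the delayed-constraint schedule. As a purely illustrative sketch, a generic total-variation-style smoothness penalty with a delayed, ramped weight (all names and the schedule here are assumptions, not the paper's actual formulation) might look like:

```python
import numpy as np

def tv_smoothness(patch):
    # Total-variation-style smoothness term: penalizes differences between
    # neighboring pixels of an adversarial patch. This is a hypothetical
    # stand-in; the paper's actual loss is not given in the abstract.
    dh = np.abs(np.diff(patch, axis=0)).sum()  # vertical neighbor diffs
    dw = np.abs(np.diff(patch, axis=1)).sum()  # horizontal neighbor diffs
    return dh + dw

def delayed_weight(step, delay=100, ramp=200, max_w=1.0):
    # "Delayed constraint" (assumed schedule): the smoothness weight stays
    # at zero early in the optimization, letting the attack objective
    # dominate, then ramps up linearly to enforce smoothness later.
    if step < delay:
        return 0.0
    return min(max_w, max_w * (step - delay) / ramp)

rng = np.random.default_rng(0)
patch = rng.random((8, 8))
# Weighted smoothness term at a given optimization step:
loss = delayed_weight(step=300) * tv_smoothness(patch)
```

In such a scheme the total attack objective would add this weighted term to an adversarial (misclassification) loss, so early iterations explore freely and later iterations smooth the patch for physical realizability.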