Constructing adversarial perturbations for deep neural networks is an
important direction of research. Crafting image-dependent perturbations using
white-box feedback from the target network has hitherto been the norm for such
attacks. However, black-box attacks, which rely only on the network's outputs,
are far more practical in real-world settings. Universal perturbations
applicable across multiple
images are gaining popularity due to their innate generalizability. There have
also been efforts to restrict the perturbations to a few pixels in the image.
This helps retain visual similarity with the original images, making such
attacks hard to detect. This paper marks an important step that combines all
of these directions of research. We propose the DEceit algorithm for constructing
effective universal pixel-restricted perturbations using only black-box
feedback from the target network. We conduct empirical investigations using the
ImageNet validation set on state-of-the-art deep neural classifiers, varying
the number of perturbed pixels from a meagre 10 up to all pixels in the image.
We find that perturbing only about 10% of the pixels in an image with DEceit
achieves a commendable and highly transferable fooling rate while retaining
visual quality. We further demonstrate that
DEceit can be successfully applied to image-dependent attacks as well. In both
sets of experiments, DEceit outperforms several state-of-the-art methods.
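
To make the setting concrete, the following is a minimal, self-contained sketch of a black-box, pixel-restricted perturbation search driven by differential evolution (which the name DEceit suggests). The `predict` oracle, the candidate encoding, and all hyperparameters below are illustrative assumptions for exposition, not the paper's actual implementation.

```python
import numpy as np


# Hypothetical black-box oracle: queries the target classifier and returns
# only predicted labels (no gradients or logits). A real attack would call
# the deployed model here; this stub just makes the sketch runnable.
def predict(images: np.ndarray) -> np.ndarray:
    return (images.sum(axis=(1, 2, 3)) * 100).astype(int) % 10


def apply_perturbation(images: np.ndarray, cand: np.ndarray, k: int) -> np.ndarray:
    """Overwrite k pixels, shared by every image, encoded in `cand` as k
    (row, col, r, g, b) quintuples with all entries normalised to [0, 1]."""
    out = images.copy()
    h, w = images.shape[1], images.shape[2]
    for i in range(k):
        r, c, *rgb = cand[5 * i: 5 * i + 5]
        out[:, int(r * (h - 1)), int(c * (w - 1)), :] = rgb
    return out


def fooling_rate(images, labels, cand, k):
    """Fraction of images whose predicted label flips under the perturbation."""
    return float(np.mean(predict(apply_perturbation(images, cand, k)) != labels))


def de_attack(images, labels, k=10, pop=20, gens=50, F=0.5, CR=0.9, seed=0):
    """Classic DE/rand/1/bin search maximising the (black-box) fooling rate."""
    rng = np.random.default_rng(seed)
    dim = 5 * k
    population = rng.random((pop, dim))
    fitness = np.array([fooling_rate(images, labels, x, k) for x in population])
    for _ in range(gens):
        for i in range(pop):
            # Mutate three distinct candidates other than the current one.
            idx = rng.choice([j for j in range(pop) if j != i], 3, replace=False)
            a, b, c = population[idx]
            mutant = np.clip(a + F * (b - c), 0.0, 1.0)
            # Binomial crossover with at least one gene from the mutant,
            # then greedy selection on the fooling rate.
            cross = rng.random(dim) < CR
            cross[rng.integers(dim)] = True
            trial = np.where(cross, mutant, population[i])
            f = fooling_rate(images, labels, trial, k)
            if f >= fitness[i]:
                population[i], fitness[i] = trial, f
    best = int(np.argmax(fitness))
    return population[best], fitness[best]


if __name__ == "__main__":
    imgs = np.random.default_rng(1).random((32, 64, 64, 3))  # toy image batch
    labs = predict(imgs)                                     # "clean" labels
    cand, fr = de_attack(imgs, labs, k=10, gens=20)
    print(f"universal 10-pixel perturbation fools {fr:.0%} of the batch")
```

The design choice worth noting is that differential evolution needs only scalar fitness values, so the search is compatible with strictly label-only, black-box feedback, and restricting the encoding to k pixels enforces the sparsity constraint by construction.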