Adversarial perturbations superimposed on inputs pose a realistic threat to deep neural networks (DNNs). In this paper, we propose a practical method for generating such adversarial perturbations in the black-box setting, where the attack requires access only to the input-output relationship of the target network; that is, the attacker generates the perturbation without invoking the inner functions or accessing the inner states of the DNN. Unlike earlier studies, the proposed algorithm requires far fewer query trials to generate the perturbation. Moreover, to demonstrate the effectiveness of the extracted adversarial perturbation, we experiment with a DNN for semantic segmentation. The results show that the network is deceived far more easily by the generated perturbation than by uniformly distributed random noise of the same magnitude.
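To make the query-only threat model concrete, the following is a minimal sketch of a black-box attack that observes nothing but the model's input-output mapping. It is a generic random-search baseline for illustration only, not the algorithm proposed in this paper (which requires far fewer queries); the toy model, the L-infinity budget eps, and the query count are assumptions introduced for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((10, 32 * 32))  # fixed weights of a toy stand-in "DNN"

def model(x):
    """Hypothetical black-box model: the attacker can only call this mapping."""
    return W @ x.ravel()

def black_box_attack(x, eps=0.05, n_queries=100):
    """Generic random-search attack: sample candidate perturbations inside an
    L-infinity ball of radius eps and keep the one that most lowers the model's
    score for its original prediction, using input-output queries only."""
    label = int(np.argmax(model(x)))        # original prediction
    best_delta = np.zeros_like(x)
    best_score = model(x)[label]
    for _ in range(n_queries):
        delta = rng.uniform(-eps, eps, size=x.shape)
        score = model(x + delta)[label]     # one input-output query
        if score < best_score:              # weaker support for the true label
            best_score, best_delta = score, delta
    return best_delta

x = rng.random((32, 32))                    # toy input image
delta = black_box_attack(x)
print(np.argmax(model(x)), np.argmax(model(x + delta)))
```

The uniform draw over [-eps, eps] also doubles as the random-noise baseline mentioned above: comparing the best queried perturbation against a single such draw of the same magnitude isolates the effect of the search itself.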