Deep learning algorithms and networks are vulnerable to perturbed inputs, a
phenomenon known as adversarial attacks. Many defense methodologies have been
investigated to counter such attacks. In this work, we propose a novel
methodology to defend against existing powerful attack models. For the first
time, we introduce a new attack scheme that places a practical constraint on
white-box attacks. Under this proposed scheme, we present the strongest
defense yet reported against several recent strong attacks. The defense
consists of a set of nonlinear functions that process the input data, making
the model more robust to adversarial perturbations; crucially, this
processing layer is kept completely hidden from the attacker. Blind
pre-processing improves accuracy under white-box attack on MNIST from 94.3\%
to 98.7\%. Even as attack strength increases to the point where other
defenses fail completely, blind pre-processing remains one of the strongest
defenses ever reported. Another strength of our defense is that it eliminates
the need for adversarial training, since it significantly increases MNIST
accuracy even without it. Additionally, blind pre-processing also increases
inference accuracy under powerful attacks on the CIFAR-10 and SVHN data sets,
without much sacrifice in clean-data accuracy.
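To make the idea concrete, the hidden pre-processing layer can be sketched as
a nonlinear squashing followed by quantization applied to inputs before the
classifier sees them. The specific functions below (tanh squashing, 8-level
quantization) are illustrative assumptions, not the paper's exact choice;
the point is that an attacker who crafts perturbations without knowledge of
this layer finds them absorbed by it:

```python
import numpy as np

def blind_preprocess(x, levels=8):
    """Illustrative hidden pre-processing (assumed functions, not the
    paper's exact ones): nonlinear tanh squashing, then quantization.
    Small adversarial perturbations crafted against the raw model tend
    to be absorbed by the quantization step."""
    squashed = np.tanh(2.0 * x - 1.0)            # nonlinear squash to (-1, 1)
    rescaled = (squashed + 1.0) / 2.0            # map back into [0, 1]
    # snap to one of `levels` discrete values
    return np.round(rescaled * (levels - 1)) / (levels - 1)

# A small perturbation of the input maps to the same quantized values.
clean = np.array([0.10, 0.50, 0.90])
adv = clean + 0.02   # adversarial-style perturbation
print(np.allclose(blind_preprocess(clean), blind_preprocess(adv)))  # True
```

Because the layer is blind (hidden from the attacker), the white-box gradient
the attacker computes does not account for it, which is what the practical
constraint on white-box attacks formalizes.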