Though Convolutional Neural Networks (CNNs) have surpassed human-level
performance on tasks such as object classification and face verification, they
can easily be fooled by adversarial attacks. These attacks add a small, often
imperceptible perturbation to the input image that causes the network to
misclassify it.
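To make the threat concrete, the following is a minimal sketch of one standard attack of this kind, the fast gradient sign method (FGSM); the model interface, the pixel range [0, 1], and the step size eps are illustrative assumptions, not the specific attacks evaluated in this paper.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, eps=0.03):
    """Fast gradient sign method: perturb the input in the direction
    that maximally increases the classification loss (eps is assumed)."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # One signed gradient step of size eps, clamped to a valid pixel range.
    x_adv = x + eps * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()
```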
In this paper, we focus on neutralizing adversarial attacks through compact
feature learning. In particular, we show that learning features in a closed and
bounded space improves the robustness of the network. We explore the effect of
the L2-Softmax Loss, which enforces compactness in the learned features and
thereby enhances robustness to adversarial perturbations.
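For concreteness, below is a minimal sketch of an L2-constrained softmax head of this kind, assuming a PyTorch setting: features are L2-normalized and scaled to a fixed radius before the classifier, so they lie on a closed, bounded hypersphere. The class name L2Softmax and the radius alpha=16.0 are illustrative assumptions, not values from the paper.

```python
import torch.nn as nn
import torch.nn.functional as F

class L2Softmax(nn.Module):
    """Classifier head with L2-constrained features: each feature vector
    is projected onto a hypersphere of radius alpha before the softmax."""
    def __init__(self, feat_dim, num_classes, alpha=16.0):
        super().__init__()
        self.alpha = alpha  # radius of the feature hypersphere (assumed)
        self.fc = nn.Linear(feat_dim, num_classes)

    def forward(self, features):
        # Constrain features to a closed, bounded set of radius alpha.
        features = self.alpha * F.normalize(features, p=2, dim=1)
        return self.fc(features)  # logits for a cross-entropy loss
```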
Additionally, we propose compact convolution, a novel convolution operation
that improves the robustness of conventional CNNs when incorporated into them.
Compact convolution enforces feature compactness at every layer, keeping the
features bounded and close to one another. Extensive experiments show that
Compact Convolutional Networks (CCNs) neutralize multiple types of attacks and
outperform existing methods at defending against adversarial attacks, without
incurring any additional training overhead compared to CNNs.
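As a rough illustration of the idea only, and not the paper's definition of compact convolution, one plausible reading is a standard convolution followed by L2 normalization of the channel vector at each spatial position, which keeps every layer's features in a closed, bounded set. The name CompactConv2d and the radius alpha below are assumptions made for the sketch.

```python
import torch.nn as nn
import torch.nn.functional as F

class CompactConv2d(nn.Module):
    """Hypothetical sketch: convolution followed by per-position L2
    normalization, so layer activations stay on a bounded hypersphere."""
    def __init__(self, in_ch, out_ch, kernel_size, alpha=16.0, **kwargs):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size, **kwargs)
        self.alpha = alpha  # assumed radius of the per-layer feature sphere

    def forward(self, x):
        y = self.conv(x)
        # Normalize the channel vector at every spatial position to
        # radius alpha, bounding the features at this layer.
        return self.alpha * F.normalize(y, p=2, dim=1)
```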