Convolutional neural networks or standard CNNs (StdCNNs) are
translation-equivariant models that achieve translation invariance when trained
on data augmented with sufficient translations. Recent work on equivariant
models for a given group of transformations (e.g., rotations) has led to
group-equivariant convolutional neural networks (GCNNs). GCNNs trained on data
augmented with sufficient rotations achieve rotation invariance. Recent work
(arXiv:2002.11318) studies a trade-off between invariance and robustness
to adversarial attacks. In another related work (arXiv:2005.08632), given any
model and any input-dependent attack that satisfies a certain spectral
property, the authors propose a universalization technique called SVD-Universal
to produce a universal adversarial perturbation by looking at very few test
examples. In this paper, we study the effectiveness of SVD-Universal on GCNNs
as they gain rotation invariance through higher degrees of training
augmentation. We empirically observe that as GCNNs gain rotation invariance
through training augmented with larger rotations, the fooling rate of
SVD-Universal increases. To understand this phenomenon, we introduce
universal invariant directions and study their relation to the universal
adversarial direction produced by SVD-Universal.
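The core universalization step can be illustrated with a minimal sketch. The idea, as described above, is to stack per-example attack directions into a matrix and take its top right singular vector as a single universal perturbation direction. The function name `svd_universal` and the use of plain NumPy are illustrative assumptions, not the authors' implementation; how the per-example directions are obtained (e.g., from gradients of a specific model) is left outside the sketch.

```python
import numpy as np

def svd_universal(attack_directions):
    """Sketch of the SVD-Universal idea: given an (n_examples x d) array
    of per-example attack directions, return a unit-norm universal
    perturbation direction.

    Note: function name and signature are hypothetical; the original
    method (arXiv:2005.08632) additionally assumes a spectral property
    of the attack, which this sketch does not check.
    """
    M = np.asarray(attack_directions, dtype=float)
    # The dominant right singular vector of M is the direction most
    # aligned (in the least-squares sense) with all attack directions,
    # computed here from only the few examples provided.
    _, _, Vt = np.linalg.svd(M, full_matrices=False)
    v = Vt[0]
    return v / np.linalg.norm(v)
```

For example, if the per-example directions all cluster around a common direction, the returned vector recovers that shared direction from just a handful of examples, which is the sense in which the perturbation is "universal".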