Graph Neural Networks (GNNs) are increasingly important given their
popularity and the diversity of their applications. Yet, existing studies of their
vulnerability to adversarial attacks rely on relatively small graphs. We
address this gap and study how to attack and defend GNNs at scale. We propose
two sparsity-aware first-order optimization attacks that maintain an efficient
representation despite optimizing over a number of parameters that is
quadratic in the number of nodes.
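For intuition, such a sparsity-aware representation can avoid ever materializing the dense n × n perturbation matrix by keeping only a sampled block of candidate edge flips in coordinate form. The following is a minimal sketch, assuming PyTorch; `sample_perturbation_block` and its parameters are illustrative, not the exact procedure of our attacks:

```python
import torch

def sample_perturbation_block(num_nodes: int, block_size: int) -> torch.Tensor:
    """Sample a random block of candidate edge flips in coordinate (COO) form.

    Only `block_size` entries of the implicit n x n perturbation matrix are
    materialized, so memory stays linear in the block size rather than
    quadratic in the number of nodes.
    """
    # Draw linear indices into the implicit n x n perturbation space
    # (int64 by default, so n^2 > 2^31 is safe; duplicates are tolerable
    # for a sketch and could be filtered out).
    lin = torch.randint(num_nodes * num_nodes, (block_size,))
    return torch.stack([lin // num_nodes, lin % num_nodes])  # (2, block_size)

# A first-order attack step would compute gradients only for this block,
# update the corresponding perturbation weights, and periodically resample.
candidate_edges = sample_perturbation_block(num_nodes=100_000, block_size=10_000)
```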
We show that common surrogate losses are not well-suited for global attacks
on GNNs. Our alternatives can double the attack strength.
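As an illustration of why the choice of loss matters for global attacks: plain cross-entropy keeps spending attack budget on nodes that are already misclassified. Below is a sketch of one masked alternative in that spirit (PyTorch assumed; this is an illustrative variant, not necessarily the exact losses we propose):

```python
import torch
import torch.nn.functional as F

def masked_cross_entropy(logits: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
    """Cross-entropy restricted to nodes the model still classifies correctly.

    Masking stops the attack from wasting gradient signal on nodes that are
    already misclassified and concentrates it on the nodes left to flip.
    """
    still_correct = logits.argmax(dim=-1) == labels
    if not still_correct.any():  # nothing left to attack
        return logits.new_zeros(())
    return F.cross_entropy(logits[still_correct], labels[still_correct])
```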
Moreover, to improve the reliability of GNNs, we design a robust aggregation
function, Soft Median, resulting in an effective defense at all scales.
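A robust aggregation of this kind can be sketched as a softly weighted mean, where each neighbor is down-weighted by its distance to the dimension-wise median. The snippet below is a minimal dense illustration (PyTorch assumed; the temperature and scaling are illustrative choices), not a full message-passing implementation:

```python
import torch

def soft_median(x: torch.Tensor, temperature: float = 1.0) -> torch.Tensor:
    """Differentiable, robust aggregation of neighbor features x of shape (n, d).

    Outlying messages (e.g., from adversarially inserted edges) lie far from
    the dimension-wise median and therefore receive exponentially small
    weight, while the weighted mean remains fully differentiable.
    """
    anchor = x.median(dim=0).values                # dimension-wise median, shape (d,)
    dist = torch.norm(x - anchor, dim=-1)          # distance of each neighbor, shape (n,)
    scale = temperature * x.shape[-1] ** 0.5       # keep logits independent of dimension d
    weights = torch.softmax(-dist / scale, dim=0)  # soft argmin over neighbors
    return weights @ x                             # robustly weighted mean, shape (d,)
```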
We evaluate our attacks and defense with standard GNNs on graphs more than 100
times larger than those used in previous work. We even scale one order of
magnitude further by extending our techniques to a scalable GNN.