An emerging problem in trustworthy machine learning is to train models that
produce robust interpretations for their predictions. We take a step towards
solving this problem through the lens of axiomatic attribution of neural
networks. Our theory is grounded in the recent work on Integrated Gradients (IG), which axiomatically attributes a neural network's output change to its input change.
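For concreteness, IG attributes the network output $F$ at input $x$ to the $i$-th feature by integrating gradients along the straight-line path from a baseline $\bar{x}$ to $x$:
\[
\mathrm{IG}_i(x) \;=\; (x_i - \bar{x}_i)\int_0^1 \frac{\partial F\big(\bar{x} + \alpha\,(x - \bar{x})\big)}{\partial x_i}\, d\alpha .
\]
In practice the integral is approximated by a finite Riemann sum over points along the path.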
We propose training objectives within classic robust optimization models to achieve robust IG attributions. Our objectives give principled generalizations of previous objectives designed for robust predictions, and they naturally reduce to classic soft-margin training for one-layer neural networks. We also generalize previous theory and prove that the objectives for different robust optimization models are closely related.
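As a schematic illustration only (with $\epsilon$ an assumed perturbation radius and $\lambda$ an assumed trade-off weight, neither fixed by this abstract), such an objective can couple the usual robust loss with a penalty on how much attributions move inside the perturbation ball:
\[
\min_{\theta}\; \mathbb{E}_{(x,y)}\Big[\, \max_{\|\tilde{x} - x\|_\infty \le \epsilon} \Big( \ell(\theta; \tilde{x}, y) \;+\; \lambda\, \big\| \mathrm{IG}(\tilde{x}) - \mathrm{IG}(x) \big\|_1 \Big) \Big].
\]
Setting $\lambda = 0$ recovers standard adversarial training, so the attribution term is what distinguishes robust attribution objectives from objectives for robust predictions alone.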
Experiments demonstrate the effectiveness of our method, and also point to intriguing problems that hint at the need for better optimization techniques or better neural network architectures for robust attribution training.