Neural network robustness has become a central topic in machine learning in
recent years. Most training algorithms that improve a model's robustness to
adversarial perturbations and common corruptions also introduce a large
computational overhead, requiring as many as ten times the number of forward
and backward passes to converge. To combat this inefficiency, we propose
BulletTrain, a boundary example mining technique that drastically reduces the
computational cost of robust training. Our key observation is that only a small
fraction of examples are beneficial for improving robustness. BulletTrain
dynamically predicts these important examples and optimizes robust training
algorithms to focus on them. We apply our technique to
several existing robust training algorithms and achieve a 2.1$\times$ speed-up
for TRADES and MART on CIFAR-10 and a 1.7$\times$ speed-up for AugMix on
CIFAR-10-C and CIFAR-100-C without any reduction in clean and robust accuracy.
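To make the core idea concrete, boundary example mining can be sketched as ranking examples by their logit margin and keeping only the small fraction closest to the decision boundary, which would then receive the expensive robust loss. This is an illustrative approximation, not the paper's implementation; the function name, the `frac` parameter, and the NumPy formulation are assumptions for illustration:

```python
import numpy as np

def select_boundary_examples(logits, labels, frac=0.25):
    """Return indices of the examples nearest the decision boundary.

    Hypothetical helper: examples with a small absolute margin between
    the correct-class logit and the best competing logit are treated as
    boundary examples worth the extra robust-training passes.
    """
    n = logits.shape[0]
    correct = logits[np.arange(n), labels]
    masked = logits.copy()
    masked[np.arange(n), labels] = -np.inf   # exclude the true class
    runner_up = masked.max(axis=1)
    margin = correct - runner_up             # small |margin| = near boundary
    k = max(1, int(frac * n))
    return np.argsort(np.abs(margin))[:k]
```

A batch-level scheduler could then apply the full robust objective (e.g. the adversarial term in TRADES) only to the selected indices and a cheap clean loss to the rest, which is where the reduction in forward and backward passes would come from.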