Deep neural networks have achieved human-level accuracy on many perceptual
benchmarks. Remarkably, these advances were made using two ideas that are
decades old: (a) an artificial neuron based on a linear summator and
(b) training by stochastic gradient descent (SGD).
However, there are important metrics beyond accuracy: computational
efficiency and stability against adversarial perturbations. In this paper, we
propose two closely connected methods to improve these metrics on contour
recognition tasks: (a) a novel model of the artificial neuron, the "strong
neuron," with low hardware requirements and inherent robustness against
adversarial perturbations, and (b) a novel constructive training algorithm
that generates sparse networks with $O(1)$ connections per neuron.
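To put the connectivity bound in perspective: a dense layer joining two groups
of $n$ neurons requires $O(n^2)$ weights and multiply-adds, whereas $O(1)$
connections per neuron reduce the total cost of a layer to $O(n)$.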
We demonstrate the feasibility of our approach through experiments on the SVHN
and GTSRB benchmarks. We achieved a 10x-100x reduction in operation count (10x
compared with other sparsification approaches, 100x compared with dense
networks) and a substantial reduction in hardware requirements (8-bit
fixed-point math was used) with no reduction in model accuracy. Superior
stability against adversarial perturbations (exceeding that of adversarial
training) was achieved without any dedicated counter-adversarial measures,
relying solely on the robustness of strong neurons. We also proved that the
constituent blocks of our strong neuron are the only activation functions with
perfect stability against adversarial attacks.
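As a rough illustration of what 8-bit fixed-point inference involves (a
minimal sketch of the general technique, not the implementation used in this
paper; the int8_dot helper and the symmetric [-127, 127] scaling are our own
illustrative choices), a quantized dot product can be computed entirely in
integer arithmetic with a 32-bit accumulator:

import numpy as np

def int8_dot(x_q, w_q, scale_x, scale_w):
    # Accumulate int8 products in int32 to avoid overflow, then rescale
    # back to real-valued units; this is the standard 8-bit inference idiom.
    acc = np.dot(x_q.astype(np.int32), w_q.astype(np.int32))
    return acc * (scale_x * scale_w)

# Quantize real-valued vectors in [-1, 1] to int8 and compare with float math.
rng = np.random.default_rng(0)
x, w = rng.uniform(-1, 1, 16), rng.uniform(-1, 1, 16)
s = 1.0 / 127.0                        # one real unit spans 127 integer steps
x_q = np.round(x / s).astype(np.int8)
w_q = np.round(w / s).astype(np.int8)
print(int8_dot(x_q, w_q, s, s), np.dot(x, w))  # nearly identical results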