Abstract
Deep Neural Networks (DNNs) have become a powerful tool for a wide range of
problems. Yet recent work has found an increasing variety of adversarial
samples that can fool them. Most existing detection mechanisms against
adversarial attacks impose significant costs, either by using additional
classifiers to spot adversarial samples, or by requiring the DNN to be
restructured. In this paper, we introduce a novel defence. We train our DNN so
that, as long as it is working as intended on the kind of inputs we expect, its
behaviour is constrained, in that certain behaviours are taboo. If it is
exposed to adversarial samples, they will often cause a taboo behaviour, which
we can detect. Taboos can be both subtle and diverse, so their choice can
encode and hide information. It is a well-established design principle that the
security of a system should not depend on the obscurity of its design, but on
some variable (the key) which can differ between implementations and be changed
as necessary. We discuss how taboos can be used to equip a classifier with just
such a key, and how to tune the keying mechanism to adversaries of various
capabilities. We evaluate the performance of a prototype against a wide range
of attacks and show how our simple defence can defend against cheap attacks at
scale with zero run-time computation overhead, making it well suited to
IoT devices.
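
To make the mechanism concrete, the following is a minimal sketch in PyTorch of one possible instantiation of the taboo idea: during training, an extra loss term penalises hidden activations above a chosen threshold, so a well-trained network rarely crosses it on legitimate inputs; at run time, any input that drives an activation past the threshold is flagged. The threshold value, the penalty weight, and the choice of an activation cap as the taboo are illustrative assumptions, not the paper's exact construction.

import torch
import torch.nn as nn
import torch.nn.functional as F

THRESHOLD = 1.0  # taboo: hidden activations should stay below this (assumed value)
LAMBDA = 0.01    # weight of the taboo penalty in the training loss (assumed value)

class TabooNet(nn.Module):
    """A small classifier that exposes its hidden activations to the detector."""
    def __init__(self, in_dim=784, hidden=128, classes=10):
        super().__init__()
        self.fc1 = nn.Linear(in_dim, hidden)
        self.fc2 = nn.Linear(hidden, classes)

    def forward(self, x):
        h = F.relu(self.fc1(x))
        return self.fc2(h), h  # return activations so the detector can inspect them

def taboo_loss(logits, h, target):
    # Standard cross-entropy, plus a penalty on any activation that exceeds
    # the taboo threshold, pushing benign behaviour below it during training.
    penalty = F.relu(h - THRESHOLD).sum(dim=1).mean()
    return F.cross_entropy(logits, target) + LAMBDA * penalty

def looks_adversarial(h):
    # Run-time detector: the taboo behaviour is any activation above the
    # threshold; this is a single comparison over values the forward pass
    # already computed.
    return (h > THRESHOLD).any(dim=1)

Because the detection check reuses activations from the ordinary forward pass, it adds essentially no run-time computation, consistent with the zero-overhead claim; a secret, per-deployment choice of which activations are capped, and at what thresholds, would then play the role of the key.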