Convolutional and recurrent neural networks have been widely employed to
achieve state-of-the-art performance on classification tasks. However, it has
also been observed that these networks can be manipulated adversarially with
relative ease by carefully crafted additive perturbations to the input.
Although prior work has empirically established methods for crafting such
attacks and defending against them, theoretical guarantees on the existence of
adversarial examples and on the network's robustness margins against them are
also desirable. We provide both in this paper. We focus specifically on
recurrent architectures and, drawing inspiration from dynamical systems
theory, cast the crafting of adversarial perturbations as a control problem.
This formulation allows us to compute adversarial perturbations dynamically at
each timestep of the input sequence, so that the attack behaves as a feedback
controller. Illustrative examples are provided to supplement the theoretical
discussion.
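
As a rough sketch of the feedback-control viewpoint (not the paper's actual
algorithm), the PyTorch snippet below greedily computes a sign-gradient
perturbation at each timestep of a toy RNN, using only the current hidden
state and input. All names (`rnn_cell`, `readout`, `eps`) and the FGSM-style
update rule are illustrative assumptions.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
input_dim, hidden_dim, num_classes, T = 8, 16, 3, 10

# Toy recurrent classifier: one RNN cell plus a linear readout.
rnn_cell = nn.RNNCell(input_dim, hidden_dim)
readout = nn.Linear(hidden_dim, num_classes)
loss_fn = nn.CrossEntropyLoss()

x_seq = torch.randn(T, input_dim)   # clean input sequence
y_true = torch.tensor([1])          # true label
eps = 0.05                          # per-timestep perturbation budget

h = torch.zeros(1, hidden_dim)
for t in range(T):
    x_t = x_seq[t].unsqueeze(0).clone().requires_grad_(True)
    h_in = h.detach()               # greedy: no backprop through past steps
    loss = loss_fn(readout(rnn_cell(x_t, h_in)), y_true)
    loss.backward()
    # "Feedback law": a sign-gradient step that locally increases the loss,
    # computed from the current state and input at this timestep.
    delta_t = eps * x_t.grad.sign()
    with torch.no_grad():
        h = rnn_cell(x_t + delta_t, h_in)   # propagate perturbed dynamics
```

Computing each perturbation from the current hidden state, rather than
optimizing over the whole sequence at once, is what makes this resemble
closed-loop (feedback) control instead of an open-loop attack.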