Neural ordinary differential equations (ODEs) have been attracting increasing
attention across various research domains. Several works have studied the optimization issues and approximation capabilities of neural ODEs, but their robustness remains unclear. In this work, we fill this important gap
by exploring robustness properties of neural ODEs both empirically and
theoretically. We first present an empirical study of the robustness of neural ODE-based networks (ODENets): we expose them to inputs with various types of perturbations and then examine how the corresponding outputs change. In contrast to conventional convolutional neural networks (CNNs), we find that ODENets are more robust to both random Gaussian perturbations and adversarial examples.
We then explain this phenomenon by exploiting a desirable property of the flow of a continuous-time ODE, namely that its integral curves are non-intersecting.
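Concretely, this is a standard consequence of the uniqueness of ODE solutions (e.g., under Lipschitz-continuous dynamics): two integral curves that start from distinct points can never meet, i.e.,

\[
z_1(t_0) \neq z_2(t_0) \;\Longrightarrow\; z_1(t) \neq z_2(t) \quad \text{for all } t \ge t_0 ,
\]

where the notation $z_1, z_2$ for two solutions is illustrative. A trajectory started from a perturbed input therefore can never cross the trajectory of the clean input.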
Our work suggests that, owing to this intrinsic robustness, neural ODEs are promising basic blocks for building robust deep network models. To further enhance the robustness of vanilla neural ODEs, we
propose the time-invariant steady neural ODE (TisODE), which regularizes the flow on perturbed data via the time-invariant property and a steady-state constraint. We show that TisODE outperforms vanilla neural ODEs and can also be combined with other state-of-the-art architectural methods to build more robust deep networks.
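To make the TisODE idea concrete, the following is a minimal sketch in plain PyTorch, not the authors' implementation: the names ODEFunc, euler_integrate, and steady_state_penalty, the fixed-step Euler solver, and all hyperparameters are illustrative assumptions. The dynamics take no explicit time argument (time-invariance), and an extra loss term penalizes the magnitude of the dynamics along the flow continued past the integration endpoint, encouraging trajectories to settle to a steady state.

```python
import torch
import torch.nn as nn

class ODEFunc(nn.Module):
    """Time-invariant dynamics f_theta(z): no explicit dependence on t."""
    def __init__(self, dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, dim), nn.Tanh(),
                                 nn.Linear(dim, dim))

    def forward(self, z):
        return self.net(z)

def euler_integrate(func, z0, t_span=1.0, steps=20):
    """Fixed-step Euler solve of dz/dt = f(z) over [0, t_span]."""
    z, dt = z0, t_span / steps
    for _ in range(steps):
        z = z + dt * func(z)
    return z

def steady_state_penalty(func, z_T, t_span=1.0, steps=20):
    """Approximate the integral of |f(z(t))| over [T, 2T] by continuing the
    flow from the endpoint z_T; driving this to zero makes the flow steady."""
    z, dt = z_T, t_span / steps
    penalty = z_T.new_zeros(())
    for _ in range(steps):
        dz = func(z)
        penalty = penalty + dz.abs().sum(dim=-1).mean() * dt
        z = z + dt * dz
    return penalty

# Sketch of the combined objective: task loss plus the steady-state term.
# func = ODEFunc(dim); z_T = euler_integrate(func, features)
# loss = task_loss(classifier(z_T), labels) + lam * steady_state_penalty(func, z_T)
```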