Recent deep neural network based techniques, especially those capable of system-level self-adaptation such as deep reinforcement learning (DRL), have shown many advantages in optimizing robot learning systems (e.g., autonomous navigation and continuous robot arm control). However, learning-based systems and their associated models may be threatened by intentionally adaptive (e.g., noisy sensor confusion) and adversarial perturbations in real-world scenarios. In this paper, we introduce timing-based adversarial strategies against a DRL-based navigation system by jamming physical noise patterns into selected time frames. To study the vulnerability of learning-based navigation systems, we propose two adversarial agent models: one based on online learning and the other on evolutionary learning. In addition, we employ three open-source robot learning and navigation control environments to evaluate vulnerability under adversarial timing attacks. Our experimental results show that adversarial timing attacks can lead to a significant performance drop, highlighting the need to enhance the robustness of robot learning systems.