Neural networks achieve outstanding accuracy in classification and regression
tasks. However, understanding their behavior remains an open challenge that
raises questions about the robustness, explainability, and reliability of their
predictions. We address these questions by computing reachable sets of neural
networks, i.e., the sets of outputs that result from continuous sets of
inputs. We provide two efficient approaches that lead to over- and
under-approximations of the reachable set. This principle is highly versatile,
as we show. First, we use it to analyze and enhance the robustness properties
of both classifiers and regression models. This is in contrast to existing
works, which are mainly focused on classification. Specifically, we verify
(non-)robustness, propose a robust training procedure, and show that our
approach outperforms adversarial attacks as well as state-of-the-art methods
for verifying classifiers under non-norm-bounded perturbations. Second, we provide
techniques to distinguish between reliable and non-reliable predictions for
unlabeled inputs, to quantify the influence of each feature on a prediction,
and to compute a feature ranking.
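As a minimal illustration of the general principle, and not of the paper's actual approaches, an over-approximation of a reachable set can be obtained by propagating an input box through a ReLU network with interval arithmetic; the network weights and input bounds below are hypothetical:

```python
import numpy as np

def interval_affine(W, b, lo, hi):
    """Propagate the box [lo, hi] through x -> W @ x + b exactly."""
    W_pos, W_neg = np.maximum(W, 0), np.minimum(W, 0)
    return W_pos @ lo + W_neg @ hi + b, W_pos @ hi + W_neg @ lo + b

def reachable_box(hidden, out, lo, hi):
    """Over-approximate the reachable set by a box: every output the
    network can produce for an input in [lo, hi] lies in the result."""
    for W, b in hidden:
        lo, hi = interval_affine(W, b, lo, hi)
        lo, hi = np.maximum(lo, 0), np.maximum(hi, 0)  # ReLU is monotone
    return interval_affine(*out, lo, hi)

# Hypothetical 2-2-1 ReLU network, input box [-1, 1]^2
W1, b1 = np.array([[1., -1.], [0.5, 2.]]), np.zeros(2)
W2, b2 = np.array([[1., 1.]]), np.zeros(1)
lo, hi = reachable_box([(W1, b1)], (W2, b2),
                       np.array([-1., -1.]), np.array([1., 1.]))
print(lo, hi)  # prints [0.] [4.5]
```

The returned box is sound but generally loose; the dependency between coordinates is lost at each layer, which is why tighter over- and complementary under-approximations are of interest.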