We present \texttt{secml}, an open-source Python library for secure and
explainable machine learning. It implements the most popular attacks against
machine-learning algorithms, including test-time evasion attacks, which
generate adversarial examples against deep neural networks, and training-time
poisoning attacks against support vector machines and many other algorithms. These attacks enable
evaluating the security of learning algorithms and the corresponding defenses
under both white-box and black-box threat models. To this end, \texttt{secml}
provides built-in functions to compute security evaluation curves, which show
how quickly classification performance degrades as the adversarial
perturbation of the input data grows. \texttt{secml} also includes explainability
methods to help understand why adversarial attacks succeed against a given
model, by visualizing the most influential features and training prototypes
contributing to each decision. It is distributed under the Apache License 2.0
and hosted at \url{https://github.com/pralab/secml}.
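To illustrate the idea behind a security evaluation curve independently of \texttt{secml}'s own API (whose class and function names differ from this sketch), the following minimal NumPy example fits a linear classifier and measures its accuracy under worst-case $\ell_\infty$ perturbations of growing size; all names here are illustrative, not part of the library:

```python
# Concept sketch of a security-evaluation curve: accuracy of a
# linear classifier under increasing L-inf adversarial perturbation.
# Illustrative only; secml's own API differs from this code.
import numpy as np

rng = np.random.default_rng(0)

# Two Gaussian blobs in 2D, labels -1 / +1.
X = np.vstack([rng.normal(-2, 1, (100, 2)), rng.normal(2, 1, (100, 2))])
y = np.hstack([-np.ones(100), np.ones(100)])

# Least-squares linear classifier: predict sign([X, 1] @ w).
Xb = np.hstack([X, np.ones((200, 1))])
w = np.linalg.lstsq(Xb, y, rcond=None)[0]

def accuracy(X_eval, y_eval, w):
    scores = np.hstack([X_eval, np.ones((len(X_eval), 1))]) @ w
    return float(np.mean(np.sign(scores) == y_eval))

def sec_eval(eps_values):
    # For a linear model, the worst-case L-inf perturbation of size
    # eps moves each point by eps * sign(w) against its own label
    # (the FGSM direction), lowering the correct-class score by
    # exactly eps * ||w||_1.
    direction = np.sign(w[:2])
    return [accuracy(X - eps * y[:, None] * direction, y, w)
            for eps in eps_values]

eps_grid = [0.0, 0.5, 1.0, 2.0, 4.0]
curve = sec_eval(eps_grid)
# curve[0] is clean accuracy; accuracy is non-increasing in eps.
```

Plotting \texttt{curve} against \texttt{eps\_grid} yields the security evaluation curve: a classifier whose accuracy drops sharply at small $\epsilon$ is less robust than one whose curve decays slowly.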