Abstract
The EU Artificial Intelligence Act (AIA) establishes different legal
principles for different types of AI systems. While prior work has sought to
clarify some of these principles, little attention has been paid to robustness
and cybersecurity. This paper aims to fill this gap. We identify legal
challenges and shortcomings in provisions related to robustness and
cybersecurity for high-risk AI systems(Art. 15 AIA) and general-purpose AI
models (Art. 55 AIA). We show that robustness and cybersecurity demand
resilience against performance disruptions. Furthermore, we assess potential
challenges in implementing these provisions in light of recent advancements in
the machine learning (ML) literature. Our analysis informs efforts to develop
harmonized standards, European Commission guidelines, and benchmarks and
measurement methodologies under Art. 15(2) AIA. With this, we
seek to bridge the gap between legal terminology and ML research, fostering a
better alignment between research and implementation efforts.