Abstract
Machine learning (ML) models are now used in many safety- and security-critical
applications. It is therefore important to measure the security of a system
that uses ML as a component. This paper addresses that question in the field of
ML, focusing on the security of autonomous vehicles. For this purpose, a
technical framework is described, implemented, and evaluated in a case study.
Based on ISO/IEC 27004:2016, risk indicators are used to measure and evaluate
the extent of damage and the effort required by an attacker. However, no single
risk value can represent the attacker's effort; instead, four different values
must be interpreted individually.