Abstract
Spacecraft are among the earliest autonomous systems. Their ability to
function without a human in the loop has afforded some of humanity's grandest
achievements. As reliance on autonomy grows, space vehicles will become
increasingly vulnerable to attacks designed to disrupt autonomous
processes, especially probabilistic ones based on machine learning. This paper
aims to elucidate and demonstrate the threats that adversarial machine learning
(AML) capabilities pose to spacecraft. First, an AML threat taxonomy for
spacecraft is introduced. Next, we demonstrate the execution of AML attacks
against spacecraft through experimental simulations using NASA's Core Flight
System (cFS) and NASA's On-board Artificial Intelligence Research (OnAIR)
Platform. Our findings highlight the imperative for incorporating AML-focused
security measures in spacecraft that employ autonomy.