Abstract
Multi-Agent Reinforcement Learning (MARL) is vulnerable to Adversarial
Machine Learning (AML) attacks and needs adequate defences before it can be
used in real-world applications. We have conducted a survey of
execution-time AML attacks against MARL and the defences against those attacks.
We surveyed related work in the application of AML in Deep Reinforcement
Learning (DRL) and Multi-Agent Learning (MAL) to inform our analysis of AML for
MARL. By defining Attack Vectors, we propose a novel perspective on how an AML
attack is perpetrated. We develop two new frameworks to
address a gap in current modelling frameworks, focusing on the means and tempo
of an AML attack against MARL, and identify knowledge gaps and future avenues
of research.