Machine learning techniques are currently used extensively for automating
various cybersecurity tasks. Most of these techniques rely on supervised
learning algorithms that are trained to classify incoming data into different
categories using labeled data from the relevant domain.
A critical vulnerability of these algorithms is their susceptibility to
adversarial attacks, in which a malicious entity called an adversary
deliberately alters the training data to mislead the learning algorithm into
making classification errors. Such attacks can render the learning algorithm
unsuitable for use and leave critical systems vulnerable to cybersecurity
attacks. Our paper provides a detailed survey of the state-of-the-art
techniques for making machine learning algorithms robust against adversarial
attacks using the computational framework of game theory. We also discuss open
problems and challenges, along with possible directions for further research,
that would make machine learning-based systems more robust and reliable for
cybersecurity tasks.