Machine learning (ML) classifiers are vulnerable to adversarial examples: input samples that are slightly modified to induce misclassification. In this work, we investigate white-box and grey-box evasion attacks against an ML-based malware detector and evaluate their performance in a real-world setting. We compare defense approaches in their effectiveness at mitigating these attacks, and we propose a framework for deploying grey-box and black-box attacks against malware detection systems.
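To make the notion of a white-box evasion attack concrete, the following is a minimal sketch in the style of the fast gradient sign method against a toy linear classifier. The model, weights, feature vector, and step size `eps` are illustrative assumptions for this sketch, not details from this work.

```python
import numpy as np

def predict(w, b, x):
    """Linear decision rule: 1 (malicious) if w.x + b > 0, else 0 (benign)."""
    return int(np.dot(w, x) + b > 0)

def fgsm_evasion(w, b, x, eps):
    """White-box evasion: shift x against the sign of the score gradient.
    For a linear model, the gradient of the score w.r.t. x is simply w."""
    return x - eps * np.sign(w)

# Toy detector and a sample it flags as malicious (illustrative values).
w = np.array([1.0, -2.0, 0.5])
b = -0.1
x = np.array([0.8, -0.3, 0.4])

x_adv = fgsm_evasion(w, b, x, eps=0.6)
print(predict(w, b, x), predict(w, b, x_adv))  # 1 0
```

A slight, bounded change to each feature is enough to flip the detector's decision from malicious to benign, which is exactly the behavior an evasion attack exploits; grey-box and black-box attacks must achieve the same effect with only partial or no access to `w`.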