Abstract
Machine learning is used in a number of security-related applications such as
biometric user authentication and speaker identification. A poisoning attack, a
type of causative integrity attack against machine learning, works by injecting
specially crafted data points into the training data so as to increase the
false positive rate of the classifier. In the context of biometric
authentication, this means that more intruders will be classified as the valid
user; in a speaker identification system, user A will be classified as user B.
In this paper, we examine poisoning attacks against SVMs and introduce Curie,
a method to protect an SVM classifier from poisoning attacks. The basic idea of
our method is to identify the poisoned data points injected by the adversary
and filter them out. Our method is lightweight and can be easily integrated
into existing systems. Experimental results show that it is very effective at
filtering out the poisoned data.
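The overall idea, injecting mislabeled points into the training set and then detecting and removing them before the classifier is trained, can be illustrated with a toy sketch. The neighbor-vote filter below is a generic outlier heuristic standing in for the detection step, not the paper's actual Curie algorithm; all data, thresholds, and function names are invented for illustration.

```python
import math
import random

random.seed(0)

def gauss_cluster(cx, cy, sd, n):
    """n points drawn from an isotropic Gaussian centered at (cx, cy)."""
    return [(random.gauss(cx, sd), random.gauss(cy, sd)) for _ in range(n)]

valid = gauss_cluster(0.0, 0.0, 0.5, 50)     # label +1: the valid user
intruder = gauss_cluster(3.0, 3.0, 0.5, 50)  # label -1: intruders
# Causative integrity attack: the adversary injects points from the
# intruder region but labels them as the valid user, pushing the
# decision boundary so that more intruders are accepted.
poison = gauss_cluster(3.0, 3.0, 0.6, 8)

X = valid + intruder + poison
y = [1] * 50 + [-1] * 50 + [1] * 8

def filter_poison(X, y, k=5):
    """Keep a point only if at least half of its k nearest neighbors
    share its label -- a stand-in heuristic, not Curie itself."""
    keep = []
    for i, (xi, yi) in enumerate(X):
        dists = sorted(
            (math.hypot(xi - xj, yi - yj), j)
            for j, (xj, yj) in enumerate(X) if j != i
        )
        agree = sum(1 for _, j in dists[:k] if y[j] == y[i]) / k
        keep.append(agree >= 0.5)
    return keep

keep = filter_poison(X, y)
flagged = [i for i, kept in enumerate(keep) if not kept]
```

Because the injected points (indices 100 and up) carry the valid-user label while sitting deep inside the intruder region, they disagree with their neighbors and are flagged, while almost all clean points survive; the surviving set can then be used to train the SVM as usual.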