Algorithmic fairness has attracted significant attention in recent years,
with many quantitative measures suggested for characterizing the fairness of
different machine learning algorithms. Despite this interest, the robustness of these fairness measures against intentional adversarial attacks has not been properly addressed. Indeed, most work in adversarial machine learning has focused on the impact of malicious attacks on a system's accuracy, with little regard for the system's fairness. We propose new types of data poisoning attacks in which an adversary intentionally targets the fairness of a system. Specifically, we introduce two families of attacks that target fairness
measures. In the anchoring attack, we skew the decision boundary by placing poisoned points near specific target points to bias the outcome.
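To make the anchoring idea concrete, here is a minimal sketch, not the paper's exact procedure: it assumes binary labels in {0, 1}, and the function name, poison budget `n_poison`, and perturbation `radius` are illustrative choices of ours.

```python
import numpy as np

def make_anchoring_points(X_target, y_target, n_poison, radius=0.1, seed=0):
    """Generate poisoned points clustered around chosen target points,
    carrying the OPPOSITE label, so that training on them skews the
    decision boundary in the targets' neighborhood."""
    rng = np.random.default_rng(seed)
    # Anchor each poisoned point to a randomly chosen target point.
    idx = rng.integers(0, len(X_target), size=n_poison)
    # Small Gaussian offsets keep the poison close to its anchor.
    offsets = rng.normal(scale=radius, size=(n_poison, X_target.shape[1]))
    X_poison = X_target[idx] + offsets
    # Flipped labels are what bias the outcome near the targets.
    y_poison = 1 - y_target[idx]
    return X_poison, y_poison
```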
In the influence attack on fairness, we aim to maximize the covariance between the sensitive attributes and the decision outcome, thereby degrading the fairness of the model.
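The covariance the attack seeks to maximize can be written directly as an empirical quantity. Below is a minimal sketch assuming a binary sensitive attribute; folding it into an attacker objective with a weight `lam` alongside a standard training loss is our assumption for illustration, not a detail stated here.

```python
import numpy as np

def fairness_covariance(a, y_hat):
    """Empirical covariance between the sensitive attribute a and the
    model's decision outcomes y_hat; a larger magnitude indicates a
    stronger dependence of the outcome on group membership."""
    return np.mean((a - a.mean()) * (y_hat - y_hat.mean()))

# Hypothetical attacker objective: trade off ordinary training loss
# against the fairness term, then craft poisoned points that increase it.
def attacker_objective(train_loss, a, y_hat, lam=1.0):
    return train_loss + lam * fairness_covariance(a, y_hat)
```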
We conduct extensive experiments that demonstrate the effectiveness of our proposed attacks.