Machine learning (ML) based approaches have been the mainstream solution for anti-phishing detection. When deployed on the client side, however, ML-based classifiers are vulnerable to evasion attacks. Such potential threats have received relatively little attention because existing attacks destroy the functionality or appearance of webpages and are conducted in the white-box scenario, making them less practical. Consequently, it is imperative to understand whether evasion attacks can be launched with limited knowledge of the classifier while preserving a webpage's functionality and appearance.
In this work, we show that even in the grey- and black-box scenarios, evasion attacks are not only effective against practical ML-based classifiers but can also be launched efficiently without destroying the functionality or appearance of webpages. For this purpose, we propose three mutation-based attacks, differing in their knowledge of the target classifier, which address a key technical challenge: automatically crafting an adversarial sample from a known phishing website in a way that misleads classifiers. To launch attacks in the white- and grey-box scenarios, we also propose a sample-based collision attack to gain knowledge of the target classifier.
We demonstrate the effectiveness and efficiency of our evasion attacks against Google's state-of-the-art phishing page filter, achieving a 100% attack success rate in less than one second per website. Moreover, our transferability attack against TrafficLight, BitDefender's industrial phishing page classifier, achieved an attack success rate of up to 81.25%. We further propose Pelican, a similarity-based method to mitigate such evasion attacks, and demonstrate that it can effectively detect them. Our findings contribute to the design of more robust phishing website classifiers in practice.