Perceptual ad-blocking is a novel approach that detects online advertisements
based on their visual content. Compared to traditional filter lists, the use of
perceptual signals is believed to be less prone to an arms race with web
publishers and ad networks. We demonstrate that this may not be the case. We
describe attacks on multiple perceptual ad-blocking techniques, and unveil a
new arms race that likely disfavors ad-blockers. Unexpectedly, perceptual
ad-blocking can also introduce new vulnerabilities that let an attacker bypass
web security boundaries and mount DDoS attacks.
We first analyze the design space of perceptual ad-blockers and present a
unified architecture that incorporates prior academic and commercial work. We
then explore a variety of attacks on the ad-blocker's detection pipeline that
enable publishers or ad networks to evade or detect ad-blocking, and at times
even abuse its high privilege level to bypass web security boundaries.
On the one hand, we show that perceptual ad-blocking must visually classify
rendered web content to escape an arms race centered on obfuscation of page
markup. On the other hand, we present a concrete set of attacks on visual
ad-blockers by constructing adversarial examples in a real web page context.
For seven ad-detectors, we create perturbed ads, ad-disclosure logos, and
native web content that mislead perceptual ad-blocking with 100% success
rates. In one of our attacks, we demonstrate how a malicious user can upload
adversarial content, such as a perturbed image in a Facebook post, that fools
the ad-blocker into removing another user's non-ad content.
Moving beyond the Web and visual domain, we also build adversarial examples
for AdblockRadio, an open-source radio client that uses machine learning to
detect ads in raw audio streams.
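The adversarial-example attacks summarized above can be illustrated with a
minimal FGSM-style sketch; the toy linear "ad detector", its weights, and the
perturbation budget below are all illustrative assumptions, not the paper's
actual models or attack pipeline:

```python
import numpy as np

def fgsm_perturb(x, grad, eps):
    """One-step FGSM: move each input feature by eps in the sign
    direction of the given gradient, then clip to the valid range."""
    x_adv = x + eps * np.sign(grad)
    return np.clip(x_adv, 0.0, 1.0)

# Toy linear "ad detector": score = w @ x + b, classified as ad iff score > 0.
w = np.array([0.9, -0.4, 0.7])
b = -0.5
x = np.array([0.8, 0.2, 0.6])   # a flattened "ad" image, detected as ad

# For a linear model the gradient of the score w.r.t. the input is w;
# to evade detection we step against it, decreasing the score.
x_adv = fgsm_perturb(x, -w, eps=0.3)

print(w @ x + b)       # positive: original image is flagged as an ad
print(w @ x_adv + b)   # negative: perturbed image evades the detector
```

The same one-step idea extends to the paper's setting by replacing the linear
score with a neural ad-detector's loss gradient, computed via backpropagation.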