Many machine learning adversarial attacks find adversarial samples of a
victim model ${\mathcal M}$ by following, explicitly or implicitly, the
gradients of some attack objective functions. To confuse and detect such
attacks,
we take a proactive approach that modifies those functions with the goal of
misleading the attacks into local minima, or into designated regions that an
analyzer can easily pick up. To achieve this goal, we propose adding a large
number of artifacts, which we call \emph{attractors}, to the otherwise smooth
function. An attractor is a point in the input space where samples in its
neighborhood have gradients pointing toward it, as sketched formally below.
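One way to state this property, using notation we introduce here purely for
illustration ($f$ for the attack objective the adversary follows,
$N_\epsilon(a)$ for an $\epsilon$-neighborhood of a point $a$), is
\[
\Big\langle \nabla_x f(x),\; \frac{a - x}{\lVert a - x \rVert} \Big\rangle > 0
\qquad \text{for all } x \in N_\epsilon(a) \setminus \{a\},
\]
so that each gradient-following step taken inside the neighborhood moves the
sample toward the attractor $a$.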
We observe that the decoders of watermarking schemes exhibit the properties of
attractors, and we give a generic method that injects attractors derived from
a watermark decoder into the victim model ${\mathcal M}$. This principled
approach lets us leverage known watermarking schemes for scalability and
robustness, and makes the outcomes explainable; one possible reading of the
injection step is sketched below.
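As an illustrative sketch only, not the paper's concrete construction: one way
such an injection could work is to add a small watermark-decoder-driven term to
the victim model's output, so that attack gradients near a watermark pattern
are dominated by the decoder's gradient. The names \texttt{attractor\_wrap},
\texttt{logits\_fn}, \texttt{decoder\_fn}, and the weight \texttt{eps} below
are hypothetical.
\begin{verbatim}
def attractor_wrap(logits_fn, decoder_fn, eps=0.1):
    # Hypothetical sketch: blend the victim model's logits with a
    # per-class watermark-decoder response, so that near a watermark
    # pattern (an attractor) the attack gradient points toward it.
    def forward(x):
        logits = logits_fn(x)          # victim model M(x)
        scores = decoder_fn(x)         # watermark decoder response
        return logits + eps * scores   # injected attractor term
    return forward
\end{verbatim}
In this reading, a sample that an attack has dragged toward an attractor would
show an unusually strong decoder response, which is the kind of signal an
analyzer could pick up.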
Experimental studies show that our method achieves competitive performance.
For instance, for untargeted attacks on the CIFAR-10 dataset, our method
reduces the overall attack success rate of DeepFool to 1.9%, whereas the known
defenses LID, FS, and MagNet only reduce it to 90.8%, 98.5%, and 78.5%,
respectively.