Availability attacks, which poison the training data with imperceptible
perturbations, can render the data \emph{not exploitable} by machine learning
algorithms and thereby prevent unauthorized use of data. In this work, we
investigate why these perturbations work in principle. We are the first to
unveil an important population property of the perturbations of these attacks:
they are almost \textbf{linearly separable} when assigned the target labels of
the corresponding samples, and hence can serve as \emph{shortcuts} for the
learning objective. We further verify that linear separability is
indeed the workhorse for availability attacks. We synthesize linearly-separable
perturbations as attacks and show that they are as powerful as deliberately
crafted attacks. Moreover, such synthetic perturbations are much easier to
generate. For example, previous attacks need dozens of hours to generate
perturbations for ImageNet, whereas our algorithm needs only a few seconds. Our
finding also suggests that \emph{shortcut learning} is more widely present
than previously believed, as deep models rely on shortcuts even when they
are of an imperceptible scale and mixed together with the normal features. Our
source code is published at
\url{https://github.com/dayu11/Availability-Attacks-Create-Shortcuts}.
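
As an illustration of the core idea (not the exact synthesis algorithm from the
paper or repository), the sketch below builds class-wise perturbations that are
linearly separable by construction and checks this with a linear probe; the
$\ell_\infty$ budget \texttt{eps}, the sample counts, and the use of one fixed
random pattern per class are assumptions made purely for illustration.
\begin{verbatim}
# Hypothetical sketch: class-wise random patterns are linearly separable by
# construction, so they can act as label-correlated "shortcuts".
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
num_classes, dim, eps = 10, 3 * 32 * 32, 8.0 / 255.0  # CIFAR-10-like (assumed)

# One fixed random pattern per class, scaled to an imperceptible L_inf budget.
patterns = rng.uniform(-eps, eps, size=(num_classes, dim)).astype(np.float32)

# Assign each (hypothetical) training sample the pattern of its label, plus
# small noise so the perturbations are "almost" rather than exactly identical.
labels = rng.integers(0, num_classes, size=5000)
perturbations = patterns[labels] + rng.normal(0, eps / 20, size=(5000, dim))

# A linear probe separates the perturbations by label almost perfectly,
# i.e. they are (nearly) linearly separable when paired with target labels.
probe = LogisticRegression(max_iter=1000).fit(perturbations, labels)
print("linear-probe accuracy on perturbations:", probe.score(perturbations, labels))
\end{verbatim}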