Recommender systems play an important role in modern information and
e-commerce applications. While a growing body of research is dedicated to
improving the relevance and diversity of recommendations, the potential risks
of state-of-the-art recommendation models remain under-explored: these models
can be attacked by malicious third parties who inject fake user interactions
to achieve their own goals. This paper revisits
the adversarially-learned injection attack problem, where the injected fake
user `behaviors' are learned locally by the attackers with their own model --
one that is potentially different from the model under attack, but shares
similar properties to allow attack transfer. We find that most existing works
in the literature suffer from two major limitations: (1) they do not solve the
optimization problem precisely, making the attack weaker than it could be, and
(2) they assume the attacker has perfect knowledge of the victim model, which
leaves realistic attack capabilities poorly understood. We demonstrate that
solving the fake-user generation problem exactly, as an optimization problem,
can lead to a much larger attack impact. Our experiments on a real-world
dataset reveal important properties of the attack, including its
transferability and its limitations.
These findings can inspire useful defenses against this type of attack, which
may already exist in practice.
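
For concreteness, the injection attack revisited above can be framed as a
bilevel optimization problem. The sketch below uses notation introduced here
for illustration ($Z$, $\hat{Z}$, $\mathcal{L}_{\mathrm{adv}}$, and
$\mathcal{L}_{\mathrm{train}}$ are our labels, not necessarily the paper's):
\[
\max_{\hat{Z}} \; \mathcal{L}_{\mathrm{adv}}\left(\theta^{*}\right)
\quad \text{s.t.} \quad
\theta^{*} = \arg\min_{\theta} \; \mathcal{L}_{\mathrm{train}}\left(Z \cup \hat{Z};\, \theta\right),
\]
where $Z$ denotes the real user interactions, $\hat{Z}$ the injected fake-user
interactions (typically constrained by a budget on the number of fake users
and their activity), $\theta$ the parameters of the attacker's local surrogate
model, and $\mathcal{L}_{\mathrm{adv}}$ the attacker's objective, e.g., the
predicted exposure of target items. Limitation (1) above corresponds to
solving this bilevel problem only approximately.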