Deep learning models have consistently outperformed traditional machine
learning models in various classification tasks, including image
classification. As such, they have become increasingly prevalent in many
real-world applications, including those where security is of great concern. Such
popularity, however, may attract attackers seeking to exploit vulnerabilities in
deployed deep learning models and launch attacks against security-sensitive
applications. In this paper, we focus on a specific type of data poisoning
attack, which we refer to as a {\em backdoor injection attack}. The main goal
of an adversary performing such an attack is to generate and inject into a deep
learning model a backdoor that causes inputs containing certain embedded
patterns to be recognized as a target label of the attacker's choice.
Additionally, a backdoor injection attack should be carried out stealthily,
without undermining the efficacy of the victim model. To this end, we propose
two approaches for
generating a backdoor that is hardly perceptible yet effective in poisoning the
model. We consider two attack settings, with backdoor injection carried out
either before model training or during model updating. We carry out extensive
experimental evaluations under various assumptions about the adversary model, and
demonstrate that such attacks are effective, achieving a high attack success
rate (above $90\%$) at a small cost in model accuracy (a loss below $1\%$) and
with a small injection rate (around $1\%$), even under the weakest assumption,
wherein the adversary has no knowledge of either the original training data or
the classifier model.