Neural machine translation (NMT) systems have been shown to produce undesirable
translations when a small change is made to the source sentence. In this paper,
we study the behaviour of NMT systems when multiple changes are made to the
source sentence. In particular, we ask the following question: "Is it possible
for an NMT system to predict the same translation even when multiple words in the
source sentence have been replaced?" To this end, we propose a soft-attention-based
technique to make such word replacements. The experiments
are conducted on two language pairs, English-German (en-de) and English-French
(en-fr), and two state-of-the-art NMT architectures: a BLSTM-based encoder-decoder
with attention and the Transformer. The proposed soft-attention-based technique achieves
a high success rate and outperforms existing methods such as HotFlip by a
significant margin in all conducted experiments. The results demonstrate
that state-of-the-art NMT systems are unable to capture the semantics of the
source language. The proposed soft-attention-based technique constitutes an
invariance-based adversarial attack on NMT systems. To better evaluate such
attacks, we propose an alternative metric and argue for its benefits in comparison
with success rate.