We study the privacy implications of training recurrent neural networks
(RNNs) with sensitive training datasets. Considering membership inference
attacks (MIAs), which aim to infer whether or not specific data records have
been used in training a given machine learning model, we provide empirical
evidence that a neural network's architecture impacts its vulnerability to
MIAs. In particular, we demonstrate that RNNs are subject to higher attack
accuracy than their feed-forward neural network (FFNN) counterparts. Additionally, we
study the effectiveness of two prominent mitigation methods for preempting
MIAs, namely weight regularization and differential privacy. For the former, we
empirically demonstrate that RNNs may benefit from weight regularization
only marginally, in contrast to FFNNs. For the latter, we find that enforcing
differential privacy through either of the following two methods leads to a
less favorable privacy-utility trade-off in RNNs than in alternative FFNNs: (i)
adding Gaussian noise to the gradients calculated during training as a part of
the so-called DP-SGD algorithm and (ii) adding Gaussian noise to the trainable
parameters as a part of a post-training mechanism that we propose. As a result,
RNNs can also be less amenable to mitigation methods, bringing us to the
conclusion that the privacy risks pertaining to the recurrent architecture are
higher than those of its feed-forward counterparts.
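
To make the two noise-injection mechanisms concrete, the following is a minimal sketch, assuming PyTorch; the toy RNN, clipping threshold, noise multiplier, and noise scale are illustrative placeholders and not the exact implementation evaluated in the paper. Full DP-SGD clips per-example gradients; for brevity this sketch clips the mini-batch gradient before adding Gaussian noise.

```python
import torch
import torch.nn as nn

# Hypothetical toy RNN; architecture and hyperparameters are illustrative only.
model = nn.RNN(input_size=8, hidden_size=16, batch_first=True)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
clip_norm, noise_multiplier = 1.0, 1.1  # placeholder DP parameters

def noisy_training_step(x, y_true):
    """(i) One gradient step with clipping and Gaussian noise (DP-SGD-style).
    Note: proper DP-SGD clips each per-example gradient; here the mini-batch
    gradient is clipped to keep the sketch short."""
    optimizer.zero_grad()
    output, _ = model(x)
    loss = ((output - y_true) ** 2).mean()
    loss.backward()
    torch.nn.utils.clip_grad_norm_(model.parameters(), clip_norm)
    for p in model.parameters():
        if p.grad is not None:
            p.grad += torch.randn_like(p.grad) * noise_multiplier * clip_norm
    optimizer.step()

def add_posttraining_noise(sigma=0.05):
    """(ii) Post-training mechanism: perturb the trained parameters with Gaussian noise."""
    with torch.no_grad():
        for p in model.parameters():
            p.add_(torch.randn_like(p) * sigma)
```

In both cases, larger noise strengthens the privacy guarantee but degrades model utility, which is the privacy-utility trade-off compared across the RNN and FFNN architectures.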