Differentially private models seek to protect the privacy of the data they are
trained on, making differential privacy an important component of model security and privacy.
At the same time, data scientists and machine learning engineers seek to use
uncertainty quantification methods to ensure models are as useful and
actionable as possible. We explore the tension between uncertainty
quantification via dropout and privacy by conducting membership inference
attacks against models with and without differential privacy. We find that
large dropout rates slightly increase a model's risk of succumbing to
membership inference attacks in all cases, including in differentially private
models.
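
As a concrete illustration of the attack setting only, the sketch below shows a minimal loss-threshold membership inference attack against a generic classifier. It is not the paper's exact procedure: the dataset, model, dropout settings, and differentially private training are placeholders, and the attack simply scores examples by per-example loss and measures how well that score separates training members from non-members.

```python
# Minimal loss-threshold membership inference attack (illustration only).
# Assumes a generic classifier; the paper's models, dropout configurations,
# and DP-SGD training are not reproduced here.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import roc_auc_score

# Synthetic stand-in data; real experiments would use the paper's datasets.
X, y = make_classification(n_samples=4000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.5, random_state=0
)

model = MLPClassifier(hidden_layer_sizes=(64,), max_iter=300, random_state=0)
model.fit(X_train, y_train)

def per_example_loss(model, X, y):
    """Per-example cross-entropy; lower loss suggests a training member."""
    probs = model.predict_proba(X)
    return -np.log(np.clip(probs[np.arange(len(y)), y], 1e-12, None))

loss_members = per_example_loss(model, X_train, y_train)    # members
loss_nonmembers = per_example_loss(model, X_test, y_test)   # non-members

# Use negative loss as the membership score; AUC > 0.5 indicates leakage.
scores = np.concatenate([-loss_members, -loss_nonmembers])
labels = np.concatenate(
    [np.ones_like(loss_members), np.zeros_like(loss_nonmembers)]
)
print("membership inference AUC:", roc_auc_score(labels, scores))
```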