Abstract
Differentially private models protect the privacy of the data they are trained on, making differential privacy an important component of model security and privacy. At the same time, data scientists and machine learning engineers use uncertainty quantification methods to make models as useful and actionable as possible. We explore the tension between uncertainty quantification via dropout and privacy by conducting membership inference attacks against models trained with and without differential privacy. We find that large dropout rates slightly increase a model's susceptibility to membership inference attacks in all cases, including in differentially private models.
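To make the two techniques the abstract combines concrete, here is a minimal sketch, not from the paper, of Monte Carlo dropout for uncertainty quantification and a simple loss-threshold membership inference attack in PyTorch. The model architecture, dropout rate, sample count, and threshold rule are all illustrative assumptions rather than the authors' setup.

```python
# Sketch of the two ingredients discussed in the abstract:
# (1) MC dropout for uncertainty quantification,
# (2) a loss-threshold membership inference attack.
# All hyperparameters here are hypothetical choices for illustration.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DropoutMLP(nn.Module):
    def __init__(self, in_dim=20, hidden=64, classes=2, p=0.5):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Dropout(p),  # larger p gives a stronger stochastic signal
            nn.Linear(hidden, classes),
        )

    def forward(self, x):
        return self.net(x)

def mc_dropout_predict(model, x, n_samples=30):
    """Average softmax outputs over stochastic forward passes with
    dropout left on; the variance serves as an uncertainty estimate."""
    model.train()  # keep dropout active at inference time
    with torch.no_grad():
        probs = torch.stack([F.softmax(model(x), dim=-1)
                             for _ in range(n_samples)])
    return probs.mean(dim=0), probs.var(dim=0)

def loss_threshold_attack(model, x, y, threshold):
    """Guess 'member' when the per-example loss falls below a threshold,
    exploiting the fact that models fit training points more tightly."""
    model.eval()
    with torch.no_grad():
        losses = F.cross_entropy(model(x), y, reduction="none")
    return losses < threshold  # True = predicted training-set member
```

In an experiment like the one the abstract describes, one would train such a model with and without a differentially private optimizer (e.g., DP-SGD as provided by a library such as Opacus), then compare the attack's accuracy on held-out members versus non-members across dropout rates.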