AI Security Portal — K Program
Differentially Private Post-Processing for Fair Regression
Abstract
This paper describes a differentially private post-processing algorithm for learning fair regressors that satisfy statistical parity, addressing the privacy concerns of machine learning models trained on sensitive data as well as the fairness concern that such models may propagate historical biases. The algorithm can post-process any given regressor to improve fairness by remapping its outputs. It consists of three steps: first, the output distributions are estimated privately via histogram density estimation and the Laplace mechanism; then their Wasserstein barycenter is computed; and finally the optimal transports to the barycenter are used to post-process the outputs so that fairness is satisfied. We analyze the sample complexity of the algorithm and provide a fairness guarantee, revealing a trade-off between the statistical bias and variance induced by the choice of the number of histogram bins, in which using fewer bins always favors fairness at the expense of error.
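The three steps above can be sketched in a minimal form for one-dimensional scores in [0, 1]: noisy histogram estimation, the closed-form 1-D Wasserstein-2 barycenter (a weighted average of quantile functions), and the transport map T(x) = Q_bar(F_group(x)). This is an illustrative sketch, not the paper's implementation; the function names, the Laplace scale 1/ε (which assumes one sample changes one bin count by at most 1), and the demo parameters are all assumptions.

```python
import numpy as np

def private_histogram(scores, n_bins, eps, rng):
    # Step 1: histogram density estimate of the regressor's outputs,
    # with Laplace noise added to each bin count (assumed scale 1/eps).
    counts, edges = np.histogram(scores, bins=n_bins, range=(0.0, 1.0))
    noisy = counts + rng.laplace(scale=1.0 / eps, size=counts.shape)
    noisy = np.clip(noisy, 0.0, None)  # project back onto nonnegative counts
    return noisy / noisy.sum(), edges

def barycenter_quantile(dists, edges, weights, grid):
    # Step 2: in 1-D, the Wasserstein-2 barycenter's quantile function is
    # the weighted average of the groups' quantile functions.
    centers = 0.5 * (edges[:-1] + edges[1:])
    quantiles = [np.interp(grid, np.cumsum(p), centers) for p in dists]
    return np.average(np.stack(quantiles), axis=0, weights=weights)

def remap(scores, p, edges, q_bar, grid):
    # Step 3: optimal transport to the barycenter, T(x) = Q_bar(F_group(x)).
    centers = 0.5 * (edges[:-1] + edges[1:])
    u = np.interp(scores, centers, np.cumsum(p))  # F_group(x)
    return np.interp(u, grid, q_bar)              # Q_bar(F_group(x))

# Demo: two groups whose raw scores differ in mean.
rng = np.random.default_rng(0)
scores_a = np.clip(rng.normal(0.3, 0.1, 5000), 0.0, 1.0)
scores_b = np.clip(rng.normal(0.7, 0.1, 5000), 0.0, 1.0)
p_a, edges = private_histogram(scores_a, n_bins=20, eps=1.0, rng=rng)
p_b, _ = private_histogram(scores_b, n_bins=20, eps=1.0, rng=rng)
grid = np.linspace(0.0, 1.0, 200)
q_bar = barycenter_quantile([p_a, p_b], edges, weights=[0.5, 0.5], grid=grid)
fair_a = remap(scores_a, p_a, edges, q_bar, grid)
fair_b = remap(scores_b, p_b, edges, q_bar, grid)
```

After remapping, both groups' outputs follow (approximately) the same barycenter distribution, which is what statistical parity requires; the bias/variance trade-off in the paper corresponds to the choice of `n_bins` here.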