Abstract
Machine learning (ML) has become prominent in applications that directly
affect people's quality of life, including in healthcare, justice, and finance.
ML models have been found to exhibit discrimination based on sensitive
attributes such as gender, race, or disability. Assessing whether an ML model is
free of bias remains challenging, and by definition it must be done with
sensitive user characteristics that are subject to anti-discrimination and data
protection law. Existing libraries for fairness auditing of ML models offer no
mechanism to protect the privacy of the audit data. We present PrivFair, a
library for privacy-preserving fairness audits of ML models. Through the use of
Secure Multiparty Computation (MPC), PrivFair protects the confidentiality of
the model under audit and the sensitive data used for the audit; hence, it
supports scenarios in which a proprietary classifier owned by a company is
audited using sensitive audit data from an external investigator. We
demonstrate the use of PrivFair for group fairness auditing with tabular data
or image data, without requiring the investigator to disclose their data to
anyone in an unencrypted manner, or the model owner to reveal their model
parameters to anyone in plaintext.
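
To make the notion of a group fairness audit concrete, the following is a minimal plaintext sketch of the kind of statistic such an audit computes: the demographic parity difference, i.e., the gap in positive-prediction rates between two groups defined by a sensitive attribute. The function name and the toy data are illustrative assumptions, not part of PrivFair's API; in PrivFair the same statistic would be evaluated under MPC, so neither the model parameters nor the audit data appear in the clear.

```python
# Hypothetical plaintext sketch of a group-fairness statistic.
# PrivFair evaluates statistics like this under Secure Multiparty
# Computation (MPC); this unencrypted version only illustrates the metric.

def demographic_parity_difference(predictions, groups):
    """Absolute gap in positive-prediction rates between groups 0 and 1.

    predictions: binary model outputs (0 or 1) for each audit record
    groups: binary sensitive-attribute value (0 or 1) for each record
    """
    rates = {}
    for g in (0, 1):
        members = [p for p, grp in zip(predictions, groups) if grp == g]
        rates[g] = sum(members) / len(members)
    return abs(rates[0] - rates[1])

# Toy audit set: group 0 receives positive predictions at rate 0.75,
# group 1 at rate 0.25, so the demographic parity difference is 0.5.
preds = [1, 0, 1, 1, 0, 1, 0, 0]
group = [0, 0, 0, 0, 1, 1, 1, 1]
print(demographic_parity_difference(preds, group))  # → 0.5
```

A value near 0 suggests the classifier treats the two groups similarly on this metric; the point of PrivFair is that the investigator can learn this aggregate without revealing individual records, and the model owner without revealing model parameters.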