Recent work has explored how to train machine learning models that do not
discriminate against any subgroup of the population as determined by sensitive
attributes such as gender or race. To avoid disparate treatment, sensitive
attributes should not be considered. On the other hand, to avoid disparate
impact, sensitive attributes must be examined, e.g., to learn a fair model or
to check whether a given model is fair. We introduce methods from secure
multi-party computation that allow us to avoid both disparate treatment and
disparate impact. By encrypting
sensitive attributes, we show how an outcome-based fair model can be learned
and checked, and how its outputs can be verified and held to account, all
without users revealing their sensitive attributes.
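
As a rough illustration of the underlying idea (a minimal sketch, not the
protocol developed here), the Python example below uses additive secret
sharing, one of the basic primitives of secure multi-party computation, to
compute an aggregate fairness statistic without any single party observing an
individual's sensitive attribute. The two-party setup, the data, and all
names are hypothetical.

    import secrets

    P = 2**61 - 1  # large prime modulus for share arithmetic (illustrative)

    def share(x):
        # Split secret x into two additive shares: s1 + s2 = x (mod P).
        s1 = secrets.randbelow(P)
        return s1, (x - s1) % P

    def reconstruct(s1, s2):
        # Recombine shares; done here only for aggregate values.
        return (s1 + s2) % P

    # Hypothetical data: each user's sensitive attribute a_i in {0, 1} is
    # secret-shared between two non-colluding parties, so neither party
    # ever sees an individual a_i in the clear.
    attributes = [1, 0, 1, 1, 0]   # sensitive attributes (never revealed)
    outcomes   = [1, 0, 0, 1, 1]   # the model's decisions (public here)

    party1, party2 = zip(*(share(a) for a in attributes))

    # Dot products are linear, so each party computes its half of
    # sum_i y_i * a_i locally on its shares; only the sum is opened.
    agg1 = sum(y * s for y, s in zip(outcomes, party1)) % P
    agg2 = sum(y * s for y, s in zip(outcomes, party2)) % P

    favorable_in_group = reconstruct(agg1, agg2)
    print(favorable_in_group)  # 2: an input to a demographic parity check

Opening only the aggregate mirrors the separation drawn above: the sensitive
attributes are examined, as fairness checks require, yet never revealed.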