The application of machine learning (ML) in computer systems introduces not
only benefits but also risks to society. In this paper, we develop the
concept of ML governance to balance such benefits and risks, with the aim of
achieving responsible applications of ML. Our approach first systematizes
research towards ascertaining ownership of data and models, thus fostering a
notion of identity specific to ML systems. Building on this foundation, we use
identities to hold principals accountable for failures of ML systems through
both attribution and auditing. To increase trust in ML systems, we then survey
techniques for developing assurance, i.e., confidence that the system meets its
security requirements and does not exhibit certain known failures. This leads
us to highlight the need for techniques that allow a model owner to manage the
life cycle of their system, e.g., to patch or retire it. Taken together, our
systematization of knowledge standardizes the interactions between principals
involved in the deployment of ML throughout its life cycle.
We highlight opportunities for future work, e.g., to formalize the resulting
game between ML principals.