With increased attention to, and legislation on, data privacy, collaborative
machine learning (ML) algorithms are being developed to ensure the protection
of the private data used for training. Federated learning (FL) is the most
popular of these methods: it preserves privacy by facilitating collaborative
training of a shared model without requiring any private data to be exchanged
with a centralized server. Rather, an abstraction of the data in the form of a
machine learning model update is sent. Recent studies have shown that such
model updates may still leak private information, and thus a more structured
risk assessment is needed. In this paper, we analyze existing vulnerabilities
of FL and subsequently perform a literature review of possible attack methods
targeting FL's privacy protection capabilities. These attack methods are then
organized into a basic taxonomy. Additionally, we provide a literature study
of the most recent defensive strategies and algorithms for FL designed to
counter these attacks. These defensive strategies are categorized by their
respective underlying defence principle. The paper concludes that the
application of a single defensive strategy is not enough to provide adequate
protection against all available attack methods.
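The federated training scheme summarized above, in which clients send model updates rather than raw data, can be sketched as a minimal federated-averaging loop. This is an illustrative sketch only: the linear model, client data, learning rate, and function names are assumptions for the example and are not drawn from the surveyed methods.

```python
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One client's local training: plain gradient descent on a
    linear least-squares loss (a stand-in for any trainable model)."""
    w = weights.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)  # gradient of mean squared error
        w -= lr * grad
    return w

def fed_avg(global_w, client_data):
    """Server step: average client models weighted by dataset size.
    Only the weight vectors travel; raw (X, y) never leaves a client."""
    total = sum(len(y) for _, y in client_data)
    updates = [local_update(global_w, X, y) * (len(y) / total)
               for X, y in client_data]
    return np.sum(updates, axis=0)

# Two clients hold disjoint private samples of the relation y = 2x.
rng = np.random.default_rng(0)
clients = []
for _ in range(2):
    X = rng.normal(size=(50, 1))
    y = 2.0 * X[:, 0]
    clients.append((X, y))

# Several communication rounds of federated averaging.
w = np.zeros(1)
for _ in range(20):
    w = fed_avg(w, clients)
print(w)  # converges toward the true coefficient, ~2.0
```

Note that the server only ever sees the averaged weight vector, which is exactly the kind of "abstraction of the data" that the attacks surveyed in this paper attempt to invert.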