Personalized Federated Learning (pFL) holds immense promise for tailoring
machine learning models to individual users while preserving data privacy.
However, achieving strong performance in pFL often requires carefully
balancing memory overhead against model accuracy. This paper
examines the trade-offs inherent in pFL and offers practical guidance for
selecting suitable algorithms for diverse real-world scenarios. We empirically
evaluate ten prominent pFL techniques across various datasets and data splits,
uncovering significant differences in their performance. Our study reveals
that pFL methods employing personalized (local) aggregation converge
fastest, owing to their efficiency in communication and computation.
Conversely, fine-tuning methods face limitations in handling data
heterogeneity and potential adversarial attacks, while multi-objective
learning methods achieve higher accuracy at the cost of additional training
and resource consumption. Our findings further underscore the critical
role of communication efficiency in scaling pFL, demonstrating how it can
significantly affect resource usage in real-world deployments.