Federated learning (FL) is a recently developed area of machine learning in
which the private data of a large number of distributed clients is used to
train a global model under the coordination of a central server, without the
data ever being explicitly exposed. The standard FL strategy suffers from
significant bottlenecks, including a large communication overhead and heavy
demands on the clients' computational resources. Several strategies have been
proposed in the literature to address these issues. In this paper, a novel scheme based
on the notion of "model growing" is proposed. Initially, the server deploys a
small, low-complexity model, which is trained to fit the data
during the initial set of rounds. When the performance of this model
saturates, the server switches to a larger model with the help of
function-preserving transformations. The model complexity increases as more
data is processed by the clients, and the overall process continues until the
desired performance is achieved. The most complex model is therefore broadcast
only at the final stage of our approach, resulting in a substantial reduction
in both communication cost and client computational requirements. The proposed
approach is evaluated extensively on three standard benchmarks and is shown to
substantially reduce communication and client computation while achieving
accuracy comparable to that of the current most effective strategies.
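To make the growing step concrete, the following is a minimal sketch of one
Net2Net-style function-preserving widening of a single hidden layer, written in
plain NumPy. It is an illustration of the general technique, not necessarily
the exact transformation used in the paper; the function name `widen` and the
random unit-replication scheme are assumptions for this example.

```python
# A minimal sketch of a function-preserving widening step (Net2Net-style).
# Replicated hidden units are chosen at random, and their outgoing weights
# are divided by the replication count so the network output is unchanged.
import numpy as np

rng = np.random.default_rng(0)

def widen(W1, b1, W2, new_width):
    """Widen the hidden layer from W1.shape[1] to new_width units,
    preserving the function f(x) = relu(x @ W1 + b1) @ W2."""
    old_width = W1.shape[1]
    # Map each new unit to an existing one; the first old_width units
    # map to themselves, the extra units copy random existing units.
    mapping = np.concatenate([np.arange(old_width),
                              rng.integers(0, old_width, new_width - old_width)])
    counts = np.bincount(mapping, minlength=old_width)  # copies per old unit
    W1_new = W1[:, mapping]           # duplicate incoming weights
    b1_new = b1[mapping]              # duplicate biases
    # Divide outgoing weights by the copy count so contributions sum unchanged.
    W2_new = W2[mapping, :] / counts[mapping][:, None]
    return W1_new, b1_new, W2_new

# Sanity check: the outputs agree before and after growing.
x = rng.normal(size=(4, 8))
W1, b1, W2 = rng.normal(size=(8, 5)), rng.normal(size=5), rng.normal(size=(5, 3))
W1n, b1n, W2n = widen(W1, b1, W2, new_width=12)
before = np.maximum(x @ W1 + b1, 0) @ W2
after = np.maximum(x @ W1n + b1n, 0) @ W2n
assert np.allclose(before, after)
```

Because the widened model computes exactly the same function as its smaller
predecessor, training can resume from the grown weights without losing the
progress made in earlier rounds.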