With the increasing use of deep learning algorithms in many applications, new
research questions related to privacy and adversarial attacks are emerging.
However, improving deep learning algorithms requires ever more data to be
shared within the research community. Methodologies such as federated learning,
differential privacy, and additive secret sharing provide a way to train
machine learning models at the edge without moving data off the device.
However, these approaches are computationally intensive and remain prone to
adversarial attacks. Therefore, this work introduces FedCollabNN, a
privacy-preserving framework for training machine learning models at the edge
that is computationally efficient and robust against adversarial attacks.
Simulation results on the MNIST dataset indicate the effectiveness of the
framework.