Federated learning is a branch of machine learning in which a collection of agent nodes collaboratively trains a model using a large data set partitioned among the nodes. It has widespread applications, including signal detection, channel estimation, and collaborative beamforming in wireless communications; content caching in wireless networks; augmented and virtual reality; and image and video processing. Each node locally computes an update to the model using its locally available data, and only these local computation results are shared over the network. This limits the amount of raw data exchanged while learning the global model.
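The local-computation-and-aggregation loop described above can be sketched in a few lines. The following is a minimal illustration, not part of the project: it assumes a linear regression model, a handful of synthetic data shards standing in for the nodes' private data sets, and federated averaging (weighted by shard size) as the aggregation rule; all function names and parameters are hypothetical.

```python
import numpy as np

def local_update(w, X, y, lr=0.1, steps=5):
    """One node's local computation: a few gradient-descent steps
    for linear regression on its private data shard."""
    for _ in range(steps):
        grad = X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

def federated_round(w_global, shards):
    """Each node trains locally; only model parameters (never the
    raw data) are shared and averaged, weighted by shard size."""
    updates = [local_update(w_global.copy(), X, y) for X, y in shards]
    sizes = np.array([len(y) for _, y in shards], dtype=float)
    return np.average(updates, axis=0, weights=sizes)

# Synthetic data partitioned across 4 nodes
rng = np.random.default_rng(0)
w_true = np.array([2.0, -1.0])
shards = []
for _ in range(4):
    X = rng.normal(size=(50, 2))
    shards.append((X, X @ w_true))

w = np.zeros(2)
for _ in range(30):
    w = federated_round(w, shards)
```

After enough rounds the averaged model approaches the model that would have been learnt on the pooled data, even though no node ever transmits its data set.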
However, in many applications, this message exchange can severely compromise the privacy of the nodes. For example, sharing local computation results in a collaborative wireless channel estimation problem might reveal a significant amount of channel information, which an attacker could exploit to launch a denial-of-service attack. In a network of autonomous vehicles, navigation decisions might be based on environment or object classification. Such a classifier can be trained globally using images taken by vehicle cameras, but sharing even a local summary of the images risks compromising the location privacy of individual vehicles. One way to alleviate this problem is message obfuscation, but obfuscation degrades the accuracy of the model being learnt. Hence, it is important to characterize the fundamental trade-off among model accuracy, convergence speed, and user privacy in federated learning, and also to design schemes that achieve a good compromise among all three. This project seeks to carry out an in-depth theoretical study of these problems, and also to apply the results to potential use cases.
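The accuracy cost of message obfuscation mentioned above can be demonstrated concretely. The sketch below is illustrative only, with hypothetical names and parameters: each node perturbs its shared update with additive Gaussian noise (a simple obfuscation mechanism in the spirit of differential privacy) before transmission, and the final model error is measured as a function of the noise scale `sigma`.

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_round(w_global, shards, sigma):
    """One federated round in which each node adds Gaussian noise of
    scale sigma to its local update before sharing it: larger sigma
    means stronger obfuscation but a less accurate averaged model."""
    updates = []
    for X, y in shards:
        w = w_global.copy()
        for _ in range(5):  # local gradient-descent steps
            w -= 0.1 * X.T @ (X @ w - y) / len(y)
        updates.append(w + rng.normal(scale=sigma, size=w.shape))
    return np.mean(updates, axis=0)

# Synthetic data partitioned across 4 nodes
w_true = np.array([2.0, -1.0])
shards = []
for _ in range(4):
    X = rng.normal(size=(50, 2))
    shards.append((X, X @ w_true))

def final_error(sigma, rounds=30):
    w = np.zeros(2)
    for _ in range(rounds):
        w = noisy_round(w, shards, sigma)
    return np.linalg.norm(w - w_true)

# Error of the learnt model grows with the obfuscation level
errors = {s: final_error(s) for s in (0.0, 0.1, 1.0)}
```

Quantifying exactly how this accuracy loss, the convergence speed, and the privacy guarantee trade off against one another is the kind of question the project aims to answer theoretically.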
- Study of privacy trade-offs in federated learning
- Privacy-preserving schemes
- Publications in reputed journals and conferences
- Patents, if applicable
- Tentative: application for external grants