LearnDefend: Learning to Defend against Backdoor Attacks on Federated Learning @AIMLSystems Doctoral Symposium 2022

Abstract

Federated Learning has emerged as an important paradigm for training Machine Learning (ML) models. The key idea is that many clients own the data needed to train their local ML models, and share the local models with a master, which in turn shares the aggregated global model back with each of the clients. The federated averaging algorithm has been a mainstay of federated learning, due to its effectiveness, simplicity, and privacy-preserving properties. However, federated learning systems have been shown to be particularly vulnerable to model-poisoning attacks by one or more clients. Two particular properties of modern model-poisoning attacks make them virtually undetectable. Firstly, model-replacement attacks can offset the “correct” models contributed by many other clients, even under bounded-deviation constraints. Secondly, edge-case backdoor attacks can manifest themselves on a very small subset of the input feature space. These factors led Wang et al. to conclude that no fixed defense rule can stop backdoor attacks on federated learning systems.
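To make the model-replacement vulnerability concrete, below is a minimal sketch (not the paper's code; all function names and the toy setup are illustrative assumptions) of vanilla federated averaging and of a malicious update scaled so that it cancels the averaging weight and drags the aggregate to an attacker-chosen model, even when all other clients behave honestly.

```python
# Minimal sketch of FedAvg and a model-replacement attack.
# Illustrative only: names, shapes, and the toy data are assumptions,
# not the paper's implementation.
import numpy as np

def fed_avg(client_ws, client_sizes):
    """Weighted average of client models, as in vanilla FedAvg."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_ws, client_sizes))

def model_replacement_update(global_w, target_w, n_clients):
    """Malicious update scaled so that, after averaging with roughly
    n_clients - 1 honest updates that stay close to global_w, the
    aggregate lands near target_w (the attacker's backdoored model)."""
    # Scaling by n_clients cancels the 1/n_clients averaging weight.
    return n_clients * (target_w - global_w) + global_w

# Toy demonstration with 4-dimensional "models".
rng = np.random.default_rng(0)
global_w = np.zeros(4)
honest = [global_w + 0.01 * rng.standard_normal(4) for _ in range(9)]
backdoored_target = np.full(4, 5.0)  # attacker's desired global model
malicious = model_replacement_update(global_w, backdoored_target, n_clients=10)

new_global = fed_avg(honest + [malicious], client_sizes=[1] * 10)
print(new_global)  # close to backdoored_target despite 9 honest clients
```

A norm-clipping defense would bound the size of the malicious update, but as the abstract notes, edge-case backdoors can succeed even under such bounded-deviation constraints because they alter behavior only on a tiny region of the input space.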

Date
Oct 14, 2022 12:00 AM