Defending Federated Learning Systems to Mitigate Model Poisoning Attacks
Abstract
Federated learning is a distributed machine learning methodology in which a server trains a global shared model from updates contributed by multiple clients, without access to each client's local training data or locally trained model. This invisibility of client details gives rise to several vulnerabilities. Among these, model poisoning attacks have the potential to seriously degrade a global model's security and performance, so it is critical to examine current strategies and offer directions for future research. An experimental study of existing model poisoning attacks such as Fang and LIE confirms that they cause a severe drop in global model accuracy, with the magnitude of the drop varying across datasets. While current defense strategies aim to reduce the impact of model poisoning attacks on the global model, they cannot prevent them entirely. Experiments are also conducted with the FLTrust defense method, including variations of its standard algorithm settings.
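As a rough illustration of the FLTrust-style aggregation evaluated in this study, the sketch below shows the published rule in which the server scores each client update by its ReLU-clipped cosine similarity to an update computed on a small, clean server-side root dataset, rescales client updates to the server update's magnitude, and takes a trust-weighted average. It is a minimal sketch, not the paper's exact implementation: the function name `fltrust_aggregate` is hypothetical, and it assumes model updates are already flattened into NumPy float arrays.

```python
import numpy as np

def fltrust_aggregate(client_updates, server_update):
    """Aggregate client updates with FLTrust-style trust scores (sketch).

    client_updates: list of 1-D float numpy arrays (flattened updates)
    server_update:  1-D float numpy array computed on the server's
                    small, clean root dataset
    """
    g0 = server_update
    g0_norm = np.linalg.norm(g0)
    weighted_sum = np.zeros_like(g0)
    trust_total = 0.0
    for g in client_updates:
        g_norm = np.linalg.norm(g)
        if g_norm == 0.0:
            continue  # skip degenerate all-zero updates
        # Trust score: ReLU of cosine similarity with the server update,
        # so updates pointing away from the server direction get weight 0.
        ts = max(0.0, float(g @ g0) / (g_norm * g0_norm))
        # Rescale each client update to the server update's magnitude
        # to blunt magnitude-based poisoning.
        weighted_sum += ts * (g0_norm / g_norm) * g
        trust_total += ts
    if trust_total == 0.0:
        return g0  # no client trusted: fall back to the server update
    return weighted_sum / trust_total
```

Under this rule, a poisoned update that reverses the descent direction receives a trust score of zero, while magnitude-inflated updates are normalized before averaging, which is why FLTrust mitigates, but does not fully prevent, the accuracy drop reported above.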
Article Details

This work is licensed under a Creative Commons Attribution 4.0 International License.