Publication:
Biasing federated learning with a new adversarial graph attention network

Organizational Units

Program

KU-Authors

Akan, Özgür Barış

Co-Authors

Li K., Zheng J., Ni W., Huang H., Lio P., Dressler F.

Advisor

Publication Date

Language

Journal Title

Journal ISSN

Volume Title

Abstract

Fairness in Federated Learning (FL) is imperative not only for the ethical use of technology but also for ensuring that models deliver accurate, equitable, and beneficial outcomes across varied user demographics and devices. This paper proposes a new adversarial architecture, referred to as the Adversarial Graph Attention Network (AGAT), which deliberately launches fairness attacks to bias the learning process across FL. AGAT is designed to synthesize malicious, biasing model updates that maximize the minimum Kullback-Leibler (KL) divergence between a user's model update and the global model. Because only a limited set of labeled input-output biasing data samples is available, a surrogate model is created to capture the behavior of a complex malicious model update. Moreover, a graph autoencoder (GAE) is designed within the AGAT architecture and trained jointly with subgradient descent to manipulatively reconstruct the correlations among model updates, maximizing the reconstruction loss while keeping the malicious, biasing model updates undetectable. The proposed AGAT attack is implemented in PyTorch; experiments show that AGAT increases the minimum KL divergence of benign model updates by 60.9% and bypasses the detection of existing defense models. The source code of the AGAT attack is released on GitHub. © 2002-2012 IEEE.
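The attack objective in the abstract centers on the minimum KL divergence between user model updates and the global model. The following is a minimal sketch of that quantity, not the paper's implementation: it assumes raw update vectors can be mapped to discrete distributions via a softmax, and all data and helper names (`to_dist`, `benign`, `global_update`) are hypothetical.

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    # KL(p || q) for discrete probability distributions
    p = np.clip(p, eps, None)
    q = np.clip(q, eps, None)
    return float(np.sum(p * np.log(p / q)))

def to_dist(update):
    # Softmax maps a raw model-update vector to a distribution
    # (an illustrative assumption, not the paper's construction)
    e = np.exp(update - update.max())
    return e / e.sum()

# Hypothetical benign user updates and their aggregate as the global update
rng = np.random.default_rng(0)
benign = [rng.normal(size=8) for _ in range(5)]
global_update = np.mean(benign, axis=0)

g = to_dist(global_update)
# The quantity AGAT seeks to maximize: the minimum KL divergence
# between any benign user's update and the global model
min_kl = min(kl_divergence(to_dist(u), g) for u in benign)
print(f"minimum KL over benign updates: {min_kl:.4f}")
```

An attacker crafting a biasing update would push this minimum upward while keeping its own update statistically close enough to the benign ones to evade detection, which is the tension the GAE-based reconstruction loss is described as exploiting.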

Source:

IEEE Transactions on Mobile Computing

Publisher:

Institute of Electrical and Electronics Engineers Inc.

Keywords:

Subject

Electrical and electronics engineering

Citation

Endorsement

Review

Supplemented By

Referenced By

Copyrights Note
