Publication:
Data-agnostic model poisoning against federated learning: a graph autoencoder approach

Co-Authors

Li, Kai
Zheng, Jingjing
Yuan, Xin
Ni, Wei
Poor, H. Vincent

Publication Date

2024

Abstract

This paper proposes a novel, data-agnostic model poisoning attack on Federated Learning (FL) built on a new adversarial graph autoencoder (GAE)-based framework. The attack requires no knowledge of the FL training data and achieves both effectiveness and undetectability. By listening to the benign local models and the global model, the attacker extracts the graph structural correlations among the benign local models and the training data features substantiating those models. The attacker then adversarially regenerates the graph structural correlations while maximizing the FL training loss, and subsequently generates malicious local models using the adversarial graph structure and the training data features of the benign ones. A new algorithm is designed to iteratively train the malicious local models using the GAE and sub-gradient descent. The convergence of FL under attack is rigorously proved, with a considerably large optimality gap. Experiments show that FL accuracy drops gradually under the proposed attack and that existing defense mechanisms fail to detect it. The attack can give rise to an infection across all benign devices, making it a serious threat to FL.
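The attack pipeline outlined in the abstract (extract the structural correlations among benign local models, then regenerate them adversarially while driving up the training loss) can be illustrated with a toy stand-in. The PyTorch sketch below is a loose illustration under stated assumptions, not the paper's algorithm: it replaces the adversarial GAE with a fixed cosine-similarity graph target, uses distance from the benign consensus as a proxy for the FL training loss, and all names (similarity_graph, craft_malicious_model) are hypothetical.

import torch

def similarity_graph(models: torch.Tensor) -> torch.Tensor:
    # Pairwise cosine similarities among flattened local-model vectors,
    # used here as a crude stand-in for the paper's graph structure.
    unit = torch.nn.functional.normalize(models, dim=1)
    return unit @ unit.T

def craft_malicious_model(benign: torch.Tensor, steps: int = 300,
                          lr: float = 0.01, lam: float = 10.0) -> torch.Tensor:
    # Sub-gradient steps on a proxy objective: move away from the benign
    # consensus (a stand-in for maximizing the FL training loss) while
    # keeping the crafted model's similarity profile and norm close to a
    # genuine client's, so the update does not stand out structurally.
    target_row = similarity_graph(benign)[0]    # mimic client 0's profile
    ref_norm = benign[0].norm()
    mean = benign.mean(dim=0)
    x = benign[0].clone().requires_grad_(True)  # start from a benign model
    opt = torch.optim.SGD([x], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        sims = torch.nn.functional.normalize(benign, dim=1) \
               @ torch.nn.functional.normalize(x, dim=0)
        attack = -torch.sum((x - mean) ** 2)               # raise proxy loss
        stealth = torch.sum((sims - target_row) ** 2) \
                  + (x.norm() - ref_norm) ** 2             # stay inconspicuous
        (attack + lam * stealth).backward()
        opt.step()
    return x.detach()

# Toy usage: five benign "local models" of dimension 20, one crafted poison.
torch.manual_seed(0)
benign = torch.randn(5, 20)
poison = craft_malicious_model(benign)
print(similarity_graph(torch.cat([benign, poison.unsqueeze(0)])))

Because cosine similarity is scale-invariant, the sketch also pins the crafted model's norm to a benign one; without that anchor, the attack term would simply inflate the update's magnitude, which norm-based defenses would flag.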

Publisher

Institute of Electrical and Electronics Engineers (IEEE)

Subject

Learning systems, Data privacy, Internet of things

Source

IEEE Transactions on Information Forensics and Security

DOI

10.1109/TIFS.2024.3362147
