Publication:
Data-agnostic model poisoning against federated learning: a graph autoencoder approach

dc.contributor.coauthor: Li, Kai
dc.contributor.coauthor: Zheng, Jingjing
dc.contributor.coauthor: Yuan, Xin
dc.contributor.coauthor: Ni, Wei
dc.contributor.coauthor: Poor, H. Vincent
dc.contributor.department: Department of Electrical and Electronics Engineering
dc.contributor.kuauthor: Akan, Özgür Barış
dc.contributor.other: Department of Electrical and Electronics Engineering
dc.contributor.schoolcollegeinstitute: College of Engineering
dc.date.accessioned: 2024-12-29T09:37:54Z
dc.date.issued: 2024
dc.description.abstract: This paper proposes a novel data-agnostic model poisoning attack on federated learning (FL) by designing a new adversarial graph autoencoder (GAE)-based framework. The attack requires no knowledge of the FL training data and achieves both effectiveness and undetectability. By eavesdropping on the benign local models and the global model, the attacker extracts the graph-structural correlations among the benign local models and the training data features substantiating those models. The attacker then adversarially regenerates the graph-structural correlations while maximizing the FL training loss, and subsequently generates malicious local models from the adversarial graph structure and the training data features of the benign models. A new algorithm is designed to iteratively train the malicious local models using the GAE and sub-gradient descent. The convergence of FL under attack is rigorously proved, revealing a considerably large optimality gap. Experiments show that FL accuracy drops gradually under the proposed attack and that existing defense mechanisms fail to detect it. The attack can spread an infection across all benign devices, making it a serious threat to FL.
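
The abstract outlines a three-stage pipeline: infer a graph over the benign local models, adversarially regenerate that graph structure while maximizing the training loss, and synthesize a malicious model that still looks benign. The minimal NumPy sketch below illustrates one way such a pipeline could be wired together; the cosine-similarity graph, the one-layer GCN encoder with inner-product decoder, the surrogate loss gradient, and all names and hyper-parameters are assumptions for illustration, not the authors' implementation.

# Illustrative sketch only: a toy GAE-style poisoning loop. All names,
# hyper-parameters, and the surrogate loss are hypothetical stand-ins;
# this is not the paper's algorithm.
import numpy as np

rng = np.random.default_rng(0)

def cosine_adjacency(models):
    # Graph over clients: edge weight = cosine similarity of model vectors.
    unit = models / np.linalg.norm(models, axis=1, keepdims=True)
    return unit @ unit.T

def train_gae(adj, feats, dim=8, lr=1e-2, steps=200):
    # Minimal graph autoencoder: one-layer GCN encoder ReLU(A_hat X W),
    # inner-product decoder sigmoid(Z Z^T), trained on squared
    # reconstruction error with hand-written gradients.
    a_hat = adj / adj.sum(axis=1, keepdims=True)   # crude row normalization
    h = a_hat @ feats                              # propagated features
    w = rng.normal(scale=0.1, size=(feats.shape[1], dim))
    for _ in range(steps):
        pre = h @ w
        z = np.maximum(pre, 0.0)
        rec = 1.0 / (1.0 + np.exp(-(z @ z.T)))     # reconstructed adjacency
        g = 2.0 * (rec - adj) * rec * (1.0 - rec)  # dLoss / dlogits
        dz = (g + g.T) @ z                         # dLoss / dZ
        dz[pre <= 0.0] = 0.0                       # ReLU sub-gradient
        w -= lr * h.T @ dz
    return rec

def craft_malicious(benign, loss_grad, alpha=0.3, lr=0.05, steps=50):
    # Sub-gradient ascent on a surrogate of the FL training loss, blended
    # back toward the benign mean so the induced similarity graph stays
    # close to the benign structure (the undetectability constraint, loosely).
    mean = benign.mean(axis=0)
    m = mean.copy()
    for _ in range(steps):
        m += lr * loss_grad(m)                 # maximize the training loss
        m = (1.0 - alpha) * m + alpha * mean   # stay near the benign models
    return m

# Toy usage: 5 benign clients, 16-dim flattened models, and a stand-in
# loss gradient (ascending ||m||^2 / 2 in place of the real FL loss).
benign = rng.normal(size=(5, 16))
adj = cosine_adjacency(benign)
rec = train_gae(adj, benign)                   # learned graph structure
mal = craft_malicious(benign, loss_grad=lambda m: m)
print(np.round(cosine_adjacency(np.vstack([benign, mal]))[-1], 2))

The design choice the sketch gestures at is the trade-off the abstract describes: the ascent step pushes the malicious model toward higher training loss, while the blending step keeps its row of the similarity graph close to the benign pattern the GAE has learned, which is what makes the attack hard to detect.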
dc.description.indexedby: WoS
dc.description.indexedby: Scopus
dc.description.openaccess: All Open Access
dc.description.openaccess: Green Open Access
dc.description.publisherscope: International
dc.description.sponsors: Real-Time and Embedded Computing Systems Research Centre (CISTER) Research Unit
dc.description.volume: 19
dc.identifier.doi: 10.1109/TIFS.2024.3362147
dc.identifier.eissn: 1556-6021
dc.identifier.issn: 1556-6013
dc.identifier.quartile: Q1
dc.identifier.scopus: 2-s2.0-85184823206
dc.identifier.uri: https://doi.org/10.1109/TIFS.2024.3362147
dc.identifier.uri: https://hdl.handle.net/20.500.14288/22493
dc.identifier.wos: 1174295900013
dc.keywords: Feature correlation
dc.keywords: Federated learning
dc.keywords: Graph autoencoder
dc.keywords: Model poisoning attack
dc.language: en
dc.publisher: IEEE-Inst Electrical Electronics Engineers Inc
dc.source: IEEE Transactions on Information Forensics and Security
dc.subject: Learning systems
dc.subject: Data privacy
dc.subject: Internet of things
dc.title: Data-agnostic model poisoning against federated learning: a graph autoencoder approach
dc.type: Journal article
dspace.entity.type: Publication
local.contributor.kuauthor: Akan, Özgür Barış
relation.isOrgUnitOfPublication: 21598063-a7c5-420d-91ba-0cc9b2db0ea0
relation.isOrgUnitOfPublication.latestForDiscovery: 21598063-a7c5-420d-91ba-0cc9b2db0ea0
