Research Outputs
Permanent URI for this community: https://hdl.handle.net/20.500.14288/2
Search Results
2 results
Publication (metadata only): Byzantines can also learn from history: fall of centered clipping in federated learning (IEEE-Inst Electrical Electronics Engineers Inc, 2024)
Özfatura, Emre; Gündüz, Deniz; Özfatura, Ahmet Kerem; Küpçü, Alptekin; Department of Computer Engineering; Koç Üniversitesi İş Bankası Enfeksiyon Hastalıkları Uygulama ve Araştırma Merkezi (EHAM) / Koç University İşbank Center for Infectious Diseases (KU-IS CID); Graduate School of Sciences and Engineering; College of Engineering

The increasing popularity of the federated learning (FL) framework, owing to its success in a wide range of collaborative learning tasks, also raises certain security concerns. Among many vulnerabilities, the risk of Byzantine attacks is of particular concern; it refers to the possibility of malicious clients participating in the learning process. Hence, a crucial objective in FL is to neutralize the potential impact of Byzantine attacks and to ensure that the final model is trustworthy. It has been observed that the higher the variance among the clients' models/updates, the more room there is for Byzantine attacks to hide. Consequently, by utilizing momentum, and thus reducing the variance, it is possible to weaken known Byzantine attacks. The centered clipping (CC) framework has further shown that the momentum term from the previous iteration, besides reducing the variance, can be used as a reference point to better neutralize Byzantine attacks. In this work, we first expose vulnerabilities of the CC framework and introduce a novel attack strategy that can circumvent the defences of CC and other robust aggregators, reducing their test accuracy by up to 33% in best-case scenarios on image classification tasks. We then propose a new robust and fast defence mechanism that is effective against the proposed and other existing Byzantine attacks.

Publication (metadata only): Data-agnostic model poisoning against federated learning: a graph autoencoder approach (IEEE-Inst Electrical Electronics Engineers Inc, 2024)
Li, Kai; Zheng, Jingjing; Yuan, Xin; Ni, Wei; Poor, H. Vincent; Akan, Özgür Barış; Department of Electrical and Electronics Engineering; College of Engineering

This paper proposes a novel, data-agnostic model poisoning attack on Federated Learning (FL) by designing a new adversarial graph autoencoder (GAE)-based framework. The attack requires no knowledge of the FL training data and achieves both effectiveness and undetectability. By listening to the benign local models and the global model, the attacker extracts the graph structural correlations among the benign local models and the training data features substantiating those models. The attacker then adversarially regenerates the graph structural correlations while maximizing the FL training loss, and subsequently generates malicious local models using the adversarial graph structure and the training data features of the benign ones. A new algorithm is designed to iteratively train the malicious local models using the GAE and sub-gradient descent. The convergence of FL under attack is rigorously proved, with a considerably large optimality gap. Experiments show that FL accuracy drops gradually under the proposed attack and that existing defense mechanisms fail to detect it. The attack can spread across all benign devices, making it a serious threat to FL.
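For readers unfamiliar with the aggregation rule attacked in the first publication, the following is a minimal sketch of centered-clipping (CC) aggregation: each client update is clipped toward a reference point (the momentum carried over from the previous round) before averaging. The NumPy representation of updates, the clipping radius `tau`, and the number of re-centering iterations are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of centered-clipping (CC) aggregation; tau and n_iters are
# illustrative assumptions, not values taken from the publication.
import numpy as np

def centered_clipping(client_updates, prev_momentum, tau=1.0, n_iters=3):
    """Average client updates after clipping each one toward a reference
    point (the momentum/aggregate from the previous round)."""
    v = prev_momentum.copy()
    for _ in range(n_iters):
        clipped = []
        for x in client_updates:
            diff = x - v
            norm = np.linalg.norm(diff)
            scale = min(1.0, tau / norm) if norm > 0 else 1.0
            clipped.append(v + scale * diff)  # updates far from v are pulled in
        v = np.mean(clipped, axis=0)          # re-center on the clipped mean
    return v

# Toy usage: nine benign clients plus one Byzantine client sending a huge update.
rng = np.random.default_rng(0)
benign = [rng.normal(0.0, 0.1, size=100) for _ in range(9)]
byzantine = [np.full(100, 50.0)]          # far outside the benign cloud
reference = np.zeros(100)                 # momentum from the previous round
aggregate = centered_clipping(benign + byzantine, reference, tau=1.0)
print(np.linalg.norm(aggregate))          # stays small: the outlier's influence is clipped
```

The reference point is what the first publication exploits: an attacker who can anticipate where the clipping is centered can place malicious updates just inside the clipping radius, which is the vulnerability the abstract alludes to.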
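The second abstract describes a two-part recipe: extract the structural correlations among the benign local models, then craft malicious models that raise the FL training loss while preserving that structure so the attack stays undetectable. The sketch below is a heavily simplified, NumPy-only illustration of that idea; it replaces the paper's adversarial graph autoencoder with a plain cosine-similarity matrix as a stand-in for the graph structure, and every function name and parameter is an assumption for illustration, not the authors' algorithm.

```python
# Simplified illustration: push the FL training loss up while keeping the
# malicious model's similarity to the benign models within the range the
# benign models exhibit among themselves. The paper's adversarial GAE is
# replaced by a cosine-similarity matrix; all names/parameters are assumptions.
import numpy as np

def cosine_similarity_matrix(models):
    """Pairwise cosine similarities among flattened local models (a crude
    stand-in for the 'graph structural correlations' in the abstract)."""
    m = np.stack(models)
    m = m / np.linalg.norm(m, axis=1, keepdims=True)
    return m @ m.T

def craft_malicious_update(benign_models, loss_ascent_dir, step=0.5, shrink=0.5):
    """Start from the benign centroid, move along a loss-increasing direction,
    then shrink the perturbation until the malicious model's similarity to the
    benign ones is no lower than the benign-benign minimum (undetectability)."""
    centroid = np.mean(benign_models, axis=0)
    sims = cosine_similarity_matrix(benign_models)
    floor = sims[np.triu_indices(len(benign_models), k=1)].min()  # benign-benign minimum

    candidate = centroid + step * loss_ascent_dir
    for _ in range(50):
        sim_to_benign = cosine_similarity_matrix(benign_models + [candidate])[-1, :-1]
        if sim_to_benign.min() >= floor:
            break                              # blends in with the benign models
        candidate = centroid + shrink * (candidate - centroid)
    return candidate

# Toy usage with random stand-ins for benign local models and a loss-ascent direction.
rng = np.random.default_rng(1)
benign = [rng.normal(0.0, 0.1, size=64) for _ in range(8)]
ascent = rng.normal(0.0, 0.1, size=64)         # direction that increases training loss
malicious = craft_malicious_update(benign, ascent)
print(cosine_similarity_matrix(benign + [malicious])[-1, :-1].round(2))
```

The design choice this toy captures is the abstract's undetectability constraint: the malicious model is only accepted once it is no more of an outlier, under the chosen similarity measure, than the benign models are with respect to one another.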