Publication:
FedBWO: enhancing communication efficiency in federated learning

dc.conference.date: MAY 26-28, 2025
dc.conference.location: Abu Dhabi
dc.contributor.department: Graduate School of Sciences and Engineering
dc.contributor.department: Department of Computer Engineering
dc.contributor.kuauthor: Hayyolalam, Vahideh
dc.contributor.kuauthor: Özkasap, Öznur
dc.contributor.schoolcollegeinstitute: GRADUATE SCHOOL OF SCIENCES AND ENGINEERING
dc.contributor.schoolcollegeinstitute: College of Engineering
dc.date.accessioned: 2025-12-31T08:21:21Z
dc.date.available: 2025-12-31
dc.date.issued: 2025
dc.description.abstract: Federated Learning (FL) is a distributed Machine Learning (ML) setup in which a shared model is collaboratively trained by various clients on their local datasets while the data remain private. As FL clients are often resource-constrained devices, they typically have limited transmission capacity. To enhance overall system performance, the communication between clients and the server must be reduced. Current FL strategies transmit a tremendous amount of data (model weights) during the FL process, which requires high communication bandwidth. Under such resource constraints, increasing the number of clients, and consequently the volume of transmitted model weights, can create a bottleneck. In this paper, we introduce the Federated Black Widow Optimization (FedBWO) technique, which decreases the amount of transmitted data by having clients send only a performance score rather than their local model weights. FedBWO employs the BWO algorithm to improve local model updates. The conducted experiments show that FedBWO remarkably improves both the performance of the global model and the communication efficiency of the overall system. According to the experimental outcomes, FedBWO enhances global model accuracy by an average of 21% over FedAvg and 12% over FedGWO. Furthermore, FedBWO dramatically decreases the communication cost compared to other methods.
dc.description.fulltext: Yes
dc.description.harvestedfrom: Manual
dc.description.indexedby: Scopus
dc.description.publisherscope: International
dc.description.readpublish: N/A
dc.description.sponsoredbyTubitakEu: TÜBİTAK
dc.description.sponsorship: Türkiye Bilimsel ve Teknolojik Araştırma Kurumu, TÜBİTAK; (121C338)
dc.identifier.doi: 10.1109/ICHMS65439.2025.11154363
dc.identifier.embargo: No
dc.identifier.endpage: 268
dc.identifier.isbn: 9798331521646
dc.identifier.quartile: N/A
dc.identifier.scopus: 2-s2.0-105017769592
dc.identifier.startpage: 263
dc.identifier.uri: https://doi.org/10.1109/ICHMS65439.2025.11154363
dc.identifier.uri: https://hdl.handle.net/20.500.14288/31578
dc.keywords: Artificial intelligence
dc.keywords: Distributed learning
dc.keywords: Meta-heuristic
dc.keywords: Node selection
dc.keywords: Optimization
dc.language.iso: eng
dc.publisher: Institute of Electrical and Electronics Engineers (IEEE)
dc.relation.affiliation: Koç University
dc.relation.collection: Koç University Institutional Repository
dc.relation.ispartof: 5th IEEE International Conference on Human-Machine Systems, ICHMS 2025
dc.relation.openaccess: Yes
dc.rights: CC BY-NC-ND (Attribution-NonCommercial-NoDerivs)
dc.rights.uri: https://creativecommons.org/licenses/by-nc-nd/4.0/
dc.subject: Engineering
dc.title: FedBWO: enhancing communication efficiency in federated learning
dc.type: Conference Proceeding
dspace.entity.type: Publication
person.familyName: Hayyolalam
person.familyName: Özkasap
person.givenName: Vahideh
person.givenName: Öznur
relation.isOrgUnitOfPublication: 3fc31c89-e803-4eb1-af6b-6258bc42c3d8
relation.isOrgUnitOfPublication: 89352e43-bf09-4ef4-82f6-6f9d0174ebae
relation.isOrgUnitOfPublication.latestForDiscovery: 3fc31c89-e803-4eb1-af6b-6258bc42c3d8
relation.isParentOrgUnitOfPublication: 434c9663-2b11-4e66-9399-c863e2ebae43
relation.isParentOrgUnitOfPublication: 8e756b23-2d4a-4ce8-b1b3-62c794a8c164
relation.isParentOrgUnitOfPublication.latestForDiscovery: 434c9663-2b11-4e66-9399-c863e2ebae43