Publication:
A hybrid architecture for federated and centralized learning

dc.contributor.coauthor: Elbir, Ahmet M.
dc.contributor.coauthor: Papazafeiropoulos, Anastasios K.
dc.contributor.coauthor: Kourtessis, Pandelis
dc.contributor.coauthor: Chatzinotas, Symeon
dc.contributor.department: Department of Electrical and Electronics Engineering
dc.contributor.kuauthor: Ergen, Sinem Çöleri
dc.contributor.kuprofile: Faculty Member
dc.contributor.other: Department of Electrical and Electronics Engineering
dc.contributor.schoolcollegeinstitute: College of Engineering
dc.contributor.yokid: 7211
dc.date.accessioned: 2024-11-09T13:51:42Z
dc.date.issued: 2022
dc.description.abstract: Many machine learning tasks rely on centralized learning (CL), which requires clients to transmit their local datasets to a parameter server (PS) and thus entails a huge communication overhead. To overcome this, federated learning (FL) has been suggested as a promising tool, wherein the clients send only the model updates to the PS instead of the whole dataset. However, FL demands powerful computational resources from the clients, and in practice not all clients have sufficient resources to participate in training. To address this common scenario, we propose a more efficient approach called hybrid federated and centralized learning (HFCL), wherein only the clients with sufficient resources employ FL, while the remaining clients send their datasets to the PS, which computes the model on their behalf; the model parameters are then aggregated at the PS. To improve the efficiency of dataset transmission, we propose two techniques: i) increased computation-per-client and ii) sequential data transmission. Notably, the HFCL frameworks outperform FL by up to 20% in learning accuracy when only half of the clients perform FL, while incurring 50% less communication overhead than CL, since all the clients collaborate on the learning process with their datasets.
dc.description.fulltext: YES
dc.description.indexedby: WoS
dc.description.indexedby: Scopus
dc.description.issue: 3
dc.description.openaccess: YES
dc.description.publisherscope: International
dc.description.sponsoredbyTubitakEu: TÜBİTAK
dc.description.sponsoredbyTubitakEu: EU
dc.description.sponsorship: European Union (EU)
dc.description.sponsorship: Horizon 2020
dc.description.sponsorship: European Research Council (ERC)
dc.description.sponsorship: Project AGNOSTIC
dc.description.sponsorship: CHIST-ERA
dc.description.sponsorship: Scientific and Technological Research Council of Turkey (TÜBİTAK)
dc.description.version: Author's final manuscript
dc.description.volume: 8
dc.format: pdf
dc.identifier.doi: 10.1109/TCCN.2022.3181032
dc.identifier.embargo: NO
dc.identifier.filenameinventoryno: IR04004
dc.identifier.issn: 2332-7731
dc.identifier.link: https://doi.org/10.1109/TCCN.2022.3181032
dc.identifier.quartile: Q1
dc.identifier.scopus: 2-s2.0-85131765961
dc.identifier.uri: https://hdl.handle.net/20.500.14288/3957
dc.identifier.wos: 852215200020
dc.keywords: Machine learning
dc.keywords: Federated learning
dc.keywords: Centralized learning
dc.keywords: Edge intelligence
dc.keywords: Edge efficiency
dc.language: English
dc.publisher: Institute of Electrical and Electronics Engineers (IEEE)
dc.relation.grantno: CHIST-ERA-18-SDCDN-001
dc.relation.grantno: 119E350
dc.relation.uri: http://cdm21054.contentdm.oclc.org/cdm/ref/collection/IR/id/10884
dc.source: IEEE Transactions on Cognitive Communications and Networking
dc.subject: Telecommunications
dc.title: A hybrid architecture for federated and centralized learning
dc.type: Journal Article
dspace.entity.type: Publication
local.contributor.authorid: 0000-0002-7502-3122
local.contributor.kuauthor: Ergen, Sinem Çöleri
relation.isOrgUnitOfPublication: 21598063-a7c5-420d-91ba-0cc9b2db0ea0
relation.isOrgUnitOfPublication.latestForDiscovery: 21598063-a7c5-420d-91ba-0cc9b2db0ea0
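
The abstract above outlines one HFCL training round: resource-rich clients compute local model updates (the FL side), resource-limited clients upload their datasets so the PS can compute updates on their behalf (the CL side), and the PS then aggregates all model parameters. The following minimal Python sketch illustrates such a round under simplifying assumptions (a linear least-squares model, one gradient step per round, FedAvg-style size-weighted averaging); names such as hfcl_round and local_update are hypothetical illustrations, not from the paper's code.

import numpy as np

def local_update(w, X, y, lr=0.1):
    # One gradient-descent step on the least-squares loss for one dataset.
    grad = X.T @ (X @ w - y) / len(y)
    return w - lr * grad

def hfcl_round(w_global, fl_clients, cl_datasets, lr=0.1):
    # fl_clients: datasets of clients that train locally (FL side).
    # cl_datasets: datasets uploaded by resource-limited clients; the PS
    # computes their updates on their behalf (CL side).
    updates, sizes = [], []
    for X, y in list(fl_clients) + list(cl_datasets):
        updates.append(local_update(w_global, X, y, lr))
        sizes.append(len(y))
    # The PS aggregates all model parameters, weighted by dataset size.
    return np.average(np.stack(updates), axis=0, weights=np.array(sizes, float))

# Toy usage: 4 clients, half FL-capable and half offloading data to the PS.
rng = np.random.default_rng(0)
w_true = np.array([1.0, -2.0, 0.5])
datasets = []
for _ in range(4):
    X = rng.normal(size=(50, 3))
    datasets.append((X, X @ w_true + 0.01 * rng.normal(size=50)))
w = np.zeros(3)
for _ in range(200):
    w = hfcl_round(w, datasets[:2], datasets[2:])
print(w)  # approaches w_true

Because every client's dataset contributes to the aggregate, either directly or via the PS, the sketch mirrors the paper's claim that all clients collaborate on learning even when only some can afford local training.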

Files

Original bundle

Name: 10884.pdf
Size: 1019.47 KB
Format: Adobe Portable Document Format