Publication: Hybrid federated and centralized learning

dc.contributor.coauthor: Mishra, Kumar Vijay
dc.contributor.department: N/A
dc.contributor.department: Department of Electrical and Electronics Engineering
dc.contributor.kuauthor: Elbir, Ahmet Musab
dc.contributor.kuauthor: Ergen, Sinem Çöleri
dc.contributor.kuprofile: N/A
dc.contributor.kuprofile: Faculty Member
dc.contributor.other: Department of Electrical and Electronics Engineering
dc.contributor.schoolcollegeinstitute: Graduate School of Sciences and Engineering
dc.contributor.schoolcollegeinstitute: College of Engineering
dc.contributor.yokid: N/A
dc.contributor.yokid: 7211
dc.date.accessioned: 2024-11-10T00:00:03Z
dc.date.issued: 2021
dc.description.abstract: Many machine learning tasks rely on centralized learning (CL), which requires the transmission of local datasets from the clients to a parameter server (PS) and thus incurs a huge communication overhead. Federated learning (FL) overcomes this issue by letting the clients send only their model updates to the PS instead of their whole datasets. FL thereby moves the learning to the edge, where powerful computational resources are required on the client side. This requirement may not always be satisfied because of the diverse computational capabilities of edge devices. We address this through a novel hybrid federated and centralized learning (HFCL) framework that effectively trains a learning model by exploiting the computational capabilities of the clients. In HFCL, only the clients with sufficient resources employ FL; the remaining clients resort to CL by transmitting their local datasets to the PS. This allows all clients to collaborate on the learning process regardless of their computational resources. We also propose a sequential data transmission approach with HFCL (HFCL-SDT) to reduce the training duration. The proposed HFCL frameworks outperform previously proposed non-hybrid FL (CL) schemes in terms of learning accuracy (communication overhead), since all clients contribute to the learning process with their datasets regardless of their computational resources.
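The abstract describes the HFCL round only in prose, so the following is a minimal illustrative sketch of one such round, assuming a linear model trained by plain gradient descent and uniform averaging at the PS. Every identifier here (hfcl_round, gradient, the client tuples, the toy data) is a hypothetical stand-in, not the authors' implementation.

```python
# Minimal sketch of one HFCL training round (hypothetical, not the paper's code).
# Assumes a linear model y ≈ X @ w trained with plain gradient descent.
import numpy as np

rng = np.random.default_rng(0)

def gradient(w, X, y):
    # Mean-squared-error gradient for the linear model.
    return 2.0 * X.T @ (X @ w - y) / len(y)

def hfcl_round(w, clients, lr=0.1):
    """One HFCL iteration: FL-capable clients compute local model updates;
    resource-limited clients upload their raw datasets for CL at the PS."""
    updates = []
    pooled_X, pooled_y = [], []
    for X, y, can_compute in clients:
        if can_compute:                      # FL client: local gradient step
            updates.append(w - lr * gradient(w, X, y))
        else:                                # CL client: send dataset to PS
            pooled_X.append(X)
            pooled_y.append(y)
    if pooled_X:                             # PS trains on the pooled CL data
        Xc, yc = np.vstack(pooled_X), np.concatenate(pooled_y)
        updates.append(w - lr * gradient(w, Xc, yc))
    return np.mean(updates, axis=0)          # model aggregation at the PS

# Toy setup: 4 clients, half FL-capable, half resource-limited.
w_true = np.array([1.5, -2.0])
clients = []
for k in range(4):
    X = rng.normal(size=(32, 2))
    y = X @ w_true + 0.01 * rng.normal(size=32)
    clients.append((X, y, k < 2))

w = np.zeros(2)
for _ in range(50):
    w = hfcl_round(w, clients)
print(w)  # converges to roughly w_true
```

In this sketch the pooled data of the resource-limited clients yields a single CL update that is averaged uniformly with the FL updates; the paper's actual aggregation rule (e.g., dataset-size weighting) may differ.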
dc.description.indexedby: WoS
dc.description.indexedby: Scopus
dc.description.openaccess: YES
dc.description.publisherscope: International
dc.description.sponsoredbyTubitakEu: TÜBİTAK
dc.description.sponsorship: CHIST-ERA grant [CHIST-ERA-18-SDCDN-001]
dc.description.sponsorship: Scientific and Technological Research Council of Turkey (TÜBİTAK) [119E350]. S. Çöleri Ergen acknowledges the support of the CHIST-ERA grant CHIST-ERA-18-SDCDN-001 and the Scientific and Technological Research Council of Turkey grant 119E350.
dc.identifier.doi: N/A
dc.identifier.isbn: 978-9-0827-9706-0
dc.identifier.issn: 2076-1465
dc.identifier.scopus: 2-s2.0-85123189353
dc.identifier.uri: https://hdl.handle.net/20.500.14288/15745
dc.identifier.wos: 764066600307
dc.keywords: Machine learning
dc.keywords: Federated learning
dc.keywords: Centralized learning
dc.keywords: Edge intelligence
dc.keywords: Edge efficiency
dc.language: English
dc.publisher: European Association for Signal Processing (EURASIP)
dc.source: 29th European Signal Processing Conference (EUSIPCO 2021)
dc.subject: Acoustics
dc.subject: Computer science
dc.subject: Engineering
dc.subject: Software engineering
dc.subject: Electrical and electronics engineering
dc.subject: Imaging science
dc.subject: Photographic technology
dc.subject: Telecommunications
dc.title: Hybrid federated and centralized learning
dc.type: Conference proceeding
dspace.entity.type: Publication
local.contributor.authorid: N/A
local.contributor.authorid: 0000-0002-7502-3122
local.contributor.kuauthor: Elbir, Ahmet Musab
local.contributor.kuauthor: Ergen, Sinem Çöleri
relation.isOrgUnitOfPublication: 21598063-a7c5-420d-91ba-0cc9b2db0ea0
relation.isOrgUnitOfPublication.latestForDiscovery: 21598063-a7c5-420d-91ba-0cc9b2db0ea0
