Researcher: Elbir, Ahmet Musab
Search Results: showing 1 - 4 of 4
Publication, Metadata only: Hybrid federated and centralized learning. European Assoc Signal Speech & Image Processing (Eurasip), 2021. Authors: Mishra, Kumar Vijay; Elbir, Ahmet Musab; Ergen, Sinem Çöleri. Affiliations: Department of Electrical and Electronics Engineering, College of Engineering; Graduate School of Sciences and Engineering.
Abstract: Many machine learning tasks rely on centralized learning (CL), which requires the transmission of local datasets from the clients to a parameter server (PS), leading to a huge communication overhead. Federated learning (FL) overcomes this issue by allowing the clients to send only model updates to the PS instead of the whole dataset. In this way, FL brings learning to the edge, where powerful computational resources are required on the client side. This requirement may not always be satisfied because of the diverse computational capabilities of edge devices. We address this through a novel hybrid federated and centralized learning (HFCL) framework that effectively trains a learning model by exploiting the computational capabilities of the clients. In HFCL, only the clients with sufficient resources employ FL; the remaining clients resort to CL by transmitting their local dataset to the PS. This allows all clients to collaborate on the learning process regardless of their computational resources. We also propose a sequential data transmission approach with HFCL (HFCL-SDT) to reduce the training duration.
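The client partitioning that HFCL describes can be sketched in a few lines. The toy regression task, the client split, the learning rate, and all names below are illustrative assumptions for the sketch, not details from the paper: resource-rich clients compute local gradients and send only model updates (FL), while the remaining clients upload their raw data once so the PS computes their gradients on their behalf (CL).

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear-regression task: each client holds a local data shard.
d, n_per_client = 5, 40
w_true = rng.normal(size=d)
clients = []
for _ in range(8):
    X = rng.normal(size=(n_per_client, d))
    y = X @ w_true + 0.1 * rng.normal(size=n_per_client)
    clients.append((X, y))

fl_clients = clients[:5]   # sufficient compute: train locally (FL)
cl_clients = clients[5:]   # limited compute: upload raw data (CL)

# CL clients transmit their datasets to the parameter server once.
X_ps = np.vstack([X for X, _ in cl_clients])
y_ps = np.concatenate([y for _, y in cl_clients])

w = np.zeros(d)
lr = 0.05
for _ in range(200):
    grads = []
    # FL side: each capable client computes its local gradient
    # and sends only this model update to the PS.
    for X, y in fl_clients:
        grads.append(X.T @ (X @ w - y) / len(y))
    # CL side: the PS computes the gradient on the uploaded data.
    grads.append(X_ps.T @ (X_ps @ w - y_ps) / len(y_ps))
    # PS aggregates all gradients and updates the global model.
    w -= lr * np.mean(grads, axis=0)

print(np.linalg.norm(w - w_true))  # residual error after training
```

Every client contributes to the shared model this way, whether it participated through updates or through data, which is the collaboration-regardless-of-resources property the abstract highlights.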
The proposed HFCL frameworks outperform previously proposed non-hybrid FL (CL) based schemes in terms of learning accuracy (communication overhead), since all clients collaborate on the learning process with their datasets regardless of their computational resources.
Publication, Metadata only: Asymptotic analysis of max-min weighted SINR for IRS-assisted MISO systems with hardware impairments. IEEE, 2023. Authors: Papazafeiropoulos, Anastasios; Pan, Cunhua; Nguyen, Van-Dinh; Kourtessis, Pandelis; Chatzinotas, Symeon; Elbir, Ahmet Musab. Affiliation: Graduate School of Sciences and Engineering.
Abstract: We focus on the realistic maximization of the uplink minimum signal-to-interference-plus-noise ratio (SINR) of a general multiple-input single-output (MISO) system assisted by an intelligent reflecting surface (IRS) in the large-system limit, accounting for hardware impairments (HIs). In particular, we introduce HIs both at the IRS (IRS-HIs) and at the transceivers (additive transceiver HIs, AT-HIs), which are usually neglected despite their inevitable impact. Specifically, a deterministic equivalent analysis enables the derivation of the asymptotic weighted max-min SINR with HIs by jointly optimizing the HIs-aware receiver, the transmit power, and the reflecting beamforming matrix (RBM). Notably, we obtain the optimal power allocation and reflecting beamforming matrix with low overhead, instead of the frequent recomputation required in IRS-assisted MIMO systems based on instantaneous channel information.
Monte Carlo simulations verify the analytical results, which show the interplay among the key parameters and the performance degradation due to HIs.
Publication, Open Access: Hybrid federated and centralized learning. Institute of Electrical and Electronics Engineers (IEEE), 2021. Authors: Mishra, K. V.; Ergen, Sinem Çöleri; Elbir, Ahmet Musab. Affiliation: Department of Electrical and Electronics Engineering, College of Engineering. The abstract is identical to the Eurasip 2021 entry above.
Publication, Open Access: Federated dropout learning for hybrid beamforming with spatial path index modulation in multi-user mmWave-MIMO systems. Institute of Electrical and Electronics Engineers (IEEE), 2021. Authors: Mishra, Kumar Vijay; Ergen, Sinem Çöleri; Elbir, Ahmet Musab. Affiliation: Department of Electrical and Electronics Engineering, College of Engineering.
Abstract: Millimeter wave multiple-input multiple-output (mmWave-MIMO) systems with a small number of radio-frequency (RF) chains have limited multiplexing gain. Spatial path index modulation (SPIM) improves this gain by utilizing additional signal bits modulated by the indices of spatial paths. In this paper, we introduce model-based and model-free frameworks for beamformer design in multi-user SPIM-MIMO systems. We first design the beamformers via a model-based manifold optimization algorithm. Then, we leverage federated learning (FL) with dropout learning (DL) to train a learning model on the local datasets of users, who estimate the beamformers by feeding the model with their channel data. DL randomly selects a different set of model parameters during training, thereby further reducing the transmission overhead compared to conventional FL. Numerical experiments show that the proposed framework exhibits higher spectral efficiency than state-of-the-art SPIM-MIMO methods and mmWave-MIMO, which relies on the strongest propagation path. Furthermore, the proposed FL approach provides at least 10 times lower transmission overhead than centralized learning techniques.
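The overhead reduction from dropout learning in FL can be illustrated with a single aggregation round. The keep rate, the fake local update, and all names below are assumptions for the sketch, not values from the paper: each client uploads only a random subset of its model-update entries, and the server normalizes each parameter by how many clients actually reported it.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative federated-dropout round (sizes and rate are assumed).
n_params, n_clients, keep_rate = 100, 4, 0.5

global_model = rng.normal(size=n_params)

def client_update(model, rng):
    """Stand-in for local training: return a full local update."""
    return -0.01 * model + 0.001 * rng.normal(size=model.shape)

masked_updates, masks = [], []
for _ in range(n_clients):
    full_update = client_update(global_model, rng)
    # Dropout: upload only a random subset of update entries,
    # cutting the uplink payload by roughly (1 - keep_rate).
    mask = rng.random(n_params) < keep_rate
    masked_updates.append(full_update * mask)
    masks.append(mask)

# PS aggregates entry-wise, normalizing each parameter by the
# number of clients that actually reported it.
counts = np.maximum(np.sum(masks, axis=0), 1)
aggregate = np.sum(masked_updates, axis=0) / counts
global_model += aggregate

# Per-client uplink cost is about keep_rate of the full model size.
payload = np.count_nonzero(masks[0])
print(payload, n_params)
```

The same idea scales to the beamformer-prediction model in the abstract: since every round transmits only a fraction of the parameters, the total uplink traffic shrinks proportionally relative to conventional FL.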