TY - JOUR
T1 - Federated learning for DL-CSI prediction in FDD massive MIMO systems
AU - Hou, Weihao
AU - Sun, Jinlong
AU - Gui, Guan
AU - Ohtsuki, Tomoaki
AU - Elbir, Ahmet M.
AU - Gacanin, Haris
AU - Sari, Hikmet
N1 - Publisher Copyright:
© 2012 IEEE.
PY - 2021/8
Y1 - 2021/8
N2 - In frequency-division duplexing (FDD) massive multiple-input multiple-output (MIMO) systems, deep learning for predicting the downlink channel state information (DL-CSI) has been extensively studied. However, at some small cellular base stations (SBSs), the small amount of available training data is insufficient to produce an accurate model for CSI prediction. The traditional centralized learning (CL) based method gathers all the data for training, which can incur overwhelming communication overheads. In this work, we introduce a federated learning (FL) based framework for DL-CSI prediction, in which the global model is trained at the macro base station (MBS) by collecting the local models from the edge SBSs. We propose a novel model aggregation algorithm that updates the global model twice, considering the local model weights and the local gradients, respectively. Numerical simulations show that the proposed aggregation algorithm makes the global model converge faster and perform better than the compared schemes. The performance of the FL architecture is close to that of the CL-based method, while the transmission overheads are much lower.
AB - In frequency-division duplexing (FDD) massive multiple-input multiple-output (MIMO) systems, deep learning for predicting the downlink channel state information (DL-CSI) has been extensively studied. However, at some small cellular base stations (SBSs), the small amount of available training data is insufficient to produce an accurate model for CSI prediction. The traditional centralized learning (CL) based method gathers all the data for training, which can incur overwhelming communication overheads. In this work, we introduce a federated learning (FL) based framework for DL-CSI prediction, in which the global model is trained at the macro base station (MBS) by collecting the local models from the edge SBSs. We propose a novel model aggregation algorithm that updates the global model twice, considering the local model weights and the local gradients, respectively. Numerical simulations show that the proposed aggregation algorithm makes the global model converge faster and perform better than the compared schemes. The performance of the FL architecture is close to that of the CL-based method, while the transmission overheads are much lower.
KW - Centralized learning
KW - Channel state information
KW - Federated learning
KW - Macro base station
KW - Small cellular base station
UR - http://www.scopus.com/inward/record.url?scp=85107214434&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85107214434&partnerID=8YFLogxK
U2 - 10.1109/LWC.2021.3081695
DO - 10.1109/LWC.2021.3081695
M3 - Article
AN - SCOPUS:85107214434
SN - 2162-2337
VL - 10
SP - 1810
EP - 1814
JO - IEEE Wireless Communications Letters
JF - IEEE Wireless Communications Letters
IS - 8
M1 - 9435623
ER -