TY - GEN
T1 - Accelerating Deep Learning using Multiple GPUs and FPGA-Based 10GbE Switch
AU - Itsubo, Tomoya
AU - Koibuchi, Michihiro
AU - Amano, Hideharu
AU - Matsutani, Hiroki
N1 - Publisher Copyright:
© 2020 IEEE.
PY - 2020/3
Y1 - 2020/3
N2 - A back-propagation algorithm following a gradient descent approach is used for training deep neural networks. Since it iteratively performs a large number of matrix operations to compute the gradients, GPUs (Graphics Processing Units) are especially efficient for the training phase. Thus, a cluster of computers, each equipped with multiple GPUs, can significantly accelerate the training phase. Although the gradient computation remains a major bottleneck of the training, gradient aggregation and parameter optimization impose both communication and computation overheads, which should also be reduced to further shorten the training time. To address this issue, in this paper, multiple GPUs are interconnected with a PCI Express (PCIe) over 10 Gbit Ethernet (10GbE) technology. Since these remote GPUs are interconnected via network switches, gradient aggregation and optimizers (e.g., SGD, Adagrad, Adam, and SMORMS3) are offloaded to an FPGA-based network switch between a host machine and remote GPUs; thus, the gradient aggregation and optimization are completed in the network. Evaluation results using four remote GPUs connected via the FPGA-based 10GbE switch that implements the four optimizers demonstrate that these optimization algorithms are accelerated by up to 3.0x and 1.25x compared to CPU and GPU implementations, respectively. Also, the gradient aggregation throughput of the FPGA-based switch achieves 98.3% of the 10GbE line rate.
AB - A back-propagation algorithm following a gradient descent approach is used for training deep neural networks. Since it iteratively performs a large number of matrix operations to compute the gradients, GPUs (Graphics Processing Units) are especially efficient for the training phase. Thus, a cluster of computers, each equipped with multiple GPUs, can significantly accelerate the training phase. Although the gradient computation remains a major bottleneck of the training, gradient aggregation and parameter optimization impose both communication and computation overheads, which should also be reduced to further shorten the training time. To address this issue, in this paper, multiple GPUs are interconnected with a PCI Express (PCIe) over 10 Gbit Ethernet (10GbE) technology. Since these remote GPUs are interconnected via network switches, gradient aggregation and optimizers (e.g., SGD, Adagrad, Adam, and SMORMS3) are offloaded to an FPGA-based network switch between a host machine and remote GPUs; thus, the gradient aggregation and optimization are completed in the network. Evaluation results using four remote GPUs connected via the FPGA-based 10GbE switch that implements the four optimizers demonstrate that these optimization algorithms are accelerated by up to 3.0x and 1.25x compared to CPU and GPU implementations, respectively. Also, the gradient aggregation throughput of the FPGA-based switch achieves 98.3% of the 10GbE line rate.
UR - http://www.scopus.com/inward/record.url?scp=85085473795&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85085473795&partnerID=8YFLogxK
U2 - 10.1109/PDP50117.2020.00022
DO - 10.1109/PDP50117.2020.00022
M3 - Conference contribution
AN - SCOPUS:85085473795
T3 - Proceedings - 2020 28th Euromicro International Conference on Parallel, Distributed and Network-Based Processing, PDP 2020
SP - 102
EP - 109
BT - Proceedings - 2020 28th Euromicro International Conference on Parallel, Distributed and Network-Based Processing, PDP 2020
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 28th Euromicro International Conference on Parallel, Distributed and Network-Based Processing, PDP 2020
Y2 - 11 March 2020 through 13 March 2020
ER -