TY - GEN
T1 - Hardware-Accelerated Event-Graph Neural Networks for Low-Latency Time-Series Classification on SoC FPGA
AU - Nakano, Hiroshi
AU - Blachut, Krzysztof
AU - Jeziorek, Kamil
AU - Wzorek, Piotr
AU - Dampfhoffer, Manon
AU - Mesquida, Thomas
AU - Nishi, Hiroaki
AU - Kryjak, Tomasz
AU - Dalgaty, Thomas
N1 - Publisher Copyright:
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2025.
PY - 2025
Y1 - 2025
N2 - As the quantities of data recorded by embedded edge sensors grow, so too does the need for intelligent local processing. Such data often comes in the form of time-series signals, based on which real-time predictions can be made locally using an AI model. However, a hardware-software approach capable of making low-latency predictions with low power consumption is required. In this paper, we present a hardware implementation of an event-graph neural network for time-series classification. We leverage an artificial cochlea model to convert the input time-series signals into a sparse event-data format that allows the event-graph to drastically reduce the number of calculations relative to other AI methods. We implemented the design on a SoC FPGA and applied it to the real-time processing of the Spiking Heidelberg Digits (SHD) dataset to benchmark our approach against competitive solutions. Our method achieves a floating-point accuracy of 92.7% on the SHD dataset for the base model, which is only 2.4% and 2% less than the state-of-the-art models with over 10× and 67× fewer model parameters, respectively. It also outperforms FPGA-based spiking neural network implementations by 19.3% and 4.5%, achieving 92.3% accuracy for the quantised model while using fewer computational resources and reducing latency.
KW - FPGA
KW - event-based audio processing
KW - graph convolutional neural networks
KW - dynamic audio sensor
KW - artificial cochlea
UR - https://www.scopus.com/pages/publications/105002928372
UR - https://www.scopus.com/inward/citedby.url?scp=105002928372&partnerID=8YFLogxK
U2 - 10.1007/978-3-031-87995-1_4
DO - 10.1007/978-3-031-87995-1_4
M3 - Conference contribution
AN - SCOPUS:105002928372
SN - 9783031879944
T3 - Lecture Notes in Computer Science
SP - 51
EP - 68
BT - Applied Reconfigurable Computing. Architectures, Tools, and Applications - 21st International Symposium, ARC 2025, Proceedings
A2 - Giorgi, Roberto
A2 - Stojilovic, Mirjana
A2 - Stroobandt, Dirk
A2 - Brox Jiménez, Piedad
A2 - Barriga Barros, Ángel
PB - Springer Science and Business Media Deutschland GmbH
T2 - 21st International Symposium on Applied Reconfigurable Computing, ARC 2025
Y2 - 9 April 2025 through 11 April 2025
ER -