Slightly-slacked dropout for improving neural network learning on FPGA

Sota Sawaguchi, Hiroaki Nishi

Research output: Contribution to journal › Article › peer-review

6 Citations (Scopus)


Neural Network Learning (NNL) is compute-intensive. It often employs dropout, a technique that effectively regularizes the network to avoid overfitting. Hardware accelerators for dropout NNL have accordingly been proposed; however, the existing method incurs a large data-transfer cost between hardware and software. This paper proposes Slightly-Slacked Dropout (SS-Dropout), a novel deterministic dropout technique that addresses the transfer cost while accelerating the process. Experimental results show that our SS-Dropout technique improves both the standard and the dropout NNL accelerators, yielding a 1.55× speed-up and a three-orders-of-magnitude reduction in transfer cost, respectively.
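The abstract does not detail the mechanism, but a minimal sketch may help situate the general idea of deterministic dropout. The Python sketch below contrasts standard stochastic inverted dropout with a hypothetical mask derived from a shared seed and mini-batch step index, so that host software and an FPGA accelerator could each regenerate the same pattern locally instead of transferring it; the function names and seeding scheme are assumptions for illustration, not the authors' SS-Dropout algorithm.

    import numpy as np

    def inverted_dropout(activations, drop_prob, rng):
        # Standard stochastic inverted dropout: zero each unit with
        # probability drop_prob and rescale survivors so the expected
        # activation is unchanged during training.
        mask = rng.random(activations.shape) >= drop_prob
        return activations * mask / (1.0 - drop_prob)

    def deterministic_mask(shape, drop_prob, seed, step):
        # Hypothetical deterministic variant: derive the dropout pattern
        # from a shared seed and the mini-batch step index, so both sides
        # of a hardware/software boundary can recompute the identical
        # mask rather than transferring it.
        rng = np.random.default_rng(np.random.SeedSequence([seed, step]))
        return rng.random(shape) >= drop_prob

    # Host and accelerator models would derive the same mask for step 42.
    x = np.random.default_rng(0).standard_normal((4, 8))
    mask = deterministic_mask(x.shape, drop_prob=0.5, seed=1234, step=42)
    y = x * mask / (1.0 - 0.5)

In hardware, the same effect could plausibly be obtained with a lightweight seeded pseudo-random generator replicated on both sides of the interface, though the paper's specific scheme may differ.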

Original language: English
Pages (from-to): 75-80
Number of pages: 6
Journal: ICT Express
Issue number: 2
Publication status: Published - 2018 Jun


Keywords

  • Dropout technique
  • Mini-batch SGD algorithm
  • Neural Network
  • SoC FPGA

ASJC Scopus subject areas

  • Software
  • Information Systems
  • Hardware and Architecture
  • Computer Networks and Communications
  • Artificial Intelligence

