Language modeling using augmented echo state networks

Arnaud Rachez, Masafumi Hagiwara

Research output: Contribution to journal › Article › peer-review

Abstract

Interest in natural language modeling using neural networks has grown over the past decade. The objective of this paper is to investigate the predictive capabilities of echo state networks (ESNs) at the task of modeling English sentences. Based on the finding that ESNs exhibit a Markovian organization of their state space, which makes them close to the widely used n-gram models, we describe two modifications of the conventional architecture that yield significant improvements by leveraging the kind of representation developed in the reservoir. First, the addition of pre-recurrent features is shown to capture syntactic similarities between words and can be trained efficiently by using the contracting property of the reservoir to truncate the gradient descent. Second, the addition of multiple linear readouts within the mixture-of-experts framework is also shown to greatly improve accuracy while being trainable in parallel using Expectation-Maximization. Furthermore, the model can easily be transformed into a supervised mixture-of-experts model, with several variations that reduce training time and allow handmade features to be taken into account.
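To make the baseline setup concrete, the following is a minimal sketch of a conventional ESN used as a next-word predictor: a fixed random reservoir driven by one-hot word inputs, with a single linear readout trained by ridge regression. All dimensions, constants, and the ridge-regression training choice are illustrative assumptions and not taken from the paper; the paper's proposed augmentations (pre-recurrent features and mixture-of-experts readouts) are not implemented here.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes for a toy setup, not from the paper.
vocab_size = 50       # toy vocabulary size
reservoir_size = 200  # number of reservoir units

# Input and recurrent weights are fixed and random; only the readout is trained.
W_in = rng.uniform(-0.5, 0.5, (reservoir_size, vocab_size))
W_res = rng.uniform(-0.5, 0.5, (reservoir_size, reservoir_size))
# Rescale to spectral radius < 1 so the reservoir is contracting (echo state property).
W_res *= 0.9 / np.max(np.abs(np.linalg.eigvals(W_res)))

def run_reservoir(word_ids):
    """Drive the reservoir with a sequence of one-hot encoded words and collect states."""
    x = np.zeros(reservoir_size)
    states = []
    for w in word_ids:
        u = np.zeros(vocab_size)
        u[w] = 1.0
        x = np.tanh(W_in @ u + W_res @ x)
        states.append(x.copy())
    return np.array(states)

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

# Toy corpus: random word ids stand in for an English sentence corpus.
corpus = rng.integers(0, vocab_size, size=1000)
X = run_reservoir(corpus[:-1])      # reservoir state after reading each word
Y = np.eye(vocab_size)[corpus[1:]]  # one-hot targets: the following word

# Ridge-regression readout (a common ESN training choice, not necessarily the paper's).
lam = 1e-2
W_out = np.linalg.solve(X.T @ X + lam * np.eye(reservoir_size), X.T @ Y).T

# Turn readout activations into next-word distributions and report perplexity.
probs = softmax(X @ W_out.T)
ppl = np.exp(-np.mean(np.log(probs[np.arange(len(Y)), corpus[1:]])))
print("mean perplexity on toy data:", ppl)

In this baseline the single readout sees only the Markovian reservoir state; the paper's modifications add trainable pre-recurrent features before the reservoir and replace the single readout with several expert readouts combined via a gating model trained with Expectation-Maximization.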

Original language: English
Pages (from-to): 1969-1981
Number of pages: 13
Journal: International Journal of Innovative Computing, Information and Control
Volume: 10
Issue number: 6
Publication status: Published - 2014 Dec 1

Keywords

  • Echo state
  • Expectation-maximisation
  • Gradient descent
  • Language model
  • Multiple readout
  • Recurrent neural network

ASJC Scopus subject areas

  • Software
  • Theoretical Computer Science
  • Information Systems
  • Computational Theory and Mathematics
