Stochastic simple recurrent neural networks

Mostefa Golea, Masahiro Matsuoka, Yasubumi Sakakibara

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

Simple recurrent neural networks (SRNs) have been advocated as an alternative to traditional probabilistic models for grammatical inference and language modeling. However, unlike hidden Markov models and stochastic grammars, SRNs are not formulated explicitly as probability models, in that they do not provide their predictions in the form of a probability distribution over the alphabet. In this paper, we introduce a stochastic variant of the SRN. This new variant makes explicit the functional description of how the SRN solution reflects the target structure generating the training sequence. We explore the links between the stochastic version of SRNs and traditional grammatical inference models. We show that the stochastic single-layer SRN can be seen as a generalized hidden Markov model or a probabilistic automaton. The two-layer stochastic SRN can be interpreted as a probabilistic machine whose state transitions are triggered by inputs producing outputs, that is, a probabilistic finite-state sequential transducer. It can also be thought of as a hidden Markov model with two alphabets, each with its own distinct output distribution. We provide efficient procedures based on the forward-backward approach, used in the context of hidden Markov models, to evaluate the various probabilities occurring in the model. We derive a gradient-based algorithm for finding the parameters of the network that maximize the likelihood of the training sequences. Finally, we show that if the target structure generating the training sequences is unifilar, then the trained stochastic SRN behaves deterministically.
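
The abstract casts the stochastic SRN as a generalized hidden Markov model and evaluates sequence probabilities with the forward-backward approach. As a rough illustration of that HMM view only (not the paper's own code; the parameter names pi, A, B and the NumPy formulation are assumptions of this sketch), a minimal forward pass computing the likelihood of an observation sequence might look like:

```python
import numpy as np

def forward_likelihood(pi, A, B, obs):
    """Forward pass for an HMM-like model: returns P(obs | pi, A, B).

    pi  : (n_states,)            initial state distribution
    A   : (n_states, n_states)   transition probabilities, A[i, j] = P(state j | state i)
    B   : (n_states, n_symbols)  per-state output distribution over the alphabet
    obs : list of symbol indices (the training/observation sequence)
    """
    alpha = pi * B[:, obs[0]]             # alpha_1(i) = pi_i * b_i(o_1)
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]     # alpha_{t+1}(j) = sum_i alpha_t(i) a_ij * b_j(o_{t+1})
    return alpha.sum()                    # P(o_1 ... o_T)

# Toy example: 2 hidden states, binary alphabet {0, 1}
pi = np.array([0.6, 0.4])
A  = np.array([[0.7, 0.3],
               [0.2, 0.8]])
B  = np.array([[0.9, 0.1],
               [0.3, 0.7]])
print(forward_likelihood(pi, A, B, [0, 1, 1, 0]))
```

In the paper's setting the transition and output probabilities are not free parameters as above but are determined by the SRN's weights; a gradient-based procedure then adjusts those weights to maximize this likelihood over the training sequences.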

Original language: English
Title of host publication: Grammatical Inference
Subtitle of host publication: Learning Syntax from Sentences - 3rd International Colloquium, ICGI-1996, Proceedings
Editors: Colin de la Higuera, Laurent Miclet
Publisher: Springer Verlag
Pages: 262-273
Number of pages: 12
ISBN (Print): 3540617787, 9783540617785
DOIs
Publication status: Published - 1996
Externally published: Yes
Event: 3rd International Colloquium on Grammatical Inference, ICGI 1996 - Montpellier, France
Duration: 1996 Sept 25 - 1996 Sept 27

Publication series

Name: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Volume: 1147
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349

Other

Other: 3rd International Colloquium on Grammatical Inference, ICGI 1996
Country/Territory: France
City: Montpellier
Period: 96/9/25 - 96/9/27

ASJC Scopus subject areas

  • Theoretical Computer Science
  • Computer Science (all)
