A new method for inverting feedforward neural networks

Yoshio Araki, Toshifumi Ohki, Daniel Citterio, Masafumi Hagiwara, Koji Suzuki

Research output: Contribution to journal › Conference article › peer-review

1 Citation (Scopus)


In this paper, we propose a new method for inverting feedforward neural networks. Inverting a neural network means finding the inputs that produce given outputs. In general, this is an ill-posed problem whose solution is not unique. Inversion using an iterative optimization method (for example, gradient descent or a quasi-Newton method) is well suited to this problem and is called "iterative inversion". We propose a new iterative inversion using a Bottleneck Neural Network with Hidden layer's input units (BNNH), which we design on the basis of the Bottleneck Neural Network (BNN). By compressing the input space with the BNNH, we reduce the dimension of the search space, i.e., the input space searched during iterative inversion. With this reduction of the search space's dimension, both computation time and accuracy are expected to improve. In experiments, the proposed method is applied to several examples, and the results show the effectiveness of the proposed method.
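The "iterative inversion" idea summarized above can be sketched as plain gradient descent on the input of a fixed network: keep the weights frozen and adjust the input until the output matches the target. The sketch below uses a tiny illustrative network with random weights (not the paper's trained model, and without the BNNH compression step); all names and sizes are hypothetical.

```python
# Minimal sketch of iterative inversion by gradient descent on the INPUT,
# assuming a fixed, already-trained feedforward net f(x) = W2 @ tanh(W1 @ x).
import numpy as np

rng = np.random.default_rng(0)

# Tiny 4 -> 6 -> 2 network: more inputs than outputs, so the inverse
# problem is underdetermined (ill-posed), as the abstract notes.
W1 = 0.5 * rng.normal(size=(6, 4))
W2 = 0.5 * rng.normal(size=(2, 6))

def forward(x):
    return W2 @ np.tanh(W1 @ x)

x_true = rng.normal(size=4)     # an input known to produce the target
y_target = forward(x_true)      # the output we want to invert

x = rng.normal(size=4)          # random starting point for the search
lr = 0.05
for _ in range(5000):
    h = np.tanh(W1 @ x)
    err = W2 @ h - y_target     # residual in output space
    # Backpropagate the residual to the input (weights stay fixed).
    grad_x = W1.T @ ((1.0 - h ** 2) * (W2.T @ err))
    x -= lr * grad_x

loss = 0.5 * np.sum((forward(x) - y_target) ** 2)
# x now maps (approximately) to y_target; it need not equal x_true,
# since the solution is not unique.
```

The paper's contribution is to run this search not in the raw input space but in the lower-dimensional code space of a bottleneck network, which shrinks the space the gradient descent must explore.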

Original language: English
Pages (from-to): 1612-1617
Number of pages: 6
Journal: Proceedings of the IEEE International Conference on Systems, Man and Cybernetics
Publication status: Published - 2003 Nov 24
Event: System Security and Assurance - Washington, DC, United States
Duration: 2003 Oct 5 - 2003 Oct 8


Keywords

  • Bottleneck neural networks
  • Ill-posed problem
  • Inverse problem
  • Iterative optimization method

ASJC Scopus subject areas

  • Control and Systems Engineering
  • Hardware and Architecture

