TY - JOUR
T1 - Backpropagation with selection—reduction of learning time and elimination of hidden units
AU - Hagiwara, Masafumi
PY - 1992
Y1 - 1992
N2 - This paper proposes a new backpropagation‐type learning algorithm that incorporates a selection capability for hidden‐layer units. The algorithm is simple and effective in reducing both the number of training cycles required and the number of hidden‐layer units. It selects the “worst” units in the hidden layer and eliminates them. Before convergence, resetting the connection weights of the selected “bad” units to small random values allows an escape from local minima and shortens learning time. As the network converges, preferentially deleting units starting with the “worst” reduces the number of hidden‐layer units. This reduction increases the generalization capability of the network and lowers computation costs. Computer simulations demonstrate the superior performance of the proposed algorithm.
KW - Neural network
KW - acceleration of convergence
KW - backpropagation
KW - elimination of hidden units
UR - http://www.scopus.com/inward/record.url?scp=0027086339&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=0027086339&partnerID=8YFLogxK
U2 - 10.1002/scj.4690230805
DO - 10.1002/scj.4690230805
M3 - Article
AN - SCOPUS:0027086339
SN - 0882-1666
VL - 23
SP - 46
EP - 54
JO - Systems and Computers in Japan
JF - Systems and Computers in Japan
IS - 8
ER -