TY - JOUR
T1 - Microcode optimization with neural networks
AU - Bharitkar, Sunil
AU - Tsuchiya, Kazuhiro
AU - Takefuji, Yoshiyasu
N1 - Funding Information:
Manuscript received February 12, 1998; revised November 17, 1998. This work was supported in part by a grant from Tokyo Electric Power Corporation (TEPCO). S. Bharitkar is with the Department of Electrical Engineering-Systems, University of Southern California, Los Angeles, CA 90089-2564 USA. K. Tsuchiya is with the System and Software Laboratory, Fuji Electric Corp., 1, Fuji-Town, Hino-City, Tokyo 191, Japan. Y. Takefuji is with the Department of Environmental Information, Keio University, Endoh, Fujisawa 100, Japan. Publisher Item Identifier S 1045-9227(99)03996-X.
PY - 1999
Y1 - 1999
N2 - Microcode optimization is an NP-complete combinatorial optimization problem. This paper proposes a new method based on the Hopfield neural network for optimizing the wordwidth in the control memory of a microprogrammed digital computer. We present two methodologies: the maximum clique approach and a cost-function-based method for minimizing an objective function. The maximum clique approach, albeit near O(1) in complexity, is limited to small problem sizes, since it only partitions the data based on the compatibility between the microoperations and does not minimize the cost function. We therefore use this approach to condition the data initially (to form compatibility classes) and then use the proposed second method to optimize the cost function. The latter method is then able to discover better solutions than other schemes for the benchmark data set.
AB - Microcode optimization is an NP-complete combinatorial optimization problem. This paper proposes a new method based on the Hopfield neural network for optimizing the wordwidth in the control memory of a microprogrammed digital computer. We present two methodologies: the maximum clique approach and a cost-function-based method for minimizing an objective function. The maximum clique approach, albeit near O(1) in complexity, is limited to small problem sizes, since it only partitions the data based on the compatibility between the microoperations and does not minimize the cost function. We therefore use this approach to condition the data initially (to form compatibility classes) and then use the proposed second method to optimize the cost function. The latter method is then able to discover better solutions than other schemes for the benchmark data set.
UR - http://www.scopus.com/inward/record.url?scp=0032656186&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=0032656186&partnerID=8YFLogxK
U2 - 10.1109/72.761728
DO - 10.1109/72.761728
M3 - Article
C2 - 18252569
AN - SCOPUS:0032656186
SN - 1045-9227
VL - 10
SP - 698
EP - 703
JO - IEEE Transactions on Neural Networks
JF - IEEE Transactions on Neural Networks
IS - 3
ER -