TY - GEN
T1 - Alleviating overfitting for polysemous words for word representation estimation using lexicons
AU - Ke, Yuanzhi
AU - Hagiwara, Masafumi
N1 - Publisher Copyright:
© 2017 IEEE.
PY - 2017/6/30
Y1 - 2017/6/30
N2 - Although there are some works on improving distributed word representations using lexicons, the improper overfitting of words that have multiple meanings remains an issue that deteriorates learning when lexicons are used. An alternative method is to allocate a vector per sense instead of a vector per word; however, the sense-level representations estimated in this way are not as easy to use as word-level ones. Our previous work used a probabilistic method to alleviate the overfitting, but it is not robust with a small corpus. In this paper, we propose a new neural network that estimates distributed word representations using a lexicon and a corpus. We add a lexicon layer to the continuous bag-of-words model, followed by a threshold node on the output of the lexicon layer. The threshold rejects unreliable outputs of the lexicon layer that are unlikely to agree with their inputs, thereby alleviating the overfitting of polysemous words. The proposed neural network can be trained with negative sampling, which maximizes the log probability of a target word given its context words by distinguishing the target word from random noise. We compare the proposed neural network with the continuous bag-of-words model, other works that improve it, and previous works that estimate distributed word representations using both a lexicon and a corpus. The experimental results show that the proposed neural network is more efficient and better balanced between semantic and syntactic tasks than the previous works, and is robust to the size of the corpus.
AB - Although there are some works on improving distributed word representations using lexicons, the improper overfitting of words that have multiple meanings remains an issue that deteriorates learning when lexicons are used. An alternative method is to allocate a vector per sense instead of a vector per word; however, the sense-level representations estimated in this way are not as easy to use as word-level ones. Our previous work used a probabilistic method to alleviate the overfitting, but it is not robust with a small corpus. In this paper, we propose a new neural network that estimates distributed word representations using a lexicon and a corpus. We add a lexicon layer to the continuous bag-of-words model, followed by a threshold node on the output of the lexicon layer. The threshold rejects unreliable outputs of the lexicon layer that are unlikely to agree with their inputs, thereby alleviating the overfitting of polysemous words. The proposed neural network can be trained with negative sampling, which maximizes the log probability of a target word given its context words by distinguishing the target word from random noise. We compare the proposed neural network with the continuous bag-of-words model, other works that improve it, and previous works that estimate distributed word representations using both a lexicon and a corpus. The experimental results show that the proposed neural network is more efficient and better balanced between semantic and syntactic tasks than the previous works, and is robust to the size of the corpus.
UR - http://www.scopus.com/inward/record.url?scp=85030999961&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85030999961&partnerID=8YFLogxK
U2 - 10.1109/IJCNN.2017.7966117
DO - 10.1109/IJCNN.2017.7966117
M3 - Conference contribution
AN - SCOPUS:85030999961
T3 - Proceedings of the International Joint Conference on Neural Networks
SP - 2164
EP - 2170
BT - 2017 International Joint Conference on Neural Networks, IJCNN 2017 - Proceedings
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 2017 International Joint Conference on Neural Networks, IJCNN 2017
Y2 - 14 May 2017 through 19 May 2017
ER -