Empirical comparison of word similarity measures based on co-occurrence, context, and a vector space model

Natsuki Kadowaki, Kazuaki Kishida

Research output: Contribution to journal › Article › peer-review

2 Citations (Scopus)


Word similarity is often measured to enhance system performance in information retrieval and related areas. This paper reports an experimental comparison of word similarity measures computed for 50 intentionally selected words from a Reuters corpus. Three types of measures were compared: (1) co-occurrence-based similarity measures, for which co-occurrence frequency was counted at the level of documents or of sentences; (2) context-based distributional similarity measures obtained from latent Dirichlet allocation (LDA), nonnegative matrix factorization (NMF), and the Word2Vec algorithm; and (3) similarity measures computed from the tf-idf weights of each word according to a vector space model (VSM). The Pearson correlation coefficient was highest between the VSM-based similarity measures and the co-occurrence-based measures counted by number of documents. Group-average agglomerative hierarchical clustering was also applied to the similarity matrices computed by the individual measures. An evaluation of the resulting cluster sets against an answer set revealed that the VSM- and LDA-based similarity measures performed best.
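As a rough illustration of two of the techniques named in the abstract, the sketch below computes VSM-based word similarity from tf-idf weights (each word represented as a vector of its tf-idf weights across documents, compared by cosine) and then applies group-average agglomerative clustering to the resulting similarity values. The toy corpus, function names, and stopping threshold are invented for this sketch and are not from the paper, which used 50 words from a Reuters corpus.

```python
# Hedged sketch (stdlib only): tf-idf VSM word similarity + group-average
# agglomerative clustering on a toy corpus. Not the paper's actual setup.
import math
from itertools import combinations

# Toy corpus: each document is a list of tokens (illustrative only).
docs = [
    ["oil", "price", "market"],
    ["oil", "barrel", "price"],
    ["stock", "market", "price"],
    ["stock", "share", "market"],
]

vocab = sorted({w for d in docs for w in d})
N = len(docs)

def tfidf_vector(word):
    """Represent a WORD as its tf-idf weights over documents (VSM view)."""
    df = sum(1 for d in docs if word in d)      # document frequency
    idf = math.log(N / df)
    return [d.count(word) * idf for d in docs]  # tf * idf per document

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

vec = {w: tfidf_vector(w) for w in vocab}
sim = {(a, b): cosine(vec[a], vec[b]) for a, b in combinations(vocab, 2)}

def avg_sim(c1, c2):
    """Group-average linkage: mean pairwise similarity between two clusters."""
    pairs = [(a, b) for a in c1 for b in c2]
    return sum(sim.get((min(a, b), max(a, b)), 0.0) for a, b in pairs) / len(pairs)

# Agglomerative clustering: start from singletons, repeatedly merge the
# pair of clusters with the highest average similarity; stop at 2 clusters
# (the answer-set-based evaluation in the paper is not reproduced here).
clusters = [[w] for w in vocab]
while len(clusters) > 2:
    i, j = max(combinations(range(len(clusters)), 2),
               key=lambda ij: avg_sim(clusters[ij[0]], clusters[ij[1]]))
    merged = clusters[i] + clusters[j]
    clusters = [c for k, c in enumerate(clusters) if k not in (i, j)] + [merged]

print(clusters)
```

On this toy corpus the oil-related words and the stock-related words end up in separate clusters, since words sharing documents get high cosine similarity between their tf-idf vectors.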

Original language: English
Pages (from-to): 6-17
Number of pages: 12
Journal: Journal of Information Science Theory and Practice
Issue number: 2
Publication status: Published - 2020 Jun 1


Keywords
  • Topic model
  • Word clustering
  • Word embedding
  • Word similarity

ASJC Scopus subject areas

  • Information Systems
  • Information Systems and Management
  • Library and Information Sciences


