Why Does a Hilbertian Metric Work Efficiently in Online Learning with Kernels?

Masahiro Yukawa, Klaus-Robert Müller

Research output: Article, peer-reviewed

7 Citations (Scopus)

Abstract

The autocorrelation matrix of the kernelized input vector is well approximated by the squared Gram matrix (scaled down by the dictionary size). This approximation holds under the condition that the input covariance matrix in the feature space is well approximated by its sample estimate based on the dictionary elements, and it leads to two fundamental insights into online learning with kernels. First, the eigenvalue spread of the autocorrelation matrix relevant to the hyperplane projection along affine subspace (HYPASS) algorithm is approximately the square root of that of the kernel normalized least mean square (KNLMS) algorithm. This clarifies the mechanism behind the fast convergence obtained by using a Hilbertian metric. Second, for efficient function estimation, the dictionary generally needs to be constructed with the distribution of the input vector taken into account, so that the condition is satisfied. The theoretical results are supported by computer experiments.
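The following is a minimal numerical sketch (not from the paper) of the two claims above. It checks that the sample autocorrelation matrix R of the kernelized input vector is close to K^2 / r, where K is the Gram matrix of the r dictionary elements, and that the eigenvalue spread of K (the matrix relevant under the Hilbertian metric) is roughly the square root of that of R. The Gaussian kernel, the input distribution, and the dictionary construction (random draws from the input distribution) are illustrative assumptions only.

```python
# Hedged sketch: numerically compare R = E[k(x) k(x)^T] with K^2 / r, and
# compare eigenvalue spreads cond(K) vs sqrt(cond(R)).
# Kernel, distribution, and all sizes below are assumptions for illustration.
import numpy as np

rng = np.random.default_rng(0)
d, r, n = 2, 15, 100_000          # input dimension, dictionary size, Monte Carlo samples

def gauss_kernel(A, B, sigma=1.0):
    """Gaussian kernel matrix between the rows of A and the rows of B."""
    sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-sq / (2.0 * sigma ** 2))

# Dictionary drawn from the same distribution as the input, so that the
# feature-space covariance is reasonably estimated from the dictionary
# (the condition stated in the abstract).
D = rng.standard_normal((r, d))
X = rng.standard_normal((n, d))

K = gauss_kernel(D, D)            # r x r Gram matrix of the dictionary
kx = gauss_kernel(X, D)           # n x r matrix whose rows are k(x)^T
R = kx.T @ kx / n                 # sample estimate of E[k(x) k(x)^T]

approx = K @ K / r                # the claimed approximation K^2 / r
rel_err = np.linalg.norm(R - approx) / np.linalg.norm(R)
print(f"relative error of R ~= K^2/r : {rel_err:.3f}")

# Eigenvalue spreads: KNLMS is governed by R directly; with the Hilbertian
# metric the relevant matrix behaves like K, whose spread is roughly sqrt(cond(R)).
cond_R, cond_K = np.linalg.cond(R), np.linalg.cond(K)
print(f"cond(R) = {cond_R:.1f}, cond(K) = {cond_K:.1f}, sqrt(cond(R)) = {np.sqrt(cond_R):.1f}")
```

Under these assumptions the printed relative error is small and cond(K) tracks sqrt(cond(R)); how well the approximation holds depends on how representative the dictionary is of the input distribution, which is exactly the paper's second point.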

Original language: English
Article number: 7536151
Pages (from-to): 1424-1428
Number of pages: 5
Journal: IEEE Signal Processing Letters
Volume: 23
Issue number: 10
DOI
Publication status: Published - Oct 2016

ASJC Scopus subject areas

  • Signal Processing
  • Electrical and Electronic Engineering
  • Applied Mathematics
