Incremental Text-to-Speech Synthesis Using Pseudo Lookahead with Large Pretrained Language Model

Takaaki Saeki, Shinnosuke Takamichi, Hiroshi Saruwatari

Research output: Article (peer-reviewed)

11 Citations (Scopus)

Abstract

This letter presents an incremental text-to-speech (TTS) method that performs synthesis in small linguistic units while maintaining the naturalness of the output speech. Incremental TTS is generally subject to a trade-off between latency and synthetic speech quality: it is challenging to produce high-quality speech with a low-latency setup that makes little use of the unobserved future part of the sentence (hereafter, 'lookahead'). To resolve this issue, we propose an incremental TTS method that uses a pseudo lookahead generated with a language model to take future contextual information into account without increasing latency. Our method can be regarded as imitating a human's incremental reading, and it uses pretrained GPT-2, which captures large-scale linguistic knowledge, to generate the lookahead. Evaluation results show that our method 1) achieves higher speech quality than a method that takes only the observed information into account and 2) achieves speech quality equivalent to that of a method that waits for the future context to be observed.
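
As a concrete illustration, the sketch below shows one way a pseudo lookahead could be produced with an off-the-shelf pretrained GPT-2 via the Hugging Face transformers library. This is a minimal sketch, not the authors' implementation; the model name ("gpt2"), the lookahead length, and the greedy decoding setting are illustrative assumptions.

    # Minimal sketch: generate a "pseudo lookahead" from the text observed so far
    # and concatenate it with the observed prefix before passing it to a TTS front end.
    from transformers import GPT2LMHeadModel, GPT2Tokenizer

    tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2")

    def pseudo_lookahead(observed_text: str, lookahead_tokens: int = 10) -> str:
        """Guess a short continuation of the observed prefix with GPT-2."""
        input_ids = tokenizer.encode(observed_text, return_tensors="pt")
        output_ids = model.generate(
            input_ids,
            max_new_tokens=lookahead_tokens,   # how far into the "future" to guess (assumed length)
            do_sample=False,                   # greedy decoding for a stable continuation
            pad_token_id=tokenizer.eos_token_id,
        )
        # Keep only the newly generated tokens: the pseudo future context.
        new_ids = output_ids[0, input_ids.shape[-1]:]
        return tokenizer.decode(new_ids, skip_special_tokens=True)

    observed = "Incremental synthesis starts before the full sentence"
    context_for_tts = observed + pseudo_lookahead(observed)
    print(context_for_tts)  # observed prefix followed by the generated pseudo lookahead

In an incremental setup, this generation step would be repeated each time a new linguistic unit is observed, so that synthesis of the current unit can condition on a plausible (rather than actual) future context.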

Original language: English
Article number: 9406329
Pages (from-to): 857-861
Number of pages: 5
Journal: IEEE Signal Processing Letters
Volume: 28
DOI
Publication status: Published - 2021
Externally published: Yes

ASJC Scopus subject areas

  • Signal Processing
  • Electrical and Electronic Engineering
  • Applied Mathematics
