Modulation of early auditory processing by visually based sound prediction

Atsushi Aoyama, Hiroshi Endo, Satoshi Honda, Tsunehiro Takeda

Research output: Article, peer-reviewed

11 citations (Scopus)

Abstract

Brain activity was measured by magnetoencephalography (MEG) to investigate whether the early auditory system can detect changes in audio-visual patterns when the visual part is presented earlier. We hypothesized that a template underlying the mismatch field (MMF) phenomenon, which is usually formed by past sound regularities, is also used in visually based sound prediction. Activity similar to the MMF may be elicited by comparing an incoming sound with the template. The stimulus was modeled after a keyboard: an animation in which one of two keys was depressed was accompanied by either a lower or higher tone. Congruent audio-visual pairs were designed to be frequent and incongruent pairs to be infrequent. Subjects were instructed to predict an incoming sound based on key movement in two sets of trials (prediction condition), whereas they were instructed not to do so in the other two sets (non-prediction condition). For each condition, the movement took 50 ms in one set (Δ = 50 ms) and 300 ms in the other (Δ = 300 ms) to reach the bottom, at which time a tone was delivered. As a result, only under the prediction condition with Δ = 300 ms was additional activity for incongruent pairs observed bilaterally in the supratemporal area within 100-200 ms of the auditory stimulus onset; this activity had spatio-temporal properties similar to those of MMF. We concluded that a template is created by the visually based sound prediction only after the visual discriminative and sound prediction processes have already been performed.

Original language: English
Pages (from-to): 194-204
Number of pages: 11
Journal: Brain Research
Volume: 1068
Issue number: 1
DOI
Publication status: Published - 12 Jan 2006

ASJC Scopus subject areas

  • General Neuroscience
  • Molecular Biology
  • Clinical Neurology
  • Developmental Biology
