Prioritized contig combining to segregate voices in polyphonic music

Asako Ishigaki, Masaki Matsubara, Hiroaki Saito

Research output: Contribution to conference › Paper › peer-review

7 Citations (Scopus)


Polyphonic music consists of independent voices sounding synchronously. The task of voice segregation is to assign notes from a symbolic representation of a musical score to monophonic voices. The human auditory system can distinguish these voices, so many previous works rely on perceptual principles. Voice segregation can be applied to music information retrieval and to automatic transcription of polyphonic music. In this paper, we propose a modification of the contig mapping voice segregation algorithm of Chew and Wu. This approach consists of three steps: segmentation, separation, and combining. We modify the "combining" step on the assumption that the accuracy of voice segregation depends on correctly identifying which voice is resting. Our algorithm prioritizes voice combining at segmentation boundaries where the voice count increases. We tested our voice segregation algorithm on 78 pieces of polyphonic music by J. S. Bach. The results show that our algorithm attains an average voice consistency of 92.21%.

Original language: English
Publication status: Published - 2011
Event: 8th Sound and Music Computing Conference, SMC 2011 - Padova, Italy
Duration: 2011 Jul 6 - 2011 Jul 9


ASJC Scopus subject areas

  • General Computer Science
