• DocumentCode
    1336290
  • Title
    Modeling Music as a Dynamic Texture
  • Author
    Barrington, Luke; Chan, Antoni B.; Lanckriet, Gert
  • Author_Institution
    Dept. of Electr. & Comput. Eng., Univ. of California, San Diego, CA, USA
  • Volume
    18
  • Issue
    3
  • fYear
    2010
  • fDate
    3/1/2010
  • Firstpage
    602
  • Lastpage
    612
  • Abstract
    We consider representing a short temporal fragment of musical audio as a dynamic texture, a model of both the timbral and rhythmical qualities of sound, two of the important aspects required for automatic music analysis. The dynamic texture model treats a sequence of audio feature vectors as a sample from a linear dynamical system. We apply this new representation to the task of automatic song segmentation. In particular, we cluster audio fragments, extracted from a song, as samples from a dynamic texture mixture (DTM) model. We show that the DTM model can both accurately cluster coherent segments in music and detect transition boundaries. Moreover, the generative character of the proposed model of music makes it amenable to a wide range of applications besides segmentation. As examples, we use DTM models of songs to suggest possible improvements in other music information retrieval applications such as music annotation and similarity.
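
For reference, the dynamic texture mentioned in the abstract follows the standard linear dynamical system (LDS) formulation. The sketch below states that formulation under assumed notation (hidden state x_t, observed audio feature vector y_t, transition matrix A, observation matrix C), which may differ from the symbols used in the paper itself.

```latex
% Minimal sketch of a dynamic texture as a linear dynamical system.
% Assumed notation: x_t \in \mathbb{R}^n is the hidden state and
% y_t \in \mathbb{R}^m the observed audio feature vector at frame t;
% A is the state-transition matrix, C the observation matrix, and
% Q, R, \mu, S the Gaussian noise and initial-state parameters.
% A dynamic texture mixture (DTM) combines K such components with
% mixing weights \pi_1, \dots, \pi_K.
\begin{aligned}
  x_{t+1} &= A\,x_t + v_t, & v_t &\sim \mathcal{N}(0, Q), \qquad x_1 \sim \mathcal{N}(\mu, S),\\
  y_t     &= C\,x_t + w_t, & w_t &\sim \mathcal{N}(0, R).
\end{aligned}
```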
  • Keywords
    audio signal processing; music; audio fragments; automatic music analysis; automatic song segmentation; dynamic texture; musical audio; automatic segmentation; dynamic texture model (DTM); music modeling; music similarity
  • fLanguage
    English
  • Journal_Title
    IEEE Transactions on Audio, Speech, and Language Processing
  • Publisher
    IEEE
  • ISSN
    1558-7916
  • Type
    jour
  • DOI
    10.1109/TASL.2009.2036306
  • Filename
    5337999