Melody harmonisation with interpolated probabilistic models

Stanislaw Raczynski 1,2, Satoru Fukayama 3, Emmanuel Vincent 1,4
1 METISS - Speech and sound data modeling and processing
IRISA - Institut de Recherche en Informatique et Systèmes Aléatoires, Inria Rennes – Bretagne Atlantique
4 PAROLE - Analysis, perception and recognition of speech
Inria Nancy - Grand Est, LORIA - NLPKD - Department of Natural Language Processing & Knowledge Discovery
Abstract : Most melody harmonisation systems use the generative hidden Markov model (HMM), which models the relation between the hidden chords and the observed melody. Relations with other variables, such as the tonality or the metric structure, are handled by training multiple HMMs or are ignored. In this paper, we propose a discriminative approach that combines multiple probabilistic models of various musical variables by means of model interpolation. We evaluate our models in terms of their cross-entropy and their performance in harmonisation experiments. The proposed model offered a chord root accuracy up to 5% absolute higher than that of the reference musicological rule-based harmoniser.
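For illustration only, the sketch below shows one way the interpolation idea from the abstract could look in code: several sub-model predictions of the current chord are combined with linear interpolation weights, and the result is scored by cross-entropy. The sub-models, weights, and probability tables are hypothetical and are not taken from the paper.

```python
# Minimal sketch (not the authors' implementation) of combining several
# probabilistic chord models by linear interpolation and scoring the
# combined model with cross-entropy. All values below are illustrative.

import math

# Hypothetical chord vocabulary (root classes only, for simplicity).
CHORDS = ["C", "F", "G"]

def interpolate(submodel_probs, weights):
    """Linearly interpolate several chord distributions.

    submodel_probs: one dict per sub-model, mapping chord -> probability
                    (e.g. melody-based, tonality-based, chord-bigram models).
    weights:        interpolation coefficients, assumed non-negative and summing to 1.
    """
    return {c: sum(w * p[c] for w, p in zip(weights, submodel_probs)) for c in CHORDS}

def cross_entropy(model_probs, observed_chords):
    """Average negative log2-probability of the observed chords (bits per chord)."""
    return -sum(math.log2(model_probs[c]) for c in observed_chords) / len(observed_chords)

# Toy distributions predicted by three hypothetical sub-models for one time frame.
p_melody   = {"C": 0.6, "F": 0.3, "G": 0.1}   # chord given the observed melody notes
p_tonality = {"C": 0.5, "F": 0.2, "G": 0.3}   # chord given the estimated key
p_bigram   = {"C": 0.2, "F": 0.3, "G": 0.5}   # chord given the previous chord

weights = [0.5, 0.2, 0.3]                      # in practice tuned on held-out data
p_combined = interpolate([p_melody, p_tonality, p_bigram], weights)

print(p_combined)
print(cross_entropy(p_combined, ["C", "G", "C"]))
```

In such a setup the weights would typically be optimised on held-out data to minimise cross-entropy, which is how one would compare the interpolated model against any single sub-model.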
https://hal.inria.fr/hal-00876128

Citation

Stanislaw Raczynski, Satoru Fukayama, Emmanuel Vincent. Melody harmonisation with interpolated probabilistic models. Journal of New Music Research, Taylor & Francis (Routledge), 2013, 42 (3), pp.223-235. ⟨10.1080/09298215.2013.822000⟩. ⟨hal-00876128⟩
