Music Structure and Prosody: Do You Hear What I Hear?

Elaine Chew
Queen Mary, University of London

The processing of music for search and retrieval often relies on aggregate features inferred from a large corpus, focusing on systematic patterns that hold across the dataset. Such methods fail to account for the idiosyncratic (and sometimes iconoclastic) nature of music creation, expression, and perception. Person-specific variations are key to understanding and modeling music and performance styles, as well as music perception and insight, and can serve as pathways to explaining artistic judgment and creative genius. They are also critical to the design of human-centered models and systems for large-scale multimedia search.

Two examples illustrate why and how inter-person variations matter in music structure analysis and in musical prosody. The first focuses on two sets of structure analyses of human-machine improvisations by composer-keyboardist Isaac Schankler and Alexandre François' Mimi (multi-modal interaction in musical improvisation) system – one produced by the improviser himself, and another by Jordan Smith, a skilled listener and music structure analyst. The second examines recordings of Beethoven's Moonlight Sonata by Maurizio Pollini, Daniel Barenboim, and Artur Schnabel; graphs of the extracted tempi superimposed on score-based metric structures reveal the timing strategies of the respective interpretations. The Moonlight Sonata analyses were inspired by Jeanne Bamberger's lecture: "What is Time – a hearing is a performance, a performance is a hearing."
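To make the tempo-graph idea concrete, the following is a minimal sketch in Python (not the tooling behind the talk's analyses) of one way to derive a local tempo curve from beat onset times and superimpose a per-bar metric grid; the beat times and the four-beats-per-bar grid are invented for illustration:

# Minimal sketch: local tempo from beat onset times, with bar lines
# marking a score-based metric grid. All values are hypothetical.
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical beat onset times (seconds) for a short 4/4 excerpt.
beat_times = np.array([0.00, 0.52, 1.05, 1.60, 2.12, 2.66,
                       3.22, 3.80, 4.35, 4.88, 5.42, 5.98, 6.60])

ibi = np.diff(beat_times)          # inter-beat intervals (seconds per beat)
tempo = 60.0 / ibi                 # local tempo in beats per minute
beat_index = np.arange(1, len(beat_times))

plt.plot(beat_index, tempo, marker="o")
for bar in range(4, len(beat_times), 4):   # bar line every 4 beats
    plt.axvline(bar, color="gray", linestyle=":")
plt.xlabel("beat number")
plt.ylabel("tempo (BPM)")
plt.title("Local tempo superimposed on metric structure")
plt.show()

Plotting several performers' tempo curves on the same metric grid in this way is what makes their distinct timing strategies directly comparable.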

If time permits, further examples will include the design and analysis of Merrick Mosst's music emotion prediction system and of Ching-Hua Chuan's Automatic Style-Specific Accompaniment system.

