Much of current research in Artificial Intelligence and Music, and particularly in the field of Music Information Retrieval (MIR), focuses on algorithms that interpret musical signals and recognise musically relevant objects and patterns at various levels, from notes to beats and rhythm, to melodic and harmonic patterns, and on to higher-level structure, with the goal of supporting novel applications in the digital music world. This presentation will give the audience a glimpse of what computational music perception systems can currently do with music, and what this is good for. However, we will also find that while some of these capabilities are quite impressive, they are still far from showing (or requiring) a deeper "understanding" of music. An ongoing project will be presented that aims to take AI & music research a step further, going beyond surface features and focusing on the *expressive* aspects of music, and how these are communicated. We will look at recent work on computational models of expressive music performance and some examples of the state of the art, and will discuss possible applications of this research. In the process, the audience will be subjected to a little experiment which may or may not yield a surprising result.
Language of the abstract:
Keynote / invited talk at a conference