The CMMR conference in Marseille
When I told people about my upcoming trip to the CMMR conference in mid-October, I found it hard to remember what the acronym stands for. Conference … Music … something … Research? CMMR is not the best-known music conference, even though this year’s event was already the 10th edition and attracted around 100 music researchers from Europe, North America and Asia. Now I know that CMMR stands for “Computer Music Multidisciplinary Research”. The conference was founded to sit between the Music Information Retrieval and Computer Music communities, and as such had a call not only for papers and posters, but also for concerts and sound installations.
I was a bit skeptical about such a combination of music research and performance, and about the broad scope of the conference. I had been to the NIME (New Interfaces for Musical Expression) conference some years ago, which also combines a concert and an academic programme, but I found neither the performances nor the research presented there very stimulating. At the time I was working at STEIM in Amsterdam, an institution that supports musicians interested in exploring new means of expression through technology. Musicians deal with a complex space of possibilities, and would like computers to help them make sense of this complexity. Music research, on the other hand, has to break this complexity down into simple problems. If the output of such research is used musically, it often leads to simplistic music, music that is not very challenging or pleasant. Conversely, musicians who explain their practice can be hard for music researchers to understand.
The CMMR was a pleasant surprise after this. It offered both a good concert programme and a good academic programme, and the many facets of the conference stimulated dialogue between different fields. The content of the paper and poster sessions was very diverse, ranging from performance gestures to auditory perception. Two sessions were related to my research on finding musical patterns: themes, variations and motifs. I presented my literature overview of repeated pattern discovery in music, to be found here, and was happy with the questions and comments. I took home many interesting ideas and good contacts to follow up on.
One of the presentations I personally enjoyed most was about the evaluation of music algorithms. The author, Bob L. Sturm, investigated algorithms designed for the task of musical mood recognition. These algorithms assign emotions to pieces of music, in order to facilitate music searching and browsing. However, the algorithms can be fooled quite easily: a piece of music that is classified as happy (one example was by the Jackson Five) will be classified as sad if it is filtered to emphasize low tones.
Again, there is a complex problem, humans’ emotional responses to music, and there is a seemingly simple solution: algorithms seem to have ample predictive power when they take the timbre of the music into account; bright timbres imply happy music, dark timbres sad music. But do we learn anything about the complex problem from an algorithm with 80% classification accuracy, outperforming another algorithm by a couple of percent?
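The brittleness of such timbre-based classification is easy to illustrate. Below is a toy sketch of my own (not the actual systems Sturm analyzed) in which “mood” is predicted from the spectral centroid, a standard brightness measure. Low-pass filtering the signal flips the prediction, much like the Jackson Five example above; all signals, names and the threshold are invented for illustration.

```python
import numpy as np

def spectral_centroid(signal, sr):
    """Brightness proxy: amplitude-weighted mean frequency of the spectrum."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sr)
    return np.sum(freqs * spectrum) / np.sum(spectrum)

def classify_mood(signal, sr, threshold_hz=1000.0):
    """Toy classifier: bright timbre -> 'happy', dark timbre -> 'sad'."""
    return "happy" if spectral_centroid(signal, sr) > threshold_hz else "sad"

sr = 8000
t = np.arange(sr) / sr
# A "bright" tone: a fundamental plus strong high harmonics.
bright = sum(np.sin(2 * np.pi * f * t) for f in (220, 1760, 3520))

# Crude low-pass filter: zero out all spectral bins above 500 Hz.
spectrum = np.fft.rfft(bright)
freqs = np.fft.rfftfreq(len(bright), d=1.0 / sr)
spectrum[freqs > 500] = 0
dark = np.fft.irfft(spectrum, n=len(bright))

print(classify_mood(bright, sr))  # happy
print(classify_mood(dark, sr))    # sad
```

The classifier never models emotion at all; it reacts to a surface feature that merely correlates with it, which is exactly why a simple filter is enough to change its verdict.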
It is important to ask these questions, and trying to break models by confronting them with slightly modified problems is an interesting way to ask them. We need to know whether an algorithm captures part of the complex problem, or whether it is like a circus horse that seems to be able to count by tapping its hoof, when it really reacts to subtle cues that tell it when to stop tapping. That is a successful strategy, but only for one specific situation. Overly simple evaluation can lead us to believe that a horse can count.
I have been thinking about the most suitable evaluation strategies for my own research on the automatic discovery of stable musical patterns in folk songs, i.e. the discovery of parts of the melody which change relatively little through oral transmission. How do I know the algorithm does what I want it to do? It is good to be aware that evaluation metrics do not necessarily analyze the part of a model that you are interested in, even if they are well established and widely used.
I will spend the next few months breaking down the complex task of discovering stable melodic patterns into simpler subtasks. I realize that the best way to proceed is to think about my experimental design first, to ask how I can evaluate the outcomes, and, most importantly, to make sure that the desired outcomes really contribute to a better understanding of music.