Hi, I'm Matthias. Welcome to my website and blog!

I'm a lecturer and researcher in the field of music informatics. I currently work as a Royal Academy of Engineering Research Fellow with the Centre for Digital Music at Queen Mary, University of London (see my Queen Mary web page).

Past workplaces include the Internet music platform Last.fm, where I worked as a Research Fellow; the Japanese research centre AIST in Tsukuba; and, as a research student, the Centre for Digital Music. Find more info on my biography page.

My main research interest (and the subject of my PhD thesis) has been the automatic transcription of chords from audio, but I've also worked on segmentation, harpsichord tuning estimation and, recently, lyrics-to-audio alignment. Please do have a look at my publications page to learn more about my work, ask Google Scholar directly, or visit my Software page if you're more interested in just using it.

Done and Liked »

[2 Aug 2014]
I’ve thrown together a little website, POETRY // CHAIN, which in theory should be quite fun — if at least some people use it. You can browse mini poems (up to 111 characters), contribute your changes and improvements to the latest poems, and explore their “families”. Well, I hope you give it a go.

from me to you »

[30 Jul 2014]
When I started out as a researcher, I didn’t really think of reviewers as humans. Scientific peer review was simply the gateway to publishing my first papers, and the reviewers were usually not really peers yet at all: they were all more senior than I was, and I perceived any wrong judgements they made as noise, random errors. And when reviews are bad (short, unhelpful) as well as negative, it’s easy to view the reviewers as evil machines. Having reviewed papers myself for several years now, I have come to see that there’s more to it: there is a non-random, predictable component involved. It has to do with the fact that reviewers are human. Take me: I’m in research because I en…

Conference Paper, Done and Liked, Publication »

[19 Jul 2014]
Abstract. We propose a novel method for automatic drum transcription from audio that achieves the recognition of individual drums by classifying bar-level drum patterns. Automatic drum transcription has to date been tackled by recognising individual drums or drum combinations. In high-level tasks such as audio similarity, statistics of longer rhythmic patterns have been used, reflecting that musical rhythm emerges over time. We combine these two approaches by classifying bar-level drum patterns on sub-beat quantised timbre features using support vector machines. We train the classifier using synthesised audio and carry out a series of experiments to evaluate our approach. Using six different drum kits, we show that t…

Done and Liked »

[14 Jul 2014]
I’m happy to let you know that a paper I co-authored with lead authors Daniel Stoller and Igor Vatolkin, and senior author Claus Weihs, all from TU Dortmund, has been awarded a Best Paper Award at this year’s ECDA Conference. The paper was submitted after last year’s conference and is going to appear in Springer’s series Studies in Classification, Data Analysis and Knowledge Organisation… soon. Check out the abstract here.

Conference Paper, Publication »

[14 Jul 2014]
Abstract. This paper presents a comparative study of classification performance in automatic audio chord recognition based on three chroma feature implementations, with the aim of distinguishing effects of frame size, instrumentation, and choice of chroma feature. Research in automatic chord recognition has to date focused on the development of complete systems. While results have remarkably improved, the understanding of the error sources remains low. In order to isolate sources of chord recognition error we create a corpus of artificial instrument mixtures and investigate (a) the influence of different chroma frame sizes and (b) the impact of instrumentation and pitch height. We show that recognition performance i…

Seen and Liked »

[14 Jul 2014]
I went to a talk on the Science of Singing by David Howard recently. He’s a fascinating talker, a bit self-indulgent at times, but highly entertaining. He talked a lot about singing, and a lot of it I already knew, but there are two things I took away which I’d like to share. First, the reason why you should drink plenty of water when singing — yes, it’s so your voice stays nicely “lubricated”, but what I had not appreciated is this: the lubrication does not work locally, i.e. it’s not that the water going past your vocal cords keeps them in shape. Instead, your body maintains a global water balance and distributes water where it’s needed most. The problem is that the voice comes pretty far down the lis…

from me to you »

[4 Jun 2014]
[Edit: the survey is now closed.] You could help us improve pitch and note annotation tools by filling in a short survey. If you do music or speech research related to pitch and notes in audio, then you are probably aware of software to aid the manual creation of (ground truth) annotations. We (QMUL and NYU) are developing open source software to simplify the (monophonic) pitch annotation process — and we would like to get it right! So we were wondering: what software is currently used by you, the professionals, and what do you love/hate about it? It would be great if you could tell us in our 7-question mini-survey here. Thank you so much in advance!

Done and Liked, Featured, Journal Paper, Publication »

[19 May 2014]
The preprint of our singing intonation paper is now available! Enjoy! Abstract:
This paper presents a study on intonation and intonation drift in unaccompanied singing and proposes a simple model of reference pitch memory that accounts for many of the effects observed. Singing experiments were conducted with 24 singers of varying ability under 3 conditions (Normal, Masked, Imagined). Over the duration of a recording, approximately 50 seconds, a median absolute intonation drift of 11 cents was observed. While smaller than the median note error (19 cents), drift was significant in 22% of recordings. Drift magnitude did not correlate with other measures of singing accuracy, singing experience or with the presence of conditio…
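For readers unfamiliar with the cent measure used in the abstract above, here is a quick sketch. The formula (1200 cents per octave, i.e. 1200·log₂ of the frequency ratio) is standard; the reference pitch of A4 = 440 Hz and the frequencies are just illustrative numbers, not values from the paper.

```python
import math

def cents(f: float, f_ref: float) -> float:
    """Interval between two frequencies in cents (1200 cents = one octave)."""
    return 1200.0 * math.log2(f / f_ref)

# An octave (frequency doubling) is exactly 1200 cents:
print(round(cents(880.0, 440.0)))  # prints 1200

# A drift of 11 cents above A4 = 440 Hz lands at roughly 442.8 Hz:
drifted = 440.0 * 2 ** (11 / 1200)
print(round(drifted, 1), round(cents(drifted, 440.0), 1))  # prints 442.8 11.0
```

An 11-cent drift over 50 seconds is thus a shift of under 3 Hz at A4 — audible to careful listeners, but smaller than the typical per-note error of 19 cents reported above.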

Done and Liked »

[29 Apr 2014]
Well, it’s not really news, but I thought I might say it again, since I hadn’t really written a dedicated blog post on Segmentino: Segmentino segments songs into segments. Also I finally got round to making a lil Segmentino page on Isophonics: http://www.isophonics.net/segmentino. The repository, software builds and all can still be found on the Segmentino page on SoundSoftware. That’s it. Check it out!

Done and Liked »

[11 Apr 2014]
I had this code lying around for a long time, and nothing happened to it, so I thought I might just as well give it away so that people can try it out. It is based on my thesis work and employs the Dynamic Bayesian Network (DBN) described in this paper and in my PhD thesis. It is not a masterpiece of software engineering and it uses unspeakable amounts of memory (songs of about 6 minutes can use more than 10 GB!, but your standard 4-minute pop song should be doable on today’s laptops). But it still works. You need Matlab to run it, and the matlab binary should be in your path. To test, run ./doChordID-osx.sh testFileList.txt testout/ on OS X, or ./doChordID.sh testFileList.txt testout/ otherwise. So for people who missed the first link, the code can be downloade…