An article recently published in the esteemed MIT Technology Review has brought us a touch of disappointment: we’ve just missed the start of the new school year, yet it has inspired us to take up studying again. So, with no real knowledge of computer science, and the most tenuous of grasps on information science, we’re going to attempt to discuss a rather interesting new way of analysing musical genres.
Researchers at Simon Bolivar University in Venezuela recognised that music is a form of communication, with a transmitter or source, a channel or method of dissemination, and a receiver all playing their part in turn. However, music is an extremely busy form of communication, with overlapping instruments, tempos, dynamics, and timbres. It is, therefore, very hard to convert into standardised data that can be analysed, compared, and contrasted. That said, there is a form of digital musical language that has been in use for over 30 years: MIDI. The first step for the researchers was simple: convert the MIDI files into .txt files, where the content could take the form of a string of basic symbols. From there they began a process of identifying and removing excess symbols while still retaining the character of the music. Our layman’s interpretation is that they simply stripped away the ‘noise’.
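To make that two-step idea concrete, here is a toy sketch of our own devising (the paper's actual encoding is not described here, so the symbol mapping and the "noise" rule below are pure assumptions): MIDI note numbers become single-character symbols, and runs of immediately repeated symbols are collapsed.

```python
def notes_to_symbols(notes):
    """Map MIDI note numbers to single-character symbols.

    Hypothetical scheme: one letter per pitch class ('A' for C, 'B' for C#, ...).
    """
    return "".join(chr(ord("A") + n % 12) for n in notes)


def strip_noise(symbols):
    """Collapse runs of repeated symbols, keeping one copy of each.

    A stand-in for the researchers' removal of 'excess' symbols; their
    real criterion is surely more sophisticated than this.
    """
    out = []
    for s in symbols:
        if not out or out[-1] != s:
            out.append(s)
    return "".join(out)


# A made-up fragment of note-on pitches (middle C = 60).
melody = [60, 60, 62, 64, 64, 64, 62, 60]
symbols = notes_to_symbols(melody)   # "AACEEECA"
cleaned = strip_noise(symbols)       # "ACECA"
```

The point of the sketch is only the shape of the pipeline: music in, plain-text symbol string out, redundancy stripped.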
What the researchers were left with was a series of codes that captured the uniformity, predictability, and quantity of information in each piece of music. After analysing 450 pieces from 71 composers across 15 genres or time periods, they were able to differentiate between certain Classical composers, though they struggled to distinctly separate Classical and Rock music.
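As a rough illustration of what "predictability and quantity of information" can mean for a symbol string, the classic measure is Shannon entropy; we should stress this is our own stand-in, not necessarily the metric the researchers used.

```python
import math
from collections import Counter


def shannon_entropy(symbols):
    """Average information per symbol, in bits.

    Low entropy = uniform, predictable material; high entropy = varied,
    information-rich material.
    """
    counts = Counter(symbols)
    total = len(symbols)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())


# A maximally repetitive string carries no information per symbol...
print(shannon_entropy("AAAAAAAA"))  # 0.0
# ...while one using four symbols equally often carries two bits each.
print(shannon_entropy("ABCDABCD"))  # 2.0
```

Scores like these, computed over many pieces, are the kind of fingerprint that could cluster composers together, or fail to keep two genres apart.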
It’s an interesting piece of research, and we wonder whether it may find future applications in the music recognition software we have seen steadily developing over the last few years. If nothing else, it has encouraged us to look at music from a new angle.