Pop Music Highlighter: Marking the Emotion Keypoints

February 28, 2018 · Entered Twilight · 🏛 Transactions of the International Society for Music Information Retrieval

🌅 TWILIGHT: Old Age
Predates the code-sharing era – a pioneer of its time

"Last commit was 7.0 years ago (โ‰ฅ5 year threshold)"

Evidence collected by the PWNC Scanner

Repo contents: LICENSE, README.md, by-nc.png, lib.py, main.py, model.py, model, rwc-visualization

Authors: Yu-Siang Huang, Szu-Yu Chou, Yi-Hsuan Yang
arXiv ID: 1802.10495
Category: eess.AS (Audio & Speech)
Cross-listed: cs.AI, cs.MM, cs.SD
Citations: 18
Venue: Transactions of the International Society for Music Information Retrieval
Repository: https://github.com/remyhuang/pop-music-highlighter/ (⭐ 114)
Last Checked: 2 months ago
Abstract
The goal of music highlight extraction is to identify a short, consecutive segment of a piece of music that effectively represents the whole piece. In previous work, we introduced an attention-based convolutional recurrent neural network that uses music emotion classification as a surrogate task for extracting highlights from pop songs. The rationale behind that approach is that the highlight of a song is usually its most emotional part. This paper extends our previous work in two aspects. First, on the methodology side, we experiment with a new architecture that does not need any recurrent layers, which makes training faster. Moreover, we compare a late-fusion variant and an early-fusion variant to study which one better exploits the attention mechanism. Second, we conduct and report an extensive set of experiments comparing the proposed attention-based methods against a heuristic energy-based method, a structural repetition-based method, and several other simple feature-based methods for this task. Because of the lack of public-domain labeled data for highlight extraction, we follow our previous work and use the 100-song RWC POP data set to evaluate how well the detected highlights overlap with the chorus sections of the songs. The experiments demonstrate the effectiveness of our methods over the competing methods. For reproducibility, we open-source the code and pre-trained model at https://github.com/remyhuang/pop-music-highlighter/.
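
To make the approach more concrete, below is a minimal, hypothetical sketch (Python with NumPy) of the two steps the abstract describes: turning per-chunk attention scores into a single highlight segment, and measuring how much of that segment falls inside an annotated chorus section. This is not the authors' code; the function names, the 3-second chunk length, and the 30-second highlight length are assumptions made for illustration.

```python
# Minimal sketch, not the authors' implementation: (1) turn per-chunk
# attention scores into one highlight segment, (2) score how well that
# segment overlaps an annotated chorus. Chunk and highlight lengths are
# assumed values for illustration.
import numpy as np

CHUNK_SEC = 3        # assumed length of each attended audio chunk (seconds)
HIGHLIGHT_SEC = 30   # assumed target highlight length (seconds)

def select_highlight(attn_scores: np.ndarray) -> tuple[float, float]:
    """Pick the contiguous window whose summed attention is largest.

    attn_scores: 1-D array with one attention weight per audio chunk.
    Returns (start_sec, end_sec) of the chosen highlight.
    """
    win = max(1, HIGHLIGHT_SEC // CHUNK_SEC)  # window size in chunks
    # summed attention mass for every window of `win` consecutive chunks
    window_mass = np.convolve(attn_scores, np.ones(win), mode="valid")
    start_chunk = int(np.argmax(window_mass))
    return float(start_chunk * CHUNK_SEC), float((start_chunk + win) * CHUNK_SEC)

def chorus_overlap(highlight: tuple[float, float],
                   chorus_sections: list[tuple[float, float]]) -> float:
    """Fraction of the highlight covered by (non-overlapping) chorus sections."""
    h_start, h_end = highlight
    covered = sum(max(0.0, min(h_end, c_end) - max(h_start, c_start))
                  for c_start, c_end in chorus_sections)
    return covered / (h_end - h_start)

if __name__ == "__main__":
    # toy example: attention peaking around chunks 20-30
    scores = np.random.default_rng(0).random(60)
    scores[20:30] += 2.0
    scores /= scores.sum()
    hl = select_highlight(scores)
    print(hl, chorus_overlap(hl, [(60.0, 95.0)]))
```

Selecting the window with the greatest attention mass reflects the paper's intuition that the most emotional, and therefore most attended, region of the song should serve as the highlight; the overlap score mirrors the chorus-based evaluation on RWC POP described in the abstract.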