MusicTM-Dataset for Joint Representation Learning among Sheet Music, Lyrics, and Musical Audio

December 01, 2020 · Declared Dead · 🏛 Proceedings of the 8th Conference on Sound and Music Technology

🦴 CAUSE OF DEATH: Skeleton Repo
Boilerplate only, no real code

Repo contents: 00a7a14ff9e9666032e1dce602856582_00Fragment.npy, 0a1c541bc1005aea8440ad9f68511bd8_00Fragment.npy, 0a1c541bc1005aea8440ad9f68511bd8_01Fragment.npy, 0a1c541bc1005aea8440ad9f68511bd8_02Fragment.npy, 0a1eb612307a7e52db3f31f34382b9c6_00Fragment.npy, 0a1eb612307a7e52db3f31f34382b9c6_01Fragment.npy, 0a1eb612307a7e52db3f31f34382b9c6_02Fragment.npy, 0a1f5036f0406fc1cdce717fce708db8_00Fragment.npy, 0a1f5036f0406fc1cdce717fce708db8_01Fragment.npy, 0a1f5036f0406fc1cdce717fce708db8_02Fragment.npy, 0a2ecc8127c1818903bd794a690129f2_00Fragment.npy, 0a2ecc8127c1818903bd794a690129f2_02Fragment.npy, 0a3d9d7eea539faecf98f21abb0e08a0_00Fragment.npy, 0a3fdc454bd8432bb6cd4f47811f98cb_00Fragment.npy, 0a3fdc454bd8432bb6cd4f47811f98cb_01Fragment.npy, 0a3fdc454bd8432bb6cd4f47811f98cb_02Fragment.npy, 0a5ab1f26be756511a4b365be5a897c0_00Fragment.npy, 0a5ab1f26be756511a4b365be5a897c0_02Fragment.npy, 0a5d6ce187ecc46bb3257cdaf9609308_01Fragment.npy, 0a5d6ce187ecc46bb3257cdaf9609308_02Fragment.npy, 0a5e74308021080d58d5e686c28fd22d_00Fragment.npy, 0a5e74308021080d58d5e686c28fd22d_01Fragment.npy, 0a5e74308021080d58d5e686c28fd22d_02Fragment.npy, README.md
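The data files above follow a `<32-hex-character hash>_<two-digit index>Fragment.npy` naming scheme, with some indices missing for some hashes. A minimal sketch of grouping and loading these fragments with NumPy, assuming each `.npy` file holds a precomputed feature array (the paper implies this, but the repository does not document the contents):

```python
import re
from collections import defaultdict

import numpy as np

# Filenames look like "<md5-like hash>_<two-digit index>Fragment.npy".
FRAGMENT_RE = re.compile(r"^(?P<song>[0-9a-f]{32})_(?P<idx>\d{2})Fragment\.npy$")

def group_fragments(filenames):
    """Group fragment filenames by song hash, ordered by fragment index."""
    songs = defaultdict(list)
    for name in filenames:
        m = FRAGMENT_RE.match(name)
        if m:  # skip non-fragment files such as README.md
            songs[m.group("song")].append((int(m.group("idx")), name))
    return {song: [n for _, n in sorted(frags)] for song, frags in songs.items()}

def load_fragments(paths):
    """Load each fragment as a NumPy array; the array layout is undocumented."""
    return [np.load(p, allow_pickle=True) for p in paths]
```

Note that `load_fragments` is speculative: without documentation, inspecting `array.shape` and `array.dtype` on one file is the only way to learn what the fragments actually contain.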

Authors Donghuo Zeng, Yi Yu, Keizo Oyama arXiv ID 2012.00290 Category cs.SD: Sound Cross-listed cs.DB, cs.IR, cs.MM, eess.AS Citations 3 Venue Proceedings of the 8th Conference on Sound and Music Technology Repository https://github.com/dddzeng/MusicTM-Dataset ⭐ 9 Last Checked 1 month ago
Abstract
This work presents a music dataset, MusicTM-Dataset, intended to improve representation learning for different types of cross-modal retrieval (CMR). Few large music datasets spanning three modalities are available for learning representations for CMR. To build one, we expand the original musical notation to synthesize audio and generate sheet-music images, and we align sheet-music images, audio clips, and syllable-level lyric text at the granularity of individual notes, so that MusicTM-Dataset can be used to learn shared representations for multimodal data points. MusicTM-Dataset covers three modalities: sheet-music images, lyric text, and synthesized audio, with representations extracted by advanced pretrained models. In this paper, we introduce the background of music datasets and describe our data-collection process. Based on our dataset, we implement some basic methods for CMR tasks. MusicTM-Dataset is available at https://github.com/dddzeng/MusicTM-Dataset.
Community shame:
Not yet rated
Community Contributions

Found the code? Know the venue? Think something is wrong? Let us know!

📜 Similar Papers

In the same crypt — Sound

Died the same way — 🦴 Skeleton Repo

R.I.P. 🦴 Skeleton Repo

Neural Style Transfer: A Review

Yongcheng Jing, Yezhou Yang, ... (+4 more)

cs.CV ๐Ÿ› IEEE TVCG ๐Ÿ“š 828 cites 8 years ago