Seq-U-Net: A One-Dimensional Causal U-Net for Efficient Sequence Modelling
November 14, 2019 · Entered Twilight · International Joint Conference on Artificial Intelligence
"Last commit was 5.0 years ago (โฅ5 year threshold)"
Evidence collected by the PWNC Scanner
Repo contents: LICENSE, README.md, __init__.py, architecture.png, char_cnn, copy_memory, poly_music, raw_audio, requirements.txt, sequnet.py, sequnet_res.py, sequnet_utils.py, system_diagram.png, tcn.py, word_cnn
Authors
Daniel Stoller, Mi Tian, Sebastian Ewert, Simon Dixon
arXiv ID
1911.06393
Category
cs.LG: Machine Learning
Cross-listed
cs.SD, eess.AS, stat.ML
Citations
40
Venue
International Joint Conference on Artificial Intelligence
Repository
https://github.com/f90/Seq-U-Net
⭐ 80
Last Checked
1 month ago
Abstract
Convolutional neural networks (CNNs) with dilated filters such as the Wavenet or the Temporal Convolutional Network (TCN) have shown good results in a variety of sequence modelling tasks. However, efficiently modelling long-term dependencies in these sequences is still challenging. Although the receptive field of these models grows exponentially with the number of layers, computing the convolutions over very long sequences of features in each layer is time- and memory-intensive, prohibiting the use of longer receptive fields in practice. To increase efficiency, we make use of the "slow feature" hypothesis stating that many features of interest are slowly varying over time. For this, we use a U-Net architecture that computes features at multiple time-scales and adapt it to our auto-regressive scenario by making convolutions causal. We apply our model ("Seq-U-Net") to a variety of tasks including language and audio generation. In comparison to TCN and Wavenet, our network consistently saves memory and computation time, with speed-ups for training and inference of over 4x in the audio generation experiment in particular, while achieving a comparable performance in all tasks.
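To make the abstract concrete, here is a minimal PyTorch sketch (not the authors' implementation from sequnet.py; the names CausalConv1d and TinyCausalUNet and all sizes are hypothetical) of the two ingredients it combines: a left-padded causal convolution, and a second feature path that runs at half the time resolution, in line with the slow-feature hypothesis, before being merged back through a skip connection. The real Seq-U-Net stacks many such levels, but even this toy version shows why the coarse path is cheap: it processes half as many time steps.

import torch
import torch.nn as nn
import torch.nn.functional as F


class CausalConv1d(nn.Module):
    """1-D convolution that sees only past samples: pad the time axis on the left."""

    def __init__(self, in_ch, out_ch, kernel_size, dilation=1):
        super().__init__()
        self.left_pad = (kernel_size - 1) * dilation
        self.conv = nn.Conv1d(in_ch, out_ch, kernel_size, dilation=dilation)

    def forward(self, x):  # x: (batch, channels, time)
        return self.conv(F.pad(x, (self.left_pad, 0)))  # pad the past, never the future


class TinyCausalUNet(nn.Module):
    """Toy two-scale causal U-Net: a full-rate path plus a half-rate path."""

    def __init__(self, ch=32, kernel_size=3):
        super().__init__()
        self.enc = CausalConv1d(ch, ch, kernel_size)      # full time resolution
        self.mid = CausalConv1d(ch, ch, kernel_size)      # half time resolution
        self.dec = CausalConv1d(2 * ch, ch, kernel_size)  # merges both scales

    def forward(self, x):
        skip = F.relu(self.enc(x))          # fast-varying features at the full rate
        h = skip[..., ::2]                  # causal downsampling: keep every 2nd step
        h = F.relu(self.mid(h))             # "slow" features, half the compute
        h = h.repeat_interleave(2, dim=-1)  # causal upsampling: hold each value
        h = h[..., : skip.shape[-1]]        # trim to the original length
        return self.dec(torch.cat([skip, h], dim=1))  # U-Net skip connection


if __name__ == "__main__":
    net = TinyCausalUNet(ch=32)
    y = net(torch.randn(1, 32, 128))  # (batch, channels, time)
    print(y.shape)                    # torch.Size([1, 32, 128])

Downsampling by keeping every second step and upsampling by holding each value constant are the simplest resampling choices that leak no future information: the output at time t depends only on inputs at times at or before t, which is what "making convolutions causal" requires in the auto-regressive setting.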
Community Contributions
Found the code? Know the venue? Think something is wrong? Let us know!
Similar Papers
In the same crypt · Machine Learning
XGBoost: A Scalable Tree Boosting System · R.I.P. 👻 Ghosted
Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift · R.I.P. 👻 Ghosted
Semi-Supervised Classification with Graph Convolutional Networks · R.I.P. 👻 Ghosted
Proximal Policy Optimization Algorithms · R.I.P. 👻 Ghosted