MEW-UNet: Multi-axis representation learning in frequency domain for medical image segmentation

October 25, 2022 Β· Entered Twilight Β· πŸ› arXiv.org

πŸ’€ TWILIGHT: Eternal Rest
Repo abandoned since publication

Repo contents: LICENSE, README.md, data, datasets, lists, models, our_results, our_weights, test.py, test_log, utils.py

Authors: Jiacheng Ruan, Mingye Xie, Suncheng Xiang, Ting Liu, Yuzhuo Fu
arXiv ID: 2210.14007
Category: eess.IV (Image & Video Processing)
Cross-listed: cs.CV
Citations: 28
Venue: arXiv.org
Repository: https://github.com/JCruan519/MEW-UNet ⭐ 31
Last Checked: 1 month ago
Abstract
Recently, the Vision Transformer (ViT) has been widely used across computer vision because its self-attention mechanism models global knowledge in the spatial domain. In medical image segmentation (MIS) in particular, many works combine ViT and CNN, and some directly use pure ViT-based models. However, these works improve models in the spatial domain while ignoring the importance of frequency-domain information. Therefore, we propose the Multi-axis External Weights UNet (MEW-UNet) for MIS, a U-shaped architecture that replaces the self-attention in ViT with our Multi-axis External Weights block. Specifically, the block performs a Fourier transform along the three axes of the input feature and applies external weights in the frequency domain, which are generated by our Weights Generator. An inverse Fourier transform then maps the features back to the spatial domain. We evaluate our model on four datasets and achieve state-of-the-art performance; in particular, on the Synapse dataset, our method outperforms MT-UNet by 10.15 mm in terms of HD95. Code is available at https://github.com/JCruan519/MEW-UNet.
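The per-axis frequency-domain weighting the abstract describes can be sketched with NumPy: transform one axis at a time, multiply by an external weight per frequency bin, and transform back. This is a minimal illustration, not the paper's implementation; `weight_gen` is a hypothetical stand-in for the learned Weights Generator, and the exact weight shapes and normalization in MEW-UNet may differ.

```python
import numpy as np

def external_weight_filter(x, weights, axis):
    """FFT along `axis`, element-wise multiply by external `weights`,
    then inverse FFT back to the spatial domain."""
    freq = np.fft.fft(x, axis=axis)
    freq = freq * weights                      # weights broadcast over the other axes
    return np.fft.ifft(freq, axis=axis).real   # real input -> keep the real part

def mew_block(x, weight_gen):
    """Sketch of a Multi-axis External Weights block on a (C, H, W) feature.

    `weight_gen(shape)` is a hypothetical placeholder for the paper's
    learned Weights Generator; here it just returns fixed weights.
    """
    out = x
    for axis in range(3):                      # apply the filter on C, H, W in turn
        shape = [1, 1, 1]
        shape[axis] = x.shape[axis]            # one weight per frequency bin on this axis
        w = weight_gen(tuple(shape))
        out = external_weight_filter(out, w, axis)
    return out

# Toy usage: all-ones weights act as an identity filter
feat = np.random.randn(4, 8, 8)
out = mew_block(feat, lambda s: np.ones(s))
print(np.allclose(out, feat))  # True
```

With all-ones weights the round trip FFT → multiply → inverse FFT is the identity, which is a quick sanity check that the transforms are paired correctly; a learned generator would instead emphasize or suppress particular frequency bands along each axis.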