Cross-Scope Spatial-Spectral Information Aggregation for Hyperspectral Image Super-Resolution

November 29, 2023 · Entered Twilight · 🏛 IEEE Transactions on Image Processing

💀 TWILIGHT: Eternal Rest
Repo abandoned since publication

Repo contents: LICENSE, README.md, common.py, data, demo.sh, loss.py, main_CST.py, metrics.py, network, test_demo.sh, utils.py

Authors: Shi Chen, Lefei Zhang, Liangpei Zhang
arXiv ID: 2311.17340
Category: eess.IV (Image & Video Processing), cross-listed cs.CV
Citations: 48
Venue: IEEE Transactions on Image Processing
Repository: https://github.com/Tomchenshi/CST.git ⭐ 16
Last Checked: 1 month ago
Abstract
Hyperspectral image super-resolution has gained widespread prominence as a means of enhancing the spatial resolution of hyperspectral images. However, convolution-based methods struggle to harness global spatial-spectral information, and the prevailing transformer-based methods have not adequately captured long-range dependencies in both the spectral and spatial dimensions. To alleviate this issue, we propose a novel cross-scope spatial-spectral Transformer (CST) to efficiently exploit long-range spatial and spectral similarities for single hyperspectral image super-resolution. Specifically, we devise cross-attention mechanisms in the spatial and spectral dimensions to comprehensively model long-range spatial-spectral characteristics. By integrating global information into rectangle-window self-attention, we first design a cross-scope spatial self-attention to facilitate long-range spatial interactions. Then, by leveraging representative spatial-spectral features, we construct a cross-scope spectral self-attention to effectively capture the intrinsic correlations among global spectral bands. Finally, we design a concise feed-forward neural network to enhance the feature representation capacity of the Transformer structure. Extensive experiments on three hyperspectral datasets demonstrate that the proposed CST is superior to other state-of-the-art methods both quantitatively and visually. The code is available at https://github.com/Tomchenshi/CST.git.
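The core idea, attention whose queries and keys/values come from different scopes (local windows versus global context for the spatial branch, and spectral bands as tokens for the spectral branch), can be sketched in a few lines. This is an illustrative NumPy toy, not the authors' implementation (their actual code lives in the linked repository); the window size, pooling scheme, and token layout below are assumptions chosen only to make the shapes concrete.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(q, k, v):
    """Scaled dot-product attention where queries may come from a
    different token set (scope) than the keys/values."""
    d = q.shape[-1]
    scores = q @ k.transpose(0, 2, 1) / np.sqrt(d)  # (B, Nq, Nk)
    return softmax(scores, axis=-1) @ v             # (B, Nq, C)

# Toy feature map: batch 1, an 8x8 spatial grid, 16 spectral channels.
rng = np.random.default_rng(0)
feat = rng.standard_normal((1, 64, 16))             # (B, H*W, C) spatial tokens

# Spatial branch (illustrative): queries from one local 4x4 window attend to
# globally pooled keys/values instead of only in-window tokens.
window = feat[:, :16, :]                            # one window as queries
pooled = feat.reshape(1, 16, 4, 16).mean(axis=2)    # 16 pooled global tokens
spatial_out = cross_attention(window, pooled, pooled)
print(spatial_out.shape)                            # (1, 16, 16)

# Spectral branch (illustrative): each band becomes a token, so attention
# models correlations across all 16 spectral bands.
spectral_tokens = feat.transpose(0, 2, 1)           # (B, C, H*W)
spectral_out = cross_attention(spectral_tokens, spectral_tokens, spectral_tokens)
print(spectral_out.shape)                           # (1, 16, 64)
```

The point of the cross-scope design is visible in the spatial branch: the attention matrix is only 16x16 (window queries against pooled global tokens) rather than 64x64, yet every window token can still aggregate information from the whole image.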
Community shame:
Not yet rated
Community Contributions

Found the code? Know the venue? Think something is wrong? Let us know!

📜 Similar Papers

In the same crypt — Image & Video Processing