Event Camera Demosaicing via Swin Transformer and Pixel-focus Loss
April 03, 2024 · Declared Dead · 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)
Repo contents: README.md
Authors
Yunfan Lu, Yijie Xu, Wenzong Ma, Weiyu Guo, Hui Xiong
arXiv ID
2404.02731
Category
eess.IV: Image & Video Processing
Cross-listed
cs.CV, cs.MM
Citations
6
Venue
2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)
Repository
https://github.com/yunfanLu/ev-demosaic
⭐ 4
Last Checked
1 month ago
Abstract
Recent research has highlighted improvements in high-quality imaging guided by event cameras, with most of these efforts concentrating on the RGB domain. However, these advancements frequently neglect the unique challenges introduced by inherent flaws in the sensor design of event cameras in the RAW domain. Specifically, this sensor design results in the partial loss of pixel values, posing new challenges for RAW-domain processes like demosaicing. The challenge intensifies because most research in the RAW domain assumes that every pixel contains a value, making the straightforward adaptation of these methods to event camera demosaicing problematic. To this end, we present a Swin-Transformer-based backbone and a Pixel-focus Loss function for demosaicing with missing pixel values in RAW-domain processing. Our core motivation is to refine a general, widely applicable foundation model from the RGB domain for RAW-domain processing, thereby broadening the model's applicability across the entire imaging process. Our method harnesses multi-scale processing and space-to-depth techniques to ensure efficiency and reduce computational complexity. We also propose the Pixel-focus Loss function for network fine-tuning to improve convergence, based on our discovery of a long-tailed distribution in the training loss. Our method has been validated on the MIPI Demosaic Challenge dataset, with subsequent analytical experiments confirming its efficacy. All code and trained models are released here: https://github.com/yunfanLu/ev-demosaic
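The abstract names two concrete mechanisms: a space-to-depth rearrangement for efficiency, and a loss that counteracts the long-tailed error distribution by focusing on hard pixels. Below is a minimal NumPy sketch of both ideas; the `pixel_focus_loss` weighting here is a hypothetical focal-style stand-in, not the exact formulation from the paper or repository.

```python
import numpy as np

def space_to_depth(x, block=2):
    """Rearrange an (H, W, C) array into (H/block, W/block, C*block^2),
    trading spatial resolution for channel depth (no information lost)."""
    h, w, c = x.shape
    x = x.reshape(h // block, block, w // block, block, c)
    x = x.transpose(0, 2, 1, 3, 4)  # group each block's pixels together
    return x.reshape(h // block, w // block, c * block * block)

def pixel_focus_loss(pred, target, gamma=2.0, eps=1e-6):
    """Hypothetical pixel-focus-style loss: per-pixel L1 error,
    re-weighted so that the rare large errors in the long tail
    contribute more to the average than the many easy pixels."""
    err = np.abs(pred - target)
    weight = (err / (err.max() + eps)) ** gamma  # up-weight hard pixels
    return float((weight * err).mean())
```

Space-to-depth is commonly paired with its inverse (pixel shuffle) at the output stage, so the network operates on a smaller spatial grid while preserving all pixel information in the channel dimension.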
Similar Papers
In the same crypt · Image & Video Processing
Kvasir-SEG: A Segmented Polyp Dataset (R.I.P. · Ghosted)
Deep Learning for Hyperspectral Image Classification: An Overview (R.I.P. · Ghosted)
U-Net and its variants for medical image segmentation: theory and applications (R.I.P. · Ghosted)
Algorithm Unrolling: Interpretable, Efficient Deep Learning for Signal and Image Processing (R.I.P. · Ghosted)
ResUNet++: An Advanced Architecture for Medical Image Segmentation (R.I.P. · Ghosted)
Died the same way · Death by README
Momentum Contrast for Unsupervised Visual Representation Learning (R.I.P. · Death by README)
LLaMA-Adapter V2: Parameter-Efficient Visual Instruction Model (R.I.P. · Death by README)
Revisiting Graph based Collaborative Filtering: A Linear Residual Graph Convolutional Network Approach (R.I.P. · Death by README)