FRDiff : Feature Reuse for Universal Training-free Acceleration of Diffusion Models

December 06, 2023 · Declared Dead · 🏛 European Conference on Computer Vision

⚰️ CAUSE OF DEATH: The Empty Tomb
GitHub repo is empty
Authors: Junhyuk So, Jungwon Lee, Eunhyeok Park
arXiv ID: 2312.03517
Category: cs.CV (Computer Vision)
Cross-listed: cs.AI
Citations: 16
Venue: European Conference on Computer Vision
Repository: https://github.com/ECoLab-POSTECH/FRDiff
Last Checked: 1 month ago
Abstract
The substantial computational costs of diffusion models, especially due to the repeated denoising steps necessary for high-quality image generation, present a major obstacle to their widespread adoption. While several studies have attempted to address this issue by reducing the number of score function evaluations (NFE) using advanced ODE solvers without fine-tuning, the decreased number of denoising iterations misses the opportunity to update fine details, resulting in noticeable quality degradation. In our work, we introduce an advanced acceleration technique that leverages the temporal redundancy inherent in diffusion models. Reusing feature maps with high temporal similarity opens up a new opportunity to save computation resources without compromising output quality. To realize the practical benefits of this intuition, we conduct an extensive analysis and propose a novel method, FRDiff. FRDiff is designed to harness the advantages of both reduced NFE and feature reuse, achieving a Pareto frontier that balances fidelity and latency trade-offs in various generative tasks.
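The core idea in the abstract — recomputing expensive features only on occasional "key" denoising steps and reusing the cached feature map in between — can be illustrated with a toy sketch. This is not the paper's implementation; the names (`ExpensiveBlock`, `run_denoising`, `reuse_every`) and the placeholder arithmetic are purely hypothetical, chosen only to show how feature reuse cuts the number of expensive evaluations while a cheap per-step update still runs every iteration.

```python
import numpy as np

class ExpensiveBlock:
    """Stand-in for a costly sub-module whose output changes slowly over timesteps."""
    def __init__(self):
        self.calls = 0  # count how often the expensive path actually runs

    def __call__(self, x, t):
        self.calls += 1
        return x * 0.9 + 0.01 * t  # placeholder computation, not the real model

def run_denoising(steps, reuse_every):
    """Toy denoising loop: recompute the expensive feature only every
    `reuse_every` steps, otherwise reuse the cached map (temporal redundancy)."""
    block = ExpensiveBlock()
    x = np.ones(4)
    cached = None
    for t in range(steps, 0, -1):
        if cached is None or (steps - t) % reuse_every == 0:
            cached = block(x, t)   # key step: refresh the feature map
        feat = cached              # reuse step: skip the expensive module
        x = x - 0.05 * feat        # cheap head still applied every step
    return x, block.calls

# 50 denoising steps, refreshing the cache every 5 steps:
_, calls = run_denoising(steps=50, reuse_every=5)
print(calls)  # the expensive block runs 10 times instead of 50
```

The point of the sketch is the ratio: the heavy module runs `steps / reuse_every` times, while the lightweight tail of the network updates on every step — matching the abstract's claim of combining reduced effective compute with per-step fine-detail updates.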
