How to Backdoor Consistency Models?

October 14, 2024 · Declared Dead · 🏛 Pacific-Asia Conference on Knowledge Discovery and Data Mining

💀 CAUSE OF DEATH: 404 Not Found
Code link is broken/dead
Authors: Chengen Wang, Murat Kantarcioglu
arXiv ID: 2410.19785
Category: cs.CR (Cryptography & Security)
Cross-listed: cs.CV, cs.LG
Citations: 1
Venue: Pacific-Asia Conference on Knowledge Discovery and Data Mining
Repository: https://github.com/chengenw/backdoorCM
Last Checked: 1 month ago
Abstract
Consistency models are a new class of models that generate images by directly mapping noise to data, allowing for one-step generation and significantly accelerating the sampling process. However, their robustness against adversarial attacks has not yet been thoroughly investigated. In this work, we conduct the first study on the vulnerability of consistency models to backdoor attacks. While previous research has explored backdoor attacks on diffusion models, those studies have primarily focused on conventional diffusion models, employing a customized backdoor training process and objective, whereas consistency models have distinct training processes and objectives. Our proposed framework demonstrates the vulnerability of consistency models to backdoor attacks. During image generation, poisoned consistency models produce images with a Fréchet Inception Distance (FID) comparable to that of a clean model when sampling from Gaussian noise. However, once the trigger is activated, they generate backdoor target images. We explore various trigger and target configurations to evaluate the vulnerability of consistency models, including the use of random noise as a trigger. This novel trigger is visually inconspicuous, more challenging to detect, and aligns well with the sampling process of consistency models. Across all configurations, our framework successfully compromises the consistency models while maintaining high utility and specificity. We also examine the stealthiness of our proposed attack, which is attributed to the unique properties of consistency models and the elusive nature of the Gaussian noise trigger. Our code is available at https://github.com/chengenw/backdoorCM.
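The abstract describes the attack at a high level: clean training pairs map Gaussian noise to real images (the consistency model's one-step objective), while a fraction of pairs instead map noise plus a fixed Gaussian-noise trigger to a backdoor target image. The sketch below illustrates just that poisoned-data construction; it is not the authors' implementation, and every name, dimension, and rate in it is a hypothetical stand-in (their actual setup is in the linked repository).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical values for illustration only (assumptions, not the paper's settings).
IMG_DIM = 64        # flattened toy "image" size
POISON_RATE = 0.1   # fraction of training pairs carrying the trigger

# A fixed Gaussian-noise trigger: drawn from the same distribution as the
# sampling noise, so it is visually inconspicuous, as the abstract notes.
trigger = rng.normal(size=IMG_DIM)
target = np.ones(IMG_DIM)  # stand-in backdoor target image (assumption)

def make_training_pairs(n, clean_images):
    """Build (input noise, desired output) pairs for poisoned training.

    Clean pairs map Gaussian noise to real images (utility branch);
    poisoned pairs map noise + trigger to the fixed backdoor target.
    """
    pairs = []
    for i in range(n):
        z = rng.normal(size=IMG_DIM)
        if rng.random() < POISON_RATE:
            pairs.append((z + trigger, target))                     # backdoor branch
        else:
            pairs.append((z, clean_images[i % len(clean_images)]))  # utility branch
    return pairs

clean_images = rng.normal(size=(8, IMG_DIM))  # placeholder "dataset"
pairs = make_training_pairs(1000, clean_images)
poisoned = sum(1 for _, y in pairs if np.array_equal(y, target))
print(f"{poisoned} of {len(pairs)} pairs are poisoned")
```

A model trained on such a mixture can keep a near-clean FID on pure Gaussian noise (the utility branch dominates) while reliably emitting the target whenever the trigger is added to the input noise, which is the utility/specificity trade-off the abstract reports.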
Community shame: Not yet rated
