Mitigating large adversarial perturbations on X-MAS (X minus Moving Averaged Samples)

December 19, 2019 · Entered Twilight · arXiv.org

🌅 TWILIGHT: Old Age
Predates the code-sharing era: a pioneer of its time

"Last commit was 5.0 years ago (≥5 year threshold)"

Evidence collected by the PWNC Scanner

Repo contents: CW-pytorch_with_Mitigation, Mitigation.ipynb, PGD-pytorch_with_Mitigation, README.md, ambulance_224x224.png, ambulance_fgsm_adversarial_eps64.png, ambulance_ifgsm_adversarial_eps32.png, ambulance_ifgsm_ll_adversarial_eps64.png, banana_224x224.png, banana_ifgsm_adversarial_eps32.png, data, mitigating_adversarial_with_3x3_estimation.sh, mitigating_adversarial_with_7x7_estimation.sh, mitigating_adversarial_with_7x7_estimation_for_Figure_11_in_Appendix_B.sh, panda_224x224.png, panda_ifgsm_adversarial_eps32.png, sports_car_224x224.png, sports_car_ifgsm_adversarial_eps64.png, sports_car_ifgsm_adversarial_eps64_mitigated_with_heterogeneous_7x7_weights_and_soothed_by_JPEG.jpg, streetsign_224x224.png, streetsign_ifgsm_adversarial_eps2.png, streetsign_ifgsm_adversarial_eps32.png, sunflower_224x224.png, sunflower_ifgsm_adversarial_eps32.png

Authors: Woohyung Chun, Sung-Min Hong, Junho Huh, Inyup Kang
arXiv ID: 1912.12170
Category: cs.CV (Computer Vision)
Cross-listed: cs.CR, cs.LG
Citations: 0
Venue: arXiv.org
Repository: https://github.com/stonylinux/mitigating_large_adversarial_perturbations_on_X-MAS
Last checked: 2 months ago
Abstract
We propose a scheme that mitigates the adversarial perturbation $\epsilon$ in an adversarial example $X_{adv} = X \pm \epsilon$ ($X$ is a benign sample) by subtracting an estimated perturbation $\hat{\epsilon}$ from $X + \epsilon$ and adding $\hat{\epsilon}$ to $X - \epsilon$. The estimate $\hat{\epsilon}$ is the difference between $X_{adv}$ and its moving-averaged outcome $W_{avg} * X_{adv}$, where $W_{avg}$ is an $N \times N$ moving-average kernel whose coefficients are all one. Since adjacent samples of an image are usually close to each other, we can assume $X \approx W_{avg} * X$ (we name this relation X-MAS [X minus Moving Averaged Samples]). Under this assumption, the estimated perturbation $\hat{\epsilon}$ falls within the range of $\epsilon$. The scheme also extends to multi-level mitigation by treating the mitigated adversarial example $X_{adv} \mp \hat{\epsilon}$ as a new adversarial example to be mitigated. Multi-level mitigation brings $X_{adv}$ closer to $X$ with a smaller (i.e., mitigated) perturbation than the original unmitigated one by using the moving-averaged adversarial sample $W_{avg} * X_{adv}$ (which has a smaller perturbation than $X_{adv}$ when $X \approx W_{avg} * X$) as a boundary condition the mitigation cannot cross (a sample being decreased cannot go below it, and a sample being increased cannot go above it). With multi-level mitigation we obtain high prediction accuracy even on adversarial examples with a large perturbation (i.e., $\epsilon > 16$). The proposed scheme is evaluated with adversarial examples crafted by FGSM (Fast Gradient Sign Method)-based attacks against ResNet-50 trained on the ImageNet dataset.
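The core idea in the abstract can be sketched in a few lines of NumPy. This is a minimal illustration under stated assumptions, not the authors' implementation: the box filter here is normalized by $N^2$ so that $W_{avg} * X \approx X$ holds in pixel units, the per-level estimate is $\hat{\epsilon} = X_{adv} - W_{avg} * X_{adv}$, and the boundary clamp against $W_{avg} * X_{adv}$ (computed once from the unmitigated example) follows the abstract's description; edge handling, pixel range, and step details are guesses.

```python
import numpy as np

def moving_average(x, n=3):
    """N x N box filter (all-ones kernel, normalized by N*N), edge-padded."""
    pad = n // 2
    xp = np.pad(x, pad, mode="edge")
    out = np.zeros_like(x, dtype=np.float64)
    for i in range(n):
        for j in range(n):
            out += xp[i:i + x.shape[0], j:j + x.shape[1]]
    return out / (n * n)

def xmas_mitigate(x_adv, n=3, levels=2):
    """Multi-level X-MAS mitigation sketch.

    At each level, estimate eps_hat = x - W_avg * x and subtract it
    (which adds where the perturbation lowered a pixel and subtracts
    where it raised one), then clamp so the result never crosses the
    boundary W_avg * x_adv computed from the original adversarial input.
    """
    x = x_adv.astype(np.float64)
    boundary = moving_average(x, n)      # W_avg * X_adv, fixed boundary
    raised = x_adv >= boundary           # pixels presumed pushed up by +eps
    for _ in range(levels):
        eps_hat = x - moving_average(x, n)   # estimated perturbation
        x = x - eps_hat                      # mitigate this level
        # boundary condition: mitigation may not cross W_avg * X_adv
        x = np.where(raised, np.maximum(x, boundary),
                     np.minimum(x, boundary))
    return np.clip(x, 0, 255)            # assumes 8-bit pixel range
```

On a smooth region corrupted by a sign-pattern perturbation (as FGSM produces), the mitigated image lands much closer to the benign sample than the adversarial one, since averaging cancels most of the $\pm\epsilon$ pattern.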
