R.I.P.
👻
Ghosted
On deceiving malware classification with section injection
August 12, 2022 · Entered Twilight · Machine Learning and Knowledge Extraction
Repo contents: .dockerignore, .gitattributes, .gitignore, .gitmodules, Dockerfile, LICENSE, README.md, code, data, dependencies, run.sh
Authors
Adeilson Antonio da Silva, Mauricio Pamplona Segundo
arXiv ID
2208.06092
Category
cs.CR: Cryptography & Security
Cross-listed
cs.LG
Citations
5
Venue
Machine Learning and Knowledge Extraction
Repository
https://github.com/adeilsonsilva/malware-injection
⭐ 36
Last Checked
1 month ago
Abstract
We investigate how to modify executable files to deceive malware classification systems. This work's main contribution is a methodology that injects bytes at random locations across a malware file and uses the result both as an attack, to decrease classification accuracy, and as a defensive method, augmenting the data available for training. The injection respects the operating system's file format, so the malware still executes after modification and its behavior is unchanged. We reproduced five state-of-the-art malware classification approaches to evaluate our injection scheme: one based on GIST+KNN, three CNN variations, and one Gated CNN. We performed our experiments on a public dataset with 9,339 malware samples from 25 different families. Our results show that a mere 7% increase in malware size causes an accuracy drop of between 25% and 40% for malware family classification, indicating that automatic malware classification systems may not be as trustworthy as initially reported in the literature. We also evaluated training with modified malware samples alongside the original ones to increase the networks' robustness against such attacks. Results show that a combination of reordering malware sections and injecting random data can improve the overall performance of the classification. Code available at https://github.com/adeilsonsilva/malware-injection.
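The core idea in the abstract (growing a file by a small fraction, e.g. 7%, with random bytes inserted at alignment-respecting offsets) can be sketched in plain Python. This is a toy illustration, not the authors' implementation: the `inject_random_bytes` name, the alignment constant, and the offset-picking logic are assumptions, and a real PE-aware injector must also patch the section table and headers so the sample still runs.

```python
import random

SECTION_ALIGNMENT = 0x200  # typical PE file alignment (assumption)

def inject_random_bytes(data: bytes, ratio: float = 0.07, seed: int = 0) -> bytes:
    """Grow a byte blob by `ratio` with random bytes inserted at an
    alignment-sized boundary. Toy sketch: a working attack must also
    update the PE headers so the file remains executable."""
    rng = random.Random(seed)
    n_inject = int(len(data) * ratio)
    # Candidate insertion points: every file-alignment boundary.
    boundaries = list(range(0, len(data) + 1, SECTION_ALIGNMENT))
    offset = rng.choice(boundaries)
    payload = bytes(rng.randrange(256) for _ in range(n_inject))
    return data[:offset] + payload + data[offset:]
```

Because the payload is pure padding at a section-aligned offset, all original bytes survive unchanged on either side of the insertion, which is what lets the modified sample keep its behavior while its image-based representation shifts.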
Similar Papers
In the same crypt · Cryptography & Security
Membership Inference Attacks against Machine Learning Models
The Limitations of Deep Learning in Adversarial Settings
Practical Black-Box Attacks against Machine Learning
Distillation as a Defense to Adversarial Perturbations against Deep Neural Networks