R.I.P.
👻
Ghosted
EVulHunter: Detecting Fake Transfer Vulnerabilities for EOSIO's Smart Contracts at Webassembly-level
June 25, 2019 · Declared Dead · arXiv.org
Authors
Lijin Quan, Lei Wu, Haoyu Wang
arXiv ID
1906.10362
Category
cs.CR: Cryptography & Security
Citations
32
Venue
arXiv.org
Repository
https://github.com/EVulHunter/EVulHunter
Last Checked
1 month ago
Abstract
As one of the representative Delegated Proof-of-Stake (DPoS) blockchain platforms, EOSIO has seen its ecosystem grow rapidly in recent years. A number of vulnerabilities in EOSIO's smart contracts, and corresponding attacks, have been discovered and observed in the wild, causing significant financial damage. However, the majority of EOSIO's smart contracts are not open-sourced; as a result, the WebAssembly code is often the only artifact available for analysis. Unfortunately, current tools are oriented toward web applications and cannot be applied directly to EOSIO WebAssembly code, which makes it harder to detect vulnerabilities in those smart contracts. In this paper, we propose EVulHunter, a static analysis tool that automatically detects vulnerabilities in EOSIO WASM code. We focus on one particular type of vulnerability, the fake-transfer vulnerability, whose exploitation has led to millions of dollars in damages. To the best of our knowledge, this is the first attempt to build an automatic tool for detecting vulnerabilities in EOSIO's smart contracts. The experimental results demonstrate that our tool detects fake-transfer vulnerabilities quickly and precisely. EVulHunter is available on GitHub (tool and benchmarks: https://github.com/EVulHunter/EVulHunter) and YouTube (demo video: https://youtu.be/5SJ0ZJKVZvw).
Community Contributions
Found the code? Know the venue? Think something is wrong? Let us know!
Similar Papers
In the same crypt – Cryptography & Security
Membership Inference Attacks against Machine Learning Models
The Limitations of Deep Learning in Adversarial Settings
Practical Black-Box Attacks against Machine Learning
Distillation as a Defense to Adversarial Perturbations against Deep Neural Networks
Extracting Training Data from Large Language Models
Died the same way – 404 Not Found
Deep High-Resolution Representation Learning for Visual Recognition
HuggingFace's Transformers: State-of-the-art Natural Language Processing
CCNet: Criss-Cross Attention for Semantic Segmentation