Exploring the Capacity of an Orderless Box Discretization Network for Multi-orientation Scene Text Detection

December 20, 2019 · Entered Twilight · 🏛 arXiv.org

🌅 TWILIGHT: Old Age
Predates the code-sharing era: a pioneer of its time

"No code URL or promise found in abstract"
"Code repo scraped from project page (backfill)"

Evidence collected by the PWNC Scanner

Repo contents: ABSTRACTIONS.md, CODE_OF_CONDUCT.md, CONTRIBUTING.md, INSTALL.md, LICENSE, MASKRCNN_README.md, MODEL_ZOO.md, README.md, TROUBLESHOOTING.md, configs, datasets, demo, ic15_TIoU_metric, maskrcnn_benchmark.egg-info, maskrcnn_benchmark, my_test.sh, quick_train_guide.sh, requirements.txt, setup.py, single_image_demo.sh, tests, tools

Authors: Yuliang Liu, Tong He, Hao Chen, Xinyu Wang, Canjie Luo, Shuaitao Zhang, Chunhua Shen, Lianwen Jin
arXiv ID: 1912.09629
Category: cs.CV (Computer Vision)
Citations: 5
Venue: arXiv.org
Repository: https://github.com/Yuliang-Liu/Box_Discretization_Network.git
Stars: ⭐ 271
Last checked: 29 days ago
Abstract
Multi-orientation scene text detection has recently gained significant research attention. Previous methods directly predict words or text lines, typically by using quadrilateral shapes. However, many of these methods neglect the significance of consistent labeling, which is important for maintaining a stable training process, especially when the training set comprises a large amount of data. Here we solve this problem by proposing a new method, Orderless Box Discretization (OBD), which first discretizes the quadrilateral box into several key edges containing all potential horizontal and vertical positions. To decode accurate vertex positions, a simple yet effective matching procedure is proposed for reconstructing the quadrilateral bounding boxes. Our method resolves the labeling-ambiguity issue, which has a significant impact on the learning process. Extensive ablation studies are conducted to validate the effectiveness of our proposed method quantitatively. More importantly, based on OBD, we provide a detailed analysis of the impact of a collection of refinements, which may inspire others to build state-of-the-art text detectors. Combining both OBD and these useful refinements, we achieve state-of-the-art performance on various benchmarks, including ICDAR 2015 and MLT. Our method also won first place in the text detection task at the recent ICDAR 2019 Robust Reading Challenge on Reading Chinese Text on Signboards, further demonstrating its superior performance. The code is available at https://git.io/TextDet.
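To make the abstract's idea concrete: OBD predicts the quadrilateral's horizontal and vertical key positions without any vertex ordering, and a matching step then reassembles them into vertices. The sketch below is a toy illustration of that reconstruction, not the paper's actual procedure (the paper learns a match-type classifier): it assumes four orderless x-positions and four orderless y-positions, tries every x-to-y pairing, and keeps the quadrilateral whose vertices lie closest to a coarse proposal box. The function name `reconstruct_quad` and the proposal-distance score are illustrative assumptions.

```python
from itertools import permutations
import math

def reconstruct_quad(xs, ys, proposal):
    """Toy orderless-box decoding: pair each of the four x key
    positions with one of the four y key positions (24 candidate
    pairings) and keep the quadrilateral closest to a coarse
    proposal, measured by nearest-proposal-vertex distance."""
    best_quad, best_cost = None, math.inf
    for perm in permutations(range(4)):
        quad = [(xs[i], ys[perm[i]]) for i in range(4)]
        # Score: snap each candidate vertex to its nearest proposal vertex.
        cost = sum(min(math.dist(v, p) for p in proposal) for v in quad)
        if cost < best_cost:
            best_quad, best_cost = quad, cost
    return best_quad

# Usage: shuffle away the vertex order, then recover it.
true_quad = [(0, 0), (10, 2), (12, 8), (1, 9)]
xs = sorted(x for x, _ in true_quad)   # orderless horizontal positions
ys = sorted(y for _, y in true_quad)   # orderless vertical positions
decoded = reconstruct_quad(xs, ys, true_quad)
```

Because the key positions carry no order, any consistent labeling of the training quadrilateral yields the same targets, which is the stability property the abstract argues for.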
Community shame:
Not yet rated
Community Contributions

Found the code? Know the venue? Think something is wrong? Let us know!

📜 Similar Papers

In the same crypt: Computer Vision