Point Anywhere: Directed Object Estimation from Omnidirectional Images

August 02, 2023 · Entered Twilight · 🏛 SIGGRAPH Posters

💤 TWILIGHT: Eternal Rest
Repo abandoned since publication

Repo contents: .gitattributes, .gitignore, Correct.py, Equi2Pers.py, Experiment, GreatCircle.py, LICENSE, Pers2Equi.py, PointingVector.py, README.md, README_ja.md, ROI, RegionOfInterst.py, Test.py, inputOmni, ml, pytorch_openpose, run.py, utils, yolov5

Authors: Nanami Kotani, Asako Kanezaki
arXiv ID: 2308.01010
Category: cs.HC: Human-Computer Interaction (cross-listed cs.CV)
Citations: 1
Venue: SIGGRAPH Posters
Repository: https://github.com/NKotani/PointAnywhere ⭐ 12
Last Checked: 1 month ago
Abstract
One of the intuitive instruction methods in robot navigation is a pointing gesture. In this study, we propose a method using an omnidirectional camera to eliminate the user/object position constraint and the left/right constraint of the pointing arm. Although the accuracy of skeleton and object detection is low due to the high distortion of equirectangular images, the proposed method enables highly accurate estimation by repeatedly extracting regions of interest from the equirectangular image and projecting them onto perspective images. Furthermore, we found that training the likelihood of the target object in machine learning further improves the estimation accuracy.
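The core projection step the abstract describes, rendering a low-distortion perspective view of a region of interest from the equirectangular panorama, can be sketched as follows. This is a minimal illustration, not the repository's implementation (the repo's Equi2Pers.py presumably has its own interface); the function name and parameters here are hypothetical, and sampling is nearest-neighbor for brevity.

```python
import numpy as np

def equirect_to_perspective(equi, fov_deg, yaw_deg, pitch_deg, out_w, out_h):
    """Sample a perspective view from an equirectangular panorama.

    Hypothetical sketch of the equirectangular-to-perspective projection;
    the actual repo code (Equi2Pers.py) may differ in interface and details.
    """
    H, W = equi.shape[:2]
    # Pinhole focal length from the horizontal field of view.
    f = 0.5 * out_w / np.tan(np.radians(fov_deg) / 2)

    # Pixel grid of the output image, centered at the principal point.
    x = np.arange(out_w) - out_w / 2
    y = np.arange(out_h) - out_h / 2
    xx, yy = np.meshgrid(x, y)

    # Unit rays through each output pixel in camera coordinates (z forward).
    rays = np.stack([xx, yy, np.full_like(xx, f)], axis=-1)
    rays /= np.linalg.norm(rays, axis=-1, keepdims=True)

    # Rotate the rays toward the ROI center (yaw about y, pitch about x).
    yaw, pitch = np.radians(yaw_deg), np.radians(pitch_deg)
    Ry = np.array([[np.cos(yaw), 0, np.sin(yaw)],
                   [0, 1, 0],
                   [-np.sin(yaw), 0, np.cos(yaw)]])
    Rx = np.array([[1, 0, 0],
                   [0, np.cos(pitch), -np.sin(pitch)],
                   [0, np.sin(pitch), np.cos(pitch)]])
    rays = rays @ (Ry @ Rx).T

    # Ray direction -> spherical angles -> equirectangular pixel coordinates.
    lon = np.arctan2(rays[..., 0], rays[..., 2])       # [-pi, pi]
    lat = np.arcsin(np.clip(rays[..., 1], -1.0, 1.0))  # [-pi/2, pi/2]
    u = ((lon / (2 * np.pi) + 0.5) * W).astype(int) % W
    v = np.clip(((lat / np.pi + 0.5) * H).astype(int), 0, H - 1)

    return equi[v, u]  # nearest-neighbor sampling
```

Running skeleton or object detectors on such perspective crops, rather than on the raw panorama, is what sidesteps the heavy distortion the abstract mentions; the inverse mapping (Pers2Equi.py in the repo) would carry detections back to panorama coordinates.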
