Generating Socially Acceptable Perturbations for Efficient Evaluation of Autonomous Vehicles

March 18, 2020 · Declared Dead · 🏛️ 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)

👻 CAUSE OF DEATH: Ghosted
No code link whatsoever

"No code URL or promise found in abstract"

Evidence collected by the PWNC Scanner

Authors: Songan Zhang, Huei Peng, Subramanya Nageshrao, H. Eric Tseng
arXiv ID: 2003.08034
Category: eess.SY: Systems & Control (EE)
Cross-listed: cs.RO
Citations: 5
Venue: 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)
Last Checked: 2 months ago
Abstract
Deep reinforcement learning methods have been widely used in recent years for autonomous vehicles' decision-making. A key issue is that deep neural networks can be fragile to adversarial attacks or other unseen inputs. In this paper, we address the latter issue: we focus on generating socially acceptable perturbations (SAP), so that the autonomous vehicle (AV agent), rather than the challenging vehicle (attacker), is primarily responsible for the crash. In our approach, one attacker is added to the environment and trained by deep reinforcement learning to generate the desired perturbations. The reward is designed so that the attacker aims to fail the AV agent in a socially acceptable way. After training the attacker, the agent policy is evaluated both in the original naturalistic environment and in the environment with one attacker. The results show that an agent policy which is safe in the naturalistic environment crashes frequently in the perturbed environment.
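
Since no code was released, the sketch below is only a guess at the kind of attacker reward shaping the abstract describes: the adversary is rewarded only for crashes in which the AV agent is judged primarily at fault. All names here (CrashOutcome, attacker_reward, the fault flag) are hypothetical and not taken from the paper.

```python
# Illustrative sketch only; not the authors' implementation.
from dataclasses import dataclass


@dataclass
class CrashOutcome:
    crashed: bool        # did a collision occur on this step / episode?
    av_at_fault: bool    # fault judged by a rule-based checker (e.g. rear-end, right-of-way)


def attacker_reward(outcome: CrashOutcome,
                    step_cost: float = 0.01,
                    crash_bonus: float = 1.0,
                    own_fault_penalty: float = 1.0) -> float:
    """Reward for the challenging vehicle (attacker).

    The attacker is rewarded only for "socially acceptable" failures,
    i.e. crashes where the AV agent is primarily responsible, and is
    penalised for causing the crash itself or for stalling the episode.
    """
    if outcome.crashed and outcome.av_at_fault:
        return crash_bonus            # desired failure: the AV is to blame
    if outcome.crashed and not outcome.av_at_fault:
        return -own_fault_penalty     # attacker caused the crash: not acceptable
    return -step_cost                 # mild pressure to provoke a failure quickly
```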
Community shame:
Not yet rated
Community Contributions

Found the code? Know the venue? Think something is wrong? Let us know!

📜 Similar Papers

In the same crypt – Systems & Control (EE)

Died the same way – 👻 Ghosted