Biases for Emergent Communication in Multi-agent Reinforcement Learning
December 11, 2019 · Declared Dead · Neural Information Processing Systems
"No code URL or promise found in abstract"
Evidence collected by the PWNC Scanner
Authors
Tom Eccles, Yoram Bachrach, Guy Lever, Angeliki Lazaridou, Thore Graepel
arXiv ID
1912.05676
Category
cs.MA: Multiagent Systems
Cross-listed
cs.CL, cs.LG
Citations
85
Venue
Neural Information Processing Systems
Last Checked
1 month ago
Abstract
We study the problem of emergent communication, in which language arises because speakers and listeners must communicate information in order to solve tasks. In temporally extended reinforcement learning domains, it has proved hard to learn such communication without centralized training of agents, due in part to a difficult joint exploration problem. We introduce inductive biases for positive signalling and positive listening, which ease this problem. In a simple one-step environment, we demonstrate how these biases ease the learning problem. We also apply our methods to a more extended environment, showing that agents with these inductive biases achieve better performance, and analyse the resulting communication protocols.
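The abstract names two inductive biases, positive signalling (the speaker's messages should depend on what it observes) and positive listening (the listener's behaviour should depend on the messages it receives). The abstract does not give the loss functions, so the sketch below uses plausible stand-in formulations, not the authors' definitions: mutual information between state and message for signalling, and an L1 distance between the listener's policies with and without the message for listening.

```python
# Hypothetical sketch of the two biases; the exact terms used in the
# paper are not stated in this abstract.
import numpy as np

def positive_listening_bias(policy_with_msg, policy_no_msg):
    """L1 distance between the listener's action distributions with and
    without the received message. A larger value means the message
    actually changes behaviour (stand-in formulation)."""
    return float(np.abs(np.asarray(policy_with_msg)
                        - np.asarray(policy_no_msg)).sum())

def positive_signalling_bias(message_probs_per_state):
    """Mutual information I(message; state), assuming a uniform state
    distribution. Rewards speakers whose messages depend on their
    observation (stand-in formulation)."""
    p = np.asarray(message_probs_per_state, dtype=float)  # shape (S, M)
    marginal = p.mean(axis=0)                             # p(m)
    # I = (1/S) * sum_s sum_m p(m|s) * log(p(m|s) / p(m)), with the
    # convention 0 * log(0) = 0.
    ratio = np.where(p > 0, p / np.maximum(marginal, 1e-12), 1.0)
    return float((p * np.log(ratio)).sum() / p.shape[0])
```

For example, a speaker that emits message 0 in state 0 and message 1 in state 1 gets the maximal bias log 2 for two messages, while one that ignores its state gets 0; a listener whose policy is identical with and without the message scores 0 on the listening term.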
Community Contributions
Found the code? Know the venue? Think something is wrong? Let us know!
Similar Papers
In the same crypt: Multiagent Systems
Mean Field Multi-Agent Reinforcement Learning
A Survey and Critique of Multiagent Deep Reinforcement Learning
A Survey of Learning in Multiagent Environments: Dealing with Non-Stationarity
Collaborative vehicle routing: a survey
Deep Reinforcement Learning for Swarm Systems
Died the same way: Ghosted
Language Models are Few-Shot Learners
PyTorch: An Imperative Style, High-Performance Deep Learning Library
XGBoost: A Scalable Tree Boosting System