Deep Reinforcement Learning for Joint Spectrum and Power Allocation in Cellular Networks

December 19, 2020 · Entered Twilight · 2021 IEEE Globecom Workshops (GC Wkshps)

🌅 TWILIGHT: Old Age
Predates the code-sharing era: a pioneer of its time

"Last commit was 5.0 years ago (≥5 year threshold)"

Evidence collected by the PWNC Scanner

Repo contents: DDPG.py, DQN.py, LICENSE.txt, README.md, __pycache__, config, fig, get_benchmarks.py, paper.pdf, project_backend.py, random_deployment.py, requirements.txt, scripts, simulations, testJoint.py, testProposed.py, test_results.py, trainJoint.py, trainProposed.py, train_results.py

Authors: Yasar Sinan Nasir, Dongning Guo
arXiv ID: 2012.10682
Category: eess.SP (Signal Processing)
Cross-listed: cs.IT, cs.LG
Citations: 45
Venue: 2021 IEEE Globecom Workshops (GC Wkshps)
Repository: https://github.com/sinannasir/Spectrum-Power-Allocation ⭐ 91
Last checked: 1 month ago
Abstract
A wireless network operator typically divides the radio spectrum it possesses into a number of subbands. In a cellular network, those subbands are then reused in many cells. To mitigate co-channel interference, a joint spectrum and power allocation problem is often formulated to maximize a sum-rate objective. The best-known algorithms for solving such problems generally require instantaneous global channel state information and a centralized optimizer; in practice, those algorithms have not been deployed in large networks with time-varying subbands. Deep reinforcement learning algorithms are promising tools for solving complex resource management problems. A major challenge here is that spectrum allocation involves discrete subband selection, whereas power allocation involves continuous variables. In this paper, a learning framework is proposed to optimize both discrete and continuous decision variables. Specifically, two separate deep reinforcement learning algorithms are designed to be executed and trained simultaneously to maximize a joint objective. Simulation results show that the proposed scheme outperforms both the state-of-the-art fractional programming algorithm and a previous solution based on deep reinforcement learning.
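The abstract pairs a discrete decision (subband selection) with a continuous one (power level), trained by two cooperating agents; the repo's DQN.py and DDPG.py suggest a DQN-style head for the former and a DDPG-style actor for the latter. Below is a minimal, hypothetical sketch of that hybrid action-selection step only (no training loop): linear stand-ins replace the deep networks, and the dimensions, power cap, and function names are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions, not taken from the paper.
NUM_SUBBANDS = 4   # discrete action space: which subband to transmit on
STATE_DIM = 6      # size of a transmitter's local observation
P_MAX = 1.0        # maximum transmit power (illustrative units)

# Linear function approximators stand in for the deep networks.
q_weights = rng.normal(scale=0.1, size=(STATE_DIM, NUM_SUBBANDS))  # DQN-style head
mu_weights = rng.normal(scale=0.1, size=STATE_DIM)                 # DDPG-style actor

def select_actions(state, epsilon=0.1):
    """Jointly pick a discrete subband and a continuous power level."""
    # Discrete branch: epsilon-greedy over per-subband Q-values.
    q_values = state @ q_weights
    if rng.random() < epsilon:
        subband = int(rng.integers(NUM_SUBBANDS))
    else:
        subband = int(np.argmax(q_values))
    # Continuous branch: deterministic actor output squashed into [0, P_MAX].
    power = P_MAX / (1.0 + np.exp(-(state @ mu_weights)))
    return subband, float(power)

state = rng.normal(size=STATE_DIM)
subband, power = select_actions(state)
print(subband, power)
```

In this framing, both branches read the same local observation each scheduling slot, so the two agents can be trained simultaneously against one joint (e.g., sum-rate) reward, which is the structure the abstract describes.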
Community shame:
Not yet rated
Community Contributions

Found the code? Know the venue? Think something is wrong? Let us know!

📜 Similar Papers

In the same crypt: Signal Processing