Deep Reinforcement Learning for Robust Beamforming in IRS-assisted Wireless Communications
- Publisher: IEEE
- Publication Type: Conference Proceeding
- Citation: 2020 IEEE Global Communications Conference, GLOBECOM 2020 - Proceedings, 2021, 00, pp. 1-6
- Issue Date: 2021
Closed Access
| Filename | Description | Size |
|---|---|---|
| Deep_Reinforcement_Learning_for_Robust_Beamforming_in_IRS-assisted_Wireless_Communications.pdf | Published version | 986.22 kB |
This item is closed access and not available.
Intelligent reflecting surface (IRS) is a promising technology for assisting downlink information transmission from a multi-antenna access point (AP) to a receiver. In this paper, we minimize the AP's transmit power by jointly optimizing the AP's active beamforming and the IRS's passive beamforming. Due to uncertain channel conditions, we formulate a robust power minimization problem subject to the receiver's signal-to-noise ratio (SNR) requirement and the IRS's power budget constraint. We propose a deep reinforcement learning (DRL) approach that adapts the beamforming strategies from past experiences. To improve the learning performance, we derive a convex approximation as a lower bound on the robust problem and integrate it into the DRL framework, yielding a novel optimization-driven deep deterministic policy gradient (DDPG) approach. In particular, when the DDPG algorithm generates one part of the action (e.g., passive beamforming), the model-based convex approximation can optimize the other part of the action (e.g., active beamforming) efficiently. Our simulation results demonstrate that the optimization-driven DDPG algorithm improves both the learning rate and the reward significantly compared to the conventional DDPG algorithm.
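As a rough illustration of the action decomposition described in the abstract, the sketch below is not the authors' code: the dimensions, channel draws, placeholder actor, and the maximum-ratio-transmission power formula are all illustrative assumptions standing in for the paper's learned policy and convex approximation. It shows how a DDPG-style actor could propose the IRS phase shifts (the passive beamforming) while a simple model-based step completes the action by computing the AP beamformer (the active beamforming) that meets a given SNR target with minimum transmit power.

```python
import numpy as np

# Hypothetical dimensions for illustration only (not taken from the paper).
NUM_AP_ANTENNAS = 4      # M antennas at the access point
NUM_IRS_ELEMENTS = 16    # N passive reflecting elements at the IRS


def ddpg_actor_passive_beamforming(state, num_elements=NUM_IRS_ELEMENTS):
    """Stand-in for a trained DDPG actor: maps the observed state to IRS phase shifts.

    In the optimization-driven scheme, the learned policy produces only this
    part of the action (the passive beamforming). Random phases are used here
    as a placeholder for a trained neural network.
    """
    rng = np.random.default_rng(abs(hash(tuple(np.round(state, 3)))) % (2**32))
    phases = rng.uniform(0.0, 2.0 * np.pi, size=num_elements)
    return np.exp(1j * phases)  # unit-modulus reflection coefficients


def model_based_active_beamforming(h_direct, h_ap_irs, h_irs_rx, theta,
                                   snr_target, noise_power):
    """Model-based completion of the action: with the IRS phases `theta` fixed,
    compute an AP beamformer that meets the SNR target with minimum power.

    The effective channel is h_eff = h_direct + h_irs_rx @ diag(theta) @ h_ap_irs,
    and maximum-ratio transmission scaled to meet the single-user SNR constraint
    with equality serves as a simplified surrogate for the paper's convex
    approximation step.
    """
    h_eff = h_direct + h_irs_rx @ np.diag(theta) @ h_ap_irs
    direction = h_eff.conj() / np.linalg.norm(h_eff)
    required_power = snr_target * noise_power / (np.linalg.norm(h_eff) ** 2)
    return np.sqrt(required_power) * direction, required_power


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Randomly drawn channels as stand-ins for estimated (uncertain) CSI.
    h_direct = (rng.normal(size=NUM_AP_ANTENNAS)
                + 1j * rng.normal(size=NUM_AP_ANTENNAS)) / np.sqrt(2)
    h_ap_irs = (rng.normal(size=(NUM_IRS_ELEMENTS, NUM_AP_ANTENNAS))
                + 1j * rng.normal(size=(NUM_IRS_ELEMENTS, NUM_AP_ANTENNAS))) / np.sqrt(2)
    h_irs_rx = (rng.normal(size=NUM_IRS_ELEMENTS)
                + 1j * rng.normal(size=NUM_IRS_ELEMENTS)) / np.sqrt(2)

    state = np.concatenate([h_direct.real, h_direct.imag])
    theta = ddpg_actor_passive_beamforming(state)          # learned part of the action
    w, power = model_based_active_beamforming(             # model-based part of the action
        h_direct, h_ap_irs, h_irs_rx, theta, snr_target=10.0, noise_power=1.0)
    print(f"Transmit power needed to meet the SNR target: {power:.3f}")
```

In this toy setup the transmit power returned for a given set of phases could serve as the (negative) reward signal driving the actor's updates, mirroring how the model-based step reduces the dimension of the action the DDPG agent must learn.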