Switch-On/Off Policies for Energy Harvesting Small Cells through Distributed Q-Learning

Rossi, Michele; Dini, Paolo
2017

Abstract

The massive deployment of small cells (SCs) is one of the most promising solutions adopted by 5G cellular networks to meet the foreseen huge traffic demand. The large number of network elements, however, entails a significant increase in energy consumption. Powering the small cells with renewable energy can reduce both the environmental impact of mobile networks and operators' electricity bills. In this paper, we consider a two-tier cellular network architecture in which SCs can offload macro base stations and rely solely on energy harvesting and storage. To deal with the erratic nature of the energy arrival process, we exploit an ON/OFF switching algorithm, based on reinforcement learning, that autonomously learns energy income and traffic demand patterns. The algorithm uses distributed multi-agent Q-learning to jointly optimize the system performance and the self-sustainability of the SCs. We analyze the algorithm by assessing its convergence time, characterizing the obtained ON/OFF policies, and evaluating an offline-trained variant. Simulation results demonstrate that our solution increases the energy efficiency of the system with respect to simpler approaches. Moreover, the proposed method yields a harvested-energy surplus, which mobile operators can use to offer ancillary services to the smart electricity grid.
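To make the approach concrete, the sketch below shows how one SC agent of this kind might look. It is a minimal, illustrative Python implementation of tabular Q-learning with epsilon-greedy exploration, assuming a discretized state of (battery level, traffic load) and a binary ON/OFF action. The class name, reward weights, and state/reward design are hypothetical assumptions for illustration and do not reproduce the paper's exact formulation.

import random
from collections import defaultdict

# Illustrative single-SC Q-learning agent (not the paper's exact design).
# State: an application-defined discretized tuple, e.g. (battery level,
# traffic load). Actions: 0 = switch the SC off, 1 = keep it on.
ACTIONS = (0, 1)

class SwitchOnOffAgent:
    def __init__(self, alpha=0.1, gamma=0.9, epsilon=0.1):
        self.q = defaultdict(float)   # Q[(state, action)] -> estimated value
        self.alpha = alpha            # learning rate
        self.gamma = gamma            # discount factor
        self.epsilon = epsilon        # exploration probability

    def act(self, state):
        # Epsilon-greedy action selection.
        if random.random() < self.epsilon:
            return random.choice(ACTIONS)
        return max(ACTIONS, key=lambda a: self.q[(state, a)])

    def update(self, state, action, reward, next_state):
        # One-step Q-learning update toward the temporal-difference target.
        best_next = max(self.q[(next_state, a)] for a in ACTIONS)
        td_target = reward + self.gamma * best_next
        self.q[(state, action)] += self.alpha * (td_target - self.q[(state, action)])

def reward(served_traffic, dropped_traffic, battery_depleted):
    # Hypothetical reward: favor served traffic, penalize drops and
    # depleting the battery (loss of self-sustainability). The weights
    # 2.0 and 5.0 are placeholders, not values from the paper.
    return served_traffic - 2.0 * dropped_traffic - (5.0 if battery_depleted else 0.0)

In the distributed multi-agent setting described in the abstract, each SC would run one such learner on its own locally observed state, which is what allows the scheme to scale with the number of deployed SCs.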
Proceedings of IEEE Wireless Communications and Networking Conference Workshops, WCNCW 2017
ISBN: 9781509059089
Files in this record:
No files are associated with this record.

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11577/3239274
Citations
  • PMC: ND
  • Scopus: 31
  • Web of Science (ISI): 15