Dynamic Spectrum Allocation Using Q-Learning in Cognitive Radio Systems

Abstract:

In this paper, we present an improved dynamic spectrum allocation algorithm based on Q-learning. The state space, action space, and reward function of the algorithm are constructed, and the reward function is designed to guide the agents' actions. Numerical simulation results show that the proposed algorithm improves system throughput compared with other algorithms. Given that spectrum resources are scarce and spectrum utilization is low, the algorithm can also improve spectrum usage in the future.
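
Only the abstract is available in this preview, so the following is a minimal sketch of the kind of tabular Q-learning formulation it describes: the state is a channel-occupancy observation obtained from sensing, the action is the channel a secondary user selects, and the reward steers the agent toward idle channels. All names, parameters, and the reward scheme below are illustrative assumptions, not the authors' implementation.

```python
import random
from collections import defaultdict

# Minimal tabular Q-learning sketch for dynamic channel selection.
# Assumptions (not from the paper): binary busy/idle observations per channel,
# reward +1 for transmitting on an idle channel, -1 for colliding with a
# primary user, and epsilon-greedy exploration.

N_CHANNELS = 4
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1        # learning rate, discount, exploration

Q = defaultdict(lambda: [0.0] * N_CHANNELS)  # Q[state][action]

def sense_channels():
    """Stand-in for spectrum sensing: tuple of busy flags (True = busy)."""
    return tuple(random.random() < 0.3 for _ in range(N_CHANNELS))

def choose_action(state):
    """Epsilon-greedy channel selection."""
    if random.random() < EPSILON:
        return random.randrange(N_CHANNELS)
    q = Q[state]
    return q.index(max(q))

def reward(state, action):
    """Reward that guides the agent: idle channel is good, collision is bad."""
    return -1.0 if state[action] else 1.0

state = sense_channels()
for _ in range(10_000):
    action = choose_action(state)
    r = reward(state, action)
    next_state = sense_channels()
    # Standard Q-learning update toward the bootstrapped target
    Q[state][action] += ALPHA * (r + GAMMA * max(Q[next_state]) - Q[state][action])
    state = next_state

print("Q-values when all channels are idle:", Q[tuple([False] * N_CHANNELS)])
```

In the paper itself the reward would presumably reflect achieved system throughput rather than this toy +1/-1 scheme; the sketch only illustrates the update rule and how the reward design steers the agents' channel choices.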

Info:

Pages: 1579-1584

Online since: September 2013

Copyright: © 2013 Trans Tech Publications Ltd. All Rights Reserved
