Multi-Robot Q-Learning over Community Perception Network with Homogeneous Delays

Abstract:

As the number of robots grows, information interaction in a multi-robot system becomes more sophisticated and more important in a community perception network environment. By exploiting and fusing the learning information of the robots in a perception community, a community information sharing mechanism is proposed, together with updating rules for the community Q-value table. Moreover, since the transmission of learning information is subject to delays, an improved Q-learning method based on homogeneous delays is presented to improve robot learning efficiency over the community perception network. Finally, test experiments demonstrate the effectiveness of the proposed scheme.
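Only the abstract is available in this preview, but the idea it describes can be illustrated. Below is a minimal Python sketch of tabular Q-learning in which a robot blends its own Q-values with a shared community Q-table, and in which experiences reach the community only after a fixed (homogeneous) transmission delay. All names, the fusion weight beta, and the concrete update rules are illustrative assumptions, not the authors' actual scheme.

    from collections import deque
    import random

    class CommunityDelayedQLearner:
        """Tabular Q-learning fused with a shared community Q-table.
        Illustrative sketch only; the paper's exact rules may differ."""

        def __init__(self, n_states, n_actions, alpha=0.1, gamma=0.9,
                     beta=0.5, delay=2):
            self.Q = [[0.0] * n_actions for _ in range(n_states)]
            self.alpha, self.gamma = alpha, gamma  # learning rate, discount
            self.beta = beta      # weight given to community knowledge (assumed)
            self.delay = delay    # homogeneous transmission delay, in steps
            self.buffer = deque() # experiences still in transit to the community

        def act(self, state, community_Q, epsilon=0.1):
            # Epsilon-greedy over a fused value: own Q blended with community Q.
            if random.random() < epsilon:
                return random.randrange(len(self.Q[state]))
            fused = [(1 - self.beta) * q + self.beta * cq
                     for q, cq in zip(self.Q[state], community_Q[state])]
            return max(range(len(fused)), key=fused.__getitem__)

        def learn(self, s, a, r, s_next, community_Q):
            # Standard local temporal-difference update.
            target = r + self.gamma * max(self.Q[s_next])
            self.Q[s][a] += self.alpha * (target - self.Q[s][a])
            # Queue the experience; it reaches the community `delay` steps later.
            self.buffer.append((s, a, r, s_next))
            if len(self.buffer) > self.delay:
                ds, da, dr, ds_next = self.buffer.popleft()
                d_target = dr + self.gamma * max(community_Q[ds_next])
                community_Q[ds][da] += self.alpha * (d_target - community_Q[ds][da])

Here the deque models the homogeneous delay: every robot's experience is absorbed into the shared community Q-value table exactly `delay` steps after it occurs, so the community table lags each robot's local learning by the same fixed amount.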

Pages: 321-325

Online since: October 2013

Copyright: © 2013 Trans Tech Publications Ltd. All Rights Reserved
