Multi-target detection and grasping control for humanoid robot NAO
Lei Zhang
Beijing Key Laboratory of Robot Bionics and Function Research, Beijing University of Civil Engineering and Architecture, Beijing, China
Huayan Zhang
Beijing Key Laboratory of Robot Bionics and Function Research, Beijing University of Civil Engineering and Architecture, Beijing, China
Hanting Yang
Beijing Key Laboratory of Robot Bionics and Function Research, Beijing University of Civil Engineering and Architecture, Beijing, China
Corresponding Author
Gui-Bin Bian
State Key Laboratory of Management and Control for Complex Systems, Institute of Automation, Chinese Academy of Sciences, Beijing, China
Gui-Bin Bian, State Key Laboratory of Management and Control for Complex Systems, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China.
Email: [email protected]
Wanqing Wu
CAS Key Laboratory of Human-Machine Intelligence-Synergy Systems, Shenzhen Institutes of Advanced Technology, Shenzhen, China
Summary
Grasping objects is an important capability for humanoid robots. Owing to the complexity of the environment and the diversity of objects, it is difficult for a robot to accurately recognize and grasp multiple objects. To address this problem, we propose a robotic grasping method that uses the deep learning detector You Only Look Once v3 (YOLOv3) for multi-target detection and auxiliary signs to obtain the target location. The method controls the robot's movement and plans the grasping trajectory based on visual feedback. Experiments verify that this method enables the humanoid robot NAO to grasp objects effectively, with a grasping success rate of 80% in the experimental environment.
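The summary does not give implementation details of the visual-feedback step. As a hedged illustration only (not the authors' code), the sketch below shows one common way such feedback can be used on NAO: converting a detected bounding-box centre in the camera image into head-joint angle offsets that centre the target, using the top camera's documented field of view. The function name, the linear (small-angle) pixel-to-angle mapping, and the default 640×480 resolution are assumptions for illustration.

```python
import math

# NAO top-camera field of view, per Aldebaran's published specifications
# (assumed here; check the documentation for your robot version).
HFOV = math.radians(60.97)  # horizontal field of view
VFOV = math.radians(47.64)  # vertical field of view

def pixel_to_head_offsets(cx, cy, width=640, height=480):
    """Map a bounding-box centre (cx, cy) in pixels to HeadYaw/HeadPitch
    offsets in radians that would roughly centre the target in the image.
    Uses a linear small-angle approximation of the camera projection."""
    # NAO's HeadYaw is positive to the robot's left, so a target right of
    # centre (cx > width/2) needs a negative yaw offset.
    yaw = -((cx - width / 2.0) / width) * HFOV
    # HeadPitch is positive looking down, so a target below centre
    # (cy > height/2) needs a positive pitch offset.
    pitch = ((cy - height / 2.0) / height) * VFOV
    return yaw, pitch
```

In a visual-servoing loop, these offsets would be added to the current head angles (e.g. via ALMotion's `angleInterpolation`) and the detection repeated until the target is centred.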