TY - GEN
T1 - Learning instruction-guided manipulation affordance via large models for embodied robotic tasks
AU - Li, Dayou
AU - Zhao, Chenkun
AU - Yang, Shuo
AU - Ma, Lin
AU - Li, Yibin
AU - Zhang, Wei
N1 - Publisher Copyright:
© 2024 IEEE.
PY - 2024/10/18
Y1 - 2024/10/18
N2 - We study the task of language instruction-guided robotic manipulation, in which an embodied robot is expected to manipulate target objects according to language instructions. In previous studies, the predicted manipulation regions of the target object typically do not change with the specifics of the language instruction, which means that language perception and manipulation prediction are separate. However, in human behavior, the manipulation regions of the same object change for different language instructions. In this paper, we propose the Instruction-Guided Affordance Net (IGANet) for predicting affordance maps in instruction-guided robotic manipulation tasks by utilizing powerful priors from vision and language encoders pre-trained on large-scale datasets. We develop a Vision-Language-Model (VLM)-based data augmentation pipeline that can automatically generate a large amount of data for model training. In addition, with the help of Large Language Models (LLMs), actions can be effectively executed to complete the tasks defined by the instructions. A series of real-world experiments shows that our method achieves better performance with the generated data. Moreover, our model generalizes better to scenarios with unseen objects and language instructions.
AB - We study the task of language instruction-guided robotic manipulation, in which an embodied robot is expected to manipulate target objects according to language instructions. In previous studies, the predicted manipulation regions of the target object typically do not change with the specifics of the language instruction, which means that language perception and manipulation prediction are separate. However, in human behavior, the manipulation regions of the same object change for different language instructions. In this paper, we propose the Instruction-Guided Affordance Net (IGANet) for predicting affordance maps in instruction-guided robotic manipulation tasks by utilizing powerful priors from vision and language encoders pre-trained on large-scale datasets. We develop a Vision-Language-Model (VLM)-based data augmentation pipeline that can automatically generate a large amount of data for model training. In addition, with the help of Large Language Models (LLMs), actions can be effectively executed to complete the tasks defined by the instructions. A series of real-world experiments shows that our method achieves better performance with the generated data. Moreover, our model generalizes better to scenarios with unseen objects and language instructions.
UR - https://www.scopus.com/pages/publications/85208065391
U2 - 10.1109/icarm62033.2024.10715821
DO - 10.1109/icarm62033.2024.10715821
M3 - Conference contribution
AN - SCOPUS:85208065391
T3 - ICARM 2024 - 2024 9th IEEE International Conference on Advanced Robotics and Mechatronics
SP - 662
EP - 667
BT - ICARM 2024 - 2024 9th IEEE International Conference on Advanced Robotics and Mechatronics
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 9th IEEE International Conference on Advanced Robotics and Mechatronics, ICARM 2024
Y2 - 8 July 2024 through 10 July 2024
ER -