Abstract
Detecting feasible graspable positions on objects is crucial for robotic grasping. Existing methods generally evaluate grasp detection by comparing predicted grasps with limited ground-truth data. However, because the labeled ground-truth grasps are not exhaustive, this strategy cannot cover the full space of grasping features and may miss feasible grasps. To address this problem, this work extends grasp representation from isolated rectangles to consistent paths on objects, represented by single or multiple line segments. A novel grasp detection model is also proposed to predict feasible graspable regions, offering more varied selections; a multi-dimensional attention mechanism is integrated to highlight grasping-specific features. This enables automatic search for optimal grasp rectangles among numerous grasp regions according to the physical size of the gripper and task-specific requirements. A Grasp Path Dataset, which uses grasp paths to reveal the spatial distribution of viable grasps, is constructed for the first time. Experimental results on benchmark datasets and in real-world scenarios demonstrate that the proposed grasp path representation improves detection accuracy on public datasets and success rates in practical robotic grasping tasks, while providing a richer set of grasp candidates.
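To illustrate the core idea of the path representation, the following sketch shows how a grasp path (a line segment on the object) can be densely sampled into candidate grasp rectangles and filtered by gripper size. This is a hypothetical helper for illustration only, not the paper's actual model: the function name, the sampling scheme, and the parameters (`gripper_width`, `max_width`, `step`) are assumptions, and the real method uses a learned network with attention rather than uniform sampling.

```python
import math

def sample_grasps_along_path(p0, p1, gripper_width, max_width, step=0.1):
    """Sample candidate grasp rectangles along a grasp path (line segment).

    Each candidate is (cx, cy, angle): a grasp center on the path and a
    gripper orientation perpendicular to the path direction, so the jaws
    close across the object. Paths whose required opening exceeds the
    gripper's maximum yield no candidates.
    """
    if gripper_width > max_width:
        return []  # the object is too wide for this gripper at this path
    dx, dy = p1[0] - p0[0], p1[1] - p0[1]
    length = math.hypot(dx, dy)
    angle = math.atan2(dy, dx) + math.pi / 2  # perpendicular to the path
    n = max(1, int(length / step))            # number of sampling intervals
    return [(p0[0] + (i / n) * dx, p0[1] + (i / n) * dy, angle)
            for i in range(n + 1)]

# A horizontal path of length 1 sampled every 0.5 gives three candidates,
# all oriented vertically (angle = pi/2).
candidates = sample_grasps_along_path((0.0, 0.0), (1.0, 0.0),
                                      gripper_width=0.04, max_width=0.08,
                                      step=0.5)
```

This makes concrete why a path is a richer label than a single rectangle: one labeled segment implies a continuum of valid grasps, from which a downstream selector can pick the rectangle best matching the gripper and the task.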
| Original language | English |
|---|---|
| Article number | 1119298 |
| Number of pages | 14 |
| Journal | IEEE Transactions on Cognitive and Developmental Systems |
| Early online date | 7 Aug 2025 |
| Publication status | E-pub ahead of print - 7 Aug 2025 |