Object Shell Reconstruction: Camera-centric Object Representation for Robotic Grasping [preprint]

Preprint date

September 14, 2021

Authors

Nikhil Chavan-Dafle, Sergiy Popovych, Shubham Agrawal, Daniel D Lee, Volkan Isler (professor)

Abstract

Robots can effectively grasp and manipulate objects using their 3D models. In this paper, we propose a simple shape representation and a reconstruction method that outperforms state-of-the-art methods in terms of geometric metrics and enables grasp generation with high precision and success. Our reconstruction method models the object geometry as a pair of depth images, which together compose the "shell" of the object. This representation allows the use of image-to-image residual ConvNet architectures for 3D reconstruction, produces the object reconstruction directly in the camera frame, and generalizes well to novel object types. Moreover, an object shell can be converted into an object mesh in a fraction of a second, providing a time- and memory-efficient alternative to voxel or implicit representations. We explore the application of the shell representation to grasp planning. Through rigorous experimental validation, both in simulation and on a real setup, we show that shell reconstruction encapsulates sufficient geometric information to generate precise grasps and the associated grasp quality with over 90% accuracy. Diverse grasps computed on shell reconstructions allow the robot to select and execute grasps in cluttered scenes with a success rate of more than 93%.
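The core idea of the shell representation is that a pair of depth images (the visible "entry" surface and the occluded "exit" surface along each camera ray) can be unprojected directly into a 3D point cloud in the camera frame. The sketch below illustrates this unprojection step; it is not the paper's implementation, and the pinhole intrinsics (FX, FY, CX, CY) and function names are assumptions made for illustration.

```python
import numpy as np

# Hypothetical pinhole intrinsics for illustration only (not from the paper).
FX, FY, CX, CY = 500.0, 500.0, 64.0, 64.0

def unproject(depth):
    """Unproject a depth image into 3D points in the camera frame.

    Each pixel (u, v) with depth z maps to
    x = (u - CX) * z / FX,  y = (v - CY) * z / FY.
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - CX) * z / FX
    y = (v - CY) * z / FY
    pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]  # keep only pixels with valid (positive) depth

def shell_to_points(entry_depth, exit_depth):
    """Combine the entry (visible) and exit (back) depth images of an
    object shell into a single point cloud in the camera frame."""
    return np.concatenate([unproject(entry_depth), unproject(exit_depth)], axis=0)
```

Because both depth images live on the same pixel grid, no alignment or registration step is needed, which is what makes the shell-to-mesh conversion fast.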

Link to full paper

Object Shell Reconstruction: Camera-centric Object Representation for Robotic Grasping

Keywords

robotics

Share