Higher order function networks for view planning and multi-view reconstruction [conference paper]

Conference

IEEE International Conference on Robotics and Automation (ICRA) - May 31, 2020

Authors

Selim Engin (Ph.D. student), Eric Mitchell, Daewon Lee, Volkan Isler (professor), Daniel D Lee

Abstract

We consider the problem of planning views for a robot to acquire images of an object for visual inspection and reconstruction. In contrast to offline methods, which require a 3D model of the object as input, or online methods, which rely only on local measurements, our method uses a neural network which encodes shape information for a large number of objects. We build on recent deep learning methods capable of generating a complete 3D reconstruction of an object from a single image. Specifically, in this work, we extend a recent method which uses Higher Order Functions (HOF) to represent the shape of the object. We present a new generalization of this method to incorporate multiple images as input and establish a connection between visibility and reconstruction quality. This relationship forms the foundation of our view planning method, where we compute viewpoints to visually cover the output of the multi-view HOF network with as few images as possible. Experiments indicate that our method provides a good compromise between online and offline methods: similar to online methods, our method does not require the true object model as input. In terms of the number of views, it is much more efficient. In most cases, its performance is comparable to the optimal offline case even on object classes the network has not been trained on.
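To make the two ideas in the abstract concrete, below is a minimal sketch (not the authors' released code) assuming a PyTorch-style setup: (1) a Higher Order Function network whose encoder maps the input images to the weights of a small decoder that deforms sample points onto the object surface, and (2) a greedy coverage step that repeatedly picks the candidate viewpoint seeing the most not-yet-covered predicted points. Names such as `MultiViewHOF`, `greedy_view_plan`, and the `visible_mask` visibility test are hypothetical placeholders, not the paper's actual implementation.

```python
import torch
import torch.nn as nn


class MultiViewHOF(nn.Module):
    """Encoder outputs the parameters of a point-mapping function (the HOF idea)."""

    def __init__(self, feat_dim=256, hidden=64):
        super().__init__()
        # Hypothetical CNN backbone producing one feature vector per input view.
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 5, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, feat_dim), nn.ReLU(),
        )
        # Sizes of the decoder's two linear layers: 3 -> hidden -> 3.
        self.n_w1, self.n_b1 = 3 * hidden, hidden
        self.n_w2, self.n_b2 = hidden * 3, 3
        self.to_params = nn.Linear(feat_dim, self.n_w1 + self.n_b1 + self.n_w2 + self.n_b2)
        self.hidden = hidden

    def forward(self, images, points):
        # images: (V, 3, H, W) views of the same object; points: (N, 3) input samples.
        feats = self.backbone(images).mean(dim=0)  # fuse multiple views by averaging
        p = self.to_params(feats)
        w1, b1, w2, b2 = torch.split(p, [self.n_w1, self.n_b1, self.n_w2, self.n_b2])
        h = torch.relu(points @ w1.view(3, self.hidden) + b1)
        return h @ w2.view(self.hidden, 3) + b2  # predicted surface points, shape (N, 3)


def greedy_view_plan(pred_points, candidate_views, visible_mask, max_views=10):
    """Greedily select viewpoints until the predicted points are visually covered.

    visible_mask(view, pts) -> bool tensor marking which points the view sees;
    it stands in for a visibility test (e.g. ray casting against pred_points).
    """
    covered = torch.zeros(len(pred_points), dtype=torch.bool)
    plan = []
    for _ in range(max_views):
        gains = [int((visible_mask(v, pred_points) & ~covered).sum())
                 for v in candidate_views]
        best = max(range(len(candidate_views)), key=lambda i: gains[i])
        if gains[best] == 0:  # no view adds new coverage; stop early
            break
        plan.append(candidate_views[best])
        covered |= visible_mask(candidate_views[best], pred_points)
    return plan
```

The sketch only illustrates the structure described in the abstract: the reconstruction predicted from the images observed so far is what the greedy coverage step plans against, which is why no ground-truth model is needed at planning time.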

Link to full paper

Higher order function networks for view planning and multi-view reconstruction

Keywords

robotics, computer vision
