Predicting the Future Motion of Divers for Enhanced Underwater Human-Robot Collaboration [conference paper]

Conference

IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) - September 27-October 1, 2021

Authors

Tanmay Agarwal (undergraduate research assistant), Michael Fulton (Ph.D. student), Junaed Sattar (assistant professor)

Abstract

Autonomous Underwater Vehicles (AUVs) can be effective collaborators for human scuba divers in many applications, such as environmental surveying, mapping, or infrastructure repair. However, for these applications to be realized in the real world, it is essential that robots are able to both lead and follow their human collaborators. Current diver-following algorithms are not robust to non-uniform changes in a diver's motion, and no framework currently exists for robots to lead divers. One way to improve the robustness of diver following and enable diver leading is to predict the future motion of a diver. In this paper, we present a vision-based approach for AUVs to predict the future motion trajectory of divers, utilizing the Vanilla-LSTM and Social-LSTM temporal deep neural networks. We also present a dense optical flow-based method to stabilize the input annotations from the dataset and reduce the effects of camera ego-motion. We analyze the results of these models on scenarios ranging from swimming pools to the open ocean and report the models' accuracy at varying prediction lengths. We find that our LSTM models can generate accurate predictions 1.5 seconds into the future and that stabilizing their inputs significantly improves trajectory prediction performance.
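
The sketch below illustrates, under our own assumptions rather than the paper's published architecture, the general idea behind a "Vanilla-LSTM" trajectory predictor: an LSTM encodes a short history of observed diver positions and is then rolled forward to emit future positions one step at a time. It is written in PyTorch; the class name TrajectoryLSTM, the layer sizes, and the (x, y) center-point representation are illustrative choices, not taken from the paper. (The Social-LSTM variant additionally pools hidden states across nearby agents.)

import torch
import torch.nn as nn

class TrajectoryLSTM(nn.Module):
    """Encode an observed position history, then decode future positions step by step."""

    def __init__(self, hidden_size=64):
        super().__init__()
        self.embed = nn.Linear(2, hidden_size)                          # (x, y) -> embedding
        self.lstm = nn.LSTM(hidden_size, hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, 2)                           # hidden state -> (x, y)

    def forward(self, history, future_len):
        # history: (batch, T_obs, 2) observed positions
        _, state = self.lstm(self.embed(history))      # encode the observed trajectory
        pos = history[:, -1]                           # start decoding from the last observed point
        preds = []
        for _ in range(future_len):                    # autoregressively roll the LSTM forward
            out, state = self.lstm(self.embed(pos).unsqueeze(1), state)
            pos = self.head(out.squeeze(1))
            preds.append(pos)
        return torch.stack(preds, dim=1)               # (batch, future_len, 2)

# Example: predict 15 future positions from 10 observed ones.
# model = TrajectoryLSTM()
# future = model(torch.randn(1, 10, 2), future_len=15)   # -> shape (1, 15, 2)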
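
Similarly, the following is a minimal sketch of how dense optical flow-based annotation stabilization might look, assuming OpenCV's Farneback implementation: the median flow over the frame is treated as an estimate of camera ego-motion and subtracted from the annotated diver position. The function stabilize_box and its box format are hypothetical and not drawn from the paper.

import cv2
import numpy as np

def stabilize_box(prev_gray, curr_gray, box):
    """Shift a (cx, cy, w, h) box by the median background flow between two grayscale frames."""
    # Farneback dense optical flow; arguments are pyramid scale, levels, window size,
    # iterations, polynomial neighborhood size, polynomial sigma, and flags.
    flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    # The median flow over the whole frame approximates camera ego-motion;
    # subtracting it leaves (approximately) the diver's own motion in the image.
    dx = float(np.median(flow[..., 0]))
    dy = float(np.median(flow[..., 1]))
    cx, cy, w, h = box
    return (cx - dx, cy - dy, w, h)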

Link to full paper

Predicting the Future Motion of Divers for Enhanced Underwater Human-Robot Collaboration

Keywords

underwater robotics
