Visual Diver Face Recognition for Underwater Human-Robot Interaction [preprint]
Preprint date
November 18, 2020
Authors
Jungseok Hong (Ph.D. student), Sadman Sakib Enan (Ph.D. student), Christopher Morse (undergraduate research assistant), Junaed Sattar (assistant professor)
Abstract
This paper presents a deep-learned facial recognition method for underwater robots to identify scuba divers. Specifically, the proposed method is able to recognize divers underwater even when their faces are heavily obscured by scuba masks and breathing apparatus. Our contribution is a method for robust facial identification of individuals under significant occlusion of facial features and image degradation from underwater optical distortions. With the ability to correctly recognize divers, autonomous underwater vehicles (AUVs) will be able to collaborate with the correct person in human-robot teams and ensure that instructions are accepted only from those authorized to command the robots. We demonstrate that our proposed framework is able to learn discriminative features from real-world diver faces through different data augmentation and generation techniques. Experimental evaluations show that this framework achieves a 3-fold increase in prediction accuracy compared to state-of-the-art (SOTA) algorithms and is well-suited for embedded inference on robotic platforms.
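The abstract mentions training with data augmentation to handle underwater image degradation and heavy facial occlusion. The paper's actual pipeline is not reproduced here, but a minimal, hypothetical sketch of such augmentations (color attenuation, backscatter haze, and a mask-like occlusion patch — all assumed, not taken from the paper) might look like:

```python
import numpy as np

def underwater_augment(img, rng):
    """Apply simple underwater-style augmentations to an RGB image
    given as an (H, W, 3) float array in [0, 1].
    Hypothetical sketch, not the authors' pipeline."""
    out = img.copy()
    # Color cast: water attenuates red most strongly at depth.
    attenuation = rng.uniform([0.3, 0.7, 0.8], [0.6, 0.9, 1.0])
    out *= attenuation
    # Haze: blend toward a blue-green veil to mimic backscatter.
    veil = np.array([0.1, 0.4, 0.5])
    alpha = rng.uniform(0.1, 0.4)
    out = (1 - alpha) * out + alpha * veil
    # Occlusion: zero out a rectangular patch, loosely standing in
    # for a scuba mask or breathing apparatus covering the face.
    h, w, _ = out.shape
    ph, pw = h // 3, w // 3
    y, x = rng.integers(0, h - ph), rng.integers(0, w - pw)
    out[y:y + ph, x:x + pw] = 0.0
    return np.clip(out, 0.0, 1.0)

rng = np.random.default_rng(0)
face = rng.random((64, 64, 3))
aug = underwater_augment(face, rng)
print(aug.shape)  # (64, 64, 3)
```

Applying several such randomized transforms per training image is a standard way to make a recognition network invariant to the distortions it will see at deployment time.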
Link to full paper
Visual Diver Face Recognition for Underwater Human-Robot Interaction
Keywords
underwater robotics