Machine Vision for Improved Human-Robot Cooperation in Adverse Underwater Conditions [thesis]

Author

Md Jahidul Islam (Ph.D. 2021)

Abstract

Visually-guided underwater robots are deployed alongside human divers for cooperative exploration, inspection, and monitoring tasks in numerous shallow-water and coastal-water applications. The most essential capability of such companion robots is to visually interpret their surroundings and assist the divers during various stages of an underwater mission. Despite recent technological advances, existing systems and solutions for real-time visual perception are greatly affected by marine artifacts such as poor visibility, lighting variation, and the scarcity of salient features. These difficulties are exacerbated by a host of non-linear image distortions caused by the peculiarities of underwater light propagation (e.g., wavelength-dependent attenuation, absorption, and scattering). In this dissertation, we present a set of novel and improved visual perception solutions that address these challenges for effective underwater human-robot cooperation. The research outcomes entail the novel design and efficient implementation of the underlying vision- and learning-based algorithms, with extensive field-experimental validation and real-time feasibility analysis for single-board deployments.
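To make these distortions concrete, the underwater imaging literature commonly uses a simplified image formation model (standard background, not a contribution of this dissertation): the observed intensity of each color channel is a range- and wavelength-dependent mix of the true scene radiance and a veiling background light,

    % I_c: observed image, J_c: true scene radiance, B_c: veiling (background) light,
    % \beta_c: wavelength-dependent attenuation coefficient, d(x): scene range.
    I_c(x) = J_c(x)\, e^{-\beta_c d(x)} + B_c \left(1 - e^{-\beta_c d(x)}\right),
    \qquad c \in \{r, g, b\}.

Because the red channel attenuates fastest in open water (largest \beta_c), distant scenes take on the familiar blue-green cast; restoration methods either invert a model of this form explicitly or learn the inverse mapping from data.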

The dissertation is organized into three parts. The first part focuses on developing practical solutions for autonomous underwater vehicles (AUVs) to accompany human divers during an underwater mission. These include robust vision-based modules that enable AUVs to understand human swimming motion, hand gestures, and body poses in order to follow and interact with divers while maintaining smooth spatiotemporal coordination. A series of closed-water and open-water field experiments demonstrates the utility and effectiveness of our proposed perception algorithms for underwater human-robot cooperation. We also identify and quantify their performance variability over a diverse set of operating constraints in adverse visual conditions. The second part of this dissertation is devoted to designing efficient techniques that overcome the effects of poor visibility and optical distortions in underwater imagery by restoring its perceptual and statistical qualities. We further demonstrate the practical feasibility of these techniques as pre-processors in the autonomy pipeline of visually-guided AUVs. Finally, the third part develops methodologies for high-level decision-making, such as modeling spatial attention for fast visual search and learning to identify when image enhancement and super-resolution modules are necessary for detailed perception; a sketch of this gating idea follows below. We demonstrate that these methodologies facilitate up to 45% faster processing in the on-board visual perception modules and enable AUVs to make intelligent navigational and operational decisions, particularly in autonomous exploratory tasks.
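A minimal sketch of that "enhance only when needed" gating idea is given below. It is illustrative only: the function names and threshold are hypothetical, and the variance-of-Laplacian score is a crude hand-crafted stand-in for the learned classifier the dissertation actually develops. The only assumed dependencies are OpenCV and NumPy.

    import cv2
    import numpy as np

    LOW_QUALITY_THRESHOLD = 100.0  # hypothetical tuning constant

    def needs_enhancement(frame_bgr: np.ndarray) -> bool:
        """Cheap quality proxy: low variance of the Laplacian suggests a
        blurry, low-contrast frame that may benefit from enhancement."""
        gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
        return cv2.Laplacian(gray, cv2.CV_64F).var() < LOW_QUALITY_THRESHOLD

    def perceive(frame_bgr: np.ndarray, enhance, detect):
        """Run the costly enhancer only when the gate fires, then hand the
        frame to the downstream detector. Skipping enhancement on frames
        that are already good is where the processing speedup comes from."""
        if needs_enhancement(frame_bgr):
            frame_bgr = enhance(frame_bgr)
        return detect(frame_bgr)

The design point is that the gate must be far cheaper than the module it guards: a misfiring gate costs one extra enhancement pass, whereas an always-on enhancer pays that cost on every frame.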

In summary, this dissertation delineates our attempts to address the environmental and operational challenges of real-time machine vision for underwater human-robot cooperation. Aiming at a variety of important applications, we develop robust and efficient modules that allow AUVs to follow and interact with companion divers by accurately perceiving their surroundings while relying on noisy visual sensing alone. Moreover, our proposed perception solutions enable visually-guided robots to see better in noisy conditions and do better under limited computational resources and real-time constraints. In addition to advancing the state of the art, the proposed methodologies and systems take us one step closer to bridging the gap between theory and practice for improved human-robot cooperation in the wild.

Link to full paper

Machine Vision for Improved Human-Robot Cooperation in Adverse Underwater Conditions

Keywords

underwater robotics, human-robot cooperation
