CS&E Colloquium: At the deep end: addressing the underwater human-robot collaboration problem
The computer science colloquium takes place on Mondays from 11:15 a.m. - 12:15 p.m.
This week's speaker, Junaed Sattar (University of Minnesota), will be giving a talk titled "At the deep end: addressing the underwater human-robot collaboration problem."
Autonomous underwater vehicles (AUVs) have traditionally been used for standalone missions, with limited or no direct human involvement, in applications where it is infeasible for humans to closely collaborate with the robots (e.g., long-term oceanographic surveys, search-and-rescue, infrastructure inspection). However, in recent decades, the advent of smaller AUVs suitable for working closely with humans (termed co-AUVs) has enabled robots and humans to collaborate on many subsea tasks. The underwater domain, nonetheless, is unique in many ways, posing challenges -- in sensing, control, and human-robot interaction -- that can justifiably be considered extreme. Our research at the Interactive Robotics and Vision Lab at the University of Minnesota addresses a range of issues in robust underwater human-robot collaboration. Specifically, we investigate bidirectional underwater human-robot communication, underwater imagery enhancement, localization and mapping of underwater objects of interest using multimodal sensing, and tracking of biological and non-biological objects. We pursue primarily computational solutions to these problems, drawing on methods from robotics, machine vision, stochastic reasoning, and (deep) machine learning. This talk will give a brief overview of our research and an in-depth discussion of some recent work in underwater human-robot interaction and imagery enhancement.
Junaed is an assistant professor in the Department of Computer Science and Engineering at the University of Minnesota, a MnDRIVE (Minnesota's Discovery, Research, and InnoVation Economy) faculty member, and a member of the Minnesota Robotics Institute. He is the founding director of the Interactive Robotics and Vision Lab, where he and his students investigate problems in field robotics, robot vision, human-robot communication, assisted driving, and applied (deep) machine learning, and develop rugged robotic systems. He earned his graduate degrees from McGill University in Canada and holds a BS in engineering from the Bangladesh University of Engineering and Technology. Before coming to the University of Minnesota, he was a post-doctoral fellow at the University of British Columbia, where his research focused on human-robot dialog and assistive wheelchair robots, and an assistant professor at Clarkson University in New York. Find him at junaedsattar.info, and the IRV Lab at irvlab.cs.umn.edu, @irvlab on Twitter, and on YouTube at https://www.youtube.com/channel/UCbzteddfNPrARE7i1C82NdQ.