An Analysis of Deep Object Detectors For Diver Detection [preprint]

Preprint date

November 25, 2020

Authors

Karin de Langis (Ph.D. student), Michael Fulton (Ph.D. student), Junaed Sattar (assistant professor)

Abstract

With the end goal of selecting and using diver detection models to support human-robot collaboration capabilities such as diver following, we thoroughly analyze a large set of deep neural networks for diver detection. We begin by producing a dataset of approximately 105,000 annotated images of divers sourced from videos, one of the largest and most varied diver detection datasets ever created. Using this dataset, we train a variety of state-of-the-art deep neural networks for object detection, including SSD with MobileNet, Faster R-CNN, and YOLO. Along with these single-frame detectors, we also train networks designed for video object detection, which use temporal information in addition to single-frame image information. We evaluate these networks on standard accuracy and efficiency metrics, as well as on the temporal stability of their detections. Finally, we analyze the failure cases of these detectors and identify the most common failure modes. Based on our results, we recommend SSDs or Tiny-YOLOv4 for real-time applications on robots, and we recommend further investigation of video object detection methods.
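To give a concrete sense of the temporal-stability evaluation mentioned above, the minimal Python sketch below scores a detector by the mean IoU between its detections in consecutive frames. This is an illustrative proxy under our own assumptions (corner-format boxes, at most one detection per frame), not the exact metric used in the paper.

# Illustrative sketch only: consecutive-frame IoU as a simple proxy for
# temporal stability of detections. Boxes are (x1, y1, x2, y2).
import numpy as np

def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes."""
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def temporal_stability(per_frame_boxes):
    """Mean IoU between detections in consecutive frames.

    per_frame_boxes: one (x1, y1, x2, y2) box per frame; frames with no
    detection may be passed as None and are skipped.
    """
    scores = []
    for prev, curr in zip(per_frame_boxes, per_frame_boxes[1:]):
        if prev is not None and curr is not None:
            scores.append(iou(prev, curr))
    return float(np.mean(scores)) if scores else 0.0

# Example: boxes that drift slowly across frames score close to 1.0.
boxes = [(100, 80, 220, 260), (102, 82, 221, 262), (105, 85, 224, 265)]
print(temporal_stability(boxes))

A detector that jitters between distant boxes on successive frames would score much lower under this proxy, even if its per-frame accuracy were identical.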

Link to full paper

An Analysis of Deep Object Detectors For Diver Detection

Keywords

underwater robotics
