Zero-shot Natural Language Video Localization [conference paper]


International Conference on Computer Vision (ICCV) - October 11-17, 2021


Jinwoo Nam, Daechul Ahn, Dongyeop Kang, Seong Jong Ha, Jonghyun Choi


Understanding videos to localize moments described in natural language typically requires large-scale, expensive annotations that pair video regions with language queries. To eliminate this annotation cost, we make a first attempt to train a natural language video localization (NLVL) model in a zero-shot manner. Inspired by the unsupervised image captioning setup, we require only random text corpora, unlabeled video collections, and an off-the-shelf object detector to train a model. From this unpaired data, we propose to generate pseudo-supervision consisting of candidate temporal regions and corresponding query sentences, and we develop a simple NLVL model trained with this pseudo-supervision. Our empirical validation shows that the proposed pseudo-supervised method outperforms several baseline approaches, as well as a number of methods that use stronger supervision, on Charades-STA and ActivityNet-Captions.
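The pseudo-supervision pipeline described in the abstract can be sketched as below. This is a minimal, hypothetical illustration under stated assumptions, not the authors' released code: `propose_temporal_regions`, `detect_objects`, and `compose_pseudo_query` are placeholder helpers invented here (the actual method derives temporal proposals from unlabeled video features and builds simplified pseudo queries from detector outputs and text corpora; the random sampling and dummy detector below only stand in for those steps).

```python
import random
from dataclasses import dataclass


@dataclass
class PseudoAnnotation:
    """One pseudo-labeled training example: a temporal region plus a query."""
    video_id: str
    start: float  # seconds
    end: float    # seconds
    query: str


def propose_temporal_regions(duration, num_proposals=3, min_len=2.0):
    """Sample candidate temporal regions at random; a stand-in for the
    unsupervised temporal proposals derived from unlabeled video features."""
    regions = []
    for _ in range(num_proposals):
        start = random.uniform(0.0, duration - min_len)
        end = random.uniform(start + min_len, duration)
        regions.append((start, end))
    return regions


def detect_objects(video_id, start, end):
    """Placeholder for running an off-the-shelf object detector on frames
    sampled from [start, end]; returns detected noun labels (dummy here)."""
    return ["person", "cup", "table"]


def compose_pseudo_query(nouns):
    """Build a simplified pseudo query from detected nouns; the full method
    can also draw verbs and phrasing from a random text corpus."""
    return " ".join(sorted(set(nouns)))


def generate_pseudo_supervision(video_id, duration):
    """Pair each candidate region with a pseudo query to form training data."""
    annotations = []
    for start, end in propose_temporal_regions(duration):
        nouns = detect_objects(video_id, start, end)
        annotations.append(
            PseudoAnnotation(video_id, start, end, compose_pseudo_query(nouns))
        )
    return annotations


if __name__ == "__main__":
    for ann in generate_pseudo_supervision("vid_0001", duration=30.0):
        print(ann)
```

An NLVL model can then be trained on these (video, region, query) triples as if they were human annotations, which is the sense in which the approach is zero-shot with respect to paired supervision.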

Link to full paper



natural language processing