
Point-VOS

Point-VOS: Pointing Up Video Object Segmentation

Current state-of-the-art Video Object Segmentation (VOS) methods rely on dense per-object mask annotations both during training and testing, which requires time-consuming and costly video annotation. We propose a novel Point-VOS task with a spatio-temporally sparse point-wise annotation scheme that substantially reduces the annotation effort.
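To make the annotation scheme concrete, the sketch below models a spatio-temporally sparse point-wise label: instead of a dense mask per object per frame, each annotation is a single point on a single frame. This is a minimal illustration only; the field names (`video_id`, `frame_index`, `object_id`, `is_positive`) are assumptions for exposition and do not reflect the dataset's actual file format.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class PointAnnotation:
    """One sparse point label: a single (x, y) click on one video frame.

    Hypothetical schema for illustration, not the Point-VOS release format.
    """
    video_id: str
    frame_index: int
    x: float           # normalized column coordinate in [0, 1]
    y: float           # normalized row coordinate in [0, 1]
    object_id: int     # which object instance the point refers to
    is_positive: bool  # True: point lies on the object; False: on background


# A video's annotation reduces to a small list of such points,
# sparse in both space (one pixel) and time (few frames),
# rather than dense per-pixel masks on every frame.
annotations = [
    PointAnnotation("video_0001", 0, 0.42, 0.57, object_id=1, is_positive=True),
    PointAnnotation("video_0001", 30, 0.40, 0.60, object_id=1, is_positive=False),
]
```

Under this kind of scheme, annotation cost scales with the number of clicked points rather than with video length times frame resolution.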

Data and Resources

This dataset has no data

Cite this as

Idil Esen Zulfikar, Sabarinath Mahadevan, Paul Voigtlaender, Bastian Leibe (2024). Dataset: Point-VOS. https://doi.org/10.57702/2xtfc4k3

Private DOI: This DOI is not yet resolvable. It is available for use in manuscripts and will be published when the dataset is made public.

Additional Info

Created: December 3, 2024
Last update: December 3, 2024
Defined in: https://doi.org/10.48550/arXiv.2402.05917
Author: Idil Esen Zulfikar
More authors: Sabarinath Mahadevan, Paul Voigtlaender, Bastian Leibe
Homepage: https://pointvos.github.io