
Real-Time All-Purpose Segment Anything Model

Advanced by transformer architectures, vision foundation models (VFMs) have achieved remarkable progress in performance and generalization ability. The Segment Anything Model (SAM) is one notable model that achieves generalized segmentation. However, most VFMs cannot run in real time, which hinders their deployment in practical products.

Data and Resources

This dataset has no data

Cite this as

Shilin Xu, Haobo Yuan, Qingyu Shi, Lu Qi, Jingbo Wang, Yibo Yang, Yining Li, Kai Chen, Yunhai Tong (2024). Dataset: Real-Time All-Purpose Segment Anything Model. https://doi.org/10.57702/mhx97qjd

Private DOI: This DOI is not yet resolvable. It is available for use in manuscripts and will be published when the dataset is made public.

Additional Info

Created: December 16, 2024
Last update: December 16, 2024
Defined in: https://doi.org/10.48550/arXiv.2401.10228
Author: Shilin Xu
More authors: Haobo Yuan, Qingyu Shi, Lu Qi, Jingbo Wang, Yibo Yang, Yining Li, Kai Chen, Yunhai Tong
Homepage: https://xushilin1.github.io/rap_sam/