Video-LLaMA: An instruction-tuned audio-visual language model for video understanding

A dataset accompanying the Video-LLaMA model for video understanding, comprising 100k videos with detailed captions.

Data and Resources

This dataset has no data

Cite this as

Hang Zhang, Xin Li, Lidong Bing (2024). Dataset: Video-LLaMA: An instruction-tuned audio-visual language model for video understanding. https://doi.org/10.57702/ztz8frfm

Private DOI: This DOI is not yet resolvable. It is available for use in manuscripts and will be published when the dataset is made public.

Additional Info

Created: December 3, 2024
Last update: December 3, 2024
Defined in: https://doi.org/10.48550/arXiv.2306.07207
Authors: Hang Zhang, Xin Li, Lidong Bing
Homepage: https://arxiv.org/abs/2306.02858