
Video-LLaMA: An instruction-tuned audio-visual language model for video understanding

A dataset accompanying Video-LLaMA for video understanding, comprising 100k videos paired with detailed captions.
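This page does not specify how the video-caption pairs are stored. As a minimal sketch only, assuming a hypothetical JSON annotation file with "video" and "caption" fields (the actual schema of the released data may differ), iterating over the pairs could look like this:

import json
from pathlib import Path

# Hypothetical annotation format: a JSON list of records such as
# {"video": "videos/00001.mp4", "caption": "A dog runs across a field."}
# The real Video-LLaMA dataset layout may differ from this assumption.
def load_annotations(path: str):
    with open(path, encoding="utf-8") as f:
        records = json.load(f)  # assumes a JSON list of records
    for rec in records:
        yield Path(rec["video"]), rec["caption"]

if __name__ == "__main__":
    for video_path, caption in load_annotations("annotations.json"):
        print(video_path, "->", caption[:60])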

Cite this as

Hang Zhang, Xin Li, Lidong Bing (2024). Dataset: Video-LLaMA: An instruction-tuned audio-visual language model for video understanding. https://doi.org/10.57702/ztz8frfm

DOI retrieved: December 3, 2024
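For LaTeX users, the citation above can be rendered as a BibTeX entry. The entry key and the @misc type are assumptions; the fields are taken from the citation itself:

@misc{zhang2024videollama_dataset,
  author = {Zhang, Hang and Li, Xin and Bing, Lidong},
  title  = {Video-LLaMA: An instruction-tuned audio-visual language model for video understanding},
  year   = {2024},
  doi    = {10.57702/ztz8frfm},
  note   = {Dataset}
}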

Additional Info

Field        Value
Created      December 3, 2024
Last update  December 3, 2024
Defined In   https://doi.org/10.48550/arXiv.2306.07207
Authors      Hang Zhang, Xin Li, Lidong Bing
Homepage     https://arxiv.org/abs/2306.02858