
MELFUSION: Synthesizing Music from Image and Language Cues using Diffusion Models

MELFUSION is a text-to-music diffusion model that synthesizes music conditioned on both visual and textual modalities.
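To illustrate the general idea of conditioning a diffusion denoiser on two modalities at once, here is a minimal, hypothetical PyTorch sketch. It is not the authors' implementation: the module names, dimensions, and the simple cross-attention fusion over concatenated image and text tokens are all illustrative assumptions.

```python
# Hypothetical sketch (not the MELFUSION code): a diffusion denoiser that
# attends to concatenated image and text condition tokens. All names and
# dimensions are illustrative assumptions.
import torch
import torch.nn as nn

class MultimodalDenoiser(nn.Module):
    def __init__(self, dim=256, n_heads=4):
        super().__init__()
        # Embed the scalar diffusion timestep into the model dimension.
        self.time_embed = nn.Sequential(
            nn.Linear(1, dim), nn.SiLU(), nn.Linear(dim, dim)
        )
        # Cross-attention lets the noisy music latent attend to the
        # fused (image + text) condition sequence.
        self.cross_attn = nn.MultiheadAttention(dim, n_heads, batch_first=True)
        self.out = nn.Linear(dim, dim)

    def forward(self, noisy_latent, t, image_tokens, text_tokens):
        # noisy_latent: (B, L, dim) noisy music latent sequence
        # image_tokens: (B, Li, dim), text_tokens: (B, Lt, dim)
        h = noisy_latent + self.time_embed(t[:, None].float())[:, None, :]
        cond = torch.cat([image_tokens, text_tokens], dim=1)  # fuse modalities
        attn_out, _ = self.cross_attn(h, cond, cond)
        return self.out(h + attn_out)  # predicted noise

# Toy usage with random tensors, just to show the shapes involved.
B, L, dim = 2, 32, 256
model = MultimodalDenoiser(dim)
eps_hat = model(torch.randn(B, L, dim),
                torch.randint(0, 1000, (B,)),
                torch.randn(B, 8, dim),    # image condition tokens
                torch.randn(B, 16, dim))   # text condition tokens
print(eps_hat.shape)  # torch.Size([2, 32, 256])
```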

Data and Resources

Cite this as

Sanjoy Chowdhury, Sayan Nag, K J Joseph, Balaji Vasan Srinivasan, Dinesh Manocha (2024). Dataset: MELFUSION: Synthesizing Music from Image and Language Cues using Diffusion Models. https://doi.org/10.57702/nddrtxsb

DOI retrieved: December 2, 2024

Additional Info

Created: December 2, 2024
Last update: December 2, 2024
Defined In: https://doi.org/10.48550/arXiv.2406.04673
Author: Sanjoy Chowdhury
More Authors: Sayan Nag, K J Joseph, Balaji Vasan Srinivasan, Dinesh Manocha
Homepage: https://schowdhury671.github.io/melfusion_cvpr2024/