Constructing Multi-Modal Dialogue Dataset by Replacing Text with Semantically Relevant Images

A multi-modal dialogue dataset created by replacing text with semantically relevant images.

Data and Resources

Cite this as

Nyoungwoo Lee, Suwon Shin, Jaegul Choo, Ho-Jin Choi (2024). Dataset: Constructing Multi-Modal Dialogue Dataset by Replacing Text with Semantically Relevant Images. https://doi.org/10.57702/8vfnfia7

DOI retrieved: December 16, 2024

Additional Info

Created: December 16, 2024
Last update: December 16, 2024
Defined in: https://doi.org/10.48550/arXiv.2212.04119
Author: Nyoungwoo Lee
More authors: Suwon Shin, Jaegul Choo, Ho-Jin Choi