KVRET

Dialogue contexts have proven helpful in spoken language understanding (SLU) systems, and they are typically encoded with explicit memory representations. However, most previous models learn the context memory with only one objective, maximizing SLU performance, which leaves the context memory under-exploited.

Data and Resources

Cite this as

He Bai, Yu Zhou, Jiajun Zhang, Chengqing Zong (2025). Dataset: KVRET. https://doi.org/10.57702/pwhxznjt

DOI retrieved: January 2, 2025

Additional Info

Field Value
Created January 2, 2025
Last update January 2, 2025
Defined In https://doi.org/10.48550/arXiv.1906.01788
Citation
  • https://doi.org/10.48550/arXiv.2103.06010
Author He Bai
More Authors
Yu Zhou
Jiajun Zhang
Chengqing Zong
Homepage https://arxiv.org/abs/1705.00134