
Stacked Wasserstein Autoencoder

The proposed model builds on the theoretical analysis presented in [30,14]. Similar to ARAE [14], our model provides the flexibility of learning an autoencoder directly from the input space at the first stage.
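The sketch below illustrates the two-stage idea described above, not the authors' implementation (see the arXiv papers linked on this page for the actual method): a plain autoencoder is first trained on the input space, and a Wasserstein-style critic is then used to match the latent codes to a prior, in the spirit of ARAE. All layer sizes, the latent dimension, and the placeholder batch are assumptions made for illustration; regularization details such as a gradient penalty are omitted.

```python
# Minimal two-stage sketch (illustrative assumptions throughout).
import torch
import torch.nn as nn

x_dim, z_dim = 784, 32  # assumed input/latent dimensions

encoder = nn.Sequential(nn.Linear(x_dim, 256), nn.ReLU(), nn.Linear(256, z_dim))
decoder = nn.Sequential(nn.Linear(z_dim, 256), nn.ReLU(), nn.Linear(256, x_dim))
critic  = nn.Sequential(nn.Linear(z_dim, 128), nn.ReLU(), nn.Linear(128, 1))

opt_ae = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)
opt_c  = torch.optim.Adam(critic.parameters(), lr=1e-4)

x = torch.rand(64, x_dim)  # placeholder batch standing in for real data

# Stage 1: learn an ordinary autoencoder on the input space (reconstruction loss).
recon_loss = nn.functional.mse_loss(decoder(encoder(x)), x)
opt_ae.zero_grad(); recon_loss.backward(); opt_ae.step()

# Stage 2: a critic estimates a Wasserstein-style distance between the
# aggregated code distribution and a Gaussian prior (Lipschitz constraint
# omitted here for brevity); the encoder is updated to shrink that estimate.
z_prior = torch.randn(64, z_dim)
z_codes = encoder(x).detach()
critic_loss = critic(z_codes).mean() - critic(z_prior).mean()
opt_c.zero_grad(); critic_loss.backward(); opt_c.step()

enc_loss = -critic(encoder(x)).mean()
opt_ae.zero_grad(); enc_loss.backward(); opt_ae.step()
```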

Data and Resources

Cite this as

Wenju Xu, Shawn Keshmiri, Guanghui Wang (2024). Dataset: Stacked Wasserstein Autoencoder. https://doi.org/10.57702/zona57lw

DOI retrieved: December 16, 2024

Additional Info

Field         Value
Created       December 16, 2024
Last update   December 16, 2024
Defined In    https://doi.org/10.48550/arXiv.1910.02560
Author        Wenju Xu
More Authors  Shawn Keshmiri, Guanghui Wang
Homepage      https://arxiv.org/abs/1909.00114