UVCGAN: UNet Vision Transformer cycle-consistent GAN for unpaired image-to-image translation

Unpaired image-to-image translation has broad applications in art, design, and scientific simulation. One early breakthrough was CycleGAN, which emphasizes one-to-one mappings between two unpaired image domains via generative adversarial networks (GANs) coupled with a cycle-consistency constraint, while more recent works promote one-to-many mappings to boost the diversity of the translated images. Motivated by scientific simulation and the need for one-to-one mappings, this work revisits the classic CycleGAN framework and boosts its performance to outperform more contemporary models without relaxing the cycle-consistency constraint.
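
To make the cycle-consistency constraint concrete, below is a minimal PyTorch sketch of the loss term it imposes: translating an image to the other domain and back should reconstruct the original. The generator placeholders, function name, and the weight lambda_cyc are illustrative assumptions, not the authors' implementation; see the Homepage link below for the actual code.

import torch
import torch.nn as nn

def cycle_consistency_loss(gen_ab, gen_ba, real_a, real_b, lambda_cyc=10.0):
    """L1 cycle-consistency loss over both translation directions.

    gen_ab maps domain A -> B, gen_ba maps domain B -> A.
    lambda_cyc is a loss weight (10.0 is a common CycleGAN default,
    assumed here for illustration).
    """
    l1 = nn.L1Loss()
    # Forward cycle: a -> b -> a should recover the original a.
    rec_a = gen_ba(gen_ab(real_a))
    # Backward cycle: b -> a -> b should recover the original b.
    rec_b = gen_ab(gen_ba(real_b))
    return lambda_cyc * (l1(rec_a, real_a) + l1(rec_b, real_b))

if __name__ == "__main__":
    # Toy usage with identity "generators" as stand-ins for real networks.
    g_ab, g_ba = nn.Identity(), nn.Identity()
    a = torch.randn(1, 3, 256, 256)
    b = torch.randn(1, 3, 256, 256)
    print(cycle_consistency_loss(g_ab, g_ba, a, b))  # tensor(0.) for identity maps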

Data and Resources

Cite this as

Dmitrii Torbunov, Yi Huang, Haiwang Yu, Jin Huang, Shinjae Yoo, Meifeng Lin, Brett Viren, Yihui Ren (2024). Dataset: UVCGAN: UNet Vision Transformer cycle-consistent GAN for unpaired image-to-image translation. https://doi.org/10.57702/nlojan2y

DOI retrieved: December 16, 2024

Additional Info

Field         Value
Created       December 16, 2024
Last update   December 16, 2024
Author        Dmitrii Torbunov
More Authors  Yi Huang, Haiwang Yu, Jin Huang, Shinjae Yoo, Meifeng Lin, Brett Viren, Yihui Ren
Homepage      https://github.com/ls4gan/uvcgan