Exit-Ensemble Distillation

This paper proposes exit-ensemble distillation, a knowledge-distillation-based learning method that improves the classification performance of convolutional neural networks (CNNs) without requiring a pre-trained teacher network.
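As a rough illustration of the idea of distilling without a pre-trained teacher, the following is a minimal sketch, assuming a multi-exit network whose averaged exit logits act as the "teacher" signal; the function name, the averaging scheme, and the loss weighting `alpha` are illustrative assumptions, not the paper's exact formulation.

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of logits."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def exit_ensemble_loss(exit_logits, label, alpha=0.5):
    """Hypothetical sketch of an exit-ensemble objective: the average of all
    exit logits serves as the teacher, and each exit is trained with
    cross-entropy to the label plus a distillation term toward the ensemble.
    (Details such as the weighting and temperature are assumptions.)"""
    n_exits = len(exit_logits)
    n_cls = len(exit_logits[0])
    # Ensemble teacher: average the logits across every exit.
    ensemble = [sum(e[c] for e in exit_logits) / n_exits for c in range(n_cls)]
    teacher = softmax(ensemble)
    total = 0.0
    for logits in exit_logits:
        p = softmax(logits)
        ce = -math.log(p[label])  # cross-entropy with the ground-truth label
        # KL divergence from the ensemble teacher to this exit's prediction
        kl = sum(t * math.log(t / q) for t, q in zip(teacher, p))
        total += (1 - alpha) * ce + alpha * kl
    return total / n_exits
```

Each exit thus learns from both the ground truth and the collective prediction of all exits, so no separate teacher network is needed.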

Data and Resources

Cite this as

Hojung Lee, Jong-Seok Lee (2024). Dataset: Exit-Ensemble Distillation. https://doi.org/10.57702/ff18oyz5

DOI retrieved: December 16, 2024

Additional Info

Field         Value
Created       December 16, 2024
Last update   December 16, 2024
Author        Hojung Lee
More authors  Jong-Seok Lee
Homepage      https://arxiv.org/abs/2006.05525