ZeroQ: A Novel Zero Shot Quantization Framework

Quantization is a promising approach for reducing the inference time and memory footprint of neural networks. However, most existing quantization methods require access to the original training dataset for retraining during quantization, which is often impossible for applications with sensitive or proprietary data. Existing zero-shot quantization methods use various heuristics to address this, but they perform poorly, especially when quantizing to ultra-low precision. Here, we propose ZeroQ, a novel zero-shot quantization framework that enables mixed-precision quantization without any access to the training or validation data. This is achieved by optimizing for a Distilled Dataset, which is engineered to match the batch-normalization statistics across the different layers of the network.
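The core idea above can be sketched in a few lines: starting from random noise, optimize a synthetic batch so that its per-channel mean and standard deviation match the statistics stored in a batch-normalization layer. The sketch below is a minimal, single-layer NumPy illustration with hand-derived gradients; the target statistics, shapes, and step size are illustrative assumptions, not values from the paper (the actual ZeroQ implementation backpropagates through a full pretrained network, see the linked repository).

```python
import numpy as np

# Hypothetical per-channel BN statistics for one layer (illustrative values,
# not taken from the paper): running mean and running std.
rng = np.random.default_rng(0)
C, N = 4, 256                                   # channels, samples per channel
target_mu = rng.normal(size=C)
target_sigma = rng.uniform(0.5, 2.0, size=C)

# "Distilled" data: start from Gaussian noise, then run gradient descent on
# the statistics-matching loss  sum_c (mu_c - target_mu_c)^2
#                             + sum_c (sigma_c - target_sigma_c)^2.
x = rng.normal(size=(C, N))
lr = 10.0                                       # large step offsets the 1/N in the gradients
for _ in range(500):
    mu = x.mean(axis=1, keepdims=True)
    sigma = x.std(axis=1, keepdims=True)
    # d/dx_i (mu - t)^2 = 2 (mu - t) / N
    g_mu = 2.0 * (mu - target_mu[:, None]) / N
    # d sigma / d x_i = (x_i - mu) / (N sigma), so
    # d/dx_i (sigma - t)^2 = 2 (sigma - t) (x_i - mu) / (N sigma)
    g_sigma = 2.0 * (sigma - target_sigma[:, None]) * (x - mu) / (N * sigma)
    x -= lr * (g_mu + g_sigma)

# After optimization, the synthetic batch reproduces the stored statistics.
print(np.abs(x.mean(axis=1) - target_mu).max())
print(np.abs(x.std(axis=1) - target_sigma).max())
```

In the full method, this loss is summed over every batch-normalization layer of the pretrained model, and the resulting Distilled Dataset stands in for real calibration data when choosing quantization ranges and per-layer bit widths.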

Data and Resources

Cite this as

Yaohui Cai, Zhewei Yao, Zhen Dong, Amir Gholami, Michael W. Mahoney, Kurt Keutzer (2024). Dataset: ZeroQ: A Novel Zero Shot Quantization Framework. https://doi.org/10.57702/vjhbr41t

DOI retrieved: December 2, 2024

Additional Info

Field         Value
Created       December 2, 2024
Last update   December 2, 2024
Author        Yaohui Cai
More Authors  Zhewei Yao, Zhen Dong, Amir Gholami, Michael W. Mahoney, Kurt Keutzer
Homepage      https://github.com/amirgholami/ZeroQ