Low-rank compression of neural nets: Learning the rank of each layer

Model compression is generally performed using quantization, low-rank approximation, or pruning, techniques for which many algorithms have been proposed in recent years.
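As a minimal sketch of the low-rank idea, a layer's weight matrix can be replaced by its best rank-r approximation via truncated SVD (the Eckart-Young theorem); storing the two factors costs r(m+n) numbers instead of mn. The matrix, shapes, and rank below are illustrative, not taken from this dataset, and this is not the paper's rank-selection method, which learns each layer's rank.

```python
import numpy as np

# Hypothetical weight matrix of a fully connected layer
# (shape and rank are illustrative assumptions).
rng = np.random.default_rng(0)
W = rng.standard_normal((256, 128))

def low_rank_approx(W, rank):
    """Best rank-`rank` approximation of W in the Frobenius norm."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    return U[:, :rank] @ np.diag(s[:rank]) @ Vt[:rank, :]

r = 16
W_hat = low_rank_approx(W, rank=r)

# Parameter count: m*n before, r*(m+n) after factorization.
m, n = W.shape
print("original params:", m * n)
print("compressed params:", r * (m + n))
print("relative error:", np.linalg.norm(W - W_hat) / np.linalg.norm(W))
```

The compressed layer is then applied as two smaller matrix multiplies, one per factor, which also reduces inference cost when r is small enough.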

Data and Resources

Cite this as

Yerlan Idelbayev (2024). Dataset: Low-rank compression of neural nets: Learning the rank of each layer. https://doi.org/10.57702/lk90s50g

DOI retrieved: December 3, 2024

Additional Info

Created: December 3, 2024
Last update: December 3, 2024
Author: Yerlan Idelbayev
Homepage: https://github.com/UCMerced-ML/LC-model-compression