Neural Tangent Kernel

The neural tangent kernel (Jacot et al., 2018) describes the training dynamics and asymptotic performance of fully-connected and convolutional networks under the assumptions that they are initialized randomly, trained by gradient descent with an infinitesimally small learning rate, and have layers of infinite width. In this limit, training the network is equivalent to kernel regression with a fixed kernel determined at initialization.
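For finite-width networks, the empirical NTK can be computed directly as the Gram matrix of parameter gradients, Θ(x, x') = ⟨∇_θ f(x), ∇_θ f(x')⟩. The sketch below, a minimal illustration rather than the reference implementation, computes this for a one-hidden-layer ReLU network in the NTK parameterization (standard-normal weights rescaled by 1/sqrt(fan-in)); all function names and sizes here are illustrative choices, not part of the dataset.

```python
import numpy as np

def init_params(d_in, width, seed=0):
    # NTK parameterization: entries ~ N(0, 1); fan-in scaling is
    # applied in the forward pass rather than at initialization.
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((width, d_in))
    v = rng.standard_normal(width)
    return W, v

def param_gradient(params, x):
    # Analytic gradient of the scalar output f(x) = v . relu(Wx) / sqrt(width)
    # with respect to all parameters, flattened into one vector.
    W, v = params
    width = v.shape[0]
    pre = W @ x / np.sqrt(x.shape[0])          # pre-activations
    h = np.maximum(pre, 0.0)                   # ReLU hidden layer
    dv = h / np.sqrt(width)                    # gradient w.r.t. v
    dW = np.outer(v * (pre > 0), x) / (np.sqrt(width) * np.sqrt(x.shape[0]))
    return np.concatenate([dW.ravel(), dv])

def empirical_ntk(params, X):
    # Theta[i, j] = <grad_theta f(X[i]), grad_theta f(X[j])>
    J = np.stack([param_gradient(params, x) for x in X])
    return J @ J.T

X = np.random.default_rng(1).standard_normal((4, 3))
params = init_params(d_in=3, width=512)
K = empirical_ntk(params, X)
print(K.shape)  # (4, 4)
```

As a Gram matrix of gradients, K is symmetric and positive semidefinite by construction; as the width grows, it concentrates around the deterministic infinite-width kernel.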

Data and Resources

Cite this as

Du et al. (2024). Dataset: Neural Tangent Kernel. https://doi.org/10.57702/fpe226b1

DOI retrieved: December 16, 2024

Additional Info

Field        Value
Created      December 16, 2024
Last update  December 16, 2024
Defined in   https://doi.org/10.48550/arXiv.2208.09309
Author       Du et al.