Tensor and Matrix Low-Rank Value-Function Approximation in Reinforcement Learning

Value-function (VF) approximation is a central problem in Reinforcement Learning (RL). Classical non-parametric VF estimation suffers from the curse of dimensionality. As a result, parsimonious parametric models have been adopted to approximate VFs in high-dimensional spaces, with most efforts focused on linear and neural-network-based approaches. The associated work instead studies matrix and tensor low-rank models for VF approximation.
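As a rough illustration of the low-rank idea in the title, the sketch below factorizes a tabular Q-function as Q ≈ L Rᵀ and updates the two factors with stochastic temporal-difference steps. This is a hypothetical toy example, not the algorithm from the referenced paper: the environment, the state/action sizes, the rank, and the hyperparameters are all invented for illustration.

```python
import numpy as np

# Toy low-rank Q-function sketch (illustrative assumptions, not the paper's method):
# approximate Q(s, a) by the inner product <L[s], R[a]>, i.e. Q ~= L @ R.T,
# and adjust the factors with stochastic gradient steps on the TD error.

n_states, n_actions, rank = 50, 4, 3   # made-up problem sizes
gamma, lr, eps = 0.95, 0.05, 0.1       # made-up hyperparameters

rng = np.random.default_rng(0)
L = 0.1 * rng.standard_normal((n_states, rank))   # state factors
R = 0.1 * rng.standard_normal((n_actions, rank))  # action factors

def q(s, a):
    """Low-rank Q-value estimate: Q(s, a) = <L[s], R[a]>."""
    return L[s] @ R[a]

def step(s, a):
    """Toy environment: random next state, reward 1 only when landing in state 0."""
    s_next = int(rng.integers(n_states))
    reward = 1.0 if s_next == 0 else 0.0
    return s_next, reward

s = int(rng.integers(n_states))
for _ in range(10_000):
    # Epsilon-greedy action selection over the current low-rank estimate.
    if rng.random() < eps:
        a = int(rng.integers(n_actions))
    else:
        a = int(np.argmax(L[s] @ R.T))

    s_next, r = step(s, a)
    td_error = r + gamma * np.max(L[s_next] @ R.T) - q(s, a)

    # Stochastic update of the factors for the visited (s, a) entry only.
    grad_L = td_error * R[a]
    grad_R = td_error * L[s]
    L[s] += lr * grad_L
    R[a] += lr * grad_R
    s = s_next
```

The factorization keeps only (n_states + n_actions) × rank parameters instead of n_states × n_actions, which is the parsimony argument made in the abstract above; a tensor variant would factorize each state dimension separately.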

Data and Resources

Cite this as

Sergio Rozada, Santiago Paternain, Antonio G. Marques (2024). Dataset: Tensor and Matrix Low-Rank Value-Function Approximation in Reinforcement Learning. https://doi.org/10.57702/rhr9lbg1

DOI retrieved: December 16, 2024

Additional Info

Field         Value
Created       December 16, 2024
Last update   December 16, 2024
Defined In    https://doi.org/10.48550/arXiv.2201.09736
Author        Sergio Rozada
More Authors  Santiago Paternain, Antonio G. Marques