
Action-Quantized Offline Reinforcement Learning for Robotic Skill Learning

The offline reinforcement learning (RL) paradigm provides a general recipe for converting static behavior datasets into policies that can outperform the policy that collected the data.

Data and Resources

Cite this as

Jianlan Luo, Perry Dong, Jeffrey Wu, Aviral Kumar, Xinyang Geng, Sergey Levine (2024). Dataset: Action-Quantized Offline Reinforcement Learning for Robotic Skill Learning. https://doi.org/10.57702/8zbesigl

DOI retrieved: December 2, 2024

Additional Info

Field         Value
Created       December 2, 2024
Last update   December 2, 2024
Author        Jianlan Luo
More Authors  Perry Dong, Jeffrey Wu, Aviral Kumar, Xinyang Geng, Sergey Levine
Homepage      https://saqrl.github.io