Action-Quantized Offline Reinforcement Learning for Robotic Skill Learning

The offline reinforcement learning (RL) paradigm provides a general recipe for converting static behavior datasets into policies that can outperform the policy that collected the data.

Data and Resources

This dataset has no data

Cite this as

Jianlan Luo, Perry Dong, Jeffrey Wu, Aviral Kumar, Xinyang Geng, Sergey Levine (2024). Dataset: Action-Quantized Offline Reinforcement Learning for Robotic Skill Learning. https://doi.org/10.57702/8zbesigl

Private DOI: this DOI is not yet resolvable. It is available for use in manuscripts and will be published when the dataset is made public.

Additional Info

Created: December 2, 2024
Last update: December 2, 2024
Author: Jianlan Luo
More authors: Perry Dong, Jeffrey Wu, Aviral Kumar, Xinyang Geng, Sergey Levine
Homepage: https://saqrl.github.io