Universal and Transferable Adversarial Attacks on Aligned Language Models

AdvBench is a benchmark of harmful behaviors and harmful strings for evaluating the safety of aligned large language models against adversarial attacks.
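As a minimal sketch of how the data can be read, the snippet below assumes the CSV layout used in the authors' llm-attacks release, where each harmful-behaviors row pairs a harmful instruction (`goal`) with a target affirmative response (`target`); the local file path is hypothetical.

```python
import csv

# Load the AdvBench harmful-behaviors split. Assumes the CSV layout
# from the authors' llm-attacks repository: one harmful instruction
# ("goal") and one target affirmative response ("target") per row.
# "harmful_behaviors.csv" is a hypothetical local path.
with open("harmful_behaviors.csv", newline="", encoding="utf-8") as f:
    rows = list(csv.DictReader(f))

print(f"loaded {len(rows)} behaviors")
print(rows[0]["goal"], "->", rows[0]["target"])
```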

Data and Resources

Cite this as

Andy Zou, Zifan Wang, J. Zico Kolter, Matt Fredrikson (2024). Dataset: Universal and transferable adversarial attacks on aligned language models. https://doi.org/10.57702/2oo2r02d

DOI retrieved: December 3, 2024

Additional Info

Created: December 3, 2024
Last update: December 3, 2024
Defined in: https://doi.org/10.48550/arXiv.2307.15043
Author: Andy Zou
More authors: Zifan Wang, J. Zico Kolter, Matt Fredrikson