
Black Box Differential Privacy Auditing

We present a practical method to audit the differential privacy (DP) guarantees of a machine learning model using a small hold-out dataset that is not exposed to the model during training.
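As a rough illustration of the hold-out-based auditing idea described above, the sketch below runs a simple threshold membership-inference attack: per-example scores (e.g. negative loss) on training data are compared against scores on the hold-out data, and the resulting true/false positive rates give a lower bound on epsilon via the standard DP hypothesis-testing characterization (with delta taken as 0). The function name, score inputs, and threshold attack are illustrative assumptions, not the specific procedure of the cited paper.

```python
import math

def audit_epsilon_lower_bound(train_scores, holdout_scores, threshold):
    """Estimate a lower bound on epsilon from a threshold attack.

    train_scores:   per-example scores on data the model was trained on
    holdout_scores: per-example scores on hold-out data never seen in training
    threshold:      guess "member" whenever score > threshold

    Illustrative sketch only; assumes delta = 0 and ignores sampling error.
    """
    # True positive rate: training examples correctly flagged as members.
    tpr = sum(s > threshold for s in train_scores) / len(train_scores)
    # False positive rate: hold-out examples wrongly flagged as members.
    fpr = sum(s > threshold for s in holdout_scores) / len(holdout_scores)

    # Degenerate rates would make the log ratios unbounded; with finitely
    # many samples one would instead use a confidence interval (e.g.
    # Clopper-Pearson). Here we just fall back to the trivial bound 0.
    if fpr in (0.0, 1.0) or tpr in (0.0, 1.0):
        return 0.0

    # For an (epsilon, 0)-DP mechanism, any attack satisfies
    # TPR <= e^epsilon * FPR and (1 - FPR) <= e^epsilon * (1 - TPR),
    # so observed rates imply epsilon >= both log ratios below.
    return max(math.log(tpr / fpr), math.log((1 - fpr) / (1 - tpr)), 0.0)

# Example: a model that scores most training points above the threshold.
eps_lb = audit_epsilon_lower_bound(
    train_scores=[2.0, 2.0, 2.0, 0.0],
    holdout_scores=[0.0, 0.0, 0.0, 2.0],
    threshold=1.0,
)
```

With these toy scores the attack achieves TPR = 0.75 and FPR = 0.25, so the estimated lower bound is log(3), roughly 1.10. A real audit would replace the point estimates with confidence intervals to account for the small hold-out size.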

Data and Resources

This dataset has no data

Cite this as

Antti Koskela, Jafar Mohammadi (2025). Dataset: Black Box Differential Privacy Auditing. https://doi.org/10.57702/d6cxenv2

Private DOI: This DOI is not yet resolvable. It is available for use in manuscripts and will be published when the dataset is made public.

Additional Info

Field         Value
Created       January 2, 2025
Last update   January 2, 2025
Defined in    https://doi.org/10.48550/arXiv.2406.04827
Author        Antti Koskela
More authors  Jafar Mohammadi