Black Box Differential Privacy Auditing

We present a practical method for auditing the differential privacy (DP) guarantees of a machine learning model using a small hold-out dataset that is not exposed to the model during training.
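As a rough illustration of the general black-box auditing recipe (not the exact estimator of the cited paper): score training (member) and hold-out (non-member) points with the model, run a simple threshold membership-inference attack, and convert the attack's true/false positive rates into an empirical lower bound on ε. The Gaussian scores below are purely hypothetical stand-ins for model losses.

```python
import math
import random

def empirical_epsilon(tpr, fpr):
    # Convert attack TPR/FPR into a lower-bound estimate of epsilon
    # (pure-DP case, delta = 0); a standard auditing conversion.
    candidates = [0.0]
    if fpr > 0 and tpr > 0:
        candidates.append(math.log(tpr / fpr))
    if tpr < 1 and fpr < 1:
        candidates.append(math.log((1 - fpr) / (1 - tpr)))
    return max(candidates)

def threshold_attack(member_scores, nonmember_scores, threshold):
    # Guess "member" whenever the score exceeds the threshold.
    tpr = sum(s > threshold for s in member_scores) / len(member_scores)
    fpr = sum(s > threshold for s in nonmember_scores) / len(nonmember_scores)
    return tpr, fpr

random.seed(0)
# Hypothetical scores: members (training points) score higher on average,
# mimicking a model that has memorized part of its training data.
members = [random.gauss(1.0, 1.0) for _ in range(1000)]
holdout = [random.gauss(0.0, 1.0) for _ in range(1000)]
tpr, fpr = threshold_attack(members, holdout, threshold=0.5)
print(f"TPR={tpr:.3f}  FPR={fpr:.3f}  empirical eps >= {empirical_epsilon(tpr, fpr):.3f}")
```

In practice the attack rates are themselves estimates, so a careful audit replaces the raw TPR/FPR with confidence bounds (e.g. Clopper–Pearson) before converting to ε; this sketch omits that step.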

Data and Resources

Cite this as

Antti Koskela, Jafar Mohammadi (2025). Dataset: Black Box Differential Privacy Auditing. https://doi.org/10.57702/d6cxenv2

DOI retrieved: January 2, 2025

Additional Info

Field        Value
Created      January 2, 2025
Last update  January 2, 2025
Defined in   https://doi.org/10.48550/arXiv.2406.04827
Authors      Antti Koskela, Jafar Mohammadi