Black Box Differential Privacy Auditing

We present a practical method for auditing the differential privacy (DP) guarantees of a machine learning model using a small hold-out dataset that is not exposed to the model during training.
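To illustrate the general idea (a minimal sketch, not the repository's exact procedure), the code below runs a simple threshold membership-inference attack that distinguishes training members from hold-out points using per-example scores (e.g., losses), and converts the attack's error rates into an empirical lower bound on ε via the standard (ε, δ)-DP hypothesis-testing inequality FPR + e^ε · FNR ≥ 1 − δ. The function name `empirical_epsilon` and the choice of loss as the score are assumptions made for this example.

```python
import numpy as np

def empirical_epsilon(member_scores, nonmember_scores, delta=1e-5):
    """Estimate an empirical lower bound on epsilon from a
    threshold-based membership inference attack.

    Lower scores (e.g. per-example losses) are assumed to indicate
    membership: training points tend to have lower loss than
    hold-out points the model never saw.
    """
    best_eps = 0.0
    # Sweep candidate thresholds over all observed scores.
    thresholds = np.concatenate([member_scores, nonmember_scores])
    for t in thresholds:
        # The attack predicts "member" when score <= t.
        fnr = np.mean(member_scores > t)      # members missed
        fpr = np.mean(nonmember_scores <= t)  # non-members flagged
        # For any (eps, delta)-DP mechanism, every attack satisfies
        #   FPR + e^eps * FNR >= 1 - delta   (and symmetrically),
        # which rearranges to the two epsilon lower bounds below.
        for num, den in ((1 - delta - fpr, fnr), (1 - delta - fnr, fpr)):
            if den > 0 and num > den:
                best_eps = max(best_eps, float(np.log(num / den)))
    return best_eps

# Usage with synthetic scores standing in for real model losses:
rng = np.random.default_rng(0)
member_losses = rng.normal(0.8, 0.3, size=1000)   # training points
holdout_losses = rng.normal(1.0, 0.3, size=1000)  # unseen points
print(f"empirical epsilon lower bound: "
      f"{empirical_epsilon(member_losses, holdout_losses):.3f}")
```

Note that this sketch uses point estimates of the error rates; a statistically rigorous audit would replace them with confidence intervals (e.g., Clopper-Pearson) so that the reported ε holds with high probability rather than overfitting to the threshold sweep.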

BibTeX: