We present a practical method to audit the differential privacy (DP) guarantees of a machine learning model using a small hold-out dataset that is not exposed to the model during training.
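The abstract does not spell out the auditing procedure; as an illustration only, a common way to audit DP empirically with held-out data is a loss-threshold membership-inference attack that compares per-example losses on training members against the hold-out set. The sketch below (the function name and the synthetic loss values are hypothetical, not from the paper) converts the attack's true/false positive rates into an empirical lower bound on epsilon, ignoring delta and sampling error for brevity.

    import numpy as np

    def empirical_epsilon_lower_bound(train_losses, holdout_losses, num_thresholds=100):
        """Estimate an empirical lower bound on epsilon from a loss-threshold
        membership-inference attack (delta and confidence intervals ignored)."""
        thresholds = np.quantile(np.concatenate([train_losses, holdout_losses]),
                                 np.linspace(0.01, 0.99, num_thresholds))
        best = 0.0
        for t in thresholds:
            # The attack predicts "member" when the loss falls below the threshold.
            tpr = np.mean(train_losses < t)    # members correctly flagged
            fpr = np.mean(holdout_losses < t)  # hold-out points falsely flagged
            # DP implies TPR <= e^eps * FPR (+ delta), so eps >= log(TPR / FPR);
            # the symmetric bound uses the complementary rates.
            if fpr > 0 and tpr > fpr:
                best = max(best, np.log(tpr / fpr))
            if tpr < 1 and (1 - fpr) > (1 - tpr):
                best = max(best, np.log((1 - fpr) / (1 - tpr)))
        return best

    # Toy usage with synthetic per-example losses (hypothetical values).
    rng = np.random.default_rng(0)
    train_losses = rng.normal(0.8, 0.3, size=1000)    # losses on training members
    holdout_losses = rng.normal(1.1, 0.3, size=1000)  # losses on held-out non-members
    print(f"empirical epsilon lower bound: "
          f"{empirical_epsilon_lower_bound(train_losses, holdout_losses):.3f}")

In practice the reported bound would also account for delta and for statistical uncertainty (e.g. confidence intervals on the estimated rates); this sketch only shows the core comparison between member and hold-out losses.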