Abstract:
Differential privacy is a notion of confidentiality that protects the privacy of individuals while allowing useful computations on their private data. Deriving differential privacy guarantees for real
programs is a difficult and error-prone task that calls for principled approaches and tool support. Approaches based on linear types and static analysis have recently emerged; however, an increasing number of programs achieve privacy using techniques that cannot be analyzed by these approaches. Examples include programs that aim for weaker, approximate differential privacy guarantees, programs that use the Exponential mechanism, and randomized programs that achieve differential privacy without using any standard mechanism. Providing support for reasoning about the privacy of such programs has been an open problem.
We report on CertiPriv, a machine-checked framework for reasoning about differential privacy built on top of the Coq proof assistant.
The central component of CertiPriv is a quantitative extension of a probabilistic relational Hoare logic that makes it possible to derive differential privacy guarantees for programs from first principles. We demonstrate the expressiveness of CertiPriv on a number of examples whose formal analysis is beyond the reach of previous techniques. In particular, we prove (rather than assume) the correctness of the Laplacian and Exponential mechanisms and analyze the privacy of randomized and streaming algorithms from the recent literature.
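For intuition only, the Laplacian mechanism mentioned above adds noise drawn from a Laplace distribution, with scale calibrated to the query's sensitivity and the privacy budget epsilon. The following is a minimal Python sketch of that idea (it is not CertiPriv's Coq development, and the function names are illustrative):

```python
import math
import random

def laplace_mechanism(true_answer, sensitivity, epsilon, rng=random):
    """Release true_answer + Lap(sensitivity/epsilon) noise.

    Samples the Laplace distribution by inverse-CDF transform; adding
    this noise to a query with the given L1 sensitivity yields
    epsilon-differential privacy (the property CertiPriv proves formally).
    """
    scale = sensitivity / epsilon
    u = rng.random() - 0.5  # uniform on [-0.5, 0.5)
    noise = -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))
    return true_answer + noise

# Example: a counting query (sensitivity 1) released with epsilon = 0.5
noisy_count = laplace_mechanism(true_answer=42, sensitivity=1.0, epsilon=0.5)
```

A pencil-and-paper argument that this calibration is correct is exactly the kind of reasoning CertiPriv mechanizes from first principles.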