Bayesian privacy

Ran Eilat, Kfir Eliaz, Xiaosheng Mu

Research output: Contribution to journal › Article › peer-review

5 Scopus citations


Modern information technologies make it possible to store, analyze, and trade unprecedented amounts of detailed information about individuals. This has led to public discussions on whether individuals' privacy should be better protected by restricting the amount or the precision of information that is collected by commercial institutions on their participants. We contribute to this discussion by proposing a Bayesian approach to measure loss of privacy in a mechanism. Specifically, we define the loss of privacy associated with a mechanism as the difference between the designer's prior and posterior beliefs about an agent's type, where this difference is calculated using Kullback–Leibler divergence, and where the change in beliefs is triggered by actions taken by the agent in the mechanism. We consider both ex post (for every realized type, the maximal difference in beliefs cannot exceed some threshold κ) and ex ante (the expected difference in beliefs over all type realizations cannot exceed some threshold κ) measures of privacy loss. Applying these notions to the monopolistic screening environment of Mussa and Rosen (1978), we study the properties of optimal privacy-constrained mechanisms and the relation between welfare/profits and privacy levels.
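To make the privacy-loss measure concrete, here is a minimal numerical sketch (not taken from the paper) for a discrete type space: privacy loss is the Kullback–Leibler divergence between the designer's posterior belief, after observing an agent's action in the mechanism, and the prior. The prior, the posteriors, and the orientation of the divergence used here are illustrative assumptions.

```python
import numpy as np

def kl_divergence(p, q):
    """D_KL(p || q) for discrete distributions on the same support."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    mask = p > 0  # terms with p_i = 0 contribute nothing
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

# Hypothetical prior over three agent types.
prior = np.array([0.5, 0.3, 0.2])

# Hypothetical posteriors: row i is the designer's belief after
# observing the action taken by type i in the mechanism.
posteriors = np.array([
    [0.8, 0.15, 0.05],
    [0.2, 0.60, 0.20],
    [0.1, 0.20, 0.70],
])

# Ex post privacy loss: the belief change realized for each type.
ex_post = np.array([kl_divergence(post, prior) for post in posteriors])

# Ex ante privacy loss: the expected belief change over type realizations.
ex_ante = float(prior @ ex_post)

# The ex post constraint requires max(ex_post) <= kappa;
# the ex ante constraint requires ex_ante <= kappa.
print(ex_post.round(3), round(ex_ante, 3))
```

Since the ex ante loss is an average of the ex post losses, any mechanism satisfying the ex post constraint for a given threshold κ automatically satisfies the ex ante constraint for the same κ.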

Original language: English (US)
Pages (from-to): 1557-1603
Number of pages: 47
Journal: Theoretical Economics
Issue number: 4
State: Published - Nov 2021

All Science Journal Classification (ASJC) codes

  • General Economics, Econometrics and Finance

Keywords

  • D47
  • D82
  • Privacy
  • mechanism design
  • relative entropy
