Privacy Mechanism Design Through the Lens of Information Theory: With Applications to Compression, Caching, and Semantic Communication

Abstract: Privacy mechanism design is an important research area that has received increasing attention in recent years. This field addresses the challenge of revealing general data that may be correlated with sensitive information, while protecting that sensitive information against unauthorized access. This multidisciplinary field brings together principles from computer science, cryptography, and information theory to develop solutions that allow the controlled disclosure of information. By designing mechanisms that incorporate robust encryption, anonymization techniques, and access controls, privacy mechanism design enables individuals and organizations to navigate the complex landscape of data sharing securely in today's digital age. In particular, privacy mechanism design plays a pivotal role in communication systems, where the transmitted data is frequently sensitive and faces elevated risks of eavesdropping and data breaches. Privacy mechanism designs with both low complexity and reliable theoretical guarantees are advantageous in practical implementations as well as in theoretical domains, and therefore find use across numerous applications. Information theory, in turn, provides a wealth of measures that allow risk assessment, support privacy-by-design approaches, and establish provable guarantees. Furthermore, information-theoretic privacy mechanism design plays an important role in numerous problems in information theory, such as wireless networks, network information theory, compression algorithm design, cache-aided networks, and semantic communication. In this thesis, we therefore focus on designing privacy mechanisms through the lens of information theory and study multiple problems.

In Chapter 3, we divide the results into two main parts: I. Invertible leakage matrix, II.
Non-invertible leakage matrix.

In the first part, we study a stochastic disclosure control problem where the useful data to be disclosed depend on private data that should be protected. We design a privacy mechanism that produces new data maximizing the disclosed information about the useful data under a strong χ²-privacy criterion. We introduce a per-letter (point-wise) privacy leakage measure that must hold for every data point, and we discuss its benefits over average measures such as mutual information. We show that, for sufficiently small leakage, the privacy mechanism design problem can be studied geometrically in the space of probability distributions via a local approximation of the mutual information. Using methods from Euclidean information geometry, the original, highly challenging optimization problem reduces to finding the principal right-singular vector of a matrix, which characterizes the optimal privacy mechanism. Moreover, two extensions considering different scenarios are studied.

In the second part, we consider a problem similar to the first part: we study the privacy mechanism design that maximizes the revealed information about the useful data while satisfying a strong ℓ₁-privacy criterion. In contrast with the first part, the privacy mechanism is designed based on a non-invertible leakage matrix. When a sufficiently small leakage is allowed, we show that the optimizing distributions of the privacy mechanism design problem have a specific geometry, i.e., they are perturbations of fixed distributions. This geometrical structure allows us to use a local approximation of the conditional entropy, by which the original optimization problem reduces to a linear program, so that an approximate solution for the optimal privacy mechanism can be obtained easily. We study the obtained designs in different examples based on practical applications, such as medical applications.
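The singular-vector reduction in the first part can be illustrated with a small numerical sketch. The matrix W below is a hypothetical placeholder for the matrix arising from the local approximation; the actual construction of this matrix follows the derivations in Chapter 3. The point is only that, once that matrix is available, the characterization of the optimal mechanism amounts to a standard SVD computation.

```python
import numpy as np

# Hypothetical matrix standing in for the one obtained from the local
# (Euclidean) approximation of the mutual information; in the thesis this
# matrix is built from the joint statistics of the private and useful data.
rng = np.random.default_rng(0)
W = rng.standard_normal((4, 6))

# The optimal perturbation direction is characterized by the principal
# right-singular vector of W, i.e., the first row of Vt in the SVD.
U, S, Vt = np.linalg.svd(W)
v_star = Vt[0]

# Sanity checks: v_star is a unit vector and attains the largest singular
# value, i.e., ||W v_star|| equals S[0] >= ||W v|| for any other unit v.
assert np.isclose(np.linalg.norm(v_star), 1.0)
assert np.isclose(np.linalg.norm(W @ v_star), S[0])
```

The same computation scales to the alphabet sizes considered in the thesis, since only one extreme singular pair is needed rather than a full decomposition.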
In Chapter 4, we study an information-theoretic privacy mechanism design problem for two scenarios where the private data is either observable or hidden. To design the privacy mechanism, we first extend the Functional Representation Lemma and the Strong Functional Representation Lemma by relaxing the independence condition, thereby allowing a certain leakage. We then find lower and upper bounds on the privacy-utility trade-off in both scenarios. In particular, for the case where no leakage is allowed and the private data is observable, our upper and lower bounds improve on previous bounds. Moreover, we strengthen the lower bounds derived in this chapter for the observable private data scenario using a separation technique.

In Chapter 5, an agent observes useful data Y=(Y1,...,YN) that is correlated with private data X=(X1,...,XN), which is assumed to be accessible by the agent as well. Here, we consider K users, where user i demands a sub-vector of Y, denoted by Ci. A privacy mechanism is designed to generate disclosed data that maximizes a linear combination of the users' utilities while satisfying a bounded privacy constraint in terms of mutual information.

We study applications of privacy mechanism design in cache-aided networks (Chapter 6), private compression design (Chapter 7), and semantic communication (Chapter 8). In Chapter 6, we generalize previous caching and delivery problems by considering correlation between the database and the private data. In Chapter 7, we generalize existing compression techniques considering zero and non-zero leakage. For zero leakage, we improve previous results by designing new codes with smaller average length.
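For context, the standard forms of the two lemmas extended in Chapter 4 can be stated as follows (the extension in the thesis relaxes the independence condition I(X;Z)=0 so that a bounded leakage is permitted):

```latex
% Functional Representation Lemma: for any pair of random variables (X, Y)
% there exists Z, independent of X, such that Y is a deterministic
% function of (X, Z):
\forall (X,Y)\ \exists Z:\quad I(X;Z)=0 \quad\text{and}\quad H(Y\mid X,Z)=0.

% Strong Functional Representation Lemma (Li and El Gamal): Z can in
% addition be chosen so that
H(Y\mid Z) \le I(X;Y) + \log\bigl(I(X;Y)+1\bigr) + 4.
```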
Moreover, we generalize the previous results by relaxing the zero-leakage constraint and allowing a small leakage. In Chapter 7, we also consider a private compression design problem with a sequential encoder and study the obtained designs in cache-aided networks. Finally, in Chapter 8, we study a semantic communication problem with a privacy constraint.
