Differentially Private
Gaussian Processes

Presented by Mike Smith
m.t.smith@sheffield.ac.uk

Differential Privacy

A Quick Introduction

See The Algorithmic Foundations of Differential Privacy by Dwork and Roth for a rigorous introduction to the framework.

Anonymity Gone Wrong

In the mid-1990s the Massachusetts Group Insurance Commission released 'anonymised' health records for state employees.

Anonymity Gone Wrong

The Governor assured the public that GIC had protected patient privacy by deleting identifiers.
Broken Promises of Privacy, Paul Ohm

Anonymity Gone Wrong

The data was 'anonymised' by removing names. Other identifying columns remained.

Anonymity Gone Wrong

Latanya Sweeney used the voter rolls to find the Governor's medical records...

...which she posted to him.

This is a linkage attack: it uses auxiliary information to compromise privacy in a database.

Example

We might want to find out whether people hold a controversial view that they wouldn't normally express:

The trams were actually a good idea

Many people in this room might secretly agree, but maybe wouldn't like to say 'Yes' to this question.

Ask each person to flip two coins, keeping the results hidden.

If the flips are different, tell the truth.

If they're the same, say YES for heads and NO for tails.
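A minimal sketch of this two-coin protocol in Python (fair coins assumed; the function name is mine):

```python
import random

def randomised_response(true_answer):
    """Two-coin randomised response, as described above (a sketch).

    true_answer: True if the respondent genuinely agrees, False otherwise.
    Returns the answer the respondent reports.
    """
    coin1 = random.random() < 0.5  # True = heads
    coin2 = random.random() < 0.5
    if coin1 != coin2:
        return true_answer   # flips differ: tell the truth
    return coin1             # flips match: YES for heads, NO for tails

# Each person tells the truth with probability 1/2, so if a proportion p of
# the group truly agrees, the expected rate of YES answers is p/2 + 1/4,
# and p can be estimated from the observed rate r as 2r - 1/2.
```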

[Table of percentage values, rows and columns indexed 0 to 19.]

Summary

We want to protect a user from a linkage attack...

...while still performing inference over the whole group.

Making a dataset private needs more than just erasing names.

To achieve a level of privacy one needs to add randomness to the data.

This is a fundamental feature of differential privacy.

Differential Privacy for GPs

Differential Privacy for GPs

We have a dataset in which the inputs, $X$, are public. The outputs, $\mathbf{y}$, we want to keep private.

The data consist of the heights and weights of 287 women from a census of the !Kung.

Vectors and Functions

Hall et al. (2013) showed that a released version $\tilde{f}$ of a function $f$ can be made $(\varepsilon, \delta)$-differentially private by adding a suitably scaled sample from a GP prior.
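Schematically (a sketch of the idea, using the $\frac{c(\delta)\Delta}{\varepsilon}$ noise scaling that appears later in these slides, with $\Delta$ a bound on how much $f$ can change between neighbouring datasets):

$\tilde{f}(\mathbf{x}) = f(\mathbf{x}) + \frac{c(\delta)\Delta}{\varepsilon} g(\mathbf{x}), \quad g \sim \mathcal{GP}\left(0, k(\cdot,\cdot)\right)$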

Applied to GPs

We applied this method to the GP posterior.

The covariance of the posterior only depends on the inputs, $X$. So we can compute this without applying DP.

The mean function, $f_D(\mathbf{x_*})$, does depend on $\mathbf{y}$:

$f_D(\mathbf{x_*}) = \mathbf{k}_*^\top K^{-1} \mathbf{y}$

We are interested in finding $|| f_D(\mathbf{x_*}) - f_{D^\prime}(\mathbf{x_*}) ||_H^2$

...how much the mean function (in RKHS) can change due to a change in $\mathbf{y}$.

Applied to GPs

Using the representer theorem, we can write $|| f_D(\mathbf{x_*}) - f_{D^\prime}(\mathbf{x_*}) ||_H^2$
as:

$\Big|\Big|\sum_{i=1}^n k(\mathbf{x_*},\mathbf{x}_i) \left(\alpha_i - \alpha^\prime_i\right)\Big|\Big|_H^2$

where $\mathbf{\alpha} - \mathbf{\alpha}^\prime = K^{-1} \left(\mathbf{y} - \mathbf{y}^\prime \right)$

We constrain the kernel: $-1\leq k \leq 1$ and we only allow one element of $\mathbf{y}$ and $\mathbf{y}'$ to differ (by at most $d$).

So only one column of $K^{-1}$ will be involved in the change of mean (which we are summing over).

The RKHS distance can then be shown to be no greater than $d\;||K^{-1}||_\infty$.
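A rough sketch of the whole construction in Python, assuming an RBF kernel (so $|k| \leq 1$), the $d\,||K^{-1}||_\infty$ sensitivity bound above, and $c(\delta) = \sqrt{2\ln(2/\delta)}$ as in Hall et al. (2013); all function names are mine:

```python
import numpy as np

def rbf(A, B, lengthscale=1.0):
    """Unit-variance RBF kernel, so |k| <= 1 as required above."""
    sqdist = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * sqdist / lengthscale ** 2)

def dp_gp_mean(X, y, X_star, d, epsilon, delta, lengthscale=1.0, noise=1e-2):
    """Release a DP version of the GP posterior mean at X_star by adding a
    scaled sample from the GP prior (a sketch of the approach above)."""
    K = rbf(X, X, lengthscale) + noise * np.eye(len(X))
    Kinv = np.linalg.inv(K)
    mean = rbf(X_star, X, lengthscale) @ Kinv @ y

    # RKHS sensitivity bound: Delta = d * ||K^{-1}||_inf (max absolute row sum)
    Delta = d * np.abs(Kinv).sum(axis=1).max()
    c = np.sqrt(2.0 * np.log(2.0 / delta))  # c(delta), as in Hall et al. (2013)

    # Scaled sample from the GP prior, evaluated at the test points
    prior_cov = rbf(X_star, X_star, lengthscale) + 1e-8 * np.eye(len(X_star))
    g = np.random.multivariate_normal(np.zeros(len(X_star)), prior_cov)
    return mean + (c * Delta / epsilon) * g
```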

Applied to GPs

This 'works' in that it allows DP predictions...

But to avoid adding too much noise, the value of $\varepsilon$ must be made very large (here it is 100!)

Inducing Inputs

Using sparse methods (i.e. inducing inputs) can help reduce the sensitivity a little.

Effect of perturbation

Previously I mentioned that the noise is sampled from the GP's prior.

This is not necessarily the most 'efficient' covariance to use.

Effect of perturbation

Cloaking

Left: ideal covariance. Right: actual covariance.

Cloaking

Hall et al. (2013) also presented a bound on vectors.

We need to find a bound ($\Delta$) on the scale of the output change, in terms of its Mahalanobis distance with respect to the added noise covariance.

$\sup_{D \sim {D'}} ||M^{-1/2} (\mathbf{y}_D - \mathbf{y}_{D'})||_2 \leq \Delta$

Then use this to add noise to our vector $\mathbf{v}_D$:

$\tilde{\mathbf{v}}_D = \mathbf{v}_D + \frac{c(\delta)\Delta}{\varepsilon}Z$, where $Z$ is a sample of Gaussian noise with covariance $M$.
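In Python, a sketch (taking $c(\delta) = \sqrt{2\ln(2/\delta)}$ as in Hall et al. (2013); names are mine):

```python
import numpy as np

def release_dp_vector(v, M, Delta, epsilon, delta):
    """Add noise with covariance M, scaled by c(delta) * Delta / epsilon,
    to release the vector v (e.g. a set of GP mean predictions)."""
    c = np.sqrt(2.0 * np.log(2.0 / delta))
    Z = np.random.multivariate_normal(np.zeros(len(v)), M)
    return v + (c * Delta / epsilon) * Z
```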

Cloaking

Intuitively, we want to construct $M$ so that it has the greatest covariance in those directions most affected by changes in training points, so that it will be best able to mask those changes.

The posterior mean predictions are,

$\mathbf{y}_* = K_{*f} K^{-1} \mathbf{y}$

The effect of perturbing each training point on each test point is represented in the cloaking matrix, $C = K_{*f} K^{-1}$
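For example (a sketch; here $K$ is the training covariance, including any noise term):

```python
import numpy as np

def cloaking_matrix(K_star_f, K):
    """C = K_{*f} K^{-1}: column i describes how a perturbation of training
    output i moves every test prediction."""
    return K_star_f @ np.linalg.inv(K)
```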

Cloaking

We assume we only need to protect a change in one training point's output at a time.

Recall that neighbouring datasets differ in just one element of $\mathbf{y}$, by at most $d$. The change in the released predictions is then $C(\mathbf{y} - \mathbf{y}^\prime) = \mathbf{c}_i (y_i - y_i^\prime)$, where $\mathbf{c}_i$ is the column of $C$ for the perturbed training point.

Substituting this into the bound above gives $\Delta = d \, \sup_i ||M^{-1/2} \mathbf{c}_i||_2$.

We then choose $M$ by optimisation, so that as little noise as possible is added while still satisfying this bound.
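A sketch of the resulting sensitivity computation in Python (names are mine; the optimisation of $M$ itself is not shown):

```python
import numpy as np

def cloaking_sensitivity(C, M, d=1.0):
    """Delta = d * max_i ||M^{-1/2} c_i||_2, where c_i is the i-th column of
    the cloaking matrix C and only one output may change, by at most d."""
    Minv = np.linalg.inv(M)
    quad = np.einsum('ji,jk,ki->i', C, Minv, C)  # c_i^T M^{-1} c_i for each i
    return d * np.sqrt(quad.max())
```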

Example: House prices

Example: citibike


The go-to book on differential privacy, by Dwork and Roth:
Dwork, Cynthia, and Aaron Roth. "The algorithmic foundations of differential privacy." Foundations and Trends in Theoretical Computer Science 9.3-4 (2014): 211-407.

I found this paper allowed me to start applying DP to GPs:
Hall, Rob, Alessandro Rinaldo, and Larry Wasserman. "Differential privacy for functions and functional data." The Journal of Machine Learning Research 14.1 (2013): 703-727.

Articles about the Massachusetts privacy debate:
Barth-Jones, Daniel C. "The 're-identification' of Governor William Weld's medical information: a critical re-examination of health data identification risks and privacy protections, then and now." Then and Now (June 4, 2012) (2012).

Ohm, Paul. "Broken promises of privacy: Responding to the surprising failure of anonymization." UCLA Law Review 57 (2010): 1701.

Narayanan, Arvind, and Edward W. Felten. "No silver bullet: De-identification still doesn’t work." White Paper (2014).

Howell, N. "Data from a partial census of the !Kung San, Dobe, 1967-1969." https://public.tableau.com/profile/john.marriott#!/vizhome/kung-san/Attributes, 1967.

Images used: Boston Globe: Mass Mutual, Weld. Harvard: Sweeney. Rich on flickr: Sheffield skyline.