Differential Privacy Study Group, March 2017
Abadi, M., Chu, A., Goodfellow, I., McMahan, H.B., Mironov, I., Talwar, K. and Zhang, L., 2016, October. Deep learning with differential privacy. In Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security (pp. 308-318). ACM.
Deep Learning with Differential Privacy
Motivation

Abadi, M. et al. Deep learning with differential privacy. 2016
Fredrikson, M., Jha, S. and Ristenpart, T., 2015, October. Model inversion attacks that exploit confidence information and basic countermeasures. In Proceedings of the 22nd ACM SIGSAC Conference on Computer and Communications Security (pp. 1322-1333). ACM.
(ε, δ)-differential privacy: a randomized mechanism M is (ε, δ)-differentially private if, for any two adjacent datasets d and d′ and any set S of outcomes, Pr[M(d) ∈ S] ≤ e^ε · Pr[M(d′) ∈ S] + δ.
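The (ε, δ) guarantee is typically realized by the classic Gaussian mechanism. A minimal sketch (standard DP machinery, not code from the paper; names are illustrative):

```python
import math
import numpy as np

def gaussian_mechanism(value, sensitivity, eps, delta, rng=None):
    """Release `value` with (eps, delta)-DP via the Gaussian mechanism:
    sigma >= sqrt(2 ln(1.25/delta)) * sensitivity / eps (valid for eps < 1)."""
    rng = np.random.default_rng() if rng is None else rng
    sigma = math.sqrt(2.0 * math.log(1.25 / delta)) * sensitivity / eps
    return value + rng.normal(0.0, sigma, size=np.shape(value))

# Example: privatize the mean of n values in [0, 1]; replacing one
# record changes the mean by at most 1/n, so sensitivity = 1/n.
data = np.random.default_rng(0).uniform(size=1000)
noisy_mean = gaussian_mechanism(data.mean(), sensitivity=1.0 / len(data),
                                eps=0.5, delta=1e-5)
```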
Theorem 1: Abadi, M. et al. Deep learning with differential privacy. 2016
Algorithm
Algorithm 1: Abadi, M. et al. Deep learning with differential privacy. 2016

At each step, a lot of examples is sampled from the N training points and a per-example gradient g_t(x_i) is computed for each sampled example (e.g. g_t(x_3), g_t(x_8), g_t(x_11)). Each per-example gradient is clipped to a fixed ℓ2 norm before Gaussian noise is added to their sum.

Clipping is also seen in normal SGD for non-privacy reasons — but then done on the batch level rather than per example.

Note the paper's distinction between a lot (the group of examples sampled for one noisy gradient step, used in the privacy analysis) and a batch (the group of examples processed together for computational efficiency).
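The clip-then-noise step of Algorithm 1 can be sketched in NumPy. This is a toy reconstruction, not the paper's code: `grads` stands in for the per-example gradients of one lot, C is the clipping norm, and sigma the noise multiplier.

```python
import numpy as np

def dp_sgd_step(params, grads, C=1.0, sigma=1.0, lr=0.1, rng=None):
    """One noisy SGD step in the style of Algorithm 1 (Abadi et al., 2016).

    grads: array of shape (lot_size, dim), one gradient per example.
    Each gradient is clipped to l2 norm at most C, the clipped gradients
    are summed, Gaussian noise of scale sigma * C is added, and the
    noisy sum is averaged over the lot before the descent step.
    """
    rng = np.random.default_rng() if rng is None else rng
    norms = np.linalg.norm(grads, axis=1, keepdims=True)
    clipped = grads / np.maximum(1.0, norms / C)   # g / max(1, ||g||_2 / C)
    noisy_sum = clipped.sum(axis=0) + rng.normal(0.0, sigma * C,
                                                 size=grads.shape[1])
    return params - lr * noisy_sum / grads.shape[0]

rng = np.random.default_rng(0)
grads = rng.normal(size=(16, 4))   # toy lot of 16 per-example gradients
params = dp_sgd_step(np.zeros(4), grads, C=1.0, sigma=1.0, rng=rng)
```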
Implementation
```python
class DPSGD_Optimizer():
    def __init__(self, accountant, sanitizer):
        self._accountant = accountant
        self._sanitizer = sanitizer

    def Minimize(self, loss, params, batch_size, noise_options):
        # Accumulate privacy spending before computing
        # and using the gradients.
        priv_accum_op = self._accountant.AccumulatePrivacySpending(
            batch_size, noise_options)
        with tf.control_dependencies(priv_accum_op):
            # Compute per example gradients
            px_grads = per_example_gradients(loss, params)
            # Sanitize gradients
            sanitized_grads = self._sanitizer.Sanitize(
                px_grads, noise_options)
            # Take a gradient descent step
            return apply_gradients(params, sanitized_grads)

def DPTrain(loss, params, batch_size, noise_options):
    accountant = PrivacyAccountant()
    sanitizer = Sanitizer()
    dp_opt = DPSGD_Optimizer(accountant, sanitizer)
    sgd_op = dp_opt.Minimize(
        loss, params, batch_size, noise_options)
    eps, delta = (0, 0)
    # Carry out the training as long as the privacy
    # is within the pre-set limit.
    while within_limit(eps, delta):
        sgd_op.run()
        eps, delta = accountant.GetSpentPrivacy()
```
Figure 1: Abadi, M. et al. Deep learning with differential privacy. 2016
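The DPTrain loop above depends on a privacy accountant. The paper uses the moments accountant; a toy sketch using only basic (additive) composition shows the same control flow (class and method names here are illustrative, not the repo's API):

```python
class SimpleAccountant:
    """Toy accountant using basic composition: spending (eps_i, delta_i)
    per step simply adds up. The paper's moments accountant gives much
    tighter bounds; this only illustrates the bookkeeping."""
    def __init__(self):
        self.eps = 0.0
        self.delta = 0.0

    def accumulate(self, step_eps, step_delta):
        self.eps += step_eps
        self.delta += step_delta

    def get_spent_privacy(self):
        return self.eps, self.delta

def train(num_steps, step_eps, step_delta, eps_limit, delta_limit):
    acct = SimpleAccountant()
    steps_run = 0
    for _ in range(num_steps):
        eps, delta = acct.get_spent_privacy()
        if eps + step_eps > eps_limit or delta + step_delta > delta_limit:
            break                        # budget exhausted: stop training
        acct.accumulate(step_eps, step_delta)
        steps_run += 1                   # (a noisy SGD step would go here)
    return steps_run, acct.get_spent_privacy()

steps, (eps, delta) = train(1000, step_eps=0.01, step_delta=1e-6,
                            eps_limit=1.0, delta_limit=1e-4)
```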
Code on GitHub:
• More logic for dealing with batches
• Two accountants: AmortizedAccountant and GaussianMomentsAccountant
• Per-example gradient code (including for convolutional layers)
• MNIST example

Also has code for Semi-supervised Knowledge Transfer for Deep Learning from Private Training Data: https://github.com/tensorflow/models/tree/master/differential_privacy
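Per-example gradients (rather than the usual batch-averaged gradient) are the expensive ingredient the repo provides. For a linear least-squares loss they have a closed form; a small NumPy illustration (not the repo's code):

```python
import numpy as np

def per_example_grads(X, y, w):
    """Gradients of the per-example squared losses 0.5*(x_i.w - y_i)^2
    with respect to w: one row per example."""
    residuals = X @ w - y                # shape (n,)
    return residuals[:, None] * X        # shape (n, dim)

rng = np.random.default_rng(1)
X = rng.normal(size=(8, 3))
y = rng.normal(size=8)
w = np.zeros(3)
px = per_example_grads(X, y, w)
# Their mean equals the ordinary batch gradient of the average loss.
batch_grad = X.T @ (X @ w - y) / len(y)
```

In DP-SGD the per-example rows are what get clipped individually; averaging first (as ordinary SGD does) would destroy the per-example sensitivity bound.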
Results
Figure 2: Abadi, M. et al. Deep learning with differential privacy. 2016
Figure 3: Abadi, M. et al. Deep learning with differential privacy. 2016
“compare to 98.3% without privacy”
Figure 4: Abadi, M. et al. Deep learning with differential privacy. 2016
Figure 5: Abadi, M. et al. Deep learning with differential privacy. 2016
CIFAR-10: the convolutional layers are fixed from pre-training on CIFAR-100, and the remaining layers are retrained (86% accuracy for full training vs. 80% accuracy for retraining).
Figure 6: Abadi, M. et al. Deep learning with differential privacy. 2016
Hall, R., Rinaldo, A. and Wasserman, L., 2013. Differential privacy for functions and functional data. Journal of Machine Learning Research, 14(Feb), pp.703-727.
Differential Privacy for Functions and Functional Data
Motivation
1. Useful if results are naturally function-valued
2. May want a data summary that is a function
Proofs
Hall, R. et al. Differential privacy for functions and functional data. 2013
(Rest of the proof of Proposition 3 skipped.)
Applied to kernel density estimates (KDEs)
Hall, R. et al. Differential privacy for functions and functional data. 2013
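Hall et al.'s recipe for a private KDE is to add a scaled Gaussian-process sample whose covariance is the same Gaussian kernel the KDE uses. A sketch on a 1-D grid; the sensitivity `Delta` below is a placeholder argument (the paper derives the exact RKHS sensitivity bound, which depends on n and the bandwidth h):

```python
import numpy as np

def private_kde(data, grid, h, eps, delta, Delta, rng=None):
    """Evaluate a Gaussian KDE on `grid`, then add a Gaussian-process
    sample with Gaussian-kernel covariance, scaled by c(delta)*Delta/eps
    with c(delta) = sqrt(2 log(2/delta)), following Hall et al. (2013).
    `Delta` must be the RKHS sensitivity of the KDE (see the paper)."""
    rng = np.random.default_rng() if rng is None else rng
    # KDE evaluated on the grid
    diffs = grid[:, None] - data[None, :]
    f_hat = np.exp(-diffs**2 / (2 * h**2)).sum(axis=1)
    f_hat /= len(data) * h * np.sqrt(2 * np.pi)
    # GP noise with covariance K(s, t) = exp(-(s - t)^2 / (2 h^2))
    K = np.exp(-(grid[:, None] - grid[None, :])**2 / (2 * h**2))
    K += 1e-8 * np.eye(len(grid))        # jitter for numerical stability
    noise = rng.multivariate_normal(np.zeros(len(grid)), K)
    c = np.sqrt(2 * np.log(2 / delta))
    return f_hat + (c * Delta / eps) * noise

rng = np.random.default_rng(0)
data = rng.normal(size=200)
grid = np.linspace(-3, 3, 50)
f_priv = private_kde(data, grid, h=0.3, eps=1.0, delta=1e-4,
                     Delta=0.05, rng=rng)
```

Because the noise is a draw from a GP with the kernel's own covariance, the released object is still a smooth function, which is the point of the functional-data construction.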
Plots: privatized KDEs, varying the privacy parameters α and β.
Demo (time permitting): https://gist.github.com/john-bradshaw/e63d2a20537beda035b32224a1be8831
References
Abadi, M., Chu, A., Goodfellow, I., McMahan, H.B., Mironov, I., Talwar, K. and Zhang, L., 2016, October. Deep learning with differential privacy. In Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security (pp. 308-318). ACM.
Hall, R., Rinaldo, A. and Wasserman, L., 2013. Differential privacy for functions and functional data. Journal of Machine Learning Research, 14(Feb), pp.703-727.
Appendix