

Research Seminar at the Institute of Applied Statistics

March 17th - Stefan Rass, Chair of Secure Systems, LIT Secure and Correct Systems Lab, JKU: On Privacy in Machine Learning by Plausible Deniability


Abstract: When a machine learning model is trained from data, the data may be subject to security requirements and may even be classified as sensitive. If the trained model is intended for use by untrusted parties, this raises the question of how much information about the training data can be extracted from the model once it is given away. The talk presents two results in this regard, based on the security notion of plausible deniability. First, we show that a model of finite size retains a nonzero residual entropy whenever the training data exceeds a (model-dependent) size threshold. Second, we show that for a certain class of models and any artificially chosen training data, we can craft a topological norm that induces an error metric under which training recovers exactly the given model. The order of quantifiers is what enables plausible deniability here: for any given model, we can claim it to have arisen from an arbitrary training set, which may have any distribution and be completely unrelated to the original sensitive training data. We illustrate the method on examples from normal and logistic regression and on some examples of neural networks, and discuss the practical implications of these results.
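To convey the flavor of the second result, here is a toy numerical sketch (my own illustration, not the construction from the talk): for one-dimensional least squares through the origin with per-sample weights, the fitted slope is sum(w*x*y) / sum(w*x*x). For a claimed slope theta_star, choosing the weights so that the weighted "pulls" x*(y - theta_star*x) cancel makes the weighted fit return exactly theta_star, even when theta_star is far from the unweighted fit. The function name and the restriction to weighted L2 are assumptions for this sketch; the talk's topological norms are more general.

```python
import numpy as np

def deniable_weights(x, y, theta_star):
    """Craft positive per-sample weights so that weighted least squares
    through the origin fits exactly theta_star (hypothetical helper;
    requires the per-sample pulls to have mixed signs)."""
    r = x * (y - theta_star * x)   # each sample's pull on the slope
    pos, neg = r[r > 0].sum(), -r[r < 0].sum()
    if pos == 0 or neg == 0:
        raise ValueError("theta_star not reachable with positive weights")
    w = np.ones_like(x)
    w[r > 0] = neg                 # rescale so positive and negative
    w[r < 0] = pos                 # pulls cancel exactly
    return w

x = np.array([1.0, 2.0, 3.0])
y = np.array([5.0, 1.0, 9.0])
ols = np.sum(x * y) / np.sum(x * x)        # unweighted fit ~ 2.43
theta_star = 4.0                           # a quite different "model"
w = deniable_weights(x, y, theta_star)
fit = np.sum(w * x * y) / np.sum(w * x * x)
print(fit)                                 # 4.0: the crafted metric
                                           # "explains" theta_star
```

With nonnegative weights the reachable slopes are limited to the range of the ratios y_i/x_i; lifting such restrictions is precisely where the more general norms of the talk come in.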


Time & date

March 17, 2022

15:30 - 17:00



Location

S2 Z74, Science Park 2