Institute of Computational Perception

How Does Data Augmentation Help Adversarial Robustness?

CP researcher Hamid Eghbal-zadeh gave an invited talk at the Cyber-Physical Systems group at TU Wien.

In this talk, we first look at how different inductive biases can be incorporated into machine learning systems, and what the consequences of such biases are.
We then review some of the widely used inductive biases in deep learning models, and try to understand how they help models improve their prediction performance.
More specifically, we look at popular data augmentation techniques and the inductive biases they introduce.
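
To make this concrete, below is a minimal Python/NumPy sketch of one popular augmentation, mixup, whose inductive bias is explicit: it encourages the model to behave linearly between training examples. This is an illustration only, not necessarily one of the techniques covered in the talk; the function name and parameters are our own.

import numpy as np

def mixup(x1, y1, x2, y2, alpha=0.2, rng=None):
    # Draw a mixing coefficient from a Beta distribution, as in the
    # original mixup formulation, and blend both inputs and labels.
    rng = np.random.default_rng() if rng is None else rng
    lam = rng.beta(alpha, alpha)
    x = lam * x1 + (1.0 - lam) * x2
    y = lam * y1 + (1.0 - lam) * y2  # labels assumed one-hot
    return x, y

# Blend two toy examples with one-hot labels.
x_mixed, y_mixed = mixup(np.array([1.0, 0.0]), np.array([1.0, 0.0]),
                         np.array([0.0, 1.0]), np.array([0.0, 1.0]))
print(x_mixed, y_mixed)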
Further, we look at how the decision boundaries of deep neural networks can make them more vulnerable to adversarial attacks, and how this vulnerability can be measured systematically and objectively.
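
One simple, objective proxy for such vulnerability is the distance from an input to the decision boundary: inputs that lie close to the boundary can be flipped by small perturbations. For a linear classifier this distance has a closed form, sketched below; for deep networks it is usually estimated instead, e.g., as the smallest adversarial perturbation that changes the prediction. This sketch is our own illustration, not the measurement framework proposed in the talk.

import numpy as np

def margin_to_boundary(w, b, x):
    # Exact distance from x to the hyperplane w @ x + b = 0.
    # A small margin means a small perturbation suffices to flip the label.
    return abs(w @ x + b) / np.linalg.norm(w)

w, b = np.array([2.0, -1.0]), 0.5
for x in (np.array([0.1, 0.3]), np.array([2.0, -2.0])):
    print(x, "margin:", margin_to_boundary(w, b, x))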
Finally, we look at how data augmentation techniques affect the decision boundaries of deep neural networks from two different aspects: 1) incorporating an inductive bias, and 2) incorporating stochasticity into the training data.
Using tools from statistics, we disentangle these two factors and analyze their effects independently, using our proposed analysis framework.
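
As a rough illustration of how the two factors can be separated experimentally (the talk's actual framework is not described here, and the setup below is hypothetical): one can contrast training on a fixed, pre-generated augmented dataset, which keeps the inductive bias but removes per-epoch randomness, with re-sampling augmentations every epoch, which adds stochasticity on top of the bias.

import numpy as np

rng = np.random.default_rng(0)
x_train = rng.normal(size=(100, 2))

def augment(x, rng):
    # Toy augmentation: additive Gaussian jitter (a stand-in for
    # crops, noise injection, etc.).
    return x + rng.normal(scale=0.1, size=x.shape)

# (a) Inductive bias only: augmentations drawn once, then fixed,
#     so every epoch sees the identical augmented dataset.
fixed_augmented = augment(x_train, rng)

# (b) Bias + stochasticity: fresh augmentations every epoch, so the
#     training data itself varies from epoch to epoch.
for epoch in range(3):
    stochastic_augmented = augment(x_train, rng)
    # ...train on fixed_augmented vs. stochastic_augmented and
    #    compare the resulting decision boundaries...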