CP researchers present their latest work on Trustworthy Machine Learning

Despite the successes of Deep Neural Networks in many domains, these models have been shown to be very brittle when confronted with adversarial examples: instances with small, intentional perturbations that cause the models to make false predictions.
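
To make the notion concrete, below is a minimal sketch of one standard way to construct such perturbations, the Fast Gradient Sign Method (FGSM, Goodfellow et al., 2015). The model, loss, and step size epsilon are illustrative assumptions, not the setup studied in the paper.

```python
# Illustrative FGSM sketch (assumed setup, not the paper's method).
import torch
import torch.nn as nn

def fgsm_example(model: nn.Module, x: torch.Tensor, y: torch.Tensor,
                 epsilon: float = 0.03) -> torch.Tensor:
    """Return x perturbed by epsilon in the direction that increases the loss."""
    x = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x), y)
    loss.backward()  # fills x.grad with the gradient of the loss w.r.t. the input
    # A small step along the sign of the gradient is often enough to flip the prediction.
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()  # keep inputs in a valid range
```

With a trained classifier, model(x_adv).argmax(1) will often disagree with the true label y, even though x_adv is nearly indistinguishable from x.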
In their latest work, CP researchers Hamid Eghbal-zadeh, Khaled Koutini, Paul Primus, Verena Haunschmid, and Gerhard Widmer look for the causes of this brittleness by studying the inductive biases commonly used in Deep Learning.
They investigate Data Augmentation, a widely used deep learning technique that extends the training data by drawing on inductive biases and domain expertise.
Their study reveals that although these methods have been proposed to improve performance, they can result in severe adversarial vulnerabilities.
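
For illustration, here is a sketch of a typical augmentation pipeline of the kind such studies examine, built with torchvision. The specific transforms are assumptions chosen to show how augmentation encodes invariances; they are not the augmentations analysed in the paper.

```python
# Illustrative augmentation pipeline (assumed transforms, not the paper's setup).
from torchvision import transforms

train_transform = transforms.Compose([
    transforms.RandomCrop(32, padding=4),  # encodes: small translations do not change the label
    transforms.RandomHorizontalFlip(),     # encodes: mirror images share the label
    transforms.ToTensor(),
])
```

Each transform is an inductive bias: it tells the model which input variations should leave the prediction unchanged, which is exactly the kind of assumption whose side effects the study investigates.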
Read the full story via the links below:

View the paper (external link)

Watch the talk (external link)

See the workshop (external link)