We are a (still) young and growing institute with innovative projects and motivated young researchers. The focus of our research and teaching is on Artificial Intelligence and Machine Learning.
We develop and study computational models and algorithms that permit computers to perceive and 'understand' aspects of the external world, where we interpret 'perception' in the widest sense of the word, as the extraction of useful high-level information and knowledge from complex, possibly low-level data (audio, video, images, sensor data, texts, or even the Internet).
This requires fundamental and applied research in AI, machine learning, pattern recognition, text/data/social media mining, signal processing, and statistical data modeling and classification.
Our current research has a particular focus on intelligent audio and music processing. Further research topics include biometrics, cryptography, personalization, and recommender systems.
Our goal is to offer state-of-the-art research and teaching in this area, and to provide a teaching environment that permits students to get involved in real research projects as early as possible.
CP submission wins the "Judges Award" in DCASE 2020 Challenge
"Low-Complexity Cross-Device Acoustic Scene Classification with RF-Regularized CNNs" by Khaled Koutini, Florian Henkel, Hamid Eghbal-zadeh, and Gerhard Widmer wins the "Judges Award" in the DCASE 2020 Challenge.
CP researchers receive Best Demo Award
Markus Schedl, together with Michael Mayr and Peter Knees, receives the Best Demo Award at the ACM International Conference on Multimedia Retrieval (ICMR) for our music exploration interface "Music Tower Blocks". The paper can be found here.
How Does Data Augmentation Help Adversarial Robustness?
In this talk, we first look at how different inductive biases can be incorporated into machine learning systems, and what the consequences of such inductive biases are.
We then review some of the widely used inductive biases in deep learning models and try to understand how they help models improve their prediction performance. More specifically, we look at popular data augmentation techniques and the inductive biases they introduce.
Further, we look at how the decision boundaries of deep neural networks can make them more vulnerable to adversarial attacks, and how this vulnerability can be systematically and objectively measured.
Finally, we look at how data augmentation techniques affect the decision boundaries of deep neural networks from two different aspects: 1) incorporating an inductive bias, and 2) incorporating stochasticity into the training data.
Using tools from statistics, we disentangle these factors, and try to analyze their effects independently, using our proposed analysis framework.
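To make the two aspects mentioned above concrete, here is a minimal, hypothetical sketch (not from the talk itself) of a data augmentation routine in which one operation encodes an inductive bias (label invariance under horizontal mirroring) and another injects pure stochasticity (additive noise); the function and parameter names are illustrative assumptions.

```python
import numpy as np

def augment(image, rng):
    """Illustrative augmentation of a single image array in [0, 1].

    Horizontal flipping encodes an inductive bias: the label is
    assumed invariant under mirroring. Additive Gaussian noise, by
    contrast, only injects stochasticity into the training data.
    """
    if rng.random() < 0.5:
        # inductive bias: mirror the image along its width axis
        image = image[:, ::-1]
    # stochasticity: small additive Gaussian perturbation
    noise = rng.normal(0.0, 0.05, size=image.shape)
    return np.clip(image + noise, 0.0, 1.0)

# Usage: augment a dummy 4x4 grayscale image
rng = np.random.default_rng(0)
img = np.full((4, 4), 0.5)
aug = augment(img, rng)
```

Separating the two operations like this is what makes it possible, at least conceptually, to study the effect of the bias and the effect of the randomness independently.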