CP researcher invited to speak at the Amirkabir Artificial Intelligence Summer Summit
"On the Inductive Biases in Data Augmentation and Adversarial Robustness"
Data augmentation techniques have become standard practice in deep learning, as they have been shown to greatly improve the generalisation abilities of models. These techniques rely on different inductive biases and ideas, such as invariance-preserving transformations (e.g., expert-defined augmentations), statistical heuristics (e.g., Mixup), and learning the data distribution (e.g., GANs). However, in adversarial settings it remains unclear under what conditions such data augmentation methods reduce, or even worsen, the misclassification risk. In this talk, I discuss the effect of the different inductive biases imposed by data augmentation techniques on the adversarial risk, the smoothness of the decision boundaries, and the influence of the augmented data on the final models.
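As an illustration of the statistical-heuristic family mentioned in the abstract, the following is a minimal sketch of Mixup, which augments training data with convex combinations of example pairs and their (one-hot) labels. The function name, shapes, and default `alpha` here are illustrative choices, not taken from the talk itself.

```python
import numpy as np

def mixup(x1, y1, x2, y2, alpha=0.2, rng=None):
    """Mix two examples and their one-hot labels with a Beta-drawn weight.

    alpha controls the Beta(alpha, alpha) distribution from which the
    mixing coefficient lambda is sampled; commonly used values lie
    roughly in the 0.1-0.4 range.
    """
    rng = rng or np.random.default_rng()
    lam = rng.beta(alpha, alpha)
    x = lam * x1 + (1.0 - lam) * x2   # convex combination of inputs
    y = lam * y1 + (1.0 - lam) * y2   # same combination of one-hot labels
    return x, y

# Toy usage: mix two 4x4 "images" from different classes.
x1, y1 = np.ones((4, 4)), np.array([1.0, 0.0])
x2, y2 = np.zeros((4, 4)), np.array([0.0, 1.0])
x, y = mixup(x1, y1, x2, y2, alpha=0.2)
```

The resulting soft labels encourage linear behaviour between training examples, which is one of the inductive biases whose effect on adversarial risk the talk examines.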