
AI-SNN - Self-Normalizing Networks as Enabler of Functional Modularity in Large AI Systems

Term: 11/2019 – 10/2021 (24 months)

Topic:
Self-normalizing neural networks (SNNs) have had a large impact on machine learning and have led to successes in many other research fields. With self-normalization, deep feed-forward neural networks can finally be trained effectively and achieve higher predictive performance. However, self-normalization currently cannot be used in other successful architectures, such as recurrent neural networks and residual networks, which are the best-performing modules for visual and auditory perception. We aim to investigate whether self-normalizing deep architectures can serve as robust and stable modules for large AI systems. In a large AI system, information passed from one module to another should neither flood the receiving module with information nor starve it of information. Consequently, large AI systems cannot be built if gradients vanish or explode. Exactly this problem is ameliorated by SNNs, which keep gradients constant. The self-normalizing property guarantees that the modules steadily converge to states that are optimal for learning across modules, even when they are perturbed by other modules that change during learning.

The goal of this project is to enable robust learning of large AI systems with multiple self-normalizing modules and to evaluate their application in drug discovery, self-driving cars, medical imaging, and web content.
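As a concrete illustration of the mechanism described above, here is a minimal sketch, assuming PyTorch, of the ingredients that make a network self-normalizing in the sense of Klambauer et al. (2017): SELU activations, LeCun-normal weight initialization, and alpha dropout. The helper name snn_block and the layer sizes are illustrative choices, not taken from the project.

```python
import torch
import torch.nn as nn

def snn_block(in_features: int, out_features: int, dropout: float = 0.0) -> nn.Sequential:
    """One self-normalizing layer: Linear -> SELU -> AlphaDropout."""
    linear = nn.Linear(in_features, out_features)
    # LeCun-normal initialization (std = 1/sqrt(fan_in)) is part of the
    # self-normalizing setup; nonlinearity="linear" gives gain 1, i.e. exactly that.
    nn.init.kaiming_normal_(linear.weight, nonlinearity="linear")
    nn.init.zeros_(linear.bias)
    # AlphaDropout, not plain Dropout, is the variant that preserves self-normalization.
    return nn.Sequential(linear, nn.SELU(), nn.AlphaDropout(p=dropout))

# The self-normalizing fixed point in action: activation statistics stay
# close to zero mean and unit variance even after many layers, so signals
# are neither amplified (exploding) nor attenuated (vanishing) with depth.
with torch.no_grad():
    h = torch.randn(1024, 256)
    for depth in range(1, 51):
        h = snn_block(256, 256)(h)
        if depth in (1, 10, 50):
            print(f"depth {depth:2d}: mean {h.mean():+.3f}, std {h.std():.3f}")
```

The printed statistics should remain near mean 0 and standard deviation 1 at every depth, which is the attracting fixed point the project description refers to when it says that SNNs keep gradients constant.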