LIT Artificial Intelligence Lab

LIT AI Lab / ELLIS Unit Linz Seminar

The LIT AI Lab kicks off the “LIT AI Lab / ELLIS Unit Linz Seminar” in summer 2021; the seminar brings together researchers and students interested in the field of Artificial Intelligence.

You can follow the talks online via Zoom and/or in person (3G rules apply), as detailed below.

Abstract
Flatness of the loss curve is conjectured to be connected to the generalization ability of machine learning models, in particular neural networks. While it has been empirically observed that flatness measures consistently correlate strongly with generalization, it is still an open theoretical problem why and under which circumstances flatness is connected to generalization, in particular in light of reparameterizations that change certain flatness measures but leave generalization unchanged. We investigate the connection between flatness and generalization by relating it to interpolation from representative data, deriving notions of representativeness and feature robustness. These notions allow us to rigorously connect flatness and generalization and to identify conditions under which the connection holds. Moreover, they give rise to a novel but natural relative flatness measure that correlates strongly with generalization, simplifies to ridge regression for ordinary least squares, and solves the reparameterization issue.
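To make the reparameterization issue concrete, here is a minimal, hedged numpy sketch (an illustration only, not the measure defined in the talk): for a toy two-layer linear model f(x) = v * (w . x), rescaling w by a factor a and v by 1/a leaves the function, and hence generalization, unchanged, yet the trace of the loss Hessian, a common flatness measure, changes with a, while the norm-weighted product ||w||^2 * Tr(H) stays invariant. The model, data, and this particular product are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = X @ rng.normal(size=5)

def loss(w, v):
    # mean squared error of the toy two-layer linear model f(x) = v * (w . x)
    return np.mean((v * (X @ w) - y) ** 2)

def hessian_trace(w, v, h=1e-4):
    # finite-difference trace of the loss Hessian with respect to w
    tr = 0.0
    for i in range(w.size):
        e = np.zeros_like(w)
        e[i] = h
        tr += (loss(w + e, v) - 2 * loss(w, v) + loss(w - e, v)) / h ** 2
    return tr

w, v = rng.normal(size=5), 1.0
for a in (1.0, 10.0):
    wa, va = a * w, v / a  # function-preserving reparameterization
    tr = hessian_trace(wa, va)
    print(f"a={a:4.1f}  Tr(H)={tr:9.4f}  ||w||^2*Tr(H)={np.dot(wa, wa) * tr:9.4f}")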

Physical place: Seminar Room SP3 318

Zoom:
https://jku.zoom.us/j/99116069385?pwd=VnZJN3Ntd2wyWGRSSU5HSVY5My9GQT09

Meeting-ID: 991 1606 9385
Password: 0815

Physical place: Seminar Room SP3 318

 

Online via https://jku.zoom.us/j/98246734423?pwd=QXJ4ekJBUkNHb3M0bXFKVEVuQ1h1dz09

 

Abstract
Many real-world problems can be formulated as the task of counting the models of a propositional formula, called #SAT. A model of a formula is an assignment to its variables such that the formula evaluates to true. Related to model counting is model enumeration, or All-SAT, in which the models of a formula are recorded.

In this talk we present our work on the formalization of methods for propositional model counting and enumeration, with a focus on finding short partial models under projection. In a partial model, not all variables of the formula are assigned; a partial model therefore represents a set of total models and enables a concise representation of a formula. Our methods return a disjoint Sum-of-Products (DSOP), i.e., a formula in disjunctive normal form (DNF) whose disjuncts are pairwise disjoint. This ensures that every model is enumerated only once. We also devised a variant in which this uniqueness constraint is relaxed, serving tasks in which repetitions cause no harm; this relaxation allows even shorter models to be detected.

The focus of our work is on formalization and proofs. Preliminary results, both theoretical and experimental, show that the methods presented in this thesis enable us to find short partial models. Propositional model counting has been used, e.g., in frequent itemset mining, and there is a substantial amount of work concerned with learning DNFs from examples. The aim of this talk is to identify possible applications of our work in Artificial Intelligence.
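As a rough illustration of the enumeration idea (a deliberately naive brute-force sketch, not the formalized procedures from the talk), the following Python snippet finds a total model of a small CNF, greedily shortens it to a partial model (a cube) that still implies the formula and all previously added blocking clauses, and then blocks that cube; the collected cubes form a disjoint Sum-of-Products. The clause encoding and the greedy shortening order are illustrative assumptions.

from itertools import product

def satisfies(assignment, clauses):
    # assignment maps variable -> bool; clauses are lists of DIMACS-style integers
    return all(any(assignment[abs(l)] == (l > 0) for l in c) for c in clauses)

def implied(partial, clauses, variables):
    # True iff every total extension of the partial assignment satisfies all clauses
    free = [v for v in variables if v not in partial]
    for values in product([False, True], repeat=len(free)):
        if not satisfies({**partial, **dict(zip(free, values))}, clauses):
            return False
    return True

def dsop(clauses, variables):
    blocked, cubes = [], []
    while True:
        # brute-force search for a total model of the formula plus all blocking clauses
        model = next((dict(zip(variables, vals))
                      for vals in product([False, True], repeat=len(variables))
                      if satisfies(dict(zip(variables, vals)), clauses + blocked)), None)
        if model is None:
            return cubes
        # greedily drop variables while the cube still implies formula and blocking clauses
        cube = dict(model)
        for v in variables:
            trial = {k: b for k, b in cube.items() if k != v}
            if implied(trial, clauses + blocked, variables):
                cube = trial
        cubes.append(cube)
        # block exactly this cube, so the next cube is disjoint from it
        blocked.append([-v if b else v for v, b in cube.items()])

# (x1 or x2) and (not x1 or x3): two disjoint cubes covering all four total models
print(dsop([[1, 2], [-1, 3]], [1, 2, 3]))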
 

Video of the Talk: https://youtu.be/Q9R6qm5iv1k

 

To participate, join via the following Zoom link:

https://jku.zoom.us/j/91383246249?pwd=eVcydG1mT1FZazNqUUcxeFFqeEYrdz09

Meeting-ID: 913 8324 6249
Password: 271828


Abstract
In machine learning, and especially in deep learning, there is one algorithm that, together with its many variations, is used almost universally for training large, non-linear models: stochastic gradient descent (SGD).
Applying an SGD method to minimize an objective gives rise to a discrete-time process of estimated parameter values. While the mathematical description is fairly simple, the behavior of the algorithm generally is not. In order to better understand the dynamics of the estimates, it is reasonable to approximate the discrete-time process by the solution of a differential equation. The resulting gradient flow equation describes the mean evolution of the SGD process very well; however, it does not account for the noise inherent in the SGD method.
For example, it does not distinguish between different mini-batch sizes, or between an infinite stream of fresh data and a finite sample of data. To rectify this issue, one can introduce a noise term into the gradient flow equation, turning it into a so-called stochastic differential equation. A solution of the resulting equation is called a diffusion approximation to SGD.

In this talk we describe how to explicitly calculate and compare the errors of gradient flow and the so-called first-order diffusion approximation. Further, we show that one can find an even better, second-order diffusion approximation. Finally, some applications of diffusion approximations are explored.
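A hedged toy comparison may help fix ideas: for the one-dimensional objective f(theta) = theta^2 / 2 with additive gradient noise (an illustrative assumption, not the setting of the talk), the sketch below runs SGD, the explicit-Euler discretization of the gradient flow, and an Euler-Maruyama discretization of a first-order diffusion approximation. The gradient flow matches the mean of the SGD iterates but has no spread, while the diffusion approximation also reproduces the spread.

import numpy as np

rng = np.random.default_rng(1)
eta, steps, runs = 0.1, 500, 5000   # step size, iterations, independent repetitions
sigma, theta0 = 1.0, 2.0            # gradient-noise std and starting point
# toy objective f(theta) = theta**2 / 2, so the exact gradient is theta itself

# SGD: theta <- theta - eta * (gradient + noise)
sgd = np.full(runs, theta0)
for _ in range(steps):
    sgd = sgd - eta * (sgd + sigma * rng.normal(size=runs))

# gradient flow d(theta)/dt = -gradient, explicit Euler with dt = eta (deterministic)
flow = theta0
for _ in range(steps):
    flow = flow - eta * flow

# first-order diffusion approximation d(theta) = -gradient dt + sqrt(eta)*sigma dW,
# discretized with Euler-Maruyama, one SGD step corresponding to time dt = eta
diffusion = np.full(runs, theta0)
for _ in range(steps):
    dW = np.sqrt(eta) * rng.normal(size=runs)
    diffusion = diffusion - eta * diffusion + np.sqrt(eta) * sigma * dW

print(f"gradient flow: value {flow:+.3f} (no spread)")
print(f"SGD:           mean {sgd.mean():+.3f}, std {sgd.std():.3f}")
print(f"diffusion:     mean {diffusion.mean():+.3f}, std {diffusion.std():.3f}")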

Abstract: Despite the recent successful application of Artificial Intelligence (AI) to games, the performance of cooperative agents in imperfect information games is still far from surpassing humans. Cooperating with teammates whose play-styles are not previously known poses additional challenges to current state-of-the-art algorithms. In the Swiss card game Jass, coordination within the two opposing teams is crucial for winning. Since verbal communication is forbidden, the only way to transmit information within the team is through a player’s play-style. This makes the game a particularly suitable subject for continuing research on AI in cooperative games with hidden information. In this work, we analyse the effectiveness and shortcomings of several state-of-the-art algorithms (Monte Carlo Tree Search (MCTS) variants and Deep Neural Networks (DNNs)) at playing the Jass game.
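For readers unfamiliar with how MCTS is adapted to hidden information, here is a small, hedged Python sketch of two of its ingredients: determinization (sampling a concrete deal consistent with what the player can see) and UCB1 child selection. The card encoding and parameters are illustrative assumptions and not the implementation analysed in the talk.

import math
import random

def determinize(my_hand, played, full_deck, num_unseen_to_deal, rng=random):
    # sample one possible assignment of the unseen cards to the hidden hands
    unseen = [c for c in full_deck if c not in my_hand and c not in played]
    return rng.sample(unseen, num_unseen_to_deal)

def ucb1(child_value, child_visits, parent_visits, c=1.4):
    # standard UCB1 score: exploit the average value, explore rarely visited children
    if child_visits == 0:
        return float("inf")
    return child_value / child_visits + c * math.sqrt(math.log(parent_visits) / child_visits)

# usage: sample a deal for one MCTS iteration, then pick the child with the highest score
deal = determinize(my_hand={"HA", "HK"}, played={"S7"},
                   full_deck={"HA", "HK", "HQ", "S7", "S8", "S9"},
                   num_unseen_to_deal=2)
print("sampled hidden cards:", deal)
print("UCB1 example:", ucb1(child_value=3.0, child_visits=5, parent_visits=40))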

 

Zoom link: https://jku.zoom.us/j/98246734423?pwd=QXJ4ekJBUkNHb3M0bXFKVEVuQ1h1dz09

Abstract: Convolutional Neural Networks (CNNs) have dominated the field of computer vision for nearly a decade. In this talk I will present two recent papers that propose new and highly competitive architecture classes for computer vision. In the first part I will present the Vision Transformer model (ViT), which is almost identical to the standard transformer model used in natural language processing, but happens to work surprisingly well for vision applications. In the second part of the talk, I will present the MLP-Mixer model: an all-MLP architecture for vision. It can be seen as a simplified ViT model without the self-attention layers. Nevertheless, it also demonstrates strong results across a wide range of vision applications.
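As a rough, hedged illustration of what “a ViT without self-attention” means, the following numpy sketch implements one untrained MLP-Mixer block on an input of shape (patches, channels): a token-mixing MLP acts across patches and a channel-mixing MLP acts across channels, each with a skip connection. Shapes, initialization, and the GELU approximation are assumptions for the sketch, not the paper’s configuration.

import numpy as np

def gelu(x):
    # tanh approximation of GELU
    return 0.5 * x * (1 + np.tanh(np.sqrt(2 / np.pi) * (x + 0.044715 * x ** 3)))

def layer_norm(x, eps=1e-6):
    return (x - x.mean(-1, keepdims=True)) / (x.std(-1, keepdims=True) + eps)

def mlp(x, w1, w2):
    return gelu(x @ w1) @ w2

def mixer_block(x, token_w1, token_w2, chan_w1, chan_w2):
    # token mixing: transpose so the MLP mixes information across patches
    y = x + mlp(layer_norm(x).T, token_w1, token_w2).T
    # channel mixing: the MLP acts on the channels of each patch independently
    return y + mlp(layer_norm(y), chan_w1, chan_w2)

rng = np.random.default_rng(0)
patches, channels, hidden = 16, 32, 64   # e.g. a 4x4 grid of patch embeddings
x = rng.normal(size=(patches, channels))
weights = [0.02 * rng.normal(size=s) for s in
           [(patches, hidden), (hidden, patches), (channels, hidden), (hidden, channels)]]
print(mixer_block(x, *weights).shape)    # -> (16, 32)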

 

Zoom link: https://jku.zoom.us/j/98246734423?pwd=QXJ4ekJBUkNHb3M0bXFKVEVuQ1h1dz09 (Meeting-ID: 982 4673 4423, Password: 550113)

Abstract: Classical machine learning (ML) provides a potentially powerful approach to solving challenging problems in quantum physics and chemistry. However, the advantages of ML over more traditional methods have not been firmly established. We prove that classical ML algorithms can efficiently predict ground state properties of a physical system, after learning from data obtained by measuring related systems. We also prove that classical ML algorithms can efficiently classify a wide range of quantum phases of matter. Our arguments are based on the concept of a classical shadow, a succinct classical description of a quantum state that can be constructed in feasible quantum experiments and be used to predict many properties of the state.
 
This is joint work with Robert Huang (Caltech), Giacomo Torlai (AWS), Victor Albert (University of Maryland) and John Preskill (Caltech+AWS).
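The classical shadow idea can be sketched for a single qubit measured in uniformly random Pauli bases (a minimal illustration under simplifying assumptions; the actual protocol covers many qubits and uses median-of-means estimation): each measurement of a basis P with outcome b yields the snapshot (I + 3bP)/2, whose average converges to the state, so expectation values of observables can be predicted from the stored snapshots.

import numpy as np

rng = np.random.default_rng(0)
I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)

# true state: the pure state |+> as a density matrix, so <X> = 1 and <Z> = 0
plus = np.array([1, 1], dtype=complex) / np.sqrt(2)
rho = np.outer(plus, plus.conj())

def snapshot(rho):
    # measure a uniformly random Pauli basis, drawing the outcome from the Born rule
    P = [X, Y, Z][rng.integers(3)]
    p_plus = np.real(np.trace(rho @ (I2 + P) / 2))
    b = 1 if rng.random() < p_plus else -1
    # single-qubit classical shadow of this one measurement
    return (I2 + 3 * b * P) / 2

shadows = [snapshot(rho) for _ in range(20000)]

# predict expectation values of several observables from the same stored shadows
for name, O in [("X", X), ("Z", Z)]:
    est = np.mean([np.real(np.trace(O @ s)) for s in shadows])
    print(f"estimated <{name}> = {est:+.3f}")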

 

Zoom link: https://jku.zoom.us/j/98246734423?pwd=QXJ4ekJBUkNHb3M0bXFKVEVuQ1h1dz09 (Meeting-ID: 982 4673 4423, Password: 550113)