Tuesday 5th of May, 2020, 9am
Due to the current COVID-19 situation, the presentations will be via Zoom.
Join the Zoom meeting with Meeting-ID 655 792 5533.
There will be
(A) a 20 min. presentation on current and future research plus 5 min. for discussion and
(B) a 10 min. lecture example on "Monte Carlo Tree Search" followed by 5 min. questions
09:00 Dorothea Baumeister
Collective Decision Making Through Scoring Systems
This talk will give a short introduction to the field of computational social choice, an interdisciplinary research area at the interface between computer science and social choice theory. The use of mechanisms originating from social choice theory in computer science applications, especially in artificial intelligence (e.g., recommender systems and multi-agent systems), raises diverse research questions. A formal mathematical specification and analysis of these mechanisms is required, and different forms of influence gain new importance since the outcome may have major consequences. A central topic in computational social choice is the study of elections, where votes are given as ordered lists over candidates. A common way of determining winners is then to apply some scoring system, where each position is associated with a specific score. This setting is transferable to other situations such as sports tournaments. The design of such systems, i.e., the choice of score values, may have a crucial influence. This talk will summarize some theoretical results for this problem, as well as a case study of data from Formula 1.
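The positional scoring rules described above can be sketched in a few lines. The following is a minimal illustrative example, not taken from the talk; the function name, candidates, and votes are invented, and the score vector (2, 1, 0) is the classical Borda count.

```python
def scoring_winner(rankings, scores):
    """Positional scoring rule: the candidate at position i of a vote
    earns scores[i] points; the candidate with the most points wins."""
    totals = {}
    for vote in rankings:
        for pos, candidate in enumerate(vote):
            totals[candidate] = totals.get(candidate, 0) + scores[pos]
    return max(totals, key=totals.get), totals

# Three voters rank candidates a, b, c; Borda scores (2, 1, 0).
votes = [["a", "b", "c"], ["b", "a", "c"], ["b", "c", "a"]]
winner, totals = scoring_winner(votes, [2, 1, 0])
```

Changing the score vector (e.g., to Formula 1-style points) can change the winner even when the votes stay fixed, which is exactly the design question the abstract raises.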
10:45 Anna Fensel
Artificial Intelligence with Knowledge Graphs: Interoperable, Transparent, Actionable
Modern Artificial Intelligence (AI) systems are increasingly built with machine learning and data analytics techniques, based on massive data sets. They have a heavy impact on human behavior and quality of life, and thus need to deliver a sufficiently transparent and controllable experience for the users. I showcase my work on making AI systems interoperable and explainable, involving knowledge management and communication techniques, sensor data processing, as well as the associated intelligent data value chain production and consumption ecosystems. In such ecosystems, the data and the information can be shared, as envisioned and practiced with semantic technology and linked (open) data. Here, knowledge graphs are becoming a key technology for large-scale information processing systems containing massive collections of dozens of billions of interrelated facts. Examples include the Google Knowledge Graph, dataCommons, DBpedia, YAGO, NELL, and Knowledge Vault, a very large-scale probabilistic knowledge graph created with information extraction methods for unstructured or semi-structured information. In particular, knowledge graphs help to infer new relationships out of existing facts, giving context and meaning to the content, and are used in applications. For example, the data generated by a human sensing system can be semantically represented, shared, and made interoperable and actionable through contracting across numerous AI systems, taking into account the aims and requirements of these systems, as well as the context, provenance, consent, and licensing aspects of the generated data. I demonstrate knowledge graph-based methods in advanced applications from domains such as energy efficiency, building automation and construction, transport, and media.
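The abstract's point about inferring new relationships out of existing facts can be illustrated with a toy rule over subject-predicate-object triples. This is a hedged sketch with invented names and data, not the speaker's system: it computes the transitive closure of a single predicate, one of the simplest inference rules used over knowledge graphs.

```python
def infer_transitive(triples, predicate):
    """Derive new (s, p, o) triples by applying the rule
    (a, p, b) and (b, p, c) => (a, p, c) until a fixpoint is reached."""
    facts = set(triples)
    changed = True
    while changed:
        changed = False
        new = {(s, predicate, o2)
               for (s, p, o) in facts if p == predicate
               for (s2, p2, o2) in facts if p2 == predicate and s2 == o}
        if not new <= facts:
            facts |= new
            changed = True
    return facts

# Toy knowledge graph: two asserted facts.
kg = {("Innsbruck", "locatedIn", "Tyrol"),
      ("Tyrol", "locatedIn", "Austria")}
closure = infer_transitive(kg, "locatedIn")
```

After inference, the triple ("Innsbruck", "locatedIn", "Austria") is derivable even though it was never asserted, which is the "new relationships out of existing facts" idea in miniature.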
12:45 Martina Seidl
Automated Reasoning with Quantifiers
Many reasoning problems emerging from applications in artificial intelligence can be efficiently encoded as quantified Boolean formulas (QBF), the extension of propositional logic with universal and existential quantifiers over the propositional variables. In recent years, much progress has been made in the theory and practice of QBF solving.
In this talk, we give a short tour across the contributions we have made to the field of QBF solving ranging from theoretical results on proof systems, to novel solving techniques, to the co-organization of the annual QBF Evaluation.
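The semantics of quantified Boolean formulas mentioned above can be made concrete with a tiny recursive evaluator. This is an illustrative sketch with invented names, not a real QBF solver (practical solvers avoid this exponential expansion): it expands each quantifier over both truth values of its variable.

```python
def eval_qbf(prefix, matrix, assignment=None):
    """Evaluate a QBF given as a quantifier prefix (list of
    ('forall'|'exists', var) pairs) and a propositional matrix
    (a predicate over a variable assignment)."""
    assignment = dict(assignment or {})
    if not prefix:
        return matrix(assignment)
    quantifier, var = prefix[0]
    results = (eval_qbf(prefix[1:], matrix, {**assignment, var: val})
               for val in (False, True))
    return all(results) if quantifier == "forall" else any(results)

# forall x exists y: (x XOR y) is true: y can always be chosen as "not x".
prefix = [("forall", "x"), ("exists", "y")]
matrix = lambda a: a["x"] != a["y"]
```

Swapping the quantifier order to "exists y forall x" makes the same matrix false, showing why quantifier order matters in QBF.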
14:30 Ansgar Scherp
About Extreme Analyses on Texts
The talk presents the task of eXtreme multi-label classification on very large document corpora and recommender systems for scientific publications based on extremely sparse input data. We look in more detail at a recent paper presented at the International Conference on Learning Representations, where I introduced CMOW (Continual Multiplication of Words), an algorithm for efficient training of sentence embeddings. CMOW complements the successful CBOW (Continuous Bag of Words) model for word embeddings and captures the linguistic properties of texts better than CBOW. Motivated by this, I introduced the hybrid CBOW-CMOW model that combines the strengths of both and improves capturing linguistic properties by 8%.
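The contrast between the two models named above can be sketched in miniature: CBOW-style embeddings sum word vectors (so word order is lost), while CMOW-style embeddings multiply per-word matrices (so order matters). This is a hedged toy illustration of the aggregation step only, with invented helper names and made-up 2x2 data; the actual models learn their vectors and matrices from large corpora.

```python
def cbow(word_vectors):
    """CBOW-style aggregation: componentwise sum of word vectors.
    Order-insensitive: permuting the words gives the same embedding."""
    dim = len(word_vectors[0])
    return [sum(v[i] for v in word_vectors) for i in range(dim)]

def cmow(word_matrices):
    """CMOW-style aggregation: ordered matrix product of per-word
    matrices, so swapping two words changes the result."""
    def matmul(a, b):
        return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
                 for j in range(len(b[0]))] for i in range(len(a))]
    result = word_matrices[0]
    for m in word_matrices[1:]:
        result = matmul(result, m)
    return result

# Two toy word matrices that do not commute.
A = [[1, 1], [0, 1]]
B = [[1, 0], [1, 1]]
```

Because A and B do not commute, cmow([A, B]) differs from cmow([B, A]), while cbow gives identical sums for any word order; this order sensitivity is what lets CMOW capture linguistic properties that CBOW misses.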