Theoretical Concepts of Machine Learning (2VL)

Lecture notes:

PDF (5MB, 2014-03-02)

Slides:

Part1 (7MB)

Part2 (4MB)


Motivation:

Machine learning is concerned with inferring models and relationships by learning from data. Machine learning methods are gaining importance in various fields, such as process modeling, speech and image processing, and so forth. In recent years, bioinformatics has become one of the most prominent application areas of machine learning: the massive amounts of data produced by recent and currently emerging high-throughput biotechnologies offer unprecedented potential, but also pose yet unseen computational challenges for the analysis of biological data. Despite all the potential and successes of machine learning, one has to acknowledge that machine learning methods may produce poor or misleading results if applied inappropriately.

This course provides a look at the theoretical background of machine learning. The goal is to acquaint students with the mathematical theories underlying machine learning methods, giving a more profound understanding of the potential and limits of machine learning. Topics:

  • Generalization error
  • Bias-variance decomposition
  • Error models
  • Model comparisons
  • Estimation theory
  • Statistical learning theory
  • Worst-case and average bounds on the generalization error
  • Structural risk minimization
  • Bayes framework
  • Evidence framework for hyperparameter optimization
  • Optimization techniques
  • Theory of kernel methods
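As a small taste of one of the topics above, the bias-variance decomposition states that the expected squared error of an estimator at a point splits into squared bias, variance, and irreducible noise. A minimal Monte Carlo sketch of this decomposition (illustrative choices throughout: the target function, noise level, and polynomial model are assumptions, not taken from the lecture notes):

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative setup: true function, noise level, model class (all assumed here).
def f(x):
    return np.sin(2 * np.pi * x)

sigma = 0.3        # standard deviation of observation noise
n_train = 20       # training-set size per repetition
n_datasets = 500   # number of independently drawn training sets
degree = 3         # degree of the polynomial model

x0 = 0.5           # test point at which the error is decomposed
predictions = np.empty(n_datasets)

# Repeatedly draw a training set, fit the model, and record its prediction at x0.
for i in range(n_datasets):
    x = rng.uniform(0, 1, n_train)
    y = f(x) + rng.normal(0, sigma, n_train)
    coeffs = np.polyfit(x, y, degree)       # least-squares polynomial fit
    predictions[i] = np.polyval(coeffs, x0)

bias_sq = (predictions.mean() - f(x0)) ** 2  # squared bias of the estimator at x0
variance = predictions.var()                 # variance over training sets
noise = sigma ** 2                           # irreducible noise term

# Expected squared error at x0 = bias^2 + variance + noise
expected_error = bias_sq + variance + noise
print(f"bias^2={bias_sq:.4f}  variance={variance:.4f}  noise={noise:.4f}")
print(f"expected squared error at x0 ~ {expected_error:.4f}")
```

Increasing the polynomial degree in this sketch typically lowers the bias term while raising the variance term, which is the trade-off the decomposition makes precise.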

(Practical course Theoretical Concepts of Machine Learning (1UE))