Institute for Mathematical Methods in Medicine and Data Based Modeling

Research Areas

Biomedical Modelling and Numerical Simulation

In silico models today provide regular support for the investigative work of medical doctors and life scientists, alongside in vivo and in vitro experiments. Medical doctors can benefit from effective and reliable non-invasive, patient-specific instruments that improve diagnosis and prognosis. In return, mathematical and numerical models provide rigorous tools for quantitative analyses with diagnostic and prognostic content, and patient-specific simulations become possible by integrating such models with data and medical images.

Still, problems from biomedical research are extremely complex and challenging from the modeling viewpoint. On the one hand, they are typically characterised by remarkable heterogeneities and multi-scale dynamics, both in space and time. On the other hand, they are imbued with uncertainty, whose primary sources may be input variability (aleatory/irreducible uncertainty), such as the anatomical definition, the tissue characteristics and unknown boundary conditions, or a lack of knowledge (epistemic/reducible uncertainty), such as the modeling assumptions or the influence of yet unknown physical phenomena. A reliable predictive mathematical model should be able to cope soundly with all of these aspects.

From this perspective, our main activities focus on:

  • Numerical approximation of Partial and Fractional Differential Equations
  • Uncertainty quantification
  • Multiscale modeling
  • Patient-specific modeling from medical imaging.
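
The uncertainty quantification mentioned above can be illustrated with the simplest forward-propagation technique: Monte Carlo sampling of an uncertain input through a model. The toy "tissue" model and all parameter values below are hypothetical, chosen only to show the principle; real patient-specific simulations involve far richer PDE models.

```python
import numpy as np

# Toy forward model: output inversely proportional to tissue conductivity.
# Model and parameter values are illustrative only.
def forward_model(conductivity, power):
    return power / conductivity

rng = np.random.default_rng(0)
n_samples = 10_000

# Aleatory input uncertainty: conductivity varies across patients.
conductivity = rng.normal(loc=0.5, scale=0.05, size=n_samples)
power = 1.0  # fixed source term

outputs = forward_model(conductivity, power)

print(f"mean output:  {outputs.mean():.3f}")
print(f"std  output:  {outputs.std():.3f}")
print(f"95% interval: [{np.percentile(outputs, 2.5):.3f}, "
      f"{np.percentile(outputs, 97.5):.3f}]")
```

Even for this one-line model, the output distribution is visibly skewed relative to the input, which is exactly the kind of effect a rigorous UQ pipeline must capture.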

The current fields of application are:

  • Endocardial radiofrequency ablation
  • Hemodynamics and oxygen transport
  • Fractional diffusion and applications to electrocardiology
  • Glioma modeling
  • Infectious diseases and ecology.
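
For the infectious-disease application, the classical starting point is a compartmental SIR model. The sketch below integrates it with explicit Euler steps; the parameter values are illustrative only and do not describe any specific disease studied at the institute.

```python
# Minimal SIR compartmental model for disease spread, integrated with
# explicit Euler steps. Parameter values are illustrative only.
def simulate_sir(beta=0.3, gamma=0.1, s0=0.99, i0=0.01, days=160, dt=0.1):
    s, i, r = s0, i0, 0.0
    history = []
    for _ in range(int(days / dt)):
        ds = -beta * s * i          # susceptibles become infected
        di = beta * s * i - gamma * i  # infections minus recoveries
        dr = gamma * i              # recoveries
        s, i, r = s + dt * ds, i + dt * di, r + dt * dr
        history.append((s, i, r))
    return history

traj = simulate_sir()
peak_infected = max(i for _, i, _ in traj)
print(f"peak infected fraction: {peak_infected:.3f}")
```

With these parameters the basic reproduction number is beta/gamma = 3, and the simulated epidemic peaks with roughly 30% of the population infected at once, matching the analytic peak formula for SIR.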

Machine Learning, Data- and Knowledge-Based Modelling

In our institute, machine learning and data- and knowledge-based modelling techniques serve as important methodologies for solving mathematical problems in different application scenarios within funded research projects and industrial projects. Typical applications include:

  • Fault detection in quality control systems
  • Modelling, prediction and decision support in complex systems
  • Grouping of data and knowledge discovery.

We apply these methodologies to data from different types of sources, including measurement data, signal streams, spectroscopic data, and image data. The applied methodologies can loosely be divided into:

  • Regression (supervised): system identification, approximation and prediction
  • Pattern recognition and classification (supervised): decision support for operators, fault detection
  • Clustering (unsupervised): grouping, image segmentation, data compression
  • Feature selection (un- or supervised): elimination of superfluous information
  • Knowledge-based modelling with fuzzy systems: discovery of interpretable models from data, coding of expert knowledge.
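
The unsupervised grouping listed above can be sketched with a bare-bones k-means clusterer (Lloyd's algorithm with a simple farthest-point seeding). Production work would use a library implementation; this is illustrative only.

```python
import numpy as np

def kmeans(points, k, n_iter=50, seed=0):
    rng = np.random.default_rng(seed)
    # Farthest-point seeding: first center random, each next center is the
    # point farthest from all centers chosen so far.
    centers = [points[rng.integers(len(points))]]
    while len(centers) < k:
        d = np.min([np.linalg.norm(points - c, axis=1) for c in centers],
                   axis=0)
        centers.append(points[d.argmax()])
    centers = np.array(centers)

    for _ in range(n_iter):
        # Assign each point to its nearest center.
        dists = np.linalg.norm(points[:, None, :] - centers[None, :, :],
                               axis=2)
        labels = dists.argmin(axis=1)
        # Move each center to the mean of its assigned points.
        for j in range(k):
            if np.any(labels == j):
                centers[j] = points[labels == j].mean(axis=0)
    return labels, centers

# Two well-separated synthetic blobs.
rng = np.random.default_rng(1)
blob_a = rng.normal(loc=(0, 0), scale=0.3, size=(50, 2))
blob_b = rng.normal(loc=(5, 5), scale=0.3, size=(50, 2))
points = np.vstack([blob_a, blob_b])

labels, centers = kmeans(points, k=2)
```

On well-separated data like this, the algorithm recovers the two groups; on real measurement or image data, feature extraction and the choice of k are the hard parts.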

Our new developments and publications are in the fields of evolving fuzzy systems, incremental learning, clustering, fault detection and fault prediction. Special emphasis is placed on on-line learning methods, which are able to refine, improve and adapt models in real time during on-line production.
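
The single-pass principle behind such on-line methods can be shown with a minimal incremental estimator: Welford's algorithm updates a running mean and variance one sample at a time, without storing the data stream. Real evolving fuzzy systems use far richer recursive updates, but the idea of refining a model per sample is the same.

```python
# Welford's on-line algorithm for mean and variance of a data stream.
class RunningStats:
    def __init__(self):
        self.n = 0
        self.mean = 0.0
        self.m2 = 0.0  # running sum of squared deviations from the mean

    def update(self, x):
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)

    @property
    def variance(self):
        return self.m2 / self.n if self.n > 1 else 0.0

stats = RunningStats()
for x in [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]:
    stats.update(x)
print(stats.mean, stats.variance)  # mean 5.0, population variance 4.0
```

Each update costs O(1) time and memory, which is what makes this style of computation viable during on-line production.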

Aggregation Functions and Copulas

In its most general meaning, aggregation denotes the process of determining a representative value or structure from a given set of several inputs or input structures. Typical aggregation processes include:

  • in data analysis the computation of the mean or median value of a data set;
  • in multi-criteria decision making the determination of a final evaluation of some alternative based on evaluations w.r.t. single objectives;
  • in social choice theory or voting the determination of a group consensus or preference based on individual preferences;
  • in many-valued logics the determination of the truth degree of a compound expression based on the truth degrees of the single terms;
  • in risk analysis the determination of a coefficient of riskiness based on singular risky factors (not necessarily independent).
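
A few of the classical aggregation functions behind these examples can be computed directly; the inputs and weights below are arbitrary illustration values.

```python
import statistics

inputs = [0.2, 0.4, 0.9]
weights = [0.5, 0.3, 0.2]  # non-negative, summing to 1

arithmetic_mean = sum(inputs) / len(inputs)
median = statistics.median(inputs)
weighted_mean = sum(w * x for w, x in zip(weights, inputs))
minimum, maximum = min(inputs), max(inputs)

# Idempotent aggregation functions are internal: the aggregated value
# always lies between the smallest and the largest input.
for value in (arithmetic_mean, median, weighted_mean):
    assert minimum <= value <= maximum

print(arithmetic_mean, median, weighted_mean)
```

Which of these characteristic properties (monotonicity, idempotency, symmetry, ...) an aggregation function must satisfy depends on the application, which is exactly why their determination and characterization is a research topic.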

In many cases aggregation processes are represented by real-valued functions, called aggregation functions, which, depending on the actual context, have to fulfill characteristic properties. The determination and characterization of such functions is therefore of interest. Another challenging focus of our basic research is the preservation of properties in aggregation processes, which most often leads to functional equations (e.g., bisymmetry) and inequalities (e.g., dominance). These (in)equalities in turn allow one to define binary relations on the set of aggregation functions whose properties reveal insight into the structure of (sub)sets of aggregation functions.

The study of dependences in data or between random variables leads to a particular class of aggregation functions, namely the class of copulas. According to Sklar's theorem, any multivariate joint distribution function can be expressed by a copula and the univariate marginal distribution functions, where the copula captures the dependence structure between the variables. The investigation, classification, construction and determination of such functions (and their generalizations) with respect to a given set of properties demanded by actual applications is therefore of interest and part of our research work.
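
As a concrete example of a parametric copula, the sketch below samples from a Clayton copula via conditional inversion, a standard construction for dependence with a lower-tail emphasis. The parameter value is illustrative.

```python
import numpy as np

# Clayton copula C(u, v) = (u^(-theta) + v^(-theta) - 1)^(-1/theta).
# Sampling via conditional inversion: draw U and W uniform, then solve
# C(v | u) = w for v.  Kendall's tau for Clayton is theta / (theta + 2).
def sample_clayton(n, theta, seed=0):
    rng = np.random.default_rng(seed)
    u = rng.uniform(size=n)
    w = rng.uniform(size=n)
    v = (u ** (-theta) * (w ** (-theta / (theta + 1)) - 1) + 1) ** (-1 / theta)
    return u, v

u, v = sample_clayton(n=20_000, theta=2.0)
# Both margins are uniform on (0, 1); the dependence lives in the copula,
# exactly as Sklar's theorem separates margins from dependence structure.
print(f"corr(u, v) = {np.corrcoef(u, v)[0, 1]:.2f}")
```

Transforming each margin afterwards with any inverse distribution function yields correlated random variables with those margins and the same dependence structure.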

Image Processing

In the field of image processing we develop general algorithms as well as customer-specific solutions adapted to real-life problems in fields such as microscopy imaging, OCT imaging, and surface-inspection quality control. We are specialised in combining classical image processing methods for data pre-processing and feature extraction with machine learning methods for classification and evaluation tasks. The applied techniques comprise:

  • Statistical and classical signal processing methods for denoising and pre-processing
  • Multiscale approaches with applications to detection and recognition
  • Computation of aggregated features for object characterization
  • Pattern recognition and machine learning.

Application fields are:

  • Industrial image processing for surface and material inspection software
  • Biological image processing for microbiological image analysis software
  • Image processing for OCT images in industrial and medical applications.

Microbiological and medical image processing

Biological and medical images and data recorded by microscopy or optical coherence tomography (OCT) techniques pose a series of particular challenges in image processing. These are mainly caused by properties of the imaging techniques and the samples themselves, leading, e.g., to transparent structures without sharp edges, speckle, high variability of objects in the image, and a low signal-to-noise ratio, as well as by the expert knowledge required for suitable image interpretation.

The main tasks we are focusing on are:

  • Detection of typical or salient structures (e.g., in microscopy or OCT images)

  • Classification of structures and recognition of patterns

  • Image reconstruction for optical coherence tomography

  • Learning in imaging and image processing.

The typical steps in solving the above-mentioned tasks are:

  • Mathematical formalization based on the (physical) image formation model

  • Model learning based on data-driven approaches

  • Validation by simulations as well as on recorded image data.
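
The last step can be illustrated with a minimal synthetic experiment: simulate observations from a known forward model, re-estimate the model from the data alone, and check that the ground truth is recovered. The linear model here is a stand-in assumption, not an actual image-formation model used at the institute.

```python
import numpy as np

# Simulate observations from a known forward model y = a*x + b plus noise,
# then learn (a, b) back from the data by ordinary least squares.
rng = np.random.default_rng(0)
a_true, b_true = 2.0, -1.0

x = np.linspace(0.0, 1.0, 200)
y = a_true * x + b_true + rng.normal(scale=0.05, size=x.size)

# Data-driven model learning via the design matrix [x, 1].
design = np.column_stack([x, np.ones_like(x)])
(a_est, b_est), *_ = np.linalg.lstsq(design, y, rcond=None)

print(f"true:      a={a_true:.2f}, b={b_true:.2f}")
print(f"estimated: a={a_est:.2f}, b={b_est:.2f}")
```

Validation then means checking the estimates against the known simulation parameters before trusting the same learning procedure on recorded image data, where no ground truth exists.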

Fuzzy Logic and Fuzzy Systems

Fuzzy logic and fuzzy systems have been a main research focus at this department since the mid-seventies. After 15 years of purely theoretical research, the FLLL (Fuzzy Logic Laboratorium Linz-Hagenberg) was founded in the Softwarepark Hagenberg with the aim of applying these technologies to solve industrial problems.

Fuzzy logic is a many-valued logic, replacing the two classical truth values true (= 1) and false (= 0) by a continuum of truth values, usually represented by the unit interval [0,1]. Fuzzy sets, based on this many-valued logic, can be used to model linguistic vagueness which is intrinsically hidden in attributes like “large” and “small” and, in particular, the gradual transition between them. A main application of fuzzy logic is human-like reasoning in situations where vague, incomplete and/or (partially) contradictory knowledge is available, often in the form of rule-based systems as in fuzzy systems.
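
The gradual transition between attributes like "small" and "large" can be made concrete with triangular membership functions, combined by the minimum t-norm (one of the triangular norms mentioned below). The break-points of the membership functions are illustrative choices.

```python
# Triangular membership functions on [0, 10] for the vague attributes
# "small" and "large"; truth values live in the unit interval [0, 1].
def mu_small(x):
    # Fully "small" below 2, not "small" at all above 6.
    return max(0.0, min(1.0, (6.0 - x) / 4.0))

def mu_large(x):
    # Fully "large" above 8, not "large" at all below 4.
    return max(0.0, min(1.0, (x - 4.0) / 4.0))

def t_min(a, b):
    # Minimum t-norm: the standard fuzzy conjunction ("AND").
    return min(a, b)

x = 5.0
print(mu_small(x), mu_large(x))          # gradual transition region
print(t_min(mu_small(x), mu_large(x)))   # degree of "small AND large"
```

In a rule-based fuzzy system, such conjunction degrees weight the firing strength of each rule, which is how vague expert knowledge is turned into computable inference.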

The theoretical work at the department concerns the foundations of fuzzy logic and fuzzy systems, with a special emphasis on logical operations, in particular triangular norms (see also the monograph Triangular Norms). A special feature is the Linz Seminar on Fuzzy Set Theory, organized annually since 1979.

Practical applications of fuzzy logic and fuzzy control include industrial processes, image and signal processing, fault detection and knowledge-based modelling.

At the Linz Seminar on Fuzzy Set Theory, hosted by our institute, scientists in fuzzy logic and fuzzy systems from around the world meet for a few days each year.