Institute of Computer Graphics

AOS RTK

Topics: drones, positioning systems, differential GPS
Supervision: Oliver Bimber, Rakesh John Amala Arokia Nathan
Contact: oliver.bimber@jku.at
Type: BSc Practicum, BSc Thesis

Airborne Optical Sectioning (AOS) is a wide synthetic-aperture imaging technique that employs manned or unmanned aircraft to sample images within large (synthetic-aperture) areas above occluded volumes, such as forests. Based on the poses of the aircraft during capture, these images are computationally combined into integral images using light-field technology. These integral images suppress strong occlusion and reveal targets that remain hidden in single recordings.
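The core integration step can be sketched as plain image averaging, assuming the single images have already been registered (projected onto a common focal plane) from the aircraft poses; pose estimation and projection are omitted in this sketch:

```python
import numpy as np

def integrate(registered_images):
    """Combine pre-registered single images into one integral image.

    Averaging over many viewpoints suppresses occluders (each occluder
    covers the target in only a few images), while a target visible in
    most images stays prominent.
    """
    stack = np.stack([img.astype(np.float64) for img in registered_images])
    return stack.mean(axis=0)

# toy example: a target pixel occluded in one of three views
views = [np.full((4, 4), 100.0) for _ in range(3)]
views[0][2, 2] = 0.0          # occluder hides the target in this view
integral = integrate(views)
```

In the integral image, the occluded pixel is pulled back toward the target value by the two unoccluded views, which is the effect that reveals hidden targets.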

AOS supports DJI drones through a DJI Mobile SDK implementation running on DJI smart controllers and on smartphones used with DJI remote controllers. These drones are currently operated with conventional GPS positioning of limited precision. Real-Time Kinematic (RTK) positioning uses additional base stations or virtual reference networks (VRN) to improve GPS precision. This project will investigate whether RTK, together with the Austrian Positioning Service (APOS), can improve AOS quality and/or speed. A dedicated RTK module will be used on the drone; position corrections are then received online through the APOS service. The DJI Mobile SDK 5 supports RTK and VRN.
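Services like APOS distribute RTK correction data (RTCM messages) over the NTRIP protocol, a thin layer on top of HTTP. As a rough sketch of what the client side involves (the mount point and credentials below are placeholders, not real APOS values):

```python
import base64

def build_ntrip_request(mountpoint: str, user: str, password: str) -> bytes:
    """Build an NTRIP 1.0 request for an RTCM correction stream.

    The client requests a mount point and authenticates with HTTP Basic
    auth; the caster then streams RTCM correction messages in the
    response body, which are forwarded to the rover's RTK module.
    """
    credentials = base64.b64encode(f"{user}:{password}".encode()).decode()
    return (
        f"GET /{mountpoint} HTTP/1.0\r\n"
        "User-Agent: NTRIP aos-client\r\n"
        f"Authorization: Basic {credentials}\r\n"
        "\r\n"
    ).encode()

# placeholder mount point and credentials
request = build_ntrip_request("EXAMPLE_MOUNTPOINT", "user", "secret")
```

In practice, the DJI Mobile SDK 5 handles this exchange internally once a network RTK service is configured; the sketch only illustrates the underlying protocol step.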
Many researchers and students have contributed to JKU’s innovative AOS framework to explore novel solutions such as search and rescue, wildlife observation, and wildfire detection. Several high-impact publications in Nature and Science have covered our scientific findings. See https://github.com/JKU-ICG/AOS/

More details on APOS:

https://www.bev.gv.at/portal/page?_pageid=713,1571538&_dad=portal&_schema=PORTAL

More details on the Mobile DJI AOS implementation:

https://github.com/JKU-ICG/AOS/tree/stable_release/AOS%20for%20DJI

MAVIC3 ©DJI

AOS Cloud

Topics: drones, cloud programming, aerial imaging
Supervision: Oliver Bimber, Rakesh John Amala Arokia Nathan
Contact: oliver.bimber@jku.at
Type: BSc Practicum, MSc Practicum, BSc Thesis, MSc Thesis

Airborne Optical Sectioning (AOS) is a wide synthetic-aperture imaging technique that employs manned or unmanned aircraft to sample images within large (synthetic-aperture) areas above occluded volumes, such as forests. Based on the poses of the aircraft during capture, these images are computationally combined into integral images using light-field technology. These integral images suppress strong occlusion and reveal targets that remain hidden in single recordings.

Thus far, AOS supports DJI drones through a DJI Mobile SDK implementation running on DJI smart controllers and on smartphones used with DJI remote controllers. Processors and memory are limited on such Android systems. DJI’s Cloud API supports the transfer of live video streams and flight parameters into custom clouds, which can then make this data available to software systems running on more powerful hardware. The goal of this project is to investigate the capabilities of DJI’s Cloud API for running AOS in a cloud system that operates the drone(s).
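The Cloud API pushes device state to the cloud as JSON messages (over MQTT). A sketch of the cloud-side consumption step that AOS would need, with a purely illustrative payload shape (the actual DJI message schema differs and must be taken from the Cloud API documentation):

```python
import json

def extract_pose(message: str) -> dict:
    """Pull the fields AOS needs (position and heading) from a telemetry message."""
    data = json.loads(message)["data"]
    return {
        "lat": data["latitude"],
        "lon": data["longitude"],
        "alt": data["height"],
        "yaw": data["attitude_head"],
    }

# illustrative message, NOT the real DJI schema
msg = json.dumps({
    "data": {"latitude": 48.33, "longitude": 14.32,
             "height": 35.0, "attitude_head": 90.0}
})
pose = extract_pose(msg)
```

Pairing each such pose with the matching live-video frame is exactly the input the AOS integration step requires, now available on cloud hardware instead of the limited Android controller.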

Many researchers and students have contributed to JKU’s innovative AOS framework to explore novel solutions such as search and rescue, wildlife observation, and wildfire detection. Several high-impact publications in Nature and Science have covered our scientific findings. See https://github.com/JKU-ICG/AOS/.

More details on the DJI Cloud API: https://developer.dji.com/cloud-api/

More details on the mobile DJI AOS implementation: https://github.com/JKU-ICG/AOS/tree/stable_release/AOS%20for%20DJI

AOS Cloud ©DJI

Application of Visualization and Machine Learning Techniques for Knowledge Graphs

Topics: Knowledge Graphs, Information Visualization, Machine Learning
Supervision: Christian Steinparz, Marc Streit
Contact: christian.steinparz@jku.at
Type: BSc Practicum, BSc Thesis, MSc Practicum, MSc Thesis

Knowledge graphs are widely used to explicitly model and represent knowledge in various domains including semantic web, natural language, and biomedical data. However, due to the potentially large number of nodes and edges, the visualization and navigation of knowledge graphs prove to be challenging. They require the use of advanced techniques such as user interactivity, aggregation, automated analysis, machine learning, and the combination thereof.

Projects in this area can address tasks such as:

  • prototyping a visualization workflow for the exploration of KGs
  • exploring the use of machine learning techniques to identify and extract relevant information from KGs
  • experimenting with advanced ways of positioning nodes and edges such as KG embedding and down-projection
  • visualizing constraint violation information in KGs
  • applying interpretable ML techniques to predict information in the KG and utilizing the model explanations
  • comparing KGs by developing visual fingerprints or by applying ML-based similarity prediction
  • visualizing provenance data of how KGs have changed over time (due to update/insert/delete of triples)

Students are encouraged to propose their own ideas in relation to the topic.
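As an illustration of the "embedding and down-projection" direction, the following is a minimal spectral layout: node positions are taken from the low-frequency eigenvectors of the graph Laplacian. The toy graph below is an assumption for demonstration; a real KG would use learned embeddings (e.g. TransE) plus a down-projection such as t-SNE or UMAP.

```python
import numpy as np

def spectral_layout_2d(edges, n_nodes):
    """2D node positions from the graph Laplacian's eigenvectors.

    The eigenvectors of the second- and third-smallest eigenvalues give
    coordinates that place strongly connected nodes close together.
    """
    A = np.zeros((n_nodes, n_nodes))
    for u, v in edges:
        A[u, v] = A[v, u] = 1.0
    L = np.diag(A.sum(axis=1)) - A          # unnormalized Laplacian
    eigvals, eigvecs = np.linalg.eigh(L)    # eigenvalues in ascending order
    return eigvecs[:, 1:3]                  # skip the constant eigenvector

# toy "knowledge graph": two triangles joined by a single bridge edge
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
pos = spectral_layout_2d(edges, 6)
```

The first coordinate (the Fiedler vector) separates the two clusters, so the layout immediately reveals the graph's community structure — the kind of structure an interactive KG visualization would want to surface.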


Real-time Storyline Visualization for Spectating League of Legends Matches

Topics: information visualization, esports, spectatorship
Supervision: Günter Wallner, Letian Wang
Contact: guenter.wallner@jku.at, letian.wang@jku.at
Type: MSc Practicum, MSc Thesis

Storyline visualizations depict the relationships between entities over time and have thus become a useful means to summarize stories. With games being narrative constructs, such visualizations could be helpful to summarize matches to help spectators develop a sense of the ‘story’ of a match. The goal of this project is to develop a storyline visualization for League of Legends (LoL) which is updated in real-time during a match. This involves capturing data from the real-time API provided by LoL and developing a storyline algorithm which is capable of constructing the storyline on the fly based on movement and event information.
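The on-the-fly layout step could be sketched as follows: at each timestep, entities that share a group (e.g. champions at the same location) get adjacent vertical slots, and the previous ordering is kept where possible so lines cross only when group membership changes. This greedy scheme is an assumption for illustration, not the project's prescribed algorithm, and the champion names are placeholders:

```python
def storyline_positions(timesteps):
    """Assign a vertical slot to each entity at each timestep.

    `timesteps` is a list of groupings, one per timestep; each grouping
    is a list of sets of entity names. Members of a set share a location
    or event and are placed in adjacent slots.
    """
    order = sorted({e for groups in timesteps for g in groups for e in g})
    layout = []
    for groups in timesteps:
        prev = {e: i for i, e in enumerate(order)}
        # order groups by the previous average position of their members,
        # and members within a group by their previous position
        groups = sorted(groups, key=lambda g: sum(prev[e] for e in g) / len(g))
        order = [e for g in groups for e in sorted(g, key=prev.get)]
        layout.append({e: slot for slot, e in enumerate(order)})
    return layout

# two champions meet at a location in step 2 and separate again in step 3
steps = [
    [{"Ahri"}, {"Garen"}, {"Lux"}],
    [{"Ahri", "Lux"}, {"Garen"}],
    [{"Ahri"}, {"Garen"}, {"Lux"}],
]
slots = storyline_positions(steps)
```

A real implementation would additionally minimize line crossings and wiggles over a sliding window, since the real-time setting forbids global optimization over the whole match.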

Storyline Visualization. Image source: https://xkcd.com/657/

FPV AOS

Topics: drones, first-person view flying, user interface, HCI
Supervision: Oliver Bimber
Contact: oliver.bimber@jku.at
Type: BSc Practicum, MSc Practicum

Airborne Optical Sectioning (AOS, https://github.com/JKU-ICG/AOS/) is a wide synthetic-aperture imaging technique that employs manned or unmanned aircraft to sample images within large (synthetic-aperture) areas above occluded volumes, such as forests. Based on the poses of the aircraft during capture, these images are computationally combined into integral images using light-field technology. These integral images suppress strong occlusion and reveal targets that remain hidden in single recordings.

We have ported AOS to DJI platforms, as they are mainly used by blue-light organizations such as police, Red Cross, mountain rescue, and fire brigades. The latest release can be found at https://github.com/JKU-ICG/AOS/tree/stable_release/AOS%20for%20DJI.

An initial survey, however, revealed that the current user interface is too complex for quick rescue missions: it is operated by technically inexperienced drone pilots who have to focus on the rescue task rather than on the AOS interface. The goal of this project is to redesign and reimplement the DJI-AOS user interface for practical use. This includes, for instance, the integration of a first-person view (FPV) mode supported by goggles, and an intuitive operation of GUI elements and free camera control while wearing the goggles.

©DJI

Learning Background Models for Anomaly Detection

Topics: drones, image processing, object detection, signal processing, machine learning
Supervision: Oliver Bimber
Contact: oliver.bimber@jku.at
Type: BSc Practicum, MSc Practicum, BSc Thesis, MSc Thesis

Airborne Optical Sectioning (AOS) is a wide synthetic-aperture imaging technique that employs manned or unmanned aircraft to sample images within large (synthetic-aperture) areas above occluded volumes, such as forests. Based on the poses of the aircraft during capture, these images are computationally combined into integral images using light-field technology. These integral images suppress strong occlusion and reveal targets that remain hidden in single recordings.

Recently, we have shown that not only classification benefits from image integration, but also color and thermal anomaly detection. In contrast to classification, where individual classes have to be known and often must be trained, color/thermal anomaly detection identifies pixels whose color is abnormal with respect to the background’s color statistics. We have also presented AOS approaches that are able to detect and track moving targets (such as walking people) through densely occluding foliage.

In this project, we want to investigate learning options for the anomaly detector’s background model. Thus far, these background models are recomputed for each individual image, independently of other images captured with a similar background. This requires adapting the anomaly threshold from image to image, which is impossible without information about the anomalies themselves. For our use case (forests), however, the background model should be fairly similar or even identical over many images. Learning one or more background models that are representative for forests (or for different forest types) would allow stable anomaly thresholding over all images.
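The detection step described above can be sketched as classic RX-style anomaly scoring: each pixel's color is scored by its Mahalanobis distance to the background color statistics. A learned, global (mean, covariance) model, as proposed here, would replace per-image statistics so that one fixed threshold works across all images. The "forest" colors below are placeholder values:

```python
import numpy as np

def rx_anomaly_scores(pixels, bg_mean, bg_cov):
    """Squared Mahalanobis distance of each pixel color to a background model.

    With a fixed, learned (bg_mean, bg_cov) the scores are comparable
    across images, so a single anomaly threshold can be used.
    """
    diff = pixels - bg_mean
    inv_cov = np.linalg.inv(bg_cov)
    return np.einsum("ij,jk,ik->i", diff, inv_cov, diff)

# toy background model estimated from synthetic "forest" colors
rng = np.random.default_rng(0)
background = rng.normal(loc=[40.0, 90.0, 30.0], scale=5.0, size=(1000, 3))
mean, cov = background.mean(axis=0), np.cov(background, rowvar=False)

pixels = np.array([[41.0, 88.0, 31.0],     # background-like pixel
                   [220.0, 40.0, 40.0]])   # anomalous pixel (e.g. clothing)
scores = rx_anomaly_scores(pixels, mean, cov)
```

The anomalous pixel receives a score orders of magnitude above the background-like one, illustrating why a stable background model permits a stable threshold.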

Many researchers and students have contributed to JKU’s innovative AOS framework to explore novel solutions such as search and rescue, wildlife observation, and wildfire detection. Several high-impact publications in Nature and Science have covered our scientific findings. See https://github.com/JKU-ICG/AOS/.

Previous (BSc) student work on color/thermal anomaly detection: https://www.mdpi.com/2673-4117/3/4/38
