

Guidance and Assistance

Pro2Future Area 1

Duration 2018-2021
Consortium Johannes Kepler Universität Linz*, KEBA AG, TRUMPF Maschinen Austria GmbH + Co. KG, Wacker Neuson Beteiligungs GmbH, FRONIUS International GmbH, Pro2Future GmbH, Technische Universität Graz
Role Project Lead

GUIDE will investigate novel worker context recognition, assistance, and guidance technologies for use in manufacturing and production settings. The project is motivated by the fact that suboptimal outcomes of interactions between man and machine (measured via product quality and manufacturing efficiency) are correlated with (i) the cognitive load and level of attention of the factory worker, (ii) the frequency and degree of disruptions during the manufacturing workflow, and (iii) misalignments between the worker’s skill and the task complexity. GUIDE will attempt to model the cognitive load of human workers during task execution and is thus related to fundamental research in the interdisciplinary fields of cognitive science, computer science, industrial research, and behaviorism. Further, GUIDE will extend beyond current interactive assistance systems, which are mainly restricted to presence detection and collision avoidance, by not only recognizing the current production step but also interpreting the underlying motivation, level of engagement, and goal for optimal activity support and guidance. Finally, GUIDE aims to trigger context-sensitive, multi-modal feedback and instructions that depend on the current workflow task and are adaptable both in sensory modality and with respect to the worker’s current skill level and desired degree of guidance.

Typical industrial work processes involve information from a variety of sources, usually spread over a multitude of locations. Inexperienced workers are thus often forced to divide their attention between performing the manufacturing task and seeking information from different sources – a division that has been shown to increase cognitive load and ultimately decrease productivity. As part of DP1, GUIDE will contribute towards the development of a prototype cognitive head gear (HMD) that aims to provide all necessary guidance information directly to the worker at the time of need. Specifically, GUIDE will provide a data preprocessing and machine learning pipeline to be executed on the HMD’s mobile processing unit. The key investigators aim to achieve this by taking a previously developed framework for opportunistic, multi-sensor fusion in the domain of embedded activity recognition and optimizing it for the requirements of mobile execution on the created HMD. This pipeline will transform low-level sensor data into high-level classification results related to (i) industrial workflow steps or (ii) the current cognitive state of the user. The produced HMD, along with the context recognition framework, will be integrated with state-of-the-art industrial work equipment.
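The low-level-to-high-level transformation described above can be illustrated with a minimal sketch: sliding-window feature extraction over a sensor stream followed by a simple classification rule. All names, thresholds, and the synthetic data are hypothetical and stand in for the project's actual pipeline.

```python
import numpy as np

def extract_windows(signal, window_size, step):
    """Slide a fixed-size window over a 1-D sensor stream."""
    return [signal[i:i + window_size]
            for i in range(0, len(signal) - window_size + 1, step)]

def window_features(window):
    """Low-level sensor data -> feature vector (mean, std, peak-to-peak)."""
    return np.array([window.mean(), window.std(), window.max() - window.min()])

def classify(features, threshold=1.0):
    """High-level decision: 'active' workflow step vs. 'idle',
    based on signal variability (hypothetical rule)."""
    return "active" if features[1] > threshold else "idle"

# Synthetic accelerometer stream: a quiet phase followed by vigorous motion.
rng = np.random.default_rng(0)
stream = np.concatenate([rng.normal(0, 0.1, 200), rng.normal(0, 3.0, 200)])

labels = [classify(window_features(w)) for w in extract_windows(stream, 50, 50)]
print(labels)  # early windows classified 'idle', later windows 'active'
```

In the real system, the hand-written rule would be replaced by trained models, but the window-feature-decision structure of such a pipeline stays the same.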


GUIDE’s goal is to detect and react to threats to desirable interaction outcomes, thereby preventing the degradation of product quality and manufacturing efficiency. Specifically, GUIDE will monitor (i) human factors such as vital state (fatigue, fitness), skill and experience, cognitive load (stress), ongoing social interactions, and distractions and peripheral interrupts (background noise), (ii) workflow complexity and state with respect to activities, processes, timeliness, exception handling, robustness, and quality assurance, and (iii) apparatus complexity relative to worker skill, especially when interacting with complex manufacturing tools; it will further (iv) support human decision making with respect to complex (machine- or workflow-related) information and unreliable or uncertain situations while achieving transparency concerning the offered assistance, and (v) minimize cognitive load. To this end, GUIDE aims at a modular, opportunistic worker state recognition architecture that makes use of dedicated mobile sensors (eye trackers, body pose, and hand movements) and actuators (visual, vibro-tactile, auditory), and at the development of feedback trigger mechanisms able to support, warn, and sense workers during manufacturing, assembly, and production tasks.
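A feedback trigger mechanism of the kind described might, in its simplest form, map the monitored worker state to an actuator modality. The state fields, thresholds, and rules below are invented for illustration and are not the project's actual trigger logic.

```python
from dataclasses import dataclass

@dataclass
class WorkerState:
    cognitive_load: float    # 0.0 (relaxed) .. 1.0 (overloaded)
    skill_level: float       # 0.0 (novice)  .. 1.0 (expert)
    background_noise: float  # 0.0 (quiet)   .. 1.0 (loud)

def select_feedback(state: WorkerState) -> str:
    """Pick a feedback modality for the current worker state.

    Hypothetical rules: do not interrupt experts in flow; under high
    load prefer a low-distraction channel; in noisy environments
    avoid the auditory channel."""
    if state.skill_level > 0.8 and state.cognitive_load < 0.3:
        return "none"           # expert in flow: no guidance needed
    if state.cognitive_load > 0.7:
        return "vibro-tactile"  # adds minimal visual/auditory load
    if state.background_noise > 0.6:
        return "visual"         # auditory channel masked by noise
    return "auditory"

print(select_feedback(WorkerState(0.9, 0.2, 0.1)))  # vibro-tactile
```

A production system would learn or tune such decisions rather than hard-code them, but the interface shape (worker state in, modality recommendation out) matches the module boundary described above.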



GUIDE will (i) build on and advance state-of-the-art methods for cognitive load estimation, (ii) develop machine learning and recognition architectures tailored to industrial machines and workflow processes, (iii) build on existing opportunistic sensing architectures, and (iv) formulate triggers for worker feedback mechanisms. To this end, GUIDE will derive a measure of cognitive load and human attention and incorporate it into a user-centric, redesigned version of the context recognition chain, extended by high-level reasoning and recognition architectures. Subsequently, fused decision classes are matched to workflow process models derived from domain-specific expert knowledge, thereby enabling prediction and proactive support.
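Matching recognized decision classes against a workflow process model can be sketched as a check over a step graph: each step lists its valid successors, and out-of-order steps are flagged. The step names are invented for illustration; the project's actual models are derived from domain-expert knowledge.

```python
# Workflow process model as a step graph: each step -> valid successor steps.
WORKFLOW = {
    "pick_part":    {"clamp_part"},
    "clamp_part":   {"weld_seam"},
    "weld_seam":    {"inspect_seam"},
    "inspect_seam": {"pick_part"},  # loop back for the next part
}

def check_sequence(steps):
    """Match a recognized step sequence against the workflow model.

    Returns (position, step) of the first out-of-order step,
    or None if the whole sequence is consistent with the model."""
    for i in range(1, len(steps)):
        if steps[i] not in WORKFLOW.get(steps[i - 1], set()):
            return (i, steps[i])
    return None

print(check_sequence(["pick_part", "clamp_part", "weld_seam", "inspect_seam"]))
print(check_sequence(["pick_part", "weld_seam"]))  # clamping was skipped
```

A deployed matcher would additionally handle uncertain classifications (e.g. by tracking several candidate positions in the model), but the same structure supports both detecting deviations and predicting the next expected step.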

Expected and Achieved Results

GUIDE aims at the development of a holistic framework to be used in multiple industrial manufacturing and production use cases as agreed upon with our industrial partners. This framework will consist of the following components: (i) a sensor fusion module used to create viable sensor ensembles in an opportunistic manner, going beyond the results of the WorkIT project and integrated with the context recognition chain as implemented by the GUIDE project, (ii) purpose-built models of human attention and cognitive load, each based on existing research from the fields of psychology and cognition, (iii) a modelling methodology for the abstraction of workflow processes, encompassing expert domain knowledge, activity recognition, context recognition, and forecasting, (iv) a skill and experience module, able to infer worker skill and to model experience gain, and (v) a feedback trigger module, which determines the need for feedback and recommends the feedback modality to be used by the SeeIT project. This framework was integrated and exemplified in a head-mounted display prototype developed as part of demonstrator project 1. For demonstration purposes, the produced HMD, together with the context recognition framework, was combined with state-of-the-art industrial work equipment to form a stand-alone visual support and guidance system, which we deployed e.g. during shop floor studies at company partner Fronius. In cooperation with the sibling project WorkIT, sophisticated machine learning pipelines were developed, especially for supervised methods: both a classical feature-based machine learning pipeline and multi-layered deep neural networks based on state-of-the-art paradigms such as CNNs (e.g. U-Net or Inception) and RNNs in the form of LSTMs.
We further investigated the cognitive state of workers by building a gaze analytics pipeline and by using data from a wrist-worn GSR unit, which we used to investigate cognitive load and to recognize skill during welding.
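Load-related features from such sensors can be illustrated with a toy sketch: aggregating fixation durations from an eye tracker and skin-conductance levels from a GSR unit. The feature names and synthetic values are hypothetical; real gaze analytics involves calibrated fixation detection (dispersion- or velocity-based) rather than pre-segmented durations.

```python
import statistics

def gaze_features(fixation_durations_ms):
    """Aggregate fixation durations (ms) into load-related features.
    Shorter, more numerous fixations are often associated with higher load."""
    return {
        "fixation_count": len(fixation_durations_ms),
        "mean_fixation_ms": statistics.mean(fixation_durations_ms),
    }

def gsr_features(gsr_microsiemens):
    """Tonic skin-conductance level and a crude phasic proxy (range)."""
    return {
        "mean_scl": statistics.mean(gsr_microsiemens),
        "scl_range": max(gsr_microsiemens) - min(gsr_microsiemens),
    }

# Synthetic example: several short fixations and elevated, rising skin
# conductance -- a pattern that could indicate increased cognitive load.
gaze = gaze_features([180, 150, 210, 160, 140])
gsr = gsr_features([4.1, 4.6, 5.2, 5.0])
print(gaze["fixation_count"])  # 5
```

Feature vectors of this kind would then feed the classifiers described above, e.g. for distinguishing novice from expert welders or low- from high-load phases.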

Status / Progress

This project officially started in January 2018. In it, Pro²Future is working with our Company Partners Fronius, KEBA, Wacker Neuson, and Trumpf, as well as the Institute of Computer Engineering and the Institute of High Frequency Technology at TU Graz, towards the creation of industrial worker guidance and assistance systems. We held the project kick-off and conducted meetings with all Company Partners at their respective locations to gauge the possibilities and the most promising approaches on site. Preliminary data investigation was used to define the initial system models for the guidance and assistance architectures. These models continue to be extended using data collected in the WorkIT project and contribute results with respect to feedback triggers to the SeeIT project.

The machine learning software framework created so far, in conjunction with the sibling project WorkIT, features a classical feature-based machine learning pipeline as well as a deep-neural-network-based one; additionally, the overall toolkit features interfaces and implementations for abstracting a worker’s cognitive state. Machine learning models have been trained for each industrial application of our company partners, extending across all steps of the context recognition chain. The developed tools and components still need to be extended to detect the need for adaptive feedback to industry workers; so far, however, they were able to spot task inefficiency and performance variations and to return usage information towards product optimization. We further implemented a gaze analytics pipeline used for creating features from a mobile eye tracking unit, which are examined to detect the worker’s cognitive state, i.e. skill level or cognitive load. So far, we investigated skill level for company partner Trumpf during bending and cognitive load for Fronius during welding. Additionally, in conjunction with the machine learning and multi-sensor fusion pipeline from WorkIT and the feedback system from SeeIT, we have already implemented a new guiding interface helping novices during metal bending. This work will be extended by the key investigators to all use cases, showcasing the cognitive reasoning capacity of GUIDE.