Visual Computing - Bridging Real and Digital Domain
December 6th, 2016, 2:00pm, Science Park 3, Room 063
Any luminous or illuminated object continuously emits images of itself, in all directions and over huge distances. At the speed of light, images convey a wealth of information about their origins, which our visual system deciphers almost without effort, in real time, with extreme efficiency and enormous robustness. Consequently, the visual sense has evolved to become our prime modality for gathering information about our environs. In my talk I will outline how we have only just begun to exploit visual information as the ideal interface between the real and digital world, as well as between the digital domain and human brains. Visual Computing constitutes a fundamental Enabling Technology of the 21st century that paves the way for various technological advances of substantial economic as well as social impact.
About the Speaker:
Marcus Magnor heads the Computer Graphics Lab of the Computer Science Department at Technische Universität Braunschweig (TU Braunschweig). He received his BA (1995) and MS (1997) in Physics from Würzburg University and the University of New Mexico, respectively, and his PhD (2000) in Electrical Engineering from Erlangen University. For his post-graduate studies, he joined the Computer Graphics Lab at Stanford University. In 2002, he established the Independent Research Group Graphics-Optics-Vision at the Max-Planck-Institut für Informatik in Saarbrücken. He completed his habilitation in 2005 and received the venia legendi for Computer Science from Saarland University. In 2009, he was a Fulbright Scholar at the University of New Mexico, USA, where he holds an appointment as Adjunct Professor in the Physics and Astronomy Department. In 2011, Marcus Magnor was elected a member of the Engineering Class of the Braunschweigische Wissenschaftliche Gesellschaft. He is a laureate of the Wissenschaftspreis Niedersachsen 2012, an awardee of an ERC Starting Grant, and PI of a DFG Reinhart Koselleck Project. His research interests concern Visual Computing, i.e. visual information processing from image formation, acquisition, and analysis to image synthesis, display, perception, and cognition. Areas of research include, but are not limited to, computer graphics, computer vision, visual perception, image processing, computational photography, astrophysics, imaging, optics, visual analytics, and visualization.
Applying Visual Analytics to Real-World Problems
November 8th, 2016, 4:00pm, Room SP2 120
Visual analytics is no longer an emerging topic. Domain experts are increasingly convinced of the potential of visual analytics in the big data era as we move from basic to applied research in this area. We are working with users in several application fields, including medical doctors, climate researchers, business analysts, and security analysts. In all of these domains, visual analytics solutions not only have to create user-, data-, and task-centered visualizations; they also employ automated techniques and must explain their results to domain experts to enable a reasonable analytical discourse between the human and the visual analytics system. In this talk, I will discuss the intersection of visual analytics and user-centered design using several examples from applied research projects with domain experts. My talk will also cover current research topics in applied visual analytics and take a look forward at the role of visual analytics in these domains.
About the Speaker:
Jörn Kohlhammer is head of the Competence Center for Information Visualization and Visual Analytics, and Professor for User-Centered Visual Analytics at TU Darmstadt. He has a PhD from TU Darmstadt and an MSc from the Ludwig Maximilian University of Munich. His competence center develops solutions for several application domains, including visual business analytics, medical data analysis of electronic health records, decision support in the public sector, and cyber-security. Jörn is a regular member of program committees for conferences like IEEE VAST and EuroVis, and acts as a reviewer for many conferences and journals. His personal research interests include decision-centered information visualization based on semantics, and visual business analytics.
Efficient motion compensation for fetal MRI
October 13th, 2016, 10:30am, Science Park 3, Room 063
The recent advent of T2-weighted single shot fast spin echo (ssFSE) sequences has enabled magnetic resonance imaging (MRI) to play an essential role in fetal diagnosis, in particular where ultrasound fails to provide sufficient information to diagnose specific prenatal conditions. Fetal MRI is able to provide good contrast for individual fetal structures such as brain, lung, kidney, and liver, and for pregnancy-related anatomy such as placenta, umbilical cord, and amniotic sac.
MRI is considered to be safe after the first trimester at 1.5T and 3T without the use of contrast agents, which may have teratogenic effects. Sedation is not used during a scan, and fetuses move freely while the mother breathes normally. Individual ssFSE slices can be acquired fast enough to freeze motion in time; however, motion between slices is likely to corrupt 3D scans by reducing image quality, hiding pathology, and causing overlap between different anatomical regions. ssFSE makes it possible to acquire high-resolution `in-plane' slices of the uterus with a large field of view and good tissue contrast. However, inter-slice artifacts remain in the `out-of-plane' views, which consequently limit reliable diagnostics to individual slices.
Slice-to-volume registration (SVR) techniques and super-resolution image reconstruction methods can be applied to compensate for 3D motion artefacts between single slices. This is enabled by oversampling the target input space with multiple orthogonal stacks of 2D slices.
In this talk we will review a simple but very effective and parallelisable SVR model of the form x_i = W_i y + n_i for 1 <= i <= N, where x_i denotes one of N low-resolution (LR) image stacks and y is the high-resolution target image. W_i = D B T_i combines motion compensation, sub-sampling, and degradation effects, where D is a sub-sampling matrix, B is a blurring matrix, and T_i is the transformation matrix of observation i; n_i is additive noise. Thus, each LR image can be considered a down-sampled, motion-corrupted, blurred, and noisy version of the high-resolution output space. The motion compensation problem can be divided into two main parts: (1) motion correction (estimating W_i) and (2) super-resolution (estimating y). Intensity-based image registration is used to estimate W_i, and MRI point-spread-function-informed super-resolution is used to obtain a uniformly spaced, motion-free high-resolution image. Expectation-maximization supports the iterative optimisation of (1) and (2) with an outlier rejection and noise mitigation model.
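To make the forward model concrete, here is a minimal 1-D NumPy sketch. All specifics are illustrative assumptions, not the parameters of the actual fetal-MRI pipeline: a box blur stands in for the MRI point-spread function, circular shifts play the role of the motions T_i (treated as already estimated, i.e. step (1) is done), and plain Landweber iterations replace the PSF-informed super-resolution with EM-based outlier rejection in step (2).

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D instance of the forward model x_i = W_i y + n_i with W_i = D B T_i.
n_hi, factor = 64, 4                        # high-res length, sub-sampling factor
y_true = np.zeros(n_hi)
y_true[20:44] = 1.0                         # unknown high-resolution signal y

def blur(v, width=5):
    k = np.ones(width) / width              # B: box blur standing in for the MRI PSF
    return np.convolve(v, k, mode="same")

def forward(v, shift):
    v = np.roll(v, shift)                   # T_i: rigid motion as a circular shift
    v = blur(v)                             # B: blurring
    return v[::factor]                      # D: sub-sampling

shifts = [0, 2, -3]                         # motions, assumed already estimated
stacks = [forward(y_true, s) + 0.01 * rng.standard_normal(n_hi // factor)
          for s in shifts]                  # observed low-resolution stacks x_i

def adjoint(x, shift):
    v = np.zeros(n_hi)
    v[::factor] = x                         # D^T: zero-filled upsampling
    v = blur(v)                             # B^T = B for a symmetric kernel
    return np.roll(v, -shift)               # T_i^T: inverse shift

# Step (2), super-resolution: Landweber iterations on sum_i ||W_i y - x_i||^2.
y = np.zeros(n_hi)
for _ in range(200):
    grad = sum(adjoint(forward(y, s) - x, s) for s, x in zip(shifts, stacks))
    y -= 0.5 * grad                         # gradient step; y approaches y_true
```

Each stack alone is under-determined (16 samples for 64 unknowns); it is the combination of several differently shifted stacks that makes the high-resolution estimate recoverable, which is exactly the role of the orthogonal oversampling in the talk.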
This is joint work with King's College London: M. Murgasova, M. Rutherford, and J. V. Hajnal.
About the Speaker:
Bernhard Kainz is a Lecturer in the Department of Computing at Imperial College London. He researches translational high-performance medical data analysis and interactive real-time image processing techniques as a member of the Biomedical Image Analysis (BioMedIA) Group in the section of Visual Information Processing. He collaborates closely with King's College London, Division of Imaging Sciences and Biomedical Engineering, St Thomas' Hospital London, and the Department of Bioengineering at Imperial.
In 2013 he moved from Graz University of Technology in Austria to Imperial College London with an EU FP7 Marie Curie Fellowship, working together with Prof. Daniel Rueckert and Prof. Jo Hajnal (KCL). Since then he has also been supporting the EPSRC/Wellcome Trust project iFind with high-performance computing methods.
He finished his Ph.D. in the field of medical visualization, volume graphics and GPU algorithms in the team of Prof. Dieter Schmalstieg at the Institute for Computer Graphics and Vision at Graz University of Technology in Austria. This shaped his research interests towards the interconnection of medical image data acquisition, fast hardware accelerated medical image processing and interactive real-time visualization.
Learning better models for computer vision
June 14th, 2016, 4:30pm, Science Park 3, Room 048
In computational imaging, learning was until recently seldom used in practical applications of machine vision. Recent progress in computing power, together with new algorithmic insights, now makes these techniques feasible and exploitable. According to Bayes' theorem, the posterior distribution of a certain vision problem is proportional to the product of a prior distribution and a data likelihood distribution. The classical maximum a-posteriori (MAP) estimate is given by the sample that maximizes the posterior probability, or equivalently minimizes the negative logarithm of the posterior probability. This leads to the minimization of a cost function given by the sum of a regularization term (prior) and a data fidelity term (data likelihood). Rather than using handcrafted models for these terms, we make use of machine learning techniques to learn "better" models. In a first application, we show how to learn a powerful regularization term for high-quality image reconstruction from compressed sensing MRI. Our learned algorithm allows the MRI acquisition time to be reduced by a factor of 4-6. In a second application, we show how to learn the data fidelity term for a stereo algorithm. Our learned stereo algorithm yields state-of-the-art results on a variety of depth estimation benchmarks while running in real time.
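As a concrete baseline for this cost-function view of MAP estimation, here is a minimal NumPy sketch that denoises a 1-D signal by minimizing a handcrafted quadratic smoothness prior plus a quadratic data fidelity term. The signal, the regularization weight, and the plain gradient-descent solver are illustrative assumptions; they are the classical handcrafted setup, not the learned models discussed in the talk.

```python
import numpy as np

# MAP estimation as cost minimization: recover u from a noisy observation f by
# minimizing E(u) = lam * sum_n (u[n+1] - u[n])^2 + 0.5 * ||u - f||^2,
# i.e. a handcrafted smoothness prior plus a quadratic data fidelity term.
rng = np.random.default_rng(1)
clean = np.sin(np.linspace(0.0, 4.0 * np.pi, 200))
f = clean + 0.3 * rng.standard_normal(200)      # noisy observation

lam, step = 2.0, 0.05                           # prior weight, step size
u = f.copy()
for _ in range(500):
    lap = np.zeros_like(u)                      # discrete Laplacian of u
    lap[1:-1] = u[2:] - 2.0 * u[1:-1] + u[:-2]
    lap[0] = u[1] - u[0]                        # Neumann boundary terms
    lap[-1] = u[-2] - u[-1]
    grad = -2.0 * lam * lap + (u - f)           # prior gradient + fidelity gradient
    u -= step * grad                            # plain gradient descent on E(u)
# u is now a smoothed MAP estimate, closer to the clean signal than f
```

Replacing either term with a learned model, as in the talk, keeps this same optimization structure but substitutes the handcrafted gradient with one produced by the learned prior or data term.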
About the Speaker:
Thomas Pock, born 1978 in Graz, received his MSc (1998-2004) and his PhD (2005-2008) in Computer Engineering (Telematik) from Graz University of Technology. After a post-doc position at the University of Bonn, he moved back to Graz University of Technology, where he has been an Assistant Professor at the Institute for Computer Graphics and Vision. In 2013, Thomas Pock received the START Prize of the Austrian Science Fund (FWF) and the German Pattern Recognition Award of the German Association for Pattern Recognition (DAGM), and in 2014 he received a Starting Grant from the European Research Council (ERC). Since June 2014, Thomas Pock has been a Professor of Computer Science at Graz University of Technology (AIT Stiftungsprofessur "Mobile Computer Vision") and a principal scientist at the Department of Safety and Security at the Austrian Institute of Technology (AIT). The focus of his research is the development of mathematical models for computer vision and image processing in mobile scenarios as well as the development of efficient algorithms to compute these models.
(Low) coherent imaging: Current State, Trends and Perspectives
May 10th, 2016, 4:00pm, Science Park 2, Room 059
(Low) coherent imaging techniques, like optical coherence tomography (OCT) and optical coherence microscopy (OCM), have, as label-free imaging techniques, proven their value in manifold applications with a medical or technical background. Over the years, they have demonstrated their usefulness in many modifications. Moreover, developments in new light sources, detectors, illumination, and reconstruction concepts have opened up new possibilities for the coherent imaging world. In particular, programmable optics, like spatial light modulators, have become important elements of these imaging techniques as well. We will discuss trends and perspectives that may arise for (low) coherent imaging in the future, and show some examples in connection with OCT and OCM imaging modalities.
About the Speaker:
Bettina Heise is a senior scientist at the Christian Doppler Laboratory for Microscopic and Spectroscopic Material Characterisation, a research assistant at the Institute for Knowledge-based Mathematical Systems in Linz-Hagenberg, and a key researcher at RECENDT GmbH, Linz. She received her diploma in Technical Physics from the Technical University Chemnitz, Germany, and her PhD degree, related to image processing, mathematics, and optics, from the Johannes Kepler University Linz, Austria, in 2010. Her research interests include areas of optics, signal processing, and mathematics, in particular programmable optics and imaging concepts.
Eye Tracking and Visualization
April 26th, 2016, 4:00pm, Science Park 3, Room 048
There is a growing interest in eye tracking as a research method in many communities, including information visualization, scientific visualization, and visual analytics, but also human-computer interaction, applied perception, psychology, cognitive science, security, and mixed reality. Progress in hardware technology and the reduction of costs for eye tracking devices have made this analysis technique accessible to a large population of researchers. Recording the observer's gaze can reveal how dynamic graphical displays are visually accessed and which information is processed in real time. However, the analysis, interaction, and visualization of spatiotemporal gaze data remain a challenging factor in this emerging discipline. In this talk, I will briefly discuss the history and background of eye-tracking research. The main parts of my talk will cover (1) the use of visualization and visual analytics as a means of understanding gaze recordings and (2) examples of how eye tracking helps evaluate the quality of visualization techniques.
About the Speaker:
Daniel Weiskopf is a professor at the Visualization Research Center (VISUS) of the University of Stuttgart, Germany. He received the Dr. rer. nat. (PhD) degree in physics from the University of Tübingen, Germany, in 2001, and the Habilitation degree in computer science at the University of Stuttgart, Germany, in 2005. His research interests include all areas of visualization, visual analytics, GPU methods, perception-oriented computer graphics, and special and general relativity.
Interactive Visual Analysis to Aid Data-informed Analytical Problem Solving
March 15th, 2016, 4:30pm, Science Park 2, Room 054
The rapidly expanding application of experimental high-throughput and high-resolution methods in many scientific and industrial domains is creating enormous challenges for analysts who aim to make data-informed observations and decisions. Data sets are often highly heterogeneous and multi-scale, and carry significant levels of uncertainty. Visual analysis methods and seamlessly integrated computational analysis tools offer powerful solutions that empower analysts to cope with these challenges. This talk will walk you through such integrated methodologies using a number of examples from collaborative research projects.
About the Speaker:
Cagatay Turkay is a Lecturer (Asst. Prof.) in Applied Data Science at the giCentre in the Computer Science Department at City University London. He has a PhD in visualization from the University of Bergen and an MSc from Sabanci University, and served as a visiting research fellow in the Visual Computing group at Harvard University in 2013. His research mainly focuses on designing visualisations, interactions, and processes to enable the tight integration of interactive data analysis techniques with the exploratory capabilities of analysts. He works with experts in various domains, from biomedicine to transport to intelligence. He co-organises events and acts as paper chair, committee member, and reviewer for conferences and journals in visualisation and computer graphics. He currently leads and contributes to a number of EU, Innovate UK, and industrially funded research projects.
Robotics and Augmented Reality for Patient and Process Specific Imaging and Visualization
January 19th, 2016, 4:00pm, Science Park 3, Room 063
In this talk, I will first discuss the need for developing novel intra-operative personalized imaging solutions. I will present my views on the future of intra-operative imaging, and in particular on the important role that robotics, control, imaging, and visualization need to play. I will then focus on some of our latest results in patient- and process-specific multi-modal imaging and visualization, and introduce the novel concept of "desired view" control for intra-operative X-ray, SPECT, and ultrasound imaging. I will present: 1) the first intra-operative SPECT/CT imaging solution, its design and development as well as its experimental validation, 2) an MR-based desired view control for robotic ultrasound imaging, augmented by advanced servoing control guaranteeing successful high-quality image acquisition, and 3) the deployment of the desired view control concept for clinical applications in X-ray angiography. Finally, I will show some of our latest results in relevance-based augmented reality visualization and advanced UIs for computer-assisted interventions.
About the Speaker:
Nassir Navab is a Professor of Computer Science and founder and director of the Computer Aided Medical Procedures (CAMP) Laboratories at TU Munich and Johns Hopkins University. He is a Fellow and member of the board of directors of the MICCAI Society and a member of the steering committee of IEEE ISMAR. He is an associate editor for IEEE Transactions on Medical Imaging and a member of the editorial boards of Medical Image Analysis and the International Journal of Computer Vision. He received the IEEE ISMAR '10 Year Lasting Impact Award' in October 2010, the Siemens Inventor of the Year Award in 2001, and the SMIT Medical Innovation Award in 2010. He is the inventor of 45 US and over 50 European patents. He has published hundreds of papers and has co-authored over twenty papers awarded at the most prestigious international conferences. Nassir acted as Area Chair for ICCV 2015, Program Board member for IPCAI 2014-2016, and General Chair for IEEE ISMAR 2014. He was the General Chair for MICCAI 2015, held in Munich in October 2015. His current fields of interest include patient- and process-specific robotic imaging, medical augmented reality, and computer vision. (For more details please visit campar.in.tum.de and camp.lcsr.jhu.edu.)