Model Guided Multimodal Imaging and Visualization for Computer Assisted Interventions
Prof. Nassir NAVAB Technical University of Munich
Abstract: In this talk, I will focus on the design and development of advanced imaging and visualization solutions for computer assisted interventions. One major scientific challenge is the recovery and modeling of surgical procedures. The second is the analysis of large amounts of heterogeneous data and their intelligent real-time fusion. The third is the advanced visualization of such data during focused, high-intensity surgical procedures. Throughout this presentation, I use clinical applications and our recent results, obtained in our real-world laboratories within several clinics in Munich, to demonstrate the issues and to provide exemplary paths towards possible solutions. Such examples include real-time Ultrasound/CT registration, Freehand SPECT reconstruction, dynamic cone-beam reconstruction, the Camera-Augmented Mobile C-arm (CAMC), and HMD-based AR for intra-operative visualization and medical training.
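As a rough illustration of the intensity-based registration idea underlying examples such as Ultrasound/CT fusion (this is not the speaker's actual method; real multimodal registration typically needs modality-robust similarity measures such as mutual information, whereas this sketch assumes mono-modal images and a pure integer translation, with all function names chosen here for illustration):

```python
import numpy as np

def register_translation(fixed, moving, search=5):
    """Exhaustive search for the integer (dy, dx) shift that, applied to
    `moving`, minimises the sum of squared differences against `fixed`."""
    best, best_shift = np.inf, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            # Cyclic shift of the moving image by the candidate offset.
            shifted = np.roll(np.roll(moving, dy, axis=0), dx, axis=1)
            ssd = ((fixed - shifted) ** 2).sum()
            if ssd < best:
                best, best_shift = ssd, (dy, dx)
    return best_shift

# Recover a known shift: displace an image and register it back.
rng = np.random.RandomState(1)
fixed = rng.rand(16, 16)
moving = np.roll(fixed, (2, -3), axis=(0, 1))
print(register_translation(fixed, moving))  # (-2, 3)
```

In practice the search would run over rotations and scale as well, use sub-pixel interpolation, and swap SSD for a multimodal similarity measure, but the optimise-a-similarity-score loop is the common core.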
---
Prof. James M. REHG Georgia Institute of Technology
Abstract: In this talk I will describe current research efforts in Behavior Imaging, a new research field which encompasses the measurement, modeling, analysis, and visualization of social and communicative behaviors from multi-modal sensor data. Beginning in infancy, individuals acquire, through face-to-face interactions with caregivers and peers, the social and communicative skills which are vital for a healthy and productive life. However, children with developmental delays face great challenges in acquiring these skills, resulting in substantial lifetime risks. Autism, for example, affects 1 in 110 children in the U.S. and can lead to substantial impairments, resulting in a lifetime cost of care of $3.2M per person. The goal of research in Behavior Imaging is to develop computational methods that can support the fine-grained and large-scale measurement and analysis of social behaviors, with the potential to positively impact the diagnosis and treatment of developmental disorders such as autism. A key aspect is the integration of multiple sensing modalities, including vision, speech, and wearable sensors, to obtain a comprehensive, integrated portrait of expressed behavior. In this context, machine vision technology can play a crucial role as a noninvasive means of measuring eye, face, and body movements, which can support the development of new computational models of social interactions. This talk will provide an overview of several ongoing research activities, ranging from eye gaze analysis to the content-based retrieval of social games from unstructured video collections.
---
Stereo Vision System on Automobile for Collision Avoidance
Prof. Keiji SANEYOSHI Tokyo Institute of Technology
Abstract: Several kinds of sensors for collision avoidance in automobiles are on the market, such as radar, LIDAR, ultrasonic sensors, monocular vision, and stereo vision. To avoid collisions in a crowded traffic environment, an intelligent sensor is needed that can detect not only the distance to an obstacle but also the area the obstacle occupies, the traffic lane, and the positions and motions of other cars and pedestrians. Stereo vision is well suited to this task because of its wide field of view and its simultaneous detection of multiple objects, with size, position, and relative velocity for each object, as well as road-shape measurement and lane-mark detection. But stereo vision has several weak points: (1) an enormous amount of computation, (2) the problem of mismatching, and (3) sensitivity to weather conditions. We have overcome these problems with a number of techniques, such as a new hardware system for problem (1), precise rectification for (2), and proper exposure control for (3). Our first stereo vision system was presented at the Tokyo Motor Show in 1991; its performance was 10 fps at a size of 512 x 200 pixels and a depth of 100 pixels. In 1999, the first stereo vision system on an automobile for collision avoidance reached the market. Recently we developed a new stereo vision system whose performance is 160 fps at a size of 1312 x 688 pixels and a depth of 176 pixels. I will introduce several stereo vision systems developed to date and their applications, with a demonstration and movies.
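The computational burden the abstract mentions, point (1), comes from the stereo correspondence search itself. A minimal sketch of the idea, naive block matching on rectified images followed by triangulation (function names, window size, and search range are illustrative; the systems described use dedicated hardware and far larger images):

```python
import numpy as np

def block_match_disparity(left, right, block=3, max_disp=8):
    """Naive block-matching stereo on rectified images: for each left-image
    pixel, find the horizontal shift minimising the sum of absolute
    differences (SAD) against the right image."""
    h, w = left.shape
    half = block // 2
    disp = np.zeros((h, w), dtype=np.int32)
    for y in range(half, h - half):
        for x in range(half, w - half):
            ref = left[y - half:y + half + 1, x - half:x + half + 1]
            best, best_d = np.inf, 0
            # A point at column x in the left image appears at x - d
            # in the right image; search disparities that stay in bounds.
            for d in range(min(max_disp, x - half) + 1):
                cand = right[y - half:y + half + 1,
                             x - d - half:x - d + half + 1]
                sad = np.abs(ref - cand).sum()
                if sad < best:
                    best, best_d = sad, d
            disp[y, x] = best_d
    return disp

def depth_from_disparity(disp, focal_px, baseline_m):
    """Triangulation: Z = f * B / d, undefined (infinite) where d == 0."""
    with np.errstate(divide="ignore"):
        return np.where(disp > 0, focal_px * baseline_m / disp, np.inf)
```

Even this toy version performs a SAD over every disparity at every pixel, which is why real-time operation at 160 fps on 1312 x 688 images calls for the hardware acceleration, careful rectification (mismatches corrupt the SAD minimum), and exposure control the abstract describes.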
---