IAPR Distinguished Lectures

Tentative Schedule
May 8: Daniel Cremers, "Dense & Direct Methods for 3D Reconstruction & Visual SLAM"
May 9: Jianbo Shi, "Connecting the Dots: Embodied Visual Perception from First-person Cameras"
May 10: Hiroshi Murase, "Image Recognition for Assistance in Intelligent Vehicles"

Dense & Direct Methods for 3D Reconstruction & Visual SLAM

IAPR Distinguished Speaker
Prof. Daniel Cremers
Departments of Informatics and Mathematics, Technical University of Munich, Germany

The reconstruction of the 3D world from images is among the central challenges in computer vision. In the 2000s, researchers pioneered algorithms that reconstruct camera motion and sparse feature points in real time. In my talk, I will introduce spatially dense methods for camera tracking and 3D reconstruction that do not require feature point estimation, that exploit all available input data, and that recover dense or semi-dense geometry rather than sparse point clouds. Applications include 3D photography, 3D television, and autonomous vehicles.

Connecting the Dots: Embodied Visual Perception from First-person Cameras

IAPR Distinguished Speaker
Prof. Jianbo Shi
School of Engineering and Applied Science, University of Pennsylvania, USA

A computer has a complete photographic memory: it creates massive but isolated sensory moments. Unlike such fragmented photographic memory, human memories are highly connected through episodes that allow us to relate past experiences and predict future actions. How can we computationally model a human-like episodic memory system that connects photographically accurate sensory moments? Our insight is that active interaction is the key link between episodes, because sensory moments are fundamentally centered on an active person-self. Our experiences are created by and shared through our social and physical interactions; that is, we connect episodes driven by similar actions and, in turn, recall these connected past episodes to take future actions. Therefore, connecting the dotted moments to create an episodic memory requires understanding the purposeful interaction between the human (person-self) and the world.

Photographs are only half of our world experience: they record what is out there. What is in our heads (our intention, attention, and physiological states during social and physical interactions) is missing from the memory recording. This requires creating an embodied memory link between our inner ‘selves’ and the external episodes, and a first-person camera is an ideal sensor to capture, model, and predict this embodied memory link because it encodes a complete visual and audio sensation of the camera wearer’s interaction with the world. We leverage purposeful actions measured by first-person cameras to reveal the internal states of the camera wearer, and use similar internal states to connect the wearer’s episodic sensations of the world.

Image Recognition for Assistance in Intelligent Vehicles

IAPR Distinguished Speaker
Prof. Hiroshi Murase
Graduate School of Information Science, Nagoya University, Japan