Invited Talks

May 28, 10:00–11:00
From SLAM to Spatial AI

IAPR Distinguished Speaker
Prof. Andrew Davison
Director of the Dyson Robotics Laboratory at Imperial College London

Biography

Andrew Davison is Professor of Robot Vision and Director of the Dyson Robotics Laboratory at Imperial College London. His long-term research focus is on SLAM (Simultaneous Localisation and Mapping) and its evolution towards general 'Spatial AI': computer vision algorithms which enable robots and other artificial devices to map, localise within, and ultimately understand and interact with the 3D spaces around them. With his research group and collaborators he has consistently developed and demonstrated breakthrough systems, including MonoSLAM, KinectFusion, SLAM++ and CodeSLAM, and recent prizes include Best Paper at ECCV 2016 and a Best Paper Honourable Mention at CVPR 2018. He has also had strong involvement in taking this technology into real applications, in particular through his work with Dyson on the design of the visual mapping system inside the Dyson 360 Eye robot vacuum cleaner and as co-founder of applied SLAM start-up SLAMcore. He was elected as Fellow of the Royal Academy of Engineering in 2017.

Abstract:
To enable the next generation of smart robots and devices which can truly interact with their environments, Simultaneous Localisation and Mapping (SLAM) will progressively develop into a general geometric and semantic 'Spatial AI' perception capability. I will give many examples from our work on gradually increasing visual SLAM capability over the years. However, much research must still be done to achieve true Spatial AI performance. A key issue is how estimation and machine learning components can be used and trained together as we continue to search for the best long-term scene representations to enable intelligent interaction. Further, to enable the performance and efficiency required by real products, computer vision algorithms must be developed together with the sensors and processors which form full systems, and I will cover research on vision algorithms for non-standard visual sensors such as event cameras as well as concepts for the longer-term future of coupled algorithms and computing architectures.

May 29, 11:00–12:00
AI Embedded in the Real World

IAPR Distinguished Speaker
Hideki Asoh
Deputy Director of the Artificial Intelligence Research Center at the National Institute of Advanced Industrial Science and Technology

Biography

Hideki Asoh received his B.Eng. in mathematical engineering and M.Eng. in information engineering from the University of Tokyo in 1981 and 1983, respectively. In April 1983, he joined the Electrotechnical Laboratory as a researcher. From 1993 to 1994 he was a visiting research scientist at the German National Research Center for Information Technology. He is currently Deputy Director of the Artificial Intelligence Research Center at the National Institute of Advanced Industrial Science and Technology (AIST). His research interests are in constructing intelligent systems that can learn through interactions with the real world, including humans.

Abstract:
Artificial intelligence technology has so far expanded rapidly mainly by utilizing the vast amounts of data and knowledge collected through Internet services such as search, e-commerce, and social networks. However, as the next phase of development, AI embedded in the real world, that is, AI based on data collected through various kinds of services and life activities in the real world, is becoming increasingly important in concert with the development of IoT device and robot technologies. The Artificial Intelligence Research Center (AIRC) at AIST was established in 2015 as one of the largest open-innovation hubs for AI technologies in Japan. It develops innovative AI technologies that solve difficult problems by collaborating with humans in the real world, and promotes their implementation in the real world. In this talk, I will introduce some topics related to machine vision and its applications from our activities, and also discuss future directions of AI research towards implementation in the real world.

May 30, 15:10–16:10
Machine Learning for Autonomous Driving: The Ingredients for a Safe and Scalable System

IAPR Distinguished Speaker
Prof. Amnon Shashua
President & CEO, Mobileye,
Senior Vice President at Intel Corp.,
Sachs Professor of Computer Science at the Hebrew University

Biography

Prof. Amnon Shashua holds the Sachs chair in computer science at the Hebrew University of Jerusalem. His field of expertise is computer vision and machine learning. Shashua has published over 120 papers in the field of machine learning and computational vision and holds over 45 patents. Shashua has founded three startups in the fields of computer vision and machine learning. In 1995 he founded CogniTens, which specializes in industrial metrology and is today a division of the Swedish corporation Hexagon. In 1999, he co-founded Mobileye with Ziv Aviram, which develops systems-on-chip and computer vision algorithms for driving assistance systems and is developing a platform for autonomous driving to be launched in 2021. Today, over 30 million cars rely on Mobileye technology to make their vehicles safer. In August 2014, Mobileye claimed the title of largest Israeli IPO ever, raising $1B at a market cap of $5.3B. In August 2017, Mobileye became an Intel company in the largest Israeli acquisition deal ever, for a total of $15.3B. Today, Prof. Shashua is the President & CEO of Mobileye and a Senior Vice President at Intel Corporation. In 2010 he co-founded OrCam, which harnesses computer vision and artificial intelligence to assist the visually impaired and blind. In 2019 Shashua was recognized as the Electronic Imaging (EI) Scientist of the Year by the Society for Imaging Science and Technology (IS&T) for his pivotal contributions to computer vision and machine learning, and for advancing autonomous driving and wearable assistive devices for the blind and visually impaired.

Abstract:
Autonomous driving is being developed around the globe by dozens of tech companies, the majority of which rely on the primacy of Lidar. I will present the work being done at Mobileye, an Intel company, which addresses these issues in a novel and unique way. I will explain why cameras should be the prime sensing modality, why redundancy is crucial and how to achieve it, the need for a formal safety model living outside of statistical reasoning, and the need for scalability and how to achieve it.

https://www.youtube.com/watch?v=URIrWETDO8U