Certifiably Robust Spatial Perception for Robots and Autonomous Vehicles

November 13, 2020, Zoom

Luca Carlone

MIT, Aeronautics and Astronautics


Spatial perception is concerned with the estimation of a “world model” --describing the state of the robot and the environment-- using sensor data and prior knowledge. As such, it encompasses a broad set of robotics and computer vision problems, ranging from object detection and pose estimation to robot localization and mapping. Most perception algorithms require extensive, application-dependent parameter tuning and often fail in off-nominal conditions. A main cause of failure is the presence of outliers: incorrect measurements produced by the signal processing and machine learning techniques in charge of extracting features from the sensor data. In this talk, I present recent advances in the design of certifiably robust spatial perception algorithms that tolerate extreme amounts of outliers and afford performance guarantees. I present fast certifiable algorithms for object pose estimation in 3D point clouds and RGB images: our algorithms are “hard to break” (e.g., they remain robust when 99% of the measurements are outliers) and succeed in localizing objects where an average human would fail. Moreover, they come with a “contract” that guarantees their input-output performance. I discuss the foundations of certifiable perception and motivate how these foundations can lead to safer systems, while circumventing the intrinsic computational intractability of typical perception problems.
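To make the outlier-robustness idea concrete, the following is a minimal toy sketch, not the speaker's actual algorithm: truncated-least-squares alignment of two 3D point clouds, solved by gradually shrinking an inlier threshold, loosely in the spirit of the graduated non-convexity solvers used in this line of work. All function names and parameters here are illustrative assumptions.

```python
import numpy as np

def kabsch(src, dst, w):
    """Weighted least-squares rigid alignment (Kabsch/Horn):
    find R, t minimizing sum_i w_i * ||dst_i - (R @ src_i + t)||^2."""
    w = w / w.sum()
    cs = (w[:, None] * src).sum(axis=0)           # weighted centroids
    cd = (w[:, None] * dst).sum(axis=0)
    H = (w[:, None] * (src - cs)).T @ (dst - cd)  # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))        # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, cd - R @ cs

def robust_align(src, dst, eps=0.1, iters=40):
    """Truncated least squares via a graduated inlier threshold:
    start by trusting every correspondence, then repeatedly halve the
    threshold down to eps, refitting on the surviving points only."""
    R, t = kabsch(src, dst, np.ones(len(src)))
    r = np.linalg.norm(dst - (src @ R.T + t), axis=1)  # residuals
    thr = max(r.max(), eps)
    for _ in range(iters):
        w = (r < thr).astype(float)
        if w.sum() < 3:            # too few points to fit a pose
            break
        R, t = kabsch(src, dst, w)
        r = np.linalg.norm(dst - (src @ R.T + t), axis=1)
        if thr == eps and np.array_equal(w, (r < thr).astype(float)):
            break                  # converged at the target threshold
        thr = max(0.5 * thr, eps)
    return R, t, r < eps           # estimated pose + inlier mask
```

Unlike the certifiable solvers described in the abstract, this toy loop returns no optimality certificate; its role is only to illustrate how a truncated loss lets gross outliers be identified and discarded rather than averaged into the estimate.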

Speaker's Bio

Luca Carlone is the Leonardo Career Development Assistant Professor in the Department of Aeronautics and Astronautics at the Massachusetts Institute of Technology, and a Principal Investigator in the MIT Laboratory for Information & Decision Systems (LIDS). He received his PhD from the Polytechnic University of Turin in 2012. He joined LIDS as a postdoctoral associate (2015) and later as a Research Scientist (2016), after spending two years as a postdoctoral fellow at the Georgia Institute of Technology (2013-2015). His research interests include nonlinear estimation, numerical and distributed optimization, and probabilistic inference, applied to sensing, perception, and decision-making in single and multi-robot systems. His work includes seminal results on certifiably correct algorithms for localization and mapping, as well as approaches for visual-inertial navigation and distributed mapping. He is a recipient of the Best Paper Award in Robot Vision at ICRA’20, the 2017 Transactions on Robotics King-Sun Fu Memorial Best Paper Award, the Best Paper Award at WAFR’16, and the Best Student Paper Award at the 2018 Symposium on VLSI Circuits, and was a best paper finalist at RSS’15. He is also a recipient of the RSS Early Career Award (2020), the Google Daydream Research Award (2019), the Amazon Research Award (2020), and the MIT AeroAstro Vickie Kerrebrock Faculty Award (2020). At MIT, he teaches “Robotics: Science and Systems,” the introduction to robotics for MIT undergraduates, and he created the graduate-level course “Visual Navigation for Autonomous Vehicles,” which covers mathematical foundations and fast C++ implementations of spatial perception algorithms for drones and autonomous vehicles.