Autonomy is relatively easy when the task and environment are well structured and predictable. Most situations, however, are not like that: tasks are vaguely specified, the environment is poorly modeled, and the world is unpredictable. To operate reliably in such situations, autonomous systems need to reason explicitly about uncertainty, understand their own capabilities and limitations, and adapt their behaviors and models based on experience. This talk will present several approaches to robust autonomy that use statistical reasoning, machine learning, and a hierarchy of models to make systems more robust in poorly structured, unpredictable scenarios. We will also present an approach that enables autonomous systems to explain why they chose certain actions, in an effort to increase human trust.
Reid Simmons is a Research Professor in the Robotics Institute and Director of the new Bachelor of Science in Artificial Intelligence program at Carnegie Mellon University. He recently returned from a 2.5-year stint at the National Science Foundation, where he instituted the Smart and Autonomous Systems program. His research interests include autonomous systems that operate robustly and reliably in uncertain, dynamic environments, human-robot social interaction, and collaborative multi-robot systems. His technical foci are planning under uncertainty, modeling and reasoning about human activity, robot architectures, and adaptive systems. Dr. Simmons has developed over a dozen autonomous robots and has published over 200 articles in the areas of AI and robotics.