Over the last decade, the robotics community has developed highly efficient and
robust state estimation solutions to problems such as robot localization, people
tracking, and map building. With the availability of various techniques for
spatially consistent sensor integration, an important next goal is to enable
robots to reason about the many objects found in our everyday environments and
about spatial concepts such as rooms and hallways. An additional requirement
for successful operation in populated environments is the ability to interact
with people in a natural way.

In this talk I will present some recent work aimed at making progress toward
these goals. Our work is mainly based on machine learning and probabilistic
state estimation techniques. I will cover examples from areas such as 3D
mapping, object modeling and recognition, natural language direction
understanding, and navigation through crowded areas. These examples will also
include preliminary work using depth cameras, a new breed of vision systems
that provide per-pixel color and depth information.
Machine Reasoning Workshops I & II: Mission-Focused Representation & Understanding of Complex Real-World Data