Machine Reasoning Workshops I & II: Mission-Focused Representation & Understanding of Complex Real-World Data

September 20 - 24, 2010


These two workshops will address two topics important for efficiently obtaining and utilizing the information inherent in complex real-world data, namely Representation and Understanding. Each is described in detail below. Workshop I on Data Representation will begin on the morning of Monday, September 20 and continue until lunch on Wednesday, September 22. Workshop II on Understanding of Data commences after lunch on Wednesday, September 22 and continues through Friday, September 24. Those wishing to participate in one or both workshops must apply and be accepted. There will be a combined reception on Tuesday evening.

Workshop I (Sept 20 – Sept 22): Representation of real-world information sources involves development of automated systems for supporting efficient storage, retrieval, conflation and deflation of heterogeneous data. The representations must be linked to the goal of mission-focused autonomy, and therefore must be computationally efficient. Representation products must be usable by machines, humans, and an integrated human-machine system. Compact representations are needed that support conventional sensor data, intelligence, and open-source information; these data may be highly correlated and therefore jointly compressible, but each source is often originally represented in a different alphabet. Additionally, portions of the data vector are often missing or incomplete. The uncertainty and imprecision of the representation must be quantified, while also exploiting all metadata, or side information.
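To make the missing-data point concrete, here is a minimal, hypothetical sketch (not a method prescribed by the workshop) of handling incomplete data vectors once heterogeneous sources have been mapped to a common numeric alphabet, using simple column-mean imputation over observed entries:

```python
import numpy as np

# Hypothetical feature vectors from heterogeneous sources mapped to a
# common numeric alphabet; NaN marks missing or incomplete entries.
data = np.array([
    [1.0, 0.5, np.nan],
    [0.9, np.nan, 2.1],
    [1.1, 0.6, 1.9],
])

# Column means computed over the observed entries only.
col_means = np.nanmean(data, axis=0)

# Simple mean imputation: fill each missing entry with its column mean.
filled = np.where(np.isnan(data), col_means, data)
print(filled[0, 2])  # mean of the observed values 2.1 and 1.9, i.e. 2.0
```

Mean imputation ignores correlations between sources; richer models (e.g., matrix completion) would exploit the joint compressibility noted above, at greater computational cost.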

Representations are also required for activities, events, and other sources of information that are typically described qualitatively. These representations may be associated with other data types, and may be supported by, or composed from, the representation of sensor data. The data and information must be constituted at multiple scales and fidelities, linked to specific inference goals and missions. Examples of techniques that may be employed include dimensionality reduction, exploiting the fact that data with high dimensionality may reside on a low-dimensional subspace or manifold, with the latent manifold shared across heterogeneous data types.
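The dimensionality-reduction idea can be illustrated with a small, self-contained sketch (synthetic data, linear subspace rather than a general manifold): high-dimensional observations that actually lie near a low-dimensional subspace are revealed by the singular-value spectrum, and a compact representation follows by projection:

```python
import numpy as np

rng = np.random.default_rng(0)

# 500 observations in 50 dimensions that actually lie near a
# 2-dimensional linear subspace (latent factors) plus small noise.
latent = rng.normal(size=(500, 2))   # hidden low-dimensional coordinates
mixing = rng.normal(size=(2, 50))    # embeds them in 50 dimensions
data = latent @ mixing + 0.01 * rng.normal(size=(500, 50))

# PCA via SVD of the centered data matrix.
centered = data - data.mean(axis=0)
_, s, vt = np.linalg.svd(centered, full_matrices=False)

# The spectrum exposes the intrinsic dimensionality: two large
# singular values, the rest near the noise floor.
energy = s**2 / np.sum(s**2)
intrinsic_dim = int(np.sum(energy > 1e-3))  # crude threshold
print(intrinsic_dim)                        # prints 2

# Compact representation: project onto the top-2 principal directions.
compressed = centered @ vt[:2].T            # shape (500, 2)
```

A shared latent manifold across heterogeneous data types, as described above, would generalize this linear picture to nonlinear embeddings learned jointly from multiple sources.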

Workshop II (Sept 22 – Sept 24): Understanding addresses how, for a given mission, data and information should be combined or fused to achieve mission-aware cognition of the environment, accounting for uncertainty, incompleteness, imprecision, and contradictory data from disparate sources. This includes methods for aligning in space and time heterogeneous data sets that are statistically related, but often employ distinct alphabets. These heterogeneous data sources must be fused to support mission-focused autonomy. The system must be adaptive, with the ability to support acquisition of new data or information, to improve both representation and understanding, with required fidelity or precision linked to the mission and inference task. In a multi-scale framework, one must quantify how uncertainties and imprecision at a given scale propagate, and how they impact data interpretation at other scales of analysis. The fidelity and scale of required understanding must incorporate context and mission knowledge. The ability to combine and interpret the data and information must be timely so that important activities and events are not missed; the definition of “important” and the appropriate scale/resolution are linked to the mission. Relative to the mission, methods that define the context and importance of particular data and information must be developed. For a given context, the system must be capable of providing multiple hypotheses/explanations of the data and information that are consistent with the mission and the context.
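As a minimal sketch of uncertainty-aware fusion (assuming independent Gaussian errors, which is one simple model among many the workshop might consider), sources reporting the same quantity can be combined by precision weighting, so that more confident sources dominate and the fused estimate is more certain than any single source:

```python
import numpy as np

def fuse(estimates, variances):
    """Precision-weighted fusion of independent Gaussian estimates.

    Sources with smaller variance (higher confidence) receive larger
    weight; a missing source is simply omitted from the input lists.
    """
    precisions = 1.0 / np.asarray(variances, dtype=float)
    fused_var = 1.0 / precisions.sum()
    fused_mean = fused_var * np.sum(precisions * np.asarray(estimates, dtype=float))
    return fused_mean, fused_var

# Three hypothetical sources report a target position with differing
# confidence (variance); the fused variance is smaller than any single one.
mean, var = fuse([10.0, 10.4, 9.8], [1.0, 0.25, 4.0])
print(mean, var)
```

Contradictory sources, non-Gaussian errors, and cross-scale propagation of uncertainty, as discussed above, require substantially richer machinery (e.g., multiple-hypothesis tracking or graphical models) than this two-line fusion rule.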

Organizing Committee

James Allen (University of Rochester)
Lawrence Carin (Duke University, Electrical and Computer Engineering)
Pedro Domingos (University of Washington, Computer Science & Engineering)
Leslie Greengard (New York University)
Carlos Guestrin (Carnegie Mellon University)
John Laird (University of Michigan, Computer Science and Engineering)
Josh Tenenbaum (Massachusetts Institute of Technology, Brain and Cognitive Sciences, CS, and AI)
Bob Tenney (BAE Systems)
Claire Tomlin (University of California, Berkeley)