Structure and Randomness in System Identification and Learning

January 15 - 18, 2013


The machine learning and system identification communities face similar problems in which a model must be constructed from limited or noisy observations. The challenge lies in solving an ill-posed inverse problem, where the number of available measurements is much smaller than the dimension of the model to be estimated. Typically, two ingredients go hand in hand to enable learning or recovery guarantees: (i) simple model structures and (ii) random observations.

In many applications, for example, model complexity can be expressed as the cardinality of a vector, the rank of a matrix or tensor, or, more generally, the number of simple units or “atoms” needed to build a model consistent with the observations. This general notion of complexity covers recently studied problems in compressed sensing, low-rank matrix and tensor recovery, matrix completion, recovery of low-rank structures corrupted by outliers, and sparse graphical models. Accounting for temporal dynamics requires notions of model structure such as the McMillan degree (the order of a minimal realization) of a linear system representing given input-output observations.
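To make the interplay of the two ingredients concrete, here is a minimal sketch (not drawn from the workshop itself) of compressed sensing: a sparse vector is recovered from far fewer random linear measurements than its ambient dimension by solving an l1-regularized least-squares problem with a simple iterative shrinkage (ISTA) loop. All problem sizes and parameter values below are illustrative choices.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1 (elementwise soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

rng = np.random.default_rng(0)
n, m, k = 200, 60, 5              # model dimension, measurements (m << n), sparsity

# Simple model structure: a k-sparse ground-truth vector.
x_true = np.zeros(n)
support = rng.choice(n, size=k, replace=False)
x_true[support] = rng.normal(size=k)

# Random observations: a Gaussian sensing matrix and noiseless measurements.
A = rng.normal(size=(m, n)) / np.sqrt(m)
y = A @ x_true

# ISTA for min_x 0.5 * ||A x - y||^2 + lam * ||x||_1
lam = 1e-2
L = np.linalg.norm(A, 2) ** 2     # Lipschitz constant of the smooth part's gradient
x = np.zeros(n)
for _ in range(3000):
    x = soft_threshold(x - (A.T @ (A @ x - y)) / L, lam / L)

rel_err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
print(f"relative recovery error: {rel_err:.3f}")
```

Even though the linear system is badly underdetermined (60 equations, 200 unknowns), exploiting sparsity via the l1 penalty recovers the true vector up to the small bias introduced by the regularization.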

Exploiting the structure in a system often leads to significant gains in computational efficiency, and the workshop will also examine the role of structured convex optimization.

This workshop will bring together researchers from machine learning and control theory, as well as high dimensional statistics and convex optimization, to explore this research area.

The workshop will also include a poster session; a request for posters will be sent to registered participants in advance of the workshop.

Organizing Committee

Maryam Fazel, Chair (University of Washington)
Mehran Mesbahi (University of Washington)
Nathan Srebro (University of Chicago)
Lieven Vandenberghe (University of California, Los Angeles (UCLA))