Learning interaction kernels in agent-based systems

Mauro Maggioni
Duke University
Mathematics and Computer Science

We consider a system of interacting agents: given only observed trajectories of the system, we are interested in estimating the interaction laws between the agents. We consider both the mean-field limit (i.e., the number of agents going to infinity) and the case of a finite number of agents with an increasing number of observations, and will mostly focus on the latter case. We show that, at least in the particular setting where the interaction is governed by an (unknown) function of pairwise distances, under a suitable coercivity condition that guarantees well-posedness of the problem of recovering the interaction kernel, statistically and computationally efficient estimators exist. In particular, we show that the high dimensionality of the state space of the system does not affect the learning rates; in fact, our estimators achieve the optimal learning rate for the interaction kernel, equal to that of a one-dimensional regression problem (the variable being the pairwise distance). We exhibit efficient algorithms for constructing our estimators for the interaction kernels, with statistical guarantees, and demonstrate them on various simple examples, including extensions to agent systems with different types of agents and to second-order systems. This is joint work with F. Lu, S. Tang and M. Zhong.
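The kind of estimator described above can be sketched in a few lines: observed velocities are regressed, by least squares, against a basis of functions of the pairwise distance. The following is an illustrative sketch only, not the authors' implementation; the first-order model, the forward-Euler simulator, the piecewise-constant basis, and all function names (`simulate`, `fit_kernel`) are assumptions made for the example.

```python
import numpy as np

# Illustrative sketch (not the authors' code): first-order model
#   dx_i/dt = (1/N) * sum_j phi(|x_j - x_i|) (x_j - x_i),
# with the unknown kernel phi estimated by least squares over a
# piecewise-constant basis in the pairwise-distance variable.

def simulate(phi, X0, dt, steps):
    """Forward-Euler trajectories of the first-order system."""
    X = X0.copy()
    traj = [X0.copy()]
    N = X0.shape[0]
    for _ in range(steps):
        diff = X[None, :, :] - X[:, None, :]      # diff[i, j] = x_j - x_i
        r = np.linalg.norm(diff, axis=2)          # pairwise distances
        W = phi(r)
        np.fill_diagonal(W, 0.0)                  # no self-interaction
        X = X + dt * (W[:, :, None] * diff).sum(axis=1) / N
        traj.append(X.copy())
    return np.array(traj)                         # shape (steps+1, N, d)

def fit_kernel(traj, dt, n_bins=20):
    """Least-squares estimate of phi on a uniform grid of distances."""
    T, N, d = traj.shape
    vel = (traj[1:] - traj[:-1]) / dt             # finite-difference velocities
    r_max = max(np.linalg.norm(X[None] - X[:, None], axis=2).max()
                for X in traj[:-1])
    edges = np.linspace(0.0, r_max + 1e-12, n_bins + 1)
    A, b = [], []
    for t in range(T - 1):
        X = traj[t]
        diff = X[None, :, :] - X[:, None, :]
        r = np.linalg.norm(diff, axis=2)
        bins = np.clip(np.digitize(r, edges) - 1, 0, n_bins - 1)
        for i in range(N):
            # each observed velocity gives d linear equations in the bin values
            row = np.zeros((d, n_bins))
            for j in range(N):
                if j != i:
                    row[:, bins[i, j]] += diff[i, j] / N
            A.append(row)
            b.append(vel[t, i])
    A, b = np.concatenate(A), np.concatenate(b)
    coef, *_ = np.linalg.lstsq(A, b, rcond=None)  # 1D regression in distance
    centers = 0.5 * (edges[:-1] + edges[1:])

    def phi_hat(r):
        idx = np.clip(np.digitize(r, edges) - 1, 0, n_bins - 1)
        return coef[idx]

    return centers, coef, phi_hat
```

Note that the regression is one-dimensional regardless of the dimension d of each agent's state, which is the point of the learning-rate result: the unknown is a function of the scalar pairwise distance only.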

Back to Workshop III: Geometry of Big Data