Architectural constraints on recurrent network dynamics

Carina Curto
Penn State University
Mathematics

Recurrent neural networks are capable of producing a wide variety of nonlinear dynamics, including fixed-point attractors, limit cycles, quasiperiodic attractors, and chaos. How does the network architecture enable and constrain these dynamics? This question is key to understanding the role of connectomes in neural computation: while a connectome cannot fully determine the function of a network, it creates and constrains the space of possibilities. We explore these constraints in the context of threshold-linear networks, a family of toy models that are simple enough to be studied mathematically while exhibiting the full range of nonlinear dynamics. We study the bifurcation theory of these networks as a function of both synaptic weights and neuromodulation, and find that different architectures provide different types of constraints. Mathematically, this can be understood via the combinatorial geometry of certain hyperplane arrangements associated to the model.
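
To make the model family concrete: a threshold-linear network evolves by dx/dt = -x + [Wx + b]_+, where [y]_+ = max(0, y) is applied coordinatewise, W is the matrix of synaptic weights, and b is an external input. Below is a minimal simulation sketch in Python/NumPy, assuming simple forward-Euler integration; the three-neuron cyclic architecture and its weight values are illustrative assumptions in the spirit of this model family, not parameters taken from the talk.

    import numpy as np

    def simulate_tln(W, b, x0, dt=0.01, steps=20000):
        """Forward-Euler integration of a threshold-linear network:
            dx/dt = -x + [W x + b]_+,
        where [y]_+ = max(0, y) is applied elementwise."""
        x = np.asarray(x0, dtype=float)
        traj = np.empty((steps + 1, x.size))
        traj[0] = x
        for t in range(steps):
            x = x + dt * (-x + np.maximum(0.0, W @ x + b))
            traj[t + 1] = x
        return traj

    # Hypothetical example: a directed 3-cycle (1 -> 2 -> 3 -> 1) with
    # illustrative inhibitory weights (weaker inhibition along cycle
    # edges); cyclic architectures of this kind can support limit cycles.
    W = np.array([[ 0.0 , -1.5 , -0.75],
                  [-0.75,  0.0 , -1.5 ],
                  [-1.5 , -0.75,  0.0 ]])
    b = np.ones(3)                       # constant external drive
    traj = simulate_tln(W, b, x0=[0.2, 0.0, 0.0])
    print(traj[-5:])                     # late-time activity of the 3 neurons

Roughly speaking, each nonlinearity switches on or off across the hyperplane where (Wx + b)_i = 0; these hyperplanes partition the state space into chambers on which the dynamics are linear, which is the kind of arrangement referenced at the end of the abstract.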

