Continual learning is a machine learning setting in which data from different tasks are presented sequentially to the learner. A key challenge in continual learning is the phenomenon of catastrophic forgetting, where adapting a model to a new task often degrades its performance on previous tasks. Despite recent advances in the theoretical understanding of continual learning, catastrophic forgetting is not fully understood even in simple models. Here, we study fitting an overparameterized linear model to a sequence of tasks with different input distributions. We analyze how much the model forgets the true labels of earlier tasks after training on subsequent tasks, obtaining exact expressions and bounds, and revealing the importance of task similarity and task ordering. We discuss connections and differences between forgetting and classical convergence bounds for alternating projections and the Kaczmarz method.
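The setting described above can be made concrete in a small sketch. Assuming two tasks share one true linear labeling function but have different input distributions, fitting each task exactly with the minimum-norm update is precisely a projection onto that task's affine solution set, i.e., one step of block Kaczmarz / alternating projections; forgetting is then the residual error reintroduced on the first task. All dimensions, distributions, and variable names below are illustrative assumptions, not the paper's exact setup:

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 50, 10                      # overparameterized: d parameters, n < d samples per task
w_star = rng.standard_normal(d)    # shared true labeling function (assumption)

# Two tasks whose input distributions differ (assumed anisotropic Gaussians)
X1 = rng.standard_normal((n, d)) * np.linspace(1.0, 2.0, d)
X2 = rng.standard_normal((n, d)) * np.linspace(2.0, 1.0, d)
y1, y2 = X1 @ w_star, X2 @ w_star  # noiseless true labels

def fit_task(w, X, y):
    """Minimum-norm update: project w onto the solution set {v : X v = y}.
    This is one step of block Kaczmarz / alternating projections."""
    r = y - X @ w
    return w + X.T @ np.linalg.solve(X @ X.T, r)

w = np.zeros(d)
w = fit_task(w, X1, y1)                       # fit task 1 exactly
loss1_before = np.mean((X1 @ w - y1) ** 2)    # zero training loss on task 1
w = fit_task(w, X2, y2)                       # fit task 2 exactly
forgetting = np.mean((X1 @ w - y1) ** 2)      # error reintroduced on task 1
```

In this sketch, `forgetting` is strictly positive whenever the two tasks' row spaces differ, and shrinks as the tasks become more similar, matching the role of task similarity highlighted in the abstract.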
Workshop IV: Multi-Modal Imaging with Deep Learning and Modeling