Should we care about linguistics?

Ellie Pavlick
University of Pennsylvania

There are countless examples of how deep learning has shattered previous state-of-the-art results on language processing tasks, including machine translation, question answering, text classification, and parsing. The current optimism surrounding these new techniques has led us to set more ambitious goals: not just to perform text-processing tasks well, but to encode "meaning" in some fundamental, application-independent sense that can prove useful across tasks, architectures, and objective functions.

I argue that while the models we have at our disposal are new, the questions that arise as we attempt to build such task-independent representations are age-old. I will survey a variety of competing models of knowledge representation and inference that have been proposed in the fields of linguistics and cognitive science. I will then present experimental results, involving both human subjects and computational NLU systems, that illustrate weaknesses in our current models and highlight why our choices among these theoretical models matter in practice. Finally, I will offer a speculative discussion of why paying closer attention to the linguistic and cognitive assumptions we make as we develop new ML architectures can help us make better, faster progress.
