Learning the Structure of Human Languages

Chris Manning
Stanford University

While there is certainly debate about how much inbuilt linguistic bias ("Universal Grammar") human language learners possess, and about whether they receive useful feedback during learning, children nevertheless acquire language in a primarily unsupervised fashion. In contrast, most computational approaches to language processing are almost exclusively supervised, relying on hand-labeled corpora for training. This reflects the fact that, despite the promising rhetoric of machine learning, attempts at unsupervised grammar induction have been seen as largely unsuccessful, and supervised training data remains the practical route to high-performing systems. To the extent that this remains the state of the art, the Chomskyan position on the poverty of the stimulus available for learning is at least weakly supported.



In this talk I will present work that comes close to solving the problem of inducing tree structure or surface dependencies over language -- that is, providing the primary descriptive structures of modern syntax. While this work uses modern learning techniques, the primary innovation is not in learning methods but in finding appropriate representations over which learning can be done. Overly complex models are easily distracted by non-syntactic correlations (such as topical associations), while overly simple models are not rich enough to capture important first-order properties of language (such as directionality, adjacency, and valence). I will describe several syntactic representations designed to capture the basic character of natural language syntax as directly as possible. With these representations, high-quality parses can be learned from surprisingly little text, with no labeled examples and no language-specific biases.
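The abstract does not spell out the model, but to make the role of directionality, adjacency, and valence concrete, here is a minimal sketch in the spirit of a head-outward dependency model with valence (as in the Klein and Manning dependency induction work). The parameter tables p_stop and p_attach, their default values, and the toy tag sequence are illustrative placeholders, not details from the talk; in actual induction the parameters would be estimated by EM over all candidate trees rather than used to score a single given tree.

from collections import defaultdict

# Illustrative parameters (uniform placeholders, not learned values):
#   p_stop[(head_class, direction, adjacent)]       -> prob of generating no further dependent
#   p_attach[(head_class, direction, dep_class)]    -> prob of choosing that dependent class
p_stop = defaultdict(lambda: 0.5)
p_attach = defaultdict(lambda: 0.1)

def tree_prob(words, heads):
    """Probability of one dependency tree under a head-outward generative story.

    words : list of word classes (e.g. POS tags)
    heads : heads[i] is the index of word i's head, or -1 for the root
            (attachment of the sentence head to ROOT is omitted for brevity)
    """
    prob = 1.0
    for h, head_word in enumerate(words):
        for direction in ("left", "right"):
            # Dependents on this side, generated outward from the head (nearest first).
            deps = sorted(
                (d for d in range(len(words))
                 if heads[d] == h and ((d < h) if direction == "left" else (d > h))),
                key=lambda d: abs(d - h))
            adjacent = True  # valence: no dependent generated yet in this direction
            for d in deps:
                # Decide to continue (1 - stop), then choose which dependent to attach.
                prob *= (1 - p_stop[(head_word, direction, adjacent)])
                prob *= p_attach[(head_word, direction, words[d])]
                adjacent = False
            # Finally, decide to stop generating dependents in this direction.
            prob *= p_stop[(head_word, direction, adjacent)]
    return prob

# Toy usage: "DT NN VBD" with DT attached to NN, NN attached to VBD, VBD as root.
print(tree_prob(["DT", "NN", "VBD"], [1, 2, -1]))

The point of the representation is visible in the parameter keys: stopping decisions are conditioned on direction and on whether a dependent has already been generated (adjacency/valence), which is exactly the first-order structure that an overly simple model would miss and an overly rich model would bury under lexical correlations.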



(This talk covers work done with Dan Klein, now at UC Berkeley.)
