Toward natural language semantics in learned representations

Samuel Bowman
New York University

Deep learning methods now represent the state of the art in most large-scale applied language understanding tasks, but there is little evidence that they have attained the kind of robust, task-independent language understanding ability that humans demonstrate. This talk presents experiments that are meant both to measure and to advance progress toward this goal, organized around the task of recognizing textual entailment.

