If you are interested in enrolling in this course, please send me an email by April 11, 2022 with the following information:
- Area of study (BSc, MSc)
- Research interests and background in semantics
- Why you want to take this course
- Whether you would like a virtual option for the course
Meeting time: Tuesdays, 12:15-13:45, Seminarraum 1.12 (start date: April 19, 2022)
Compositionality is a fundamental principle of natural language semantics: “The meaning of a whole [expression] is a function of the meanings of the parts and of the way they are syntactically combined” (Partee, 1984). Compositional generalization is a basic and essential linguistic capability of human beings, which allows speakers to recombine known linguistic elements dynamically to create new linguistic structures. A growing body of research is investigating the ability of neural language models to generalize compositionally in ways akin to humans, with results suggesting that this task is challenging and complex for machines.
In this seminar, we will review the fundamentals of compositionality and why it is a hallmark of human linguistic competence and performance. We will look at compositionality through the lens of formal syntax and semantics, approaches to language that emphasise non-compositional structures, child language acquisition, and adult L2 learning. We will then investigate current research probing compositional generalization in machines, looking carefully at datasets and evaluation metrics that have been developed for the task; system architectures designed for the task; and extensions that ask whether augmenting machine learning with additional information, such as vision or grounding, helps machine performance. Throughout, we will draw connections between current research on machine compositional generalization and linguistic research on human language, and ask what the goals for machine language understanding should be.
Prerequisites: Background in formal syntax and semantics is recommended.