Language, Action, and Perception

Software Project, Winter Semester 2020-2021

As intelligent agents become more integrated with everyday life, interacting with them will not be limited to speech and language. Instead, interactions will be inherently multimodal, drawing on spoken and typed language; head, hand, and facial gestures from image and video capture; and contextualized awareness of objects and actions in the local environment.

This course will focus on understanding and modelling situated meaning by developing a semantics for multimodal communication. Students will design and carry out their own projects to generate representations of common ground between human and computer, with the goal of presenting a working system for a particular aspect of multimodal communication.

As enrollment in this course may be limited, please send me an email expressing your interest as soon as possible, and no later than October 19.