Events

Jun. 28, 2022
14:00–15:30

Building 74, room 516

Over the last few years, the field of natural language processing (NLP) has gained much traction in both the computer science research community and the tech industry, making significant headway in tasks such as machine translation and question answering. However, the large systems (known as "models") built to perform these tasks are not necessarily motivated by our knowledge of human language, and they exhibit behavior that casts doubt on claims that they "learn" or "understand" language — and even on whether such claims can be rigorously defined and tested at all.

In this talk, I will explore aspects of this ongoing debate by examining models' treatment of lexical innovation in the English language, with a focus on the phenomenon of blend formation: can cues from word form and usage context help a model infer that "innoventor" is a new word derived from "innovator" and "inventor"? Throughout, I will present and discuss NLP as a field and its intertwined relations with linguistics and its branches, identifying common ground and potential for cooperation.