
How do Language Models like ChatGPT Process Complex Words?

Location

Seminar Room 3, Wolfson College, Linton Road, OX2 6UD & Online

Date & Time

Tuesday 21 Feb 2023 14:30 - 16:00

Availability

Open to all. There will be cake and coffee/tea available for in-person attendees.

Valentin Hofmann is a final-year DPhil student at the University of Oxford and a research assistant at LMU Munich. His work broadly focuses on the intersection of natural language processing, linguistics, and computational social science, with specific interests in tokenization, socially and temporally aware language models, and graph-based methods. He has previously spent time as a research intern at DeepMind and as a visiting scholar at Stanford University.

Valentin Hofmann
Language models (LMs) like ChatGPT have achieved unprecedented levels of performance in natural language processing. One common characteristic of these models is that they segment text into a sequence of tokens from a fixed-size vocabulary, a step commonly referred to as tokenization. In this talk, I will take a closer look at how the linguistic properties of tokenization affect how LMs process complex words (e.g., "superbizarre"). I will first give an overview of different forms of complex word processing in humans and AI systems. I will then present recent computational studies showing that the tokenization of LMs can lead to linguistically invalid segmentations (e.g., "superb-iza-rre") that severely affect how LMs interpret complex words. Finally, I will discuss potential solutions to this problem.
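To make the "superbizarre" example concrete, here is a minimal sketch of greedy longest-match (WordPiece-style) subword tokenization, the scheme used by models such as BERT. The toy vocabulary below is a hypothetical assumption chosen for illustration; real LM vocabularies are learned from corpus statistics, but the greedy matching behaviour that produces morphologically invalid splits is the same.

```python
def wordpiece_tokenize(word, vocab):
    """Greedily match the longest vocabulary piece at each position
    (WordPiece-style; '##' marks word-internal continuation pieces)."""
    tokens = []
    start = 0
    while start < len(word):
        end = len(word)
        piece = None
        while end > start:
            candidate = word[start:end]
            if start > 0:
                candidate = "##" + candidate  # continuation marker
            if candidate in vocab:
                piece = candidate
                break
            end -= 1
        if piece is None:
            return ["[UNK]"]  # no piece matches: unknown token
        tokens.append(piece)
        start = end
    return tokens

# Toy vocabulary (an illustrative assumption): it contains the
# morphologically valid pieces "super" and "##bizarre", but also the
# distractor "superb".
vocab = {"super", "superb", "##bizarre", "##iza", "##rre"}

print(wordpiece_tokenize("superbizarre", vocab))
# Greedy matching grabs "superb" first, yielding the linguistically
# invalid split ['superb', '##iza', '##rre'] even though the valid
# segmentation ['super', '##bizarre'] is available in the vocabulary.
```

The sketch shows why the problem arises: greedy longest-match optimises for piece length, not morphological structure, so the prefix "super-" is never recovered and the model sees fragments unrelated to the word's meaning.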


A Microsoft Teams link to join the talk online is available here.