
Exploring AI Foundations: Understanding Large Language Models through their history (Developments in AI since 2017)

Location

Online

Date & Time

Tuesday 09 Jun 2026 12:00 - 13:30

Audience exposure level - Open to all

The period from 2017 to the present is one of the most significant in the history of computing. This 90-minute deep dive follows the development of Large Language Models from the word embeddings and attention mechanisms of 2013–2017 through the transformer revolution, the scaling era, the emergence of GPT-3 and ChatGPT, and the current age of reasoning and agentic systems. The session provides the conceptual grounding needed to make sense of why LLMs work the way they do and what their capabilities actually mean.

Objectives

  • Why LLMs have the strengths and limitations they do
  • Key concepts behind LLMs: embeddings, attention, transformers, and scaling
  • Attention, transformers, and why they changed everything (2017–2019)
  • The scaling era: GPT-2, GPT-3, and emergent capabilities (2019–2022)
  • ChatGPT, reasoning models, and the multimodal turn (2022–2025)
  • The agentic turn: tool use, coding agents, and MCP (2025–2026)
  • How model architecture, training, and data have changed over this period
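For attendees who want a concrete preview of the attention mechanism listed above, here is a minimal pure-Python sketch of scaled dot-product attention, the core operation inside transformers. The `softmax` and `attention` helpers are illustrative names, not part of the session materials; real LLMs compute the same operation over large matrices on accelerators.

```python
import math

def softmax(xs):
    # Numerically stable softmax: turns raw scores into weights that sum to 1.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def attention(queries, keys, values):
    """Scaled dot-product attention, as introduced in the 2017 transformer paper.

    queries, keys, values: lists of equal-length vectors (lists of floats).
    Each output is a weighted average of the value vectors, where the weights
    reflect how well the query matches each key.
    """
    d = len(keys[0])  # key dimension, used to scale the dot products
    outputs = []
    for q in queries:
        # Similarity of this query to every key, scaled by sqrt(d).
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in keys]
        weights = softmax(scores)
        # Weighted sum of the value vectors.
        out = [sum(w * v[j] for w, v in zip(weights, values))
               for j in range(len(values[0]))]
        outputs.append(out)
    return outputs
```

For example, a query aligned with the first key pulls the output toward the first value vector; scaling by the square root of the key dimension keeps the softmax from saturating as vectors grow longer.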


Event materials will be published on Canvas following the session