Evaluating claims about Artificial Intelligence

Location

In-person

Date & Time

Friday 22 May 2026 13:30 - 15:00

Availability

This session will also run on Friday 5 June 2026 13:30 - 15:00. Book through the same link.

Audience exposure level - Open to all

AI coverage is full of claims about capabilities, risks, timelines, and implications, and it can be difficult to know how to evaluate them. This 90-minute workshop gives participants a practical framework for evaluating claims made about AI by both its promoters and its critics. Through hands-on activities using real-world examples, from academic studies and media coverage to social media takes, participants will practise identifying different types of claims and assessing the evidence required to substantiate them. The workshop covers both general critical-thinking skills and AI-specific considerations: which AI system was tested, which model version, which prompts, and when the research was conducted relative to rapidly shifting capabilities.

Objectives

  • Key signals for relevance: which model, which prompts, when published
  • Key warning signs in AI claims: overgeneralisation, mismatches between types of AI, lack of context
  • The academic publishing timeline mismatch and its consequences
  • Rhetorical patterns in AI discourse: hype, dismissal, status quo bias
  • Positionality of AI commentators: who speaks, from what perspective, with what interests
  • Reliable sources and signals for staying informed about AI


Event materials will be published on Canvas following the session.