Exploring AI in Academic Contexts | AI Competency Centre
Exploring AI in Academic Contexts
This is a section where our consultants and research software engineers share their thoughts and explorations in the space of AI in the context of academic practice. You will find more personal and philosophical takes on various topics as well as some more narrowly focused technical deep-dives.
The content in this section is not guidance or detailed advice but rather an invitation to explore ideas. Please continue the discussion with us in the channels of our Generative AI Special Interest Group Team.
Contents:
Exploring AI for Learning through Personas
by Kelly Webb-Davies
In this exploration, Kelly Webb-Davies, our AI consultant, takes a look at how we can think about the role of generative AI in learning through the lens of different personas.
What are the personas?
Many people are worried about the impact of generative AI on learning, both because it can be used as a shortcut and because it is not consistently reliable in the way we have come to expect from computers.
But in fact, our interactions with other people give us many useful models of behaviour that can help us frame how we use generative AI tools such as ChatGPT in our learning.
If you are thinking about using generative AI to help your learning, you can use these personas to help you think about the different ways in which you can integrate it into your learning process.
How to use the personas
You cannot just choose one of these personas and assume that it will explain all aspects of how generative AI tools work for you. You should always keep all of them in mind.
The “Oxford AI Personas” GPT was created for you to interact directly with the personas. It encourages a discussion about responsible use of generative AI through the use of generative AI.
Personas
Note: These personas are not claims about how AI actually works under the hood. There is much debate about the extent to which the Large Language Models that power generative AI tools are similar to or different from how people think. These personas are here to help you think about the different roles the tools powered by these models can play in your learning.
The STRANGER
Key lesson: Generative AI can provide interesting starting points, but isn't a citable source. Always verify information through legitimate academic sources before including it in your work.
This persona helps you think about how much you should trust the information you get from generative AI tools. They may or may not be perfectly reliable but you just don’t know ahead of time. Spend time with them but always check their work.
- You met someone at a pub who seems very knowledgeable about your field.
- They share impressive insights with complete confidence.
- But you don’t know their credentials or even where the information comes from.
- Would you cite (Pub Stranger, 2025) in your academic work?
The INTERN
Key lesson: Your AI intern can act as a helpful assistant, but you must develop and maintain your critical thinking skills and subject expertise.
The Intern persona is meant to help you think about what sort of assistance you can get from tools like ChatGPT. They can help you with many practical tasks. But you still need to make sure you give them clear instructions and check their work.
- AI is like an enthusiastic, always-available intern.
- They have many skills, like summarising, transcribing, scheduling, giving examples, and creating visuals.
- But they need clear, detailed, specific instructions to work effectively.
- Keep in mind, they make mistakes, so you need enough expertise to check and correct their work.
The TUTOR
Key lesson: Use AI to enhance your learning process, not to bypass it.
You can use tools like ChatGPT or Gemini to help you learn new things. But imagine you’re meeting the tutor for the first time. You need to be clear about what you want to learn and how you want to learn it.
- AI can help you learn through interactive dialogue and personalised explanations.
- It can break down complex ideas, ask you questions, and engage you with discussions, working at your pace.
- Ask questions like you would with a new human tutor – be specific about your subject, level, and learning goals.
The TRANSLATOR
Key lesson: AI can help polish your expression, but the core ideas and understanding must be your own.
For many people, learning a new subject is like learning a new language. Generative AI tools can help you translate between the language of your subject and the language you speak right now.
- Generative AI acts like a “semantic translator” – changing form while preserving meaning.
- It can simplify complex texts, define specialised terms, and summarise dense information.
- It can even do this across different modes like text, speech, and images.
- AI can help you to communicate academically, but the core ideas and arguments must be your own.
The PEOPLE PLEASER
Key lesson: Don’t let AI flatter you into complacency. Growth comes from challenge.
The People Pleaser persona is here to serve as a reminder that AI tools often aim to be agreeable and supportive. They adapt to your tone and assumptions, which can make them feel encouraging but also less critical than you need. If you only let AI reassure you, it may reinforce your existing biases and prevent growth. To get real value, you need to actively ask it for critique, counterarguments, and challenges.
- AI models are usually trained to be helpful and agreeable, so they often tell you what you want to hear, not what you need to hear.
- They’re encouraging and supportive, adapting to your tone and assumptions.
- But they can avoid critical feedback and may reinforce your biases.
- Make sure you ask AI for critique, counterarguments, and challenges instead of just letting it validate you.
Introducing the AI Inherent Risk Scale (AIIRS)
Mark A. Bassett, AI strategy, governance, and integrity in higher education, Associate Professor and Academic Lead for AI, Charles Sturt University, EDSAFE AI Catalyst Fellow
With Kelly Webb-Davies and Ella Wicks
Overview
The AI Inherent Risk Scale (AIIRS) provides a structured approach for classifying tasks that use generative artificial intelligence (GenAI) into LOW, MEDIUM, or HIGH inherent-risk bands.
Classification is determined via three criteria—epistemic dependence, verifiability, and consequences of error—that define the nature and significance of a task’s reliance on GenAI. These criteria consider the extent to which GenAI is expected to supply information, the degree to which the output can be independently verified, and the seriousness of any potential errors.
AIIRS provides a consistent and defensible basis for assessing the inherent risk associated with GenAI-assisted tasks.
Purpose
The purpose of AIIRS is not to determine whether GenAI should be used, but to establish the level of inherent risk associated with a task that may require active management. AIIRS focuses on the inherent characteristics of a task, not on individual behaviour or user intent. Once a task’s inherent risk is understood, any additional safeguards, mitigations, or design choices may be applied where warranted, in line with any applicable governance arrangements.
AIIRS does not replace or override institutional policy, regulatory obligations, assessment design decisions, or the exercise of human judgement.
Scope
AIIRS is a classification instrument only, which indicates the level of risk that should be actively managed for a task that uses GenAI. It does not determine whether GenAI use is permitted, prohibited, ethical, compliant, or appropriate in any given context. AIIRS is designed for task-bounded human use of GenAI and does not cover autonomous or agentic AI systems, which introduce additional risks beyond the scope of this classification instrument.
Alignment
Classification outcomes must be interpreted and acted upon within existing governance, policy, and decision-making frameworks.
The Australian Higher Education Standards Framework (HESF) requires providers to identify risks to academic quality and integrity and to manage those risks through informed judgement and established governance processes. AIIRS supports this requirement by providing a shared, task-focused method for classifying the inherent risk of GenAI use that can be applied by staff and students, while ensuring that decisions about safeguards, assessment design, and integrity responses remain within existing institutional governance, policy, and quality-assurance frameworks.
Classifications


Classification criteria
Epistemic dependence
Epistemic dependence captures whether a task requires the system’s representations of the world to be correct in order for the task outcome to be usable. Tasks with lower epistemic dependence rely only on user-provided material, without requiring the system’s representations of the world to be correct for the task outcome to be usable. Tasks with higher epistemic dependence require the system’s representations of the world to be correct for the task outcome to be usable.

Verifiability
Verifiability captures the basis on which the correctness of a GenAI system’s output can be verified for the task. Verifiability is assessed independently of consequences. A task may be high risk due to the requirement for expert verifiability, even where the immediate consequences of error are limited. Tasks with embedded verifiability enable quick, reliable verification by the user or the surrounding process, without requiring specific domain expertise. Tasks requiring expert verifiability depend on specialised expertise or external investigation that requires evaluative judgement.

Consequences of error
The consequences of error reflect the extent to which incorrect, misleading, or incomplete GenAI outputs affect decisions, records, or outcomes related to the task. Tasks with minimal consequences of error are those in which errors have minimal impact on understanding or outputs and do not affect decisions, records, or outcomes relating to people beyond the task. Tasks with significant consequences of error are those in which errors affect decisions about people, alter records relating to them, or compromise outputs that have consequences for individuals or groups beyond the task.

Classification model
AIIRS uses a max-dominant classification model that supports proportionate risk management by ensuring that any single high-risk feature of a task is not offset by lower-risk features elsewhere.
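The max-dominant rule can be sketched in a few lines of code. This is an illustrative sketch only, not part of AIIRS itself: it assumes each of the three criteria has been rated on the same LOW/MEDIUM/HIGH scale, and the function and type names are the author's own.

```python
from enum import IntEnum


class Risk(IntEnum):
    """Illustrative ordered risk bands (not an official AIIRS artefact)."""
    LOW = 1
    MEDIUM = 2
    HIGH = 3


def classify(epistemic_dependence: Risk, verifiability: Risk,
             consequences_of_error: Risk) -> Risk:
    # Max-dominant: the overall band is simply the highest rating among
    # the three criteria, so a single high-risk feature of a task is
    # never offset by lower-risk features elsewhere.
    return max(epistemic_dependence, verifiability, consequences_of_error)


# Example: a task with embedded verifiability and minimal consequences
# of error is still classified HIGH if it is highly epistemically
# dependent on the GenAI system.
print(classify(Risk.HIGH, Risk.LOW, Risk.LOW).name)  # HIGH
```

Using `max` over an ordered enumeration makes the "no offsetting" property explicit: averaging the three ratings would instead let two LOW scores dilute a HIGH one, which is exactly what the model is designed to prevent.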
The AIIRS decision flowchart.

When a task is classified as HIGH risk
Tasks classified as HIGH must not proceed in their current form. One or more of the following interventions are required:

When a task is classified as MEDIUM risk
Tasks classified as MEDIUM require proportionate controls to manage identified risk. The following controls and conditions apply:

When a task is classified as LOW risk
Tasks classified as LOW require routine care appropriate to the task and context. The following routine practices apply:

Licensing
AIIRS is released under a Creative Commons Attribution–NonCommercial–ShareAlike 4.0 International (BY-NC-SA 4.0) licence. Users may remix, transform, and build upon the work, provided they give appropriate attribution, do not use it for commercial purposes, and distribute any derivative works under the same CC BY-NC-SA 4.0 licence.
Download
Visit the official AIIRS website at http://aiirs.ai to download the slide deck.
AI Inherent Risk Scale (AIIRS) – © 2026 Mark A. Bassett, Kelly Webb-Davies and Ella Wicks Licensed under CC BY-NC-SA 4.0