
Generative AI Tools | AI Competency Centre


Generative AI Tools

University-Provided AI Platforms

These tools have passed security reviews and include enterprise agreements for robust data protection. Use these for moderate and high-risk University data.

If you wish to purchase generative AI tools, please see our Buying Generative AI Licences page.

Quick Tool Selection Guide

The following comparison helps you choose the right tool based on your task and data requirements:

| Platform | Primary Use Case | Data Security Level | Suitable for Confidential Data? | Access Method | Cost to User |
| --- | --- | --- | --- | --- | --- |
| ChatGPT Edu | Advanced content generation, summarisation, and complex reasoning | High-Risk Approved | Yes | University SSO | Free upon request |
| Microsoft 365 Copilot | AI assistance integrated with Microsoft 365 apps, files, and emails | High-Risk Approved | Yes | University SSO | £26 per user per month |
| Microsoft Copilot Chat | Web-grounded chat, summarisation, drafting, and web-based information retrieval | High-Risk Approved | Yes (when signed in with SSO) | University SSO | Free |
| Google Gemini | AI assistance integrated with Google Workspace and as a standalone chat | Moderate-Risk Approved | Yes (when accessed via Workspace) | University SSO | Free through the University Google Workspace |
| Nebula One Platform | Secure, internal access to a range of different AI models with deployable AI chatbots | High-Risk Approved | Yes | University SSO | Free |

For all licences provided through the University, messages and prompts remain private and cannot be accessed by administrators in the AI Competency Centre.

Your Responsibilities with AI

Oxford embraces AI as a powerful tool for research, teaching, and administration while maintaining strict security standards. You can access leading AI platforms, but success depends on understanding your role in using them safely and effectively.

Your Key Responsibilities

You are responsible for:

  • Following University policies - All existing rules on academic integrity, data protection, and conduct apply to AI use
  • Verifying AI output - AI can produce inaccurate, biased, or completely false information ("hallucinations"). Always fact-check before using AI-generated content
  • Taking accountability - You own responsibility for any work that incorporates AI-generated material, including ensuring accuracy and proper citation

The Critical First Step: Know Your Data

Before using any AI tool, you must classify your data. This single decision determines which tools are safe to use and which could expose sensitive information.

The Golden Rule: The security level of your chosen AI tool must match or exceed the sensitivity of your data.
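The Golden Rule can be expressed as a simple check. The sketch below is purely illustrative (the function and dictionary names are our own); the platform security levels come from the comparison table above:

```python
# Approved security level of each University platform, per the comparison table.
TOOL_LEVEL = {
    "ChatGPT Edu": "high",
    "Microsoft 365 Copilot": "high",
    "Microsoft Copilot Chat": "high",
    "Google Gemini": "moderate",
    "Nebula One Platform": "high",
}

# Sensitivity ordering: low-risk < moderate-risk < high-risk.
RANK = {"low": 0, "moderate": 1, "high": 2}

def tool_is_safe(tool: str, data_level: str) -> bool:
    """Golden Rule: the tool's approved level must match or exceed the data's sensitivity."""
    return RANK[TOOL_LEVEL[tool]] >= RANK[data_level]
```

For example, Google Gemini (Moderate-Risk Approved) passes the check for moderate-risk data but fails it for high-risk data, so confidential material should go to one of the High-Risk Approved platforms instead.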

Data Classification Guide

Understanding these three categories is essential for safe AI use:

Low-Risk Data (Public Information)

Information already public or intended for public release.

Examples:

  • Published course descriptions
  • Public research abstracts
  • University press releases
  • Anonymised statistics
  • Content from Oxford's public website

Moderate-Risk Data (Internal Information)

Not intended for public release, but disclosure wouldn't cause significant harm.

Examples:

  • Draft research papers (non-confidential)
  • Internal department communications without personal data
  • Non-confidential operational documents
  • Meeting minutes without sensitive content

High-Risk Data (Confidential & Sensitive)

The University's most sensitive information. Unauthorised disclosure could cause severe financial, legal, or reputational damage.

Examples:

  • Any personal data - names, grades, personally identifiable information (PII)
  • Employee information
  • Unpublished research data, especially under Non-Disclosure Agreements (NDAs)
  • Grant proposals with sensitive budget details
  • Electronic Protected Health Information (ePHI)
  • Third-party data under restrictive agreements

Important: Data classification can change quickly. Interview transcripts become high-risk if they contain participant names. Meeting summaries become high-risk if they discuss specific student performance.

AI and Data Privacy at Oxford

Using AI Tools Safely with Confidential Data

Oxford University provides secure, enterprise-level access to AI tools that protect your data and comply with University policies. These tools have been thoroughly assessed and approved for use with confidential information.

Understanding Data Protection in AI

Your Data is Protected Under Enterprise Agreements

When you use Oxford's enterprise AI tools, your data is automatically excluded from training. This means your work remains confidential and won't be used to improve AI models that others could access. Your information is stored securely on company servers with the same protections as other University services like SharePoint and Outlook.

Training vs. Storage: What's the Difference?

Training on data refers to how AI models learn from large datasets to improve their capabilities. In consumer versions of AI tools, your conversations might be used for this training. However, Oxford's enterprise versions automatically opt out of training, ensuring your data stays private.

Storing data works exactly like other university digital services. Enterprise versions include additional security protections and legal agreements to ensure compliance with data protection laws.

Oxford-supported enterprise AI tools have been assessed through Third Party Security Assessment (TPSA) and Data Protection Impact Assessment (DPIA) processes and are approved for use with confidential University information.

Oxford's Approved AI Tools

The University currently offers three enterprise-grade AI tools that are safe for confidential information:

Microsoft Copilot

  • Access: Sign in with your Oxford email address through the normal Microsoft workspace
  • Verification: Look for the green shield symbol at the top of your screen
  • Availability: Free chatbot access for all staff and students
  • Enhanced features: Available with a Microsoft 365 Copilot licence for full app integration

ChatGPT Edu

  • Access: Must sign in specifically to ChatGPT Edu (not the consumer versions - Free, Plus, Teams, or Pro)
  • Verification: Look for the blue Oxford logo next to your name in settings
  • Availability: Currently requires a purchased or assigned licence; will be available to all University members soon
  • Important: Signing into ChatGPT with an Oxford email address is not the same as accessing the enterprise-protected workspace

Google Gemini

  • Access: Sign in to Google using your SSO credentials (abcd1234@ox.ac.uk)
  • Verification: Check that @ox.ac.uk appears in your account settings
  • Availability: Free version available to all; Pro features available to pilot programme participants

Quick Reference Guide

| Tool | Sign-In Method | Verification | Current Status |
| --- | --- | --- | --- |
| Microsoft Copilot | Oxford email address | Green shield symbol | Chatbot: all users; full integration: purchase required |
| ChatGPT Edu | Oxford SSO | Blue Oxford logo | Purchase only (coming to all soon) |
| Google Gemini | Oxford SSO | @ox.ac.uk in settings; "Gemini for Oxford University" shown in a new chat | Free version: all users; Pro: pilot participants only |

Oxford AI Tools: Complete Platform Guides

This guide details each AI platform available at Oxford, helping you choose the right tool for your specific needs and data sensitivity requirements.

ChatGPT Edu

ChatGPT Edu is Oxford's enterprise version of OpenAI's leading AI model, designed for sophisticated reasoning, content creation, and analysis.

What ChatGPT Edu Does Best

  • Advanced reasoning and analysis - Handles complex problem-solving tasks
  • Document drafting and editing - Creates professional content from scratch
  • Code writing and debugging - Supports multiple programming languages
  • Live web browsing - Incorporates current information into responses
  • Text summarisation - Processes lengthy, complex documents

Security Protection

Your data is contractually protected:

  • No model training - Your prompts and responses will never be used to train OpenAI's public models
  • Enterprise-grade encryption - AES-256 encryption at rest, TLS 1.2+ in transit
  • SOC 2 Type 2 compliance - Passes rigorous third-party security audits
  • Controlled access - Only authorised OpenAI personnel can access data for specific purposes like technical support

Access Options

  • Basic Tier (Starting October) - Available to all Oxford members with usage limits
  • Full Paid Version - Currently available for purchase with advanced features (existing licences valid through December)

How to Access Safely

  1. Log in through Oxford's official portal using your university SSO
  2. Verify protection - Look for "Oxford University" branding in the interface
  3. Warning: If you don't see Oxford branding, you're in the public version - do not use confidential data

Microsoft 365 Copilot and Microsoft Copilot (Chat)

Microsoft 365 Copilot: Microsoft 365 Copilot integrates directly into your M365 applications, using your organisational data to boost productivity.

Microsoft Copilot (Chat): Microsoft Copilot (Chat) is a free, limited AI tool offered by Microsoft. Unlike the paid Microsoft 365 Copilot, it cannot integrate into your M365 applications and offers limited functionality. To use Copilot across the Microsoft suite, you will first need to purchase a Microsoft 365 Copilot licence.

What Microsoft 365 Copilot Does Best

  • Email management - Summarises lengthy email threads
  • Document creation - Drafts documents using your meeting notes
  • Presentation building - Generates PowerPoint slides from Word documents
  • Data analysis - Analyses Excel spreadsheets and creates insights
  • Meeting summaries - Provides automatic Teams meeting recaps
  • Work data integration - "Chats with your work data" across all M365 apps

Security Protection

Your data stays within Oxford's M365 environment:

  • Microsoft 365 boundary - All data remains within your trusted M365 environment
  • Azure OpenAI processing - Uses secure Azure services, not public OpenAI
  • No model training - Your data is never used to train foundation models
  • Inherited security - Benefits from all M365 security features including GDPR compliance

Critical User Responsibility: Permissions Management

Important: Copilot can only access data you already have permission to see, but it can inadvertently facilitate oversharing. A request to "summarise all Project X documents" might pull from sensitive SharePoint sites you can access but shouldn't share widely. Maintain good "permissions hygiene."

Access Tiers

  • Default Protected Version - Available to all M365 users; cannot access work emails, chats, or documents
  • Full Enterprise Version - Complete integration with work data; identified by a green shield icon at the top of the chat

How to Access Safely

  1. Log in through your Microsoft 365 account
  2. Verify full protection - Look for the green shield icon with "Protected" text
  3. No shield = limited protection - Without the shield, you're not using the full enterprise version

Google Gemini

Google Gemini provides AI assistance both as a standalone tool and integrated within Google Workspace applications.

What Gemini Does Best

  • Educational content creation - Lesson plans, curriculum development
  • Grant proposal drafting - Academic and research funding applications
  • Document and email summarisation - Processes Google Workspace content
  • Research acceleration - Integrates with NotebookLM for knowledge sharing
  • Administrative tasks - Streamlines routine educational workflows

Security Protection (When Used Correctly)

Protection applies only through Oxford's Google Workspace:

  • No human review - Your data is not reviewed by Google staff
  • No model training - Your content is never used to train AI models
  • No sharing - Data is not shared with other users or institutions
  • Enterprise controls - Full administrative oversight and data governance

Critical Account Risk

High risk of user error: Many users are logged into both personal Gmail and University Google accounts simultaneously. Gemini accessed through a personal account provides no enterprise protection and must not be used with University data.

Access Options

  • Google Workspace (Secure) - Protected version through university accounts (signed in with SSO); safe for confidential data
  • Gemini Pro via Personal Account (Not Secure) - Available through personal Google accounts but not protected by enterprise agreements

How to Access Safely

  1. Verify University account - Ensure you're logged into your official Oxford Google Workspace account using your SSO email address, not your department email
  2. Check account profile - Accounts in the official Oxford workspace should say ‘Managed by ox.ac.uk’ under your email
  3. Warning: If prompted to sign up with personal Gmail, you're not in the protected environment

Free vs Pro features

Free (available to all via Oxford workspace)

  • Models - 2.5 Flash and limited 2.5 Pro
  • Deep research with 2.5 Flash
  • Image generation with Imagen
  • NotebookLM - upload 50 sources
  • 15GB of storage in Drive
  • Stricter usage limitations

Pro (only available to Gemini pilot participants)

  • Everything in Free, plus:
  • Models - 2.5 Flash and more access to 2.5 Pro
  • Deep research with 2.5 Pro
  • Gemini integration into Google apps (Docs, Sheets, Slides etc.)
  • Video generation with Veo 3
  • NotebookLM - upload 300 sources
  • 2TB of storage in Drive

Nebula One Platform

Nebula One is Oxford's self-hosted platform providing secure access to multiple AI models through a single interface.

What Nebula One Offers

  • Multiple AI models - Access various AI providers through one secure interface
  • Custom chatbot creation - Build and share specialised tools within Oxford
  • Experimentation environment - Safe space to test different AI capabilities
  • Internal collaboration - Share AI innovations across the University

Security Protection

SharePoint-equivalent security:

  • Oxford Azure environment - Hosted within the University's secure cloud infrastructure
  • University data governance - All interactions remain within Oxford's managed systems
  • Model-specific policies - Each AI model follows the University's agreements with respective providers

Important caveat: Models from some providers (such as Anthropic) may not yet be covered by full enterprise agreements. Check current status before using with highly sensitive data.

How to Access

Access through dedicated URL using University SSO credentials.

Developer and Research Tools

GitHub Copilot

What it does: Real-time code suggestions and completions directly in your code editor.

Access: Standard Pro tier available to all Oxford community members.

Security considerations:

  • Not covered by Oxford's central enterprise agreements
  • Mixed data policies - Business/Enterprise tiers don't use code for training; Pro tier policies may differ
  • User responsibility - Must manually opt out of code training in account settings

Recommendation: Excellent for development but exercise caution with highly sensitive intellectual property or code under restrictive licenses.

OpenAI API

What it does: Direct integration of OpenAI models into custom applications and research tools.

Access: The University maintains a credit pool; request access through the AI and ML Competency Centre.

Security features:

  • No training by default - API data not used to improve OpenAI models
  • 30-day retention - Data held briefly for abuse monitoring, then deleted
  • Zero Data Retention (ZDR) - Available for highly sensitive research; enables HIPAA compliance

Recommendation: Secure platform for custom AI solutions, but developers must ensure their applications implement proper security measures.

Learn more about how to access OpenAI API credits via our API Access page.
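For developers new to the API, a custom integration reduces to a key, an HTTPS request, and a JSON body. The sketch below builds such a request for OpenAI's standard chat completions endpoint; the model name and key shown are placeholder assumptions, and any real key would come from the University credit pool:

```python
import json

# OpenAI's standard chat completions endpoint.
API_URL = "https://api.openai.com/v1/chat/completions"

def build_request(api_key: str, prompt: str, model: str = "gpt-4o-mini"):
    """Assemble the headers and JSON body for a single chat completion call.

    The model name here is a placeholder; use whichever model your
    credit allocation covers.
    """
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    })
    return headers, body

# To actually send the request (requires a real key):
# import urllib.request
# headers, body = build_request("sk-...", "Summarise this abstract: ...")
# req = urllib.request.Request(API_URL, data=body.encode(), headers=headers)
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp)["choices"][0]["message"]["content"])
```

Remember that the no-training and 30-day-retention guarantees above apply to the API itself; your own application must still handle keys, logs, and stored responses securely.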

Third-party tools

These popular tools can be valuable but require careful consideration of data privacy risks. Please refer to Information Security guidelines when using third-party GenAI tools.

Research and Analysis Tools

Elicit - AI research assistant for literature reviews

  • Privacy: Uploaded papers remain private to individual users
  • Use case: Literature research with non-sensitive academic content

Perplexity - AI search engine with source citations

  • Privacy risk: Uses search data to improve models (opt-out available)
  • Use case: General research questions using public information only

Consensus - Scientific research-focused search engine

  • Use case: Evidence-based research queries

Content Creation Tools

Claude (Free/Pro) - Anthropic's conversational AI

  • Privacy: Claims not to train on user data, but some staff may review content
  • Use case: Content creation with public information only

Gamma - AI presentation creator

HeyGen - AI video generation with avatars

Audio and Transcription

Meeting participants must not use unapproved bots in meetings. Microsoft Teams has inbuilt transcription features that can be switched on if you have permission to record the meeting.

Eleven Labs - AI voice synthesis

Whisper-based apps - Audio transcription tools

Image Generation

Midjourney - AI image generation from text prompts