
Getting started with AI for Professional Services Staff

What can professional services staff use generative AI for?

Generative AI offers significant opportunities to enhance professional practice at the University when used with care, professional judgement, and an up-to-date understanding of how it works. These tools respond to your prompts using complex models supported by a suite of background tools, such as search, canvas, or data analysis. Used with discipline, they can reduce the time spent on routine, repetitive workflows, freeing staff to focus on work related to strategy, innovation, and impact.

Staff can use generative AI for these and many other tasks:

  • Outlining plans and processes: Suggesting structures for project plans, meeting agendas, or process workflows. Do not consider the initial output a final version and always adapt it for context, as AI does not understand your department, stakeholders, or objectives as well as you do.
  • Drafting survey questions: Creating initial and follow-up survey questions for gathering feedback on services or events, which you can then refine to suit your specific aims.
  • Communication support: Writing or revising emails, reports, web copy, or other communication documents. This is particularly useful for staff who speak English as an additional language or for neurodivergent colleagues.
  • Differentiating materials: Adjusting existing content for different audiences, such as simplifying technical language for a non-specialist readership.
  • Reframing explanations: Brainstorming new ways to explain complex administrative processes or policies, which is especially helpful when trying to streamline existing workflows and identify bottlenecks.
  • Developing guidance: Creating SOPs or FAQs to support colleagues with new systems or procedures.
  • Bespoke support tools: With custom-built GPTs or tools like NotebookLM, you can develop contextually accurate AI assistants or role-play tools for training scenarios.

Professional staff should always review, revise, and contextualise AI-generated outputs.

Benefits

When used thoughtfully with critical oversight, generative AI can support:

  • Time savings: Lightening the load of drafting, summarising, or rewording repetitive materials to free up time for strategic work and building relationships with stakeholders.
  • Personalisation and Accessibility: Adapting communications for diverse audiences and offering multimodal formats (text, audio, visual) from the same base material.
  • Efficiency and Skill Development: Empowering staff to use AI responsibly improves workflows and helps develop valuable digital literacy skills. Asking ChatGPT to interview you about your job and suggest where it could help is a good way to begin discovering how it can reduce your administrative work.
  • Fostering Critical Thinking: When used as a dialogue partner, AI can be a tool for brainstorming solutions, challenging assumptions, or modelling different scenarios.

University-Supported AI Tools

These tools have enterprise agreements with the University of Oxford. When you are signed in with your SSO, they provide data security and privacy protections, making them suitable for work involving confidential information.

  • ChatGPT Edu: Developed by OpenAI, this is the best all-round AI tool and is perhaps best used as a co-ideator. It includes features such as CustomGPTs that can be configured for specific departmental tasks, a "deep research" function that can synthesise information from hundreds of sources including the web, and an advanced voice mode for verbal interaction.
  • Gemini: Developed by Google, Gemini is perhaps best used as a co-creator for producing outputs like infographics, audio overviews, webpages, and web-apps. It includes 'Gems' (the equivalent of CustomGPTs). A standout product within the Gemini workspace is NotebookLM.
    • NotebookLM: This tool acts as a dedicated research assistant that you can pre-load with your own documents, policies, YouTube videos, or meeting notes. It is highly reliable for source-based questions because it exclusively uses the information you provide, linking its responses directly to your uploaded sources for easy verification. It can generate both audio and video overviews of your sources to brief you on their themes.
  • Microsoft 365 Copilot: This tool functions as your personal assistant within the M365 environment. All University staff who log into M365 with their SSO automatically get access to a data-protected version of Copilot Chat. The licensed version, which is also data-protected, integrates directly into the Microsoft 365 apps and can query your own Outlook emails, Teams chats, and SharePoint files. This allows it to perform tasks like summarising long email threads, finding lost files based on context, and briefing you on upcoming meetings.

Other AI-powered applications

These tools can be useful for a range of tasks but do not have enterprise agreements with the University. In line with University information security policy, they must not be used for Confidential or Secret information. They should only be used for information classified as Public.

  • Claude: A general-purpose AI chatbot that performs well in long-form content generation, summarisation, coding, and for creating web-apps.
  • Elicit: An LLM-powered research assistant for finding and summarising academic literature from Semantic Scholar, useful for staff in research support or policy roles.
  • Consensus: Similar to Elicit, an AI search engine for finding information in peer-reviewed literature.
  • Gamma: A tool for creating presentations, documents, and visual explanations from simple text prompts.
  • HeyGen: A video-generation platform for creating AI-generated videos from scripts.
  • ElevenLabs: A voice synthesis tool that generates realistic speech from text.

10 Guidelines for Using Generative AI

1. AI Has Knowledge, But It Can Be Inattentive

It's a misconception that AI has no concept of knowledge or that it simply predicts the next word. It's more accurate to say that it chooses the most appropriate next token (a word or part of a word) using complex techniques. The model does have knowledge; however, like a human, it can be inattentive to the need to check its facts, which can lead to errors.
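
To make "token" concrete, here is a minimal sketch using OpenAI's open-source tiktoken library (an illustrative choice, not a University-supported tool) to show how a phrase is split into the units a model actually predicts:

    # Illustrative only: split a phrase into tokens with tiktoken.
    import tiktoken

    enc = tiktoken.get_encoding("cl100k_base")  # encoding used by many recent OpenAI models
    token_ids = enc.encode("Professional services staff")

    # Some tokens are whole words, others are word fragments.
    for tid in token_ids:
        print(repr(enc.decode([tid])))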

2. Its Power Comes From Tools

An AI model's ability to search the web, perform calculations, or recall past conversations isn't magic. It's achieved by calling on tools. Hidden instructions tell the model when to use a specific tool—like a search engine or a code interpreter—to find an answer or perform a task. Without tools, an AI is limited to its own internal, and sometimes outdated, knowledge.
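
As a minimal sketch of how tool calling works under the hood, the example below uses the OpenAI Python SDK; the web-search tool, its schema, and the model name are assumptions for illustration, not a description of any University deployment:

    # Illustrative sketch of tool calling; the tool and model are assumptions.
    from openai import OpenAI

    client = OpenAI()  # assumes an API key is set in the environment

    tools = [{
        "type": "function",
        "function": {
            "name": "search_web",  # hypothetical tool the model may choose to call
            "description": "Search the web for up-to-date information.",
            "parameters": {
                "type": "object",
                "properties": {"query": {"type": "string"}},
                "required": ["query"],
            },
        },
    }]

    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": "What changed in this year's expenses policy?"}],
        tools=tools,
    )

    # When the model decides it needs the tool, it returns a structured call
    # instead of an answer; the application runs the tool and sends the result
    # back so the model can compose its reply.
    print(response.choices[0].message.tool_calls)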

3. Develop Your Intuition for Evaluation

Always verify AI outputs. Rather than aiming for a fixed "80% completion" from the AI, it's more effective to develop an intuition for when it is more efficient to stop prompting and start editing the output yourself. Errors are a normal part of the process; they often occur because the AI was "inattentive," its context window was too long, or it failed to use the right tool correctly. Your professional expertise is crucial for spotting these issues and finalising the work. Our training sessions will help you develop this intuition.

4. It Can Be Both Creative and Comprehensive—With the Right Tools

The idea that AI is only for creative tasks and not comprehensive ones is becoming less true. An AI's ability to handle a comprehensive task where nothing can be missed depends almost entirely on whether it has been given a tool for the job. If a task requires a specific checklist or process, the AI will likely fail unless a tool is available to guide it.

5. Outputs Have Limited Reproducibility

The same prompt can produce different results for you and a colleague. This isn't entirely random: models sample their outputs, and tools like ChatGPT may also draw on your chat history and stored memories as context. Since your colleague has a different history, the AI is working with different information, leading to a different output.
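
The sampling half of this is easy to see via the API. Here is a minimal sketch (OpenAI Python SDK; the model name is an assumption) that runs the same prompt twice and usually gets two different answers:

    # Illustrative only: the same prompt, asked twice, often yields different
    # text because the model samples among likely next tokens.
    from openai import OpenAI

    client = OpenAI()

    def ask(prompt: str) -> str:
        response = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[{"role": "user", "content": prompt}],
            temperature=1.0,  # lower values reduce, but do not eliminate, variation
        )
        return response.choices[0].message.content

    prompt = "Suggest a name for a staff newsletter about process improvement."
    print(ask(prompt))
    print(ask(prompt))  # usually different from the first answer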

6. Errors Are Normal and Often Explainable

It's normal for AI models to make mistakes or throw up errors. These failures are often explainable. They can happen because the conversation became too long and information fell out of the model's "context window," it failed to call the right tool, or it retrieved the wrong snippet of information to answer your question. Simply refresh the page and try again.
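
To see why long conversations go wrong, here is a minimal sketch (tiktoken again; the 128,000-token limit is an illustrative assumption, as limits vary by model) that checks whether a chat history still fits in a context window:

    # Illustrative only: count tokens in a chat history against an assumed limit.
    import tiktoken

    CONTEXT_LIMIT = 128_000  # assumption: the real limit varies by model
    enc = tiktoken.get_encoding("cl100k_base")

    history = [
        "User: Please summarise the attached 40-page report.",
        "Assistant: Here is a summary of the key points...",
        # ...many more turns would accumulate here...
    ]

    used = sum(len(enc.encode(turn)) for turn in history)
    if used > CONTEXT_LIMIT:
        print("Older turns will fall out of the model's context window.")
    else:
        print(f"{used} tokens used of {CONTEXT_LIMIT}.")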

7. AI-Generated Text Cannot Be Reliably Detected (In Individuals)

AI detection tools may be useful for analysing thousands of documents in aggregate, but they are not reliable for judging a single piece of work. Furthermore, these detectors can be biased against text written by neurodivergent individuals or non-native English speakers.

8. Use the Right AI Product for the Right Job

Different AI products, like ChatGPT and Microsoft 365 Copilot, may use the same underlying model, but they are not the same. Each product is a unique interface that gives the model access to different tools and system prompts. Always choose the product most suited to the task at hand.

9. Test the Limits (But Don't Ask the AI About Itself)

AI capabilities are constantly changing. If you wonder whether a tool can do something, the best way to find out is to try it. However, it's not a good idea to ask the model about its own features or limitations. It is an unreliable manual for itself and will often provide incorrect information.

10. It Follows Hidden Instructions

In every chat, a hidden "system prompt" is working in the background. This prompt gives the AI its core instructions on how to behave, what personality to adopt, and, most importantly, which tools it can call and when to use them. This helps explain why it acts differently across various platforms and tasks.
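
A minimal sketch of the same mechanism via the OpenAI Python SDK (the wording and model name are illustrative assumptions): the "system" message below plays the role of the hidden instructions a product ships with.

    # Illustrative only: a "system" message shapes every reply before the
    # user types anything, just as a product's hidden system prompt does.
    from openai import OpenAI

    client = OpenAI()

    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": (
                "You are a concise, formal assistant for university administrators. "
                "Write in British English and flag anything that needs verification."
            )},  # the end user never sees this message
            {"role": "user", "content": "Summarise this policy change for staff."},
        ],
    )
    print(response.choices[0].message.content)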

Tips

Prompts

Email review prompt:

"Review this draft email to a University committee. Check for clarity, professional tone, and conciseness. Suggest improvements to ensure the key action points are unambiguous."

Project plan prompt:

"Create a detailed project plan for a six-month system implementation. Include clear criteria for each phase: discovery, configuration, user acceptance testing, and rollout. Ensure it outlines key deliverables and milestones. Ask me clarifying questions before you begin."

Pre-mortem prompt:

"Review this detailed project plan. Assume that it is six months in the future and the project has failed. Investigate why this project has failed, make a list of its failure points, and rewrite the project plan to account for these to prevent failures of this type and secure project success.”

Metaprompting

If you are unsure how to frame your task, you can ask the AI to write the prompt for you. This is useful for creating clear instructions for CustomGPTs or for setting up complex tasks.

Example: “I am creating a CustomGPT to act as a helpdesk assistant that answers staff queries about the University's expenses policy. Write a detailed prompt that defines its role, its professional and helpful tone, and how it should respond if it cannot find an answer in the uploaded policy documents. Write the instructions using markdown so I can copy and paste them.”

Dictation

Use the microphone button to speak your ideas rather than typing. Speaking allows for faster brainstorming, and AI can work effectively with unstructured language.

Training Journey

[Chart: suggested training journey for Professional Services Staff]

Policy & Guidance

The University of Oxford has established clear guidance for the use of generative AI. All professional services staff must adhere to these policies, particularly concerning information security and professional responsibility.

Key principles include:

  • Accountability: You are responsible for the work you produce, even when you use AI to help generate it. Always apply your own professional judgement and critically evaluate AI outputs for accuracy, bias, and appropriateness.
  • Information Security: You must handle University data according to its classification. Use University-approved, SSO-protected tools (Copilot, Gemini, ChatGPT Edu) for Confidential data. Never input Confidential or Secret University data into external AI tools that do not have a contract with the University.
  • Transparency: Be transparent about your use of AI with colleagues where appropriate. Do not represent AI-generated content as your own original work if it would be misleading to do so.

Please refer to the University's central guidance for comprehensive details.