King’s guidance on generative AI for teaching, assessment and feedback
Supporting the adoption and integration of generative AI.
Generative AI (GenAI) use is increasing throughout higher education and driving profound changes in teaching practice. As educators, we know our students are actively engaging with GenAI tools, with both positive and negative outcomes. They are keen to be taught about these tools and to learn more about their application, especially for their future careers. However, less is known about how we, as academic staff, are using the technology.
This guidance on AI-augmented feedback focuses on possible ways that AI can be used to support and streamline feedback workflows while preserving human oversight.
This guidance is not a template for enabling AI evaluation of student work, nor does it recommend that AI be used to replace human insight, expertise and judgement. Importantly, it makes clear that we should never upload student work to any AI system without explicit consent and forethought.
The guidance is the result of a cross-institutional collaboration between King’s, LSE and the University of Southampton. It takes a neutral stance on the use of GenAI, while encouraging staff to engage in continued professional development (CPD), and offers principles and guidelines to help educators responsibly integrate GenAI into assessment and feedback, recognising both its potential and limitations.
By establishing clear protocols, we aim to support safe experimentation that enhances learning outcomes and mitigates risks. GenAI is seen as a tool to support, not replace, human activity, judgement and oversight. Our ultimate goal is to harness its strengths while fostering trust-based relationships between staff and students.
An emerging picture suggests that while many staff are experimenting with GenAI tools, a significant minority are not, or are making an active choice not to engage with the technology (Walker et al., 2025). Those who are using GenAI tools are reaping benefits but may be unaware of the implications of their use for feedback and assessment. The initial focus for institutions was to develop guidance for students. Our next task is to develop staff guidelines to help overcome challenges such as ethical dilemmas, legal ramifications, social implications, and concerns about trust, data privacy and security. Without proper guidance, the integration of GenAI into educational practices could lead to unintended consequences that undermine the integrity and quality of the education provided.
GenAI technology is rapidly evolving. We acknowledge that experimental work is taking place to automate feedback processes beyond the self-grading and feedback tools used with, for example, multiple-choice quizzes. It is possible to use large language models (LLMs) to provide extensive feedback on written work and to inform automated grading, but this is not within the scope of this guidance, as it relies on the uploading of student work, which requires ethical clearance and student consent.
The focus of this guidance, therefore, is on AI-assisted feedback and marking. We use the term AI-assisted to describe the incorporation of GenAI tools such as Microsoft Copilot, ChatGPT, Claude and Gemini into our practice. These uses are limited by design to ensure sufficient human oversight and to avoid any breach of existing institutional policy.
AI-assisted feedback and marking requires human oversight. Used with that oversight, it can help academic staff to be more effective in a number of ways.
Whilst GenAI offers efficiency and scalability in feedback provision, its use raises critical concerns about trust in educational relationships (Liu, 2025). Some tools and applications promise immediacy, greater precision, consistency and time-saving benefits (AlBadarin et al., 2023). These affordances are leading to high rates of approval amongst staff and students (Barrett and Pack, 2023).
A recent Artificial Intelligence in Higher Education (AIinHE) survey of more than 6,000 students indicated that students valued GenAI feedback for providing immediate, accessible, understandable, positive and objective responses to many questions. On trustworthiness, however, they rated human feedback higher than GenAI feedback, noting that academics provide feedback that is relevant, contextualised, personalised and expert.
Survey results also showed that participants questioned the authenticity of AI-generated comments, perceiving them as impersonal and ethically inconsistent, especially when their own use of GenAI may be discouraged. Trust is also influenced by educators' competence with GenAI and their openness to guiding students through its complexities.
Inconsistent approaches across modules, departments and faculties risk creating the perception of mixed institutional positions. To preserve trust, Barrett and Pack (2023) advise educators to prioritise transparency. Emerging research may be worth sharing and discussing with students. For example, Nazaretsky et al. (2024) highlight that students often state that they prefer human feedback, particularly after the feedback provider is disclosed; however, those who mistakenly assume AI-generated feedback to be human feedback tend to rate it higher.
To develop trust between staff and students, we propose the following principles for AI-assisted marking and feedback:
The following scenarios apply these principles and offer insights into ways that academic staff can use AI transparently and in an assistive capacity, always ensuring that human oversight and judgement remain central.
Lecturer A is responsible for marking over 100 essays within a two-week window.
Conscious of the limitations this workload places on the depth of individual feedback, they adopt a hybrid approach using their university’s approved or supported LLM tool, Copilot.
Without ever uploading student work directly, Lecturer A composes an anonymised summary for each student, noting which marking criteria were met and the approximate percentage achieved for each. They input this summary alongside the official rubric into Copilot, prompting it to generate supportive, criterion-referenced feedback. This feedback is then carefully reviewed, adapted, and personalised before being uploaded to the marking platform.
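To make this workflow concrete, the sketch below shows how such an anonymised summary and rubric might be combined into a single prompt. It is a minimal, hypothetical illustration: the criterion names, descriptors and percentages are invented, and the assembled text would be pasted by the marker into Copilot or another approved tool.

```python
# Minimal sketch of Lecturer A's workflow: assemble an anonymised,
# criterion-referenced prompt for an approved LLM tool such as Copilot.
# All rubric text, criteria and percentages below are invented examples.

RUBRIC = {
    "Argument and analysis": "Develops a coherent, well-evidenced argument.",
    "Use of sources": "Engages critically with relevant literature.",
    "Structure and clarity": "Organises ideas logically with clear prose.",
}

def build_feedback_prompt(criterion_scores: dict[str, int]) -> str:
    """Combine the official rubric with an anonymised per-criterion
    summary (no student work or identifying details) into one prompt."""
    lines = [
        "You are assisting a university marker.",
        "Using the rubric below, draft supportive, criterion-referenced",
        "feedback for an essay. Do not invent content; comment only on",
        "the criteria and the approximate percentages supplied.",
        "",
        "Rubric:",
    ]
    for criterion, descriptor in RUBRIC.items():
        lines.append(f"- {criterion}: {descriptor}")
    lines.append("")
    lines.append("Anonymised marker summary (criterion: approx. % achieved):")
    for criterion, pct in criterion_scores.items():
        lines.append(f"- {criterion}: {pct}%")
    return "\n".join(lines)

# Example use: the marker records their own judgements, generates the
# prompt, pastes it into the approved tool, then reviews and
# personalises the output before release.
print(build_feedback_prompt({
    "Argument and analysis": 68,
    "Use of sources": 55,
    "Structure and clarity": 72,
}))
```

The essential point is that only the marker's own judgements and the published rubric leave their machine; the student's work never does.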
Students are made aware of this process in advance and shown a demonstration, reinforcing transparency and trust.
Lecturer B experiences recurring pain from repetitive strain injury, making traditional typing-intensive marking methods challenging.
To reduce physical strain, they have begun using the voice chat functionality of an AI tool while reviewing assignments. They verbally articulate their comments during the review, ensuring all input remains anonymised and free of identifying details. The AI transcribes the spoken reflections in real time and is then prompted to produce a concise summary, isolating one key strength and one or two developmental areas to support student progression.
Students are made aware of this process in advance and shown a demonstration, reinforcing transparency and trust.
Lecturer C prefers the immediacy and freedom of handwritten annotation when reviewing student work.
Historically, these handwritten notes were then typed into the feedback platform, a time-consuming duplication of effort. More recently, Lecturer C has started photographing their feedback notes and using an approved LLM to transcribe the content. The LLM is prompted to reframe the handwritten comments into a clear, structured feedback format organised into bullet points under three headings: strengths, areas for development and points for action. This preserves the authenticity of the lecturer’s voice while enhancing clarity and readability for students.
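As an illustration, the transcription step can be driven by a short, fixed instruction attached to each photograph. The wording below is hypothetical; any approved LLM that accepts images could be prompted along these lines.

```python
# Hypothetical sketch of the instruction Lecturer C might attach to a
# photograph of handwritten notes. The prompt text is illustrative only.

TRANSCRIBE_AND_REFRAME = """\
The attached image shows a marker's handwritten feedback notes.
1. Transcribe the notes faithfully; do not add new judgements.
2. Reframe them as bullet points under exactly three headings:
   Strengths, Areas for development, Points for action.
3. Keep the marker's own wording wherever possible.
Flag any words you cannot read as [illegible] rather than guessing.
"""

print(TRANSCRIBE_AND_REFRAME)
```

Asking the tool to flag illegible words, rather than guess, keeps the final check firmly with the lecturer.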
Students are made aware of this process in advance and shown a demonstration, reinforcing transparency and trust.
A new teaching assistant is enthusiastic about giving students a positive and encouraging experience of formative feedback. This backfires when students note a mismatch between the superlatives in the comments and the low indicative grades awarded, leading to appeals and disgruntlement among students who assumed the highly positive comments merited a higher grade.
In a subsequent formative assessment, the teaching assistant inputs the official marking criteria and rubric, together with their anonymised feedback comments, into a GenAI tool to validate the alignment between the grade awarded and the language of the feedback. This nudges the teaching assistant to moderate their language to match the marking criteria more closely.
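A sketch of how such a validation prompt might be assembled is given below. The grade bands and comments are invented for illustration; the point is that only the rubric and the teaching assistant's own anonymised comments are supplied, never the students' work.

```python
# Minimal sketch of the teaching assistant's alignment check: supply the
# rubric and anonymised feedback comments, and ask the tool to flag any
# mismatch between the language used and the grade awarded.
# Grade bands and comments below are invented examples.

def build_alignment_prompt(rubric: str, grade: str, comments: str) -> str:
    return (
        "You are checking consistency between feedback and grades.\n"
        f"Marking rubric and grade descriptors:\n{rubric}\n\n"
        f"Grade awarded: {grade}\n"
        f"Anonymised feedback comments:\n{comments}\n\n"
        "Does the tone and strength of the comments match the descriptor "
        "for the grade awarded? If not, suggest more closely aligned wording."
    )

rubric = (
    "70%+ : outstanding, publishable-quality argument\n"
    "60-69%: strong, well-evidenced work with minor gaps\n"
    "50-59%: competent but with significant weaknesses"
)

print(build_alignment_prompt(
    rubric,
    grade="52%",
    comments="A truly superb essay! Brilliant analysis throughout.",
))
```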
Students are made aware of this process in advance and shown a demonstration, reinforcing transparency and trust.
In each of these vignettes, GenAI has been used in a responsible and transparent manner that demonstrates the benefits of AI-assisted feedback to students. Taking this approach does not compromise trust, academic integrity or fairness. Augmenting the clarity, depth and consistency of feedback, and streamlining grading processes, will enhance the quality of the education provided to our students.
Guidance first published in July 2025.