
In conversation with Anna Verges: collaboration, choice and feedback in practice

From assessment choice to AI guidance, we spoke to Anna Verges from King’s Academy about how collaborative approaches are enhancing teaching, feedback and student engagement across the Faculty of Life Sciences & Medicine.


Can you tell us a bit about your role at King’s?

I work in King’s Academy, the central Teaching Development Unit. As a team, we support academic staff to enhance teaching, assessment, and feedback practices. Although we’re a central team, each of us has close links with specific Faculties.

In my case, a large part of my role focuses on assessment, and I work particularly closely with the School of Bioscience Education within the Faculty of Life Sciences & Medicine. We support colleagues both at a practical level and at a more strategic level. That can range from responding to individual queries, for example, a colleague emailing to say their students aren’t engaging, through to shaping longer-term assessment approaches across a school.

What does that work involve in practice?

Over the past couple of years, our work has spanned a wide range of activities, including improving the timeliness of feedback, enhancing feedback quality, developing feedback literacy, increasing transparency in marking, and supporting innovation in assessment.

Rather than one single project, this has been an ongoing process of continuous improvement. One important strand has been ensuring that assessment criteria are clear and consistent across programmes. Where criteria were too generic for a given assessment, we’ve worked one-to-one with colleagues to refine them. This has led to the development of new rubrics for different assessment types, including podcasts and group work, making expectations clearer for both staff and students.

You mentioned feedback literacy — what does that mean?

Feedback is a shared responsibility and there really are two sides to it. Teaching staff must be clear about expectations and criteria, but students also need support to understand what feedback involves, how to interpret it, and how to use it beyond a single assessment.

We’ve been running initiatives focused on helping students develop this ability: understanding what “quality” looks like, recognising feedback from different sources, and learning how to carry feedback forward into future work. For some students, particularly those coming from more exam-based educational backgrounds, this requires a shift towards a more reflective and proactive approach. Supporting that transition is a key part of our work.

What projects have you worked on that specifically strengthen student voice?

One example is our work on assessment choice, which we’re now running for the second year. Optionality means giving students a choice in how they best demonstrate their learning. For example, instead of requiring everyone to submit an essay, students might be able to choose between an essay or a presentation.

This is fundamentally about inclusivity. It recognises that students have different strengths and backgrounds, and that learning outcomes can be demonstrated in more than one way. Student voice has been absolutely essential to shaping this project.

We surveyed students at the start to understand not just whether they liked the idea of choice, but why. We repeated this at the end, ran focus groups, and held co-creation sessions where students helped us refine the guidance and rubrics. Some students even presented to the next cohort, explaining what assessment choice involved and how they had experienced it.

What we learned was really important. Initially, many students chose the more familiar option, often the essay, because, as they told us very clearly, grades matter. They wanted to succeed. Unless they were given guidance on less familiar methods of assessment, examples, and skills training, they were unlikely to take risks.

So, we acted on that feedback. We introduced dedicated sessions on presentation skills and shared examples. As a result, the number of students choosing presentations increased significantly the following year. The feedback loop continues: students then told us they wanted equal guidance for essays as well, so next year both options will be supported in the same way.

How does technology, including AI, feed into this work?

AI has been another area of innovation driven directly by student feedback. Students told us they could see the opportunities AI offers, but they were also anxious about what counts as acceptable use and the risk of academic misconduct.

In response, we developed a detailed guide on acceptable AI use for a specific assessment, known as the third-year library project. This project is currently running, and early feedback suggests that while students really value the clarity, the guidance may actually be too detailed. They’re asking for something more concise, which is exactly the kind of insight we need at this stage.

How do you collect and use student feedback overall?

We use a mix of surveys, focus groups, and co-creation activities. Focus groups and workshops allow us to go deeper. Funding from King’s Academy has enabled us to fairly recognise students’ time through vouchers, particularly for focus group participation, although we’re very lucky that students are often willing to engage regardless of incentives!

Why is co-creating assessment criteria so important?

Rubrics are designed to make assessment transparent: they show what we’re assessing and what quality looks like at different grade levels. But they can easily become inaccessible if they rely too much on technical or disciplinary jargon.

When students help co-create rubrics, they develop a much clearer understanding of expectations. They’re better able to self-evaluate their work before submission and to understand feedback afterwards. This supports fairness, confidence, and what we call evaluative judgement: the ability to judge quality, an essential skill beyond university.

Getting students involved in the act of assessing, whether through peer feedback or rubric development, opens them up to the complexity of assessment and enables a more meaningful engagement with it, beyond purely the grades.

In this story

Anna Verges Bausili

Lecturer in Education (Assessment Advisor)