We may end up widening the gap between academic training and workplace expectations.
On the other hand, allowing genAI into assessments is not without complications. If students are encouraged - or even allowed - to use it, we are implicitly assessing their ability to prompt, evaluate, and edit AI-generated content. That may well be a valuable skill. But we should only assess it if we also teach it. Otherwise, we risk rewarding prior digital literacy or socio-economic advantage, and penalising those who are still learning how to use the tools effectively or are concerned about the ethical implications of their use.
We are thus caught between two unsatisfactory options: banning genAI and missing an opportunity to develop essential skills, or allowing it and shifting the burden of AI literacy onto students who may not be adequately prepared. Neither route is ideal - and both underscore a deeper issue. Changing how we assess is difficult. It’s not just a question of redesigning an assignment. It’s about adjusting what we value, teach, and reward.
Finally, changing assessment is risky. Trialling a new method without knowing how students will respond can result in unusually high or low marks. While high grades may raise concerns about standards, low ones can prompt formal complaints. In a time-pressed, risk-averse institutional environment, this makes experimentation hard to justify, especially when curriculum changes require long lead times and layers of approval. Too often, we find ourselves trying to redesign the plane while it is in flight.
Still, change is necessary. The presence of genAI in students’ lives is not temporary. It is not a passing trend to be contained. It is a shift in the knowledge landscape that demands thoughtful, measured responses. That doesn’t mean rushing into fully AI-integrated assessments. But it does mean investing in staff development, creating space for trial and error, and, most importantly, engaging students in the process. They, too, are navigating this shift, and without explicit support, we risk leaving behind those who need the most help to learn how to think critically with, rather than despite, the tools at their fingertips.