Testing the Frontier – Generative AI in Legal Education and beyond
AI has the potential to enhance students’ learning experience and to support creativity and problem-solving in law. However, there is a gap in the AI literacy and essential skills needed to harness this potential effectively and ethically whilst maintaining academic rigour and integrity. There is also a growing risk of inconsistency and uncertainty among academic staff about how to advise students.
Our project aims to encourage students to build their critical thinking when using GenAI tools and to view AI outputs through a critical lens. This reflective process will allow students to acknowledge the limitations of GenAI tools, identify possible inaccuracies and biases, and find ways to respond to these challenges and refine GenAI outputs. The impact of this process is two-fold. First, understanding and engaging with these limitations will disincentivise dishonest uses of GenAI in learning. It will foster a culture of trust rather than one of detection and policing of students and, more broadly, promote the adoption of the Russell Group principles on the use of AI in education at KCL. Second, the project will be a first step towards preparing students for a future workplace that will undoubtedly integrate AI tools.1 In addition, the project will equip staff to support students in using AI effectively and ethically and to become leaders in an increasingly AI-enabled world.
The project will draw on interdisciplinary and external perspectives. In addition to the project leads, it will benefit from the cooperation of the Dean of the Law School, Professor Dan Hunter; the Director of the QuantLaw Lab, Pierpaolo Vivo (School of Mathematics and Natural Sciences); members of staff from the Law School; and external experts in AI.
Stage 1: Interactive Workshop: facilitated by staff from the Law School and external consultants. Students will learn about key aspects of GenAI theory and be shown how the nature of the prompt can affect the output.
Staff and students will build knowledge and AI literacy in the effective, appropriate and ethical use of AI within an educational setting.
Stage 2: One-Day Focus-Group Workshop: 25 students will participate in an exercise in which we will provide an essay-style question and an AI-generated answer relating to financial/fintech law. Students will then be required to critically analyse the text, arguments and sources in order to:
Identify inaccuracies, biases or other limitations in the GenAI output text.
Find ways to respond to these limitations and challenges, refine the GenAI outputs, and reflect on the process they have carried out.
Experiment with prompting the GenAI tool and create a mind map of the refined output text.
Stage 2 focuses on students scrutinising the quality of the output so that they can ‘use generative AI tools effectively and appropriately in their learning experience’ while ensuring that ‘academic rigour and integrity is upheld’ (Russell Group AI Principles, 2023).
Stage 3: Interactive Staff-Student Workshop: a selection of the students (three or four) who participated in the Stage 2 workshop will present their learning experience, including the challenges they faced in scrutinising GenAI output and their recommendations for rectifying the identified limitations in these outputs. This will be followed by an open discussion and the brainstorming of guidelines for the effective and ethical use of GenAI tailored to the Law discipline. This co-creative experience will surface a more concrete meaning, and the real-world implications, of the Russell Group AI Principles in a law higher-education setting.
In terms of dissemination, we will produce a report that outlines the key stages of a reflective process for evaluating GenAI-generated output, together with guidelines on the use of AI in academic writing. The report can be shared with other universities and will encourage collaborative work across institutions to establish and maintain best practice (Russell Group AI Principles). We will evaluate our work via entry and exit surveys and feedback on the guidelines.