King’s has contributed to, and subscribes to, the Russell Group’s five principles on the use of generative AI tools in education:
- Universities will support students and staff to become AI-literate
- Staff should be equipped to support students to use generative AI tools effectively and appropriately in their learning experience
- Universities will adapt teaching and assessment to incorporate the ethical use of generative AI and support equal access
- Universities will ensure academic rigour and integrity is upheld
- Universities will work collaboratively to share best practice as the technology and its application in education evolves.
Each of these principles has implications for King's staff and for their students. Governance has therefore been framed with these statements in mind. The principles should be acknowledged as the framing and launch point for AI literacy work, supporting staff and students to be critically reflective in their interactions with generative AI, and should inform any programme or assessment modifications we seek to make. Each of the principles is clarified on the Russell Group website.
Governance and policy
At King’s we are committed to ensuring that both staff and students receive clear guidance on ethical dimensions and data security issues, and on how these relate to existing and evolving policy. In line with the Russell Group principles, the following clarifications to policy cover data privacy, potential bias, accountability for generated information and broader ethical issues. All use should be cognisant of existing King’s policy.
Ethical implications and use
We are witnessing incredibly rapid change and a potential revolution in ways of working, but the lack of transparency in the training of generative AI models, the training-data biases apparent in outputs, intellectual property ownership disputes and data privacy concerns combine to present a complex ethical landscape. On the one hand, we have an obligation to best support and prepare our students for what is ahead; at the same time, we need to recognise the many unresolved (even unresolvable) issues and their implications for academic integrity and skills development.
Existing ethical codes apply to the adoption and use of generative AI. AI tools generate responses based on human-created data and may therefore replicate societal biases and stereotypes embedded in the information they have been trained on. Whilst companies hosting generative AI tools tend to claim that outputs are original and therefore not plagiarised, this is an area of ongoing complexity and dispute, and thus represents ongoing risk. Furthermore, it is noted that some AI tool developers have outsourced reinforcement learning from human feedback (RLHF) to low-wage workers. Finally, the training of generative AI tools involves substantial carbon emissions and water consumption, suggesting potentially profound environmental impacts.
Our goal should be to follow and champion ethical and critical practices that promote authentic, accurate, safe and sustainable use of any generative AI tool, and that value user empowerment.
See Bentley et al.'s working paper for more detail on ethical implications and a broad framework for responsible use.
Even if an AI tool is not explicitly trained on user inputs, the information that staff and students input into the system carries potential risks to privacy and intellectual property. For this reason, it is important that King’s staff do not, without explicit permission, use unsupported tools such as ChatGPT to scrutinise student work. In addition, and despite the name of the leading company in this domain, there is great secrecy about how models are trained, the sources of the training data, and how customer data within products is used.
AI tools derive their data from various sources, some of which may be unreliable or incorrectly referenced. Moreover, unclear prompts or information may be misconstrued by the AI, leading to erroneous or out-of-date outputs. Users must therefore bear responsibility for verifying the accuracy of the information produced by these tools in different contexts.