
AI Governance Leadership Programme

Law

Course overview

The AI Governance Leadership Programme (AIGLP) equips senior leaders and board‑level decision‑makers to oversee responsible, compliant and strategically aligned AI deployment. Combining core theory with intensive in‑person practical workshops, the programme develops both conceptual understanding and hands‑on governance capability. Participants learn to design real‑world accountability models, oversight structures and decision‑making processes across the AI lifecycle, supported by integrated legal, ethical and risk perspectives. Throughout the week, they build transferable tools, including an AI Governance Blueprint, procurement and vendor‑assurance materials, an AI procurement questionnaire, and risk‑management documentation aligned with ISO standards and the NIST AI Risk Management Framework, ready to put directly into practice within their organisations.

06 July 2026 - 10 July 2026

Places: Available

Delivery mode: In person

Application deadline: 25 June 2026


Apply now

Course features

Designed by Harry Borovick and Professor Dan Hunter, in collaboration with leading industry experts, the programme equips senior leaders and board‑level decision‑makers to govern AI responsibly, compliantly and strategically.

Through a blend of rigorous theory and hands‑on workshops, participants build the strategic, regulatory and operational capabilities needed to lead AI governance in complex organisations. They gain governance‑focused AI literacy - understanding how AI systems create legal, ethical, operational and reputational risks, and how those risks can be effectively governed, assured and defensibly managed.

The programme prepares learners to operate at the intersection of leadership accountability, regulation, risk management and organisational decision‑making, supporting responsible and compliant AI deployment across regulated and high‑impact sectors including finance, legal services, compliance, audit, manufacturing, technology and banking.

Learning Objectives

By the end of this course, participants will be able to:

  • Explain how AI governance frameworks operate and why organisations require them
  • Distinguish clearly between AI governance, regulation, and assurance in organisational settings
  • Interpret key global AI regulatory regimes and assess their impact on organisational practice
  • Identify and discharge board-level and senior leadership accountability for AI systems
  • Assess and manage legal, ethical, operational, and reputational risks arising from AI use
  • Apply structured AI risk management frameworks (including ISO standards and the NIST AI Risk Management Framework)
  • Recognise and mitigate AI safety, bias, and human rights risks
  • Design defensible AI governance structures across the AI lifecycle
  • Evaluate AI vendors and systems using procurement, assurance, and documentation tools
  • Implement practical policies, controls, and governance processes for AI deployment
  • Develop a customised AI Governance Blueprint ready for organisational use
 

Course Structure

During the week, the morning and early‑afternoon sessions focus on theory, while the remainder of each afternoon is more practical.

Session 1
09:30 – 12:30: Governance Foundations, Leadership Accountability & Organisational Readiness
13:30 – 17:00: Governance Foundations, Leadership Accountability & Organisational Readiness

Session 2
09:30 – 12:30: Why Governments Regulate AI and Data
13:30 – 17:00: Navigating Global AI Regulatory Frameworks

Session 3
09:30 – 12:30: Liability, Risk & Assurance
13:30 – 17:00: AI Risk Management Frameworks & Preparing for AI-Related Litigation

Session 4
09:30 – 12:30: Safety, Ethics & Human Rights
13:30 – 17:00: Ethical & Safety Risk Scenarios (Including Red-teaming)

Session 5
09:30 – 12:30: Operationalising Governance & Future Policy
13:30 – 17:00: Governance Tools: AI Procurement Questionnaire & Blueprint

Entry requirements

The programme is designed for professionals involved in AI procurement, oversight, governance or AI‑enabled decision‑making. While accessible to a wide audience, the ideal participant has 2–5+ years of experience in their field and is seeking to build advanced, AI‑literate governance capability as AI systems become embedded within their organisation.

It is particularly relevant to individuals working in regulated and high‑impact sectors such as finance, law, compliance, audit, manufacturing, technology and banking, as well as those moving into roles involving AI governance, risk management or senior decision‑making. 

Prior to the start of the course, you will receive full joining instructions and access to course materials and readings through our Virtual Learning Environment (KEATS).

Further information

How to apply

Click the “Apply now” button and complete the short admissions questionnaire. Please note that your place will only be confirmed once payment has been received.

This course is open to all applicants. We recommend reviewing the Course Overview and Learner Profile to ensure the course aligns with your goals. If you have any questions or would like to discuss the course’s suitability, please contact ExecEd-Law@kcl.ac.uk and a member of the Programme Team will be happy to assist you.

If you require further information about the course or Executive Education at The Dickson Poon School of Law, please check out our Frequently Asked Questions page or contact ExecEd-Law@kcl.ac.uk.

 

Academic leads

This course is led by Harry Borovick, with strategic oversight from Professor Dan Hunter, Executive Dean of the Dickson Poon School of Law.

Harry Borovick is the Programme Director of the AI Governance Leadership Programme and teaches AI Ethics and Practical AI Risk Management in Legal Practice at King’s College London. He serves as General Counsel and AI Governance Officer at Luminance, leading its global legal, compliance, privacy, and HR functions, and brings extensive experience with regulated technologies, including data privacy and AI, across sectors such as AdTech, gaming, and fintech. Harry is also a LexLab Research Fellow at UC Law San Francisco, a Lecturer at Queen Mary University of London and Barbri Global, and a member of the Chartered Institute of Arbitrators’ Technology Group, where he co‑authored guidelines on AI in arbitration. Most recently, he co‑authored the IAPP’s AIGP textbook chapter on AI in Legal Practice.

Professor Dan Hunter is the Executive Dean of the Dickson Poon School of Law at King’s College London and a leading voice at the intersection of law and technology. With decades of experience shaping legal education and innovation globally, he brings a future-focused perspective to AI and its transformative impact on the legal profession. He is an international expert in internet and intellectual property law, AI and law, and legal technology and innovation.

Duration:

1 week (5 days)

Full Price:

£4,000.00

Staff:

£3,400.00

Alumni:

£3,400.00

King's Students:

£3,400.00

Who will I be taught by?

Professor Dan Hunter

Executive Dean, The Dickson Poon School of Law

Harry Borovick

Programme Director of the AI Governance Leadership Programme

Amy Merrick

Senior Legal Counsel at Google DeepMind

Alexander Cooksley

Civil Servant in the Cabinet Office's Economic Secretariat

Charlie Lyons-Rothbart

Lead Commercial Counsel, EMEA at CoreWeave; senior technology lawyer specialising in AI; executive advisor

Elizabeth Bullock

Customer Success at Luminance

Professor Gabriela Commatteo

Visiting Professor, Global Digital Enforcement of AI, IP and Cybersecurity

Philip Young

General Counsel, QA Group

Roch Glowacki

Partner (AI, Tech and Media), Lawyer, Guest Lecturer & Speaker

Timothy Watkins

IP & Technology Lawyer, Senior Associate, RPC