
11 November 2019

Guiding the evolution of intelligence: government policy for the beneficial use of AI

Dylan Feldner-Busztin and Michael Miller

DYLAN FELDNER-BUSZTIN AND MICHAEL MILLER: AI is pushing the boundaries of what we thought possible, but we need better regulation to ensure its safe and trustworthy development.


This piece is part of a blog series from the student finalists of Policy Idol 2019. Through the series, we're sharing students' policy ideas for changing the world, which they pitched at the competition earlier this year. 

Find out more about Policy Idol

Read about this year's final

Artificial Intelligence (AI) represents the greatest, but also the most dangerous, opportunity in human history. The mission of Google DeepMind is "solving intelligence and then using that to solve everything else." Once we have "solved intelligence", every other problem we face as individuals, as governments and as a species becomes substantially easier. This lends AI a certain economic and scientific inevitability. As Andrew Ng puts it, "AI is the new electricity", and like electricity, no company or country can hope to compete without it. Halting the advance would be tantamount to a willing continuation of the suffering of billions. But such rewards rarely come without considerable risk, and within this paradigm it is imperative that we address how AI is controlled.

Many researchers, lawmakers, scientists and ethicists have recognised the insidious danger that the unregulated proliferation of artificially intelligent systems poses to the modern world. To date, little has been done to address this directly in UK policy. Current policymaking in this area centres on the AI Sector Deal, an initiative designed to promote AI development across academic institutions and technology companies, mostly through the allocation of £1 billion for research and organisational restructuring, which is expected to return billions of pounds to the UK economy. However, we believe these reforms need to be accompanied by a governance framework that ensures the safe and trustworthy development of AI.

An AI race now complements the global arms race: whoever is at the forefront of developing AI could become the dominant superpower. Many nations are competing in this race and are likely to ignore the lessons of safety engineering, cutting corners, making rash judgements, botching experiments and tolerating moral ambiguity along the way. It is imperative that the UK and other countries lead by example and promote AI development globally in tandem with AI safety research.

AI development could, for example, learn from programmes such as the Apollo moon missions, where the safety and survival of the astronauts were at the forefront of the development process. Teams were constantly reminded that the brightest and best astronauts would effectively be strapped to an explosion and sent into space, to a destination where no one could physically help them. This meant that the principles of safety engineering were followed throughout the development of the technology.

In stark contrast, when it comes to AI we have witnessed full-scale security breaches such as the Cambridge Analytica scandal, which highlight the inability of governments to troubleshoot or anticipate the dangers the technology poses. Learning on a mistake-first basis may work for inventions such as motorised vehicles, but it is an inappropriate strategy for technologies that form the fabric of an information economy. It cannot be overemphasised how insidious it is to build the future of our societies on technologies with no safety engineering, value alignment, contingency foresight or legal culpability, technologies that fall into the hands of "digital gangsters", as the House of Commons Digital, Culture, Media and Sport Committee recently called Facebook and other technology corporates.

To combat these risks, we propose a body of Chartered Ethical Technologists to make safety engineering principles and social responsibility a central aspect of the AI Sector Deal. This body would bring together AI developers, safety researchers, social scientists, policymakers, legal experts and others into an evolving network to refine these ideas, implement regulation and serve the mutual interests of industry and the public. This is very much a meta-solution: the issue is too complex to solve with any single intervention. Such a body would therefore need to operate within a long-term framework and adapt to an ever-accelerating, technology-driven economy and an increasingly complex geopolitical situation.

We anticipate that many existing institutions and initiatives, such as the Ethical Machine, Google DeepMind, the Asilomar AI Principles and chartered legal bodies, would be receptive to this kind of umbrella organisation, which brings together earlier, disparate attempts at finding a solution. Similarly, there are schemes already leading the way in providing qualifications for ethical technologists that could help to establish a Chartered Body, such as the UKRI CDT in Safe and Trusted Artificial Intelligence PhD programme, hosted by King's College London and Imperial College London, and the 80,000 Hours, Effective Altruism and Future of Humanity Institute nexus at Oxford University. A body of Chartered Ethical Technologists would be instrumental in engendering professionalism and social responsibility, and in providing guidance for all working in AI.

Dylan Feldner-Busztin and Michael Miller are both studying an MSc in Neuroscience at King's College London.
