The AI Safety Summit and Security: Governance is Vital but Coherence will be Hard to Find

Strategic Insight 028/2023

Patrick Hinton

31 October 2023

Britain’s Artificial Intelligence (AI) Safety Summit is due to take place on 1–2 November 2023 at Bletchley Park. The Summit was announced in June by Prime Minister Rishi Sunak and is widely seen as an attempt by the United Kingdom to position itself as a technology hub and a vanguard of AI development.

The Summit will examine the threat from ‘frontier AI’: highly capable models with the potential to proliferate widely and be misused. AI is a catch-all term for a diffuse set of related technologies. Currently, AI is largely unregulated. Some countries have strategies, companies have policies and technology institutes publish guidelines, but there is little, if any, statutory control in place to truly hold parties accountable. There have even been calls for a pause on the development of new powerful models until their ramifications can be understood.

Misuse and Loss of Control

The recently published Summit agenda includes risks to safety from AI misuse, from unpredictable advances and from loss of control. Misuse includes activities such as malign actors using powerful AI models to help generate cyber weapons. In the past, creating malicious code that could circumvent the security systems of governments or large corporations took significant time and effort: state cyber capabilities are bankrolled to the tune of millions of pounds, with organisations made up of hundreds of specialists. Now, publicly accessible large language models such as ChatGPT have generated malicious code, with researchers able to circumvent built-in ethics controls and other security measures. These models could also help individuals without laboratory training develop novel bioweapons; they have even suggested which laboratories would be most amenable to fulfilling orders without due diligence. Misuse is therefore a real concern for the security establishment.

Models are trained on data, and the validity of that data is a key consideration. Models which are not robust will not act as expected when deployed in the real world, outside the laboratory environment in which they were created. The tactic of trying to negatively influence other parties’ models is known as adversarial AI. At present, the real-world threat of these strategies is limited, but as AI usage proliferates the chances of malign influence increase. Models can, however, be designed and trained to resist such attempts.

The second focus of the Summit is the loss of control of models, where outputs evade human oversight. Machine learning models have been able to circumvent Google’s reCAPTCHA system, which is designed to weed out bots (although the latest versions remain robust). A stark potential example from a US military simulation was an autonomous uncrewed air system ‘killing’ its own operator when it judged that the operator was preventing it from fulfilling its mission. Whilst it has been stressed that this was a fictional scenario, the threat is not that far-fetched given the expected future capabilities of AI. Other examples might include systems deliberately manipulating elections, giving intentionally incorrect medical diagnoses, or interfering with financial markets.

AI models have created fake news and deepfake videos, and have led to the arrests of innocent people due to faulty facial recognition. They have also acted as a catalyst for criminal activity in the physical world: for example, a man recently jailed for attempting to kill Queen Elizabeth II with a crossbow was ostensibly encouraged by a chatbot. AI has the potential to exacerbate pre-existing threats to security at a time when resources to deal with them are scarce. Prioritisation of response is an important consideration for national security planners and decision makers. It is genuinely difficult to work out what threats these models might conjure up, and effective future-proofing is therefore difficult. As such, governance and regulation are receiving considerable attention from the technology industry, policy makers and security practitioners.

The Importance of Governance

AI development is happening at pace in the private sector, racing far ahead of current regulation efforts. As such, the technology is already in the hands of users who may misuse it. In the case of frontier AI, it is difficult to understand the totality of a model’s capabilities until it is in the wild and the combined creativity of software and user is realised.

There are no agreed governance frameworks, certification schemes or accepted standards on the quality or robustness of AI models. The European Union is one body at the forefront of AI regulation. In June 2023, Members of the European Parliament agreed a position on the EU AI Act, and its final form is now being negotiated with member states, with completion hoped for by the end of the year. The proposed law takes a risk-based approach, enshrining definitions and setting obligations for AI developers. Some applications will be banned outright, such as systems which deliberately manipulate behaviour using subliminal techniques. High-risk but permissible applications include safety systems in aircraft or medical devices; these will be subject to risk management requirements including technical robustness, transparency and human oversight. Interestingly, the regulation will not apply to AI systems developed exclusively for military purposes.

The tangible impact that might be expected from the AI Safety Summit has been questioned by some observers. It looks as if the Summit’s outcome will be a voluntary international register of powerful models, yet such a register would have no influence over those creations. The strength of regulation is one open question: too stringent, and constructive innovation will be stymied; too lax, and the threat of damaging developments increases. The speed of development is also of note. Large bodies such as the EU and UN take months or years to agree policies, which is far too slow in an arena where developments take place hourly and daily. This raises the question of who is best placed to manage such governance, with a balance to be struck between agility and stability.

As governance is shaped and formalised, the importance of the human should be reiterated. It will take time for societies to become AI-literate and more resilient to the technology’s potential harms and excesses. Existing norms around cybersecurity, data protection and social media use remain relevant and valuable.

To conclude, it is unlikely that the AI Safety Summit will be wasted effort. Bringing people together with the aim of making the world more secure in the face of unmanaged AI proliferation is no bad thing. However, countries and companies are competing for influence, and coherence is lacking overall. Many different regulations, at different levels and for different purposes, will further muddy the picture. It is inherently difficult to appropriately regulate a future scenario whose details remain fuzzy. Defence and security officials must remain alive to rapid developments in the AI arena and be prepared to react quickly to novel threats born from the mind of a machine, whilst also encouraging users to be wary of the potential outputs of these technologies.

Major Patrick Hinton was the Chief of the General Staff’s (UK) Visiting Fellow in the Military Sciences Research Group at RUSI until the end of August 2023.

The Azure Forum is a nonpartisan, independent research organisation. In all instances, the Azure Forum retains independence over its research and editorial discretion with respect to outputs, reports, and recommendations. The Azure Forum does not take specific policy positions. Accordingly, all author views should be understood to be solely those of the author(s).

The Azure Forum for Contemporary Security Strategy is Ireland’s first and only independent think tank dedicated to providing recommendations on peace, security and defence. As Ireland’s first national security research institute, the Forum aims to contribute to national and international security analysis and strategic studies for a more peaceful, secure, resilient and prosperous future nationally and globally at a time of emerging global risk.