AI Fundamentals: Nine questions that will determine the future of AI regulation in the UK

Author: Tessa Hilson-Greener

The development of UK artificial intelligence (AI) law is at a crossroads. The EU's AI Act, approved on 21 May 2024, will affect UK-based companies whose AI systems are used to make decisions about employees in the EU. Meanwhile, the general election on 4 July 2024 may significantly reshape the UK Government's approach to AI regulation. As Tessa Hilson-Greener explains, understanding these dynamics is key for anyone involved in using AI in the UK.

Here, we first set out the contrasting approaches adopted by the EU and the UK Government to AI regulation up to this point, and then examine a series of questions that are likely to influence the development of the UK's policy in the future.

EU and UK: Contrasting current approaches to AI regulation

The EU's approach, as demonstrated by the AI Act, is highly prescriptive and focuses on pre-emptive risk mitigation across a broad spectrum of AI applications, aiming to set a benchmark for global AI governance. This contrasts with the UK's more flexible and innovation-oriented strategy, which looks to adapt existing regulatory structures to accommodate AI advancements, particularly in the financial sector.

The following sets out some of the key differences in approach to AI regulation.

Regulatory framework and scope

  • EU: The AI Act provides a detailed, risk-based legal framework specifically targeting various categories of AI applications, prohibiting some and introducing stringent rules for "high-risk" systems. It also outlines obligations for both providers and deployers, including extensive documentation, transparency and oversight requirements.
  • UK: The UK has a more principles-based, sector-specific approach that emphasises flexibility and innovation, particularly within the financial services sector. It does not have a standalone, comprehensive AI law but integrates AI considerations within existing regulatory frameworks.

Prohibited practices and high-risk AI

  • EU: The AI Act specifies prohibited practices that include emotionally manipulative AI, AI used for indiscriminate surveillance and AI that categorises individuals in sensitive ways. High-risk categories include AI in critical infrastructure, education, employment and law enforcement.
  • UK: While specific prohibitions on AI practices are not outlined, the emphasis is on risk management within the bounds of existing laws, with sectoral regulators like the Financial Conduct Authority (FCA) playing a significant role in overseeing AI applications.

Extraterritorial effects

  • EU: The AI Act has a broad extraterritorial reach, similar to the EU GDPR, affecting any entity that interacts with EU citizens or that markets AI systems within the EU.
  • UK: The UK's approach primarily targets entities operating within its jurisdiction, with less focus on international reach unless UK-based firms are involved in activities that extend to the EU.

Implementation and compliance

  • EU: There are stringent compliance mechanisms in place, including conformity assessments and registrations for high-risk AI systems. Providers must also ensure transparency and maintain rigorous oversight over AI deployment.
  • UK: Compliance is more integrated with existing regulatory frameworks and principles set by relevant authorities, with an emphasis on maintaining a flexible regime that can quickly adapt to technological advancements. See the UK AI Toolkit and UK AI Regulation Principles Guidance.

The general election and the future of AI regulation in the UK 

The prime minister, Rishi Sunak, has been adamant that the UK will not rush to regulate AI, and a spokesperson has confirmed that this is still the Government's policy. However, the global mood on AI is changing as safety concerns increase and political interests shift, and with a general election scheduled for 4 July 2024, we could see a significant change in the approach to AI regulation, depending on the outcome and the priorities of the incoming Government.

1. How might the political priorities of the winning party influence UK AI law after the election? 

The political agenda of the incoming Government will significantly shape AI regulation. A party with a strong focus on digital innovation may strengthen the pro-innovation regulatory environment, promoting economic competitiveness. In contrast, a party prioritising privacy and data protection could push for more stringent regulation to address societal concerns such as bias and the impact of automation on employment.

2. Could public opinion and industry lobbying affect AI policies post-election? 

Absolutely. Public sentiment and industry pressure are powerful influences on policy-making. If public concern grows around ethical issues like privacy or bias, we could see a push for tighter regulations. Conversely, if economic benefits like efficiency and new tech capabilities are prioritised, the regulatory environment might lean towards an innovation-friendly approach.

3. Will international relations affect the UK's AI regulatory approach? 

Yes, international dynamics are crucial. The UK's regulatory framework may be aligned more closely with EU standards to simplify trade and cooperation, despite Brexit. Additionally, relationships with global tech leaders such as the USA or China could sway the UK towards either more collaborative or competitive AI strategies.

4. How could technological advancements influence AI laws in the UK? 

As AI technology evolves, so will the need for updated regulations. Swift advancements might bolster a pro-innovation stance to foster growth, or they could trigger precautionary regulations to mitigate unforeseen risks associated with new AI capabilities.

5. How will economic conditions post-election impact AI regulation?

The economic climate plays a significant role. In a robust economy, the Government might favour deregulation to spur growth and attract investments. Conversely, a struggling economy might focus on safeguarding jobs from automation threats or harness AI to stimulate recovery.

6. What are the possible scenarios for AI regulation after the election? 

There are several potential outcomes:

  • Continuity of policies: The newly elected Government might keep the current flexible, pro-innovation approach, focusing on sector-specific regulations.
  • Stricter regulations: A new Government concerned with AI's social and ethical impacts might enforce tighter controls, integrating broader societal issues into the regulatory framework.
  • Hybrid approach: There could be a balanced policy encouraging innovation but within a stricter framework to ensure transparency, accountability and public engagement.

7. What should stakeholders in the AI field do in anticipation of these changes?

Stakeholders should stay informed and adaptable. Understanding the interplay of political, public and economic influences on AI regulation will be crucial. Engaging with policymakers, taking part in public discourse and preparing for multiple regulatory scenarios will help them navigate post-election changes effectively.

8. How will OpenAI comply with GDPR?

OpenAI's compliance with data protection legislation is a fundamental question, as many employees are using its chatbot, ChatGPT, at work. There are key regulatory expectations and challenges surrounding large language models (LLMs) such as ChatGPT, and these are covered in the EDPB report by the ChatGPT Taskforce, released on 23 May 2024. Under art.5(1)(a) of the UK GDPR, personal data must be processed fairly, which means not using it in a detrimental, discriminatory or misleading manner. The report says that OpenAI must assume users will input personal data and must ensure compliance with the legislation, regardless of whether inputting such data was initially prohibited. It is therefore vital that companies have a robust AI policy in place to address the risks of AI in the workplace.

9. How would the TUC's AI Bill impact UK employees?

The UK Trades Union Congress (TUC) has taken the unusual step of publishing its own AI Bill, drafted by multiple stakeholders. The Bill would regulate employers' use of artificial intelligence systems in relation to workers, employees and jobseekers, to protect their rights and interests in the workplace. It would also provide for trade union rights in relation to the use of AI systems by employers, address the risks associated with the value chain in the deployment of AI systems in the field of employment, and enable the development of safe, secure and fair AI systems in the employment field. The rights and obligations contained in the Bill would be enforceable in the employment tribunal, which is ordinarily a "no costs" jurisdiction (each party bears its own costs regardless of outcome).
