UK Government Issues White Paper on Regulating AI

April 5, 2023

The UK Government has issued a white paper, A pro-innovation approach to AI regulation, proposing a framework for regulating artificial intelligence. The paper aims to (a) drive economic growth by making responsible innovation in AI easier through reduced regulatory uncertainty, (b) increase the public’s trust in the use and application of AI by addressing risks and safeguarding fundamental values and (c) strengthen the UK’s position as a global leader in AI regulation and development.

The white paper was issued on 29 March 2023 and invites public comments until 21 June 2023.

An Incremental, Flexible Approach

It is worth emphasising that the framework does not itself purport to regulate AI; it sets out principles for how the UK’s existing regulators, individually and collectively, should regulate AI.

The framework comprises six characteristics and four elements. The characteristics are:

  • Pro-innovation: encourage responsible innovation.
  • Proportionate: align regulatory burdens with risks.
  • Trustworthy: foster public trust in AI by addressing real risks.
  • Adaptable: retain flexibility to respond quickly and effectively to evolving opportunities and risks.
  • Clear: facilitate knowing the rules, who they apply to, how to comply and who enforces them.
  • Collaborative: encourage cooperation among government, regulators and industry to facilitate AI innovation, build trust and consider the public’s views.

The framework’s elements are:

  • Define AI: a common definition, focusing on AI’s unique characteristics, for clarity across the framework.
  • Context-specific: regulate activities, not technologies.
  • Cross-sectoral principles: a standard set of principles to guide regulators from all sectors as they each respond to specific AI opportunities and risks in their own sectors.
  • New central functions: additional central coordination functions to oversee, support, manage and ensure coherence within the framework.

A Functional Definition of AI

The framework defines AI according to two unique functional attributes that give rise to the need for regulation: ‘adaptivity’ and ‘autonomy’. Adaptivity refers to the ability of AI systems to discern patterns and connections in data and then make inferences that humans may not be able to explain. Autonomy refers to the ability of some AI systems to make decisions or take actions without human intervention.

Activity-Based Regulation

Systems that are adaptive and autonomous can be difficult to explain, predict or control, and it can be difficult to assign responsibility for the systems’ outcomes. The framework contemplates that regulators will identify systems with those attributes within their respective sectors and regulate them for the risks and opportunities they present within those sectors according to the framework’s principles. That means, for example, that generative AI systems may (and likely will) be regulated differently in the health care sector than they will in the legal sector, but in both cases they will be regulated according to a common set of principles.

Cross-Sector Principles

The framework sets out the principles that are expected to underlie all AI regulation across all sectors:

  • Safety, security and robustness: AI systems should be technically secure and function reliably.
  • Appropriate transparency and explainability: appropriate information about AI systems and their decision-making processes should be available.
  • Fairness: AI systems should not threaten legal rights or create unfair market outcomes.
  • Accountability and governance: measures to establish accountability and oversight over the supply and use of AI systems should be implemented.
  • Contestability and redress: mechanisms to enable affected parties to contest harmful AI outcomes should be developed.

Initially, regulators will not have a statutory duty to adhere to the principles. After an initial (undefined) period, the government will consider whether codifying the principles as a statutory duty will better help to achieve the aims of the framework.

New Central Functions

Notwithstanding the expected benefits of enabling sectoral experts to tailor AI regulations to their respective sectors, the framework acknowledges that patchwork regulation, left unchecked, can hinder rather than promote innovation and growth. New central functions will therefore be put in place to coordinate, monitor and adapt the framework.

The new central functions are expected to include:

  • Monitoring and assessment of, and feedback on, the framework
  • Support for coherent implementation of the principles
  • Cross-sectoral assessment of AI risks
  • Support for innovators (e.g., testbeds and regulatory sandboxes)
  • AI education and awareness for businesses and consumers
  • Interoperability with international AI regulatory frameworks

The central functions will initially reside in the UK government but could later become independent.

Regulatory Sandbox

The white paper notes the success of regulatory digital sandboxes run by the Information Commissioner’s Office and the Financial Conduct Authority. As part of its commitment to support innovation, the UK government intends to implement a sandbox for the AI sector, with an initial focus on a single sector, which may involve interaction with the multiple regulators operating in that sector.

A Counterpoint to the EU’s Approach to Regulating AI

The framework’s cross-sectoral principles are largely consistent with the objectives of the European Union’s proposed Artificial Intelligence Act (EU AI Act). However, the framework’s light-touch, decentralised approach offers a pointed contrast to the EU AI Act’s more traditional prescriptive and restrictive approach to regulation.

Key Takeaway 

The UK Government’s proposed AI regulatory framework reflects its explicit pro-innovation objective, in line with the UK’s aim to be an attractive jurisdiction for companies seeking to develop AI products and services. Nevertheless, companies that wish to commercialise their products and services in other jurisdictions must remain mindful of stricter regulations there, such as the EU AI Act if it becomes law.

For More Information

OrionW regularly advises clients on technology matters. For more information about the regulation of AI, or if you have questions about this article, please contact us.

Disclaimer: This article is for general information only and does not constitute legal advice.
