UK Government Sets Out Proposals for a New AI Rulebook

On July 18, the U.K. government published a policy paper titled “Establishing a pro-innovation approach to regulating AI” (the “Paper”). Instead of giving responsibility for AI governance to a central national regulatory body, as the EU is planning to do through its draft AI Act, the government proposes to allow different regulators to take a tailored approach to the use of AI in a range of settings, so that the U.K.’s AI regulations can keep pace with change and do not stand in the way of innovation.

On the same date, the U.K. government published its AI Action Plan, summarizing the actions it has taken, and plans to take, to deliver the U.K.’s National AI Strategy (the “AI Strategy”).

A ‘Pro-Innovation Framework’

The Paper forms part of a response to one of the government’s medium-term key actions under the AI Strategy, which envisaged an AI regulation White Paper being published within six months of the strategy’s publication. That March deadline has now passed, and this Paper, designed as a stepping stone to the forthcoming White Paper, provides interesting detail on scope, the government’s regulatory approach, key principles and next steps. The Paper also describes, in broad terms, what this would look like, namely that the framework is to be:

  • Context-specific.
  • Pro-innovation and risk-based.
  • Coherent.
  • Proportionate and adaptable.

As it stands today, the U.K. leads Europe and is third in the world for levels of private investment in AI, after domestic firms attracted $4.65 billion in 2021. Recent government research on AI activity in U.K. businesses suggests that by 2040, more than 1.3 million U.K. businesses will be using AI, investing more than £200 billion (approximately $242 billion) in the process. Since publishing the AI Strategy in September 2021, the government has announced various investments in the sector, including funding for new AI and data science scholarships, in addition to opening new visa routes to encourage workers to come to the U.K.

How the Government’s Approach Has Evolved

The AI Strategy noted the existing cross-sector regulations that already capture aspects of the development of AI across data protection, competition law and financial regulation. For example, the government’s recent response to the “Data: A New Direction” consultation discussed the role of data protection within the broader context of AI governance.

The AI Strategy also noted that, having embraced a strongly sector-based approach since 2018, it was now time to decide whether there was a case for greater cross-cutting AI regulation or greater consistency across regulated sectors. Inconsistent approaches across sectors could introduce contradictory compliance requirements and uncertainty over where responsibility lies. AI regulation could become framed narrowly around prominent existing frameworks, such as data protection. Further, the growing international activity that addresses AI across sectors could overtake a national effort to build a consistent approach.

In the Paper, the government has maintained its cross-sector approach and, instead of seeking to set rigid centralized rules, is taking the route of developing cross-sectoral principles that each regulator (such as the Information Commissioner’s Office (ICO), the Competition and Markets Authority (CMA), the Office of Communications, the Medicines and Healthcare products Regulatory Agency (MHRA) and the Financial Conduct Authority (FCA)) will be required to interpret, and to produce relevant guidance for its sector to implement. It is highly likely that this guidance will not be published until after the publication of the White Paper.

The proposed six principles at this stage are:

  • Ensure that AI is used safely—This reflects the fact that AI can have an impact on safety, while seeking to ensure that regulators take a contextual and proportionate approach when producing guidance for their sectors. The government expects any safety measures to be commensurate with the actual risk associated with the use case.
  • Ensure that AI is technically secure and functions as designed—This principle seeks to ensure that consumers have trust in the proper functioning of systems even as those systems evolve alongside the technology. It seeks to confirm that AI systems are secure and do what they claim to do when used normally. This may involve testing the system, if proportionate.
  • Make sure that AI is appropriately transparent and explainable—This principle seeks, as far as possible, to ensure that AI systems can be explained where that is of interest to the public and/or consumers. It also suggests circumstances in which transparency may be appropriate, while highlighting the importance of maintaining confidentiality in certain scenarios, such as where intellectual property rights are concerned.
  • Embed considerations of fairness into AI—This addresses the fact that in certain scenarios, such as job applications, the use of AI can have a significant impact on people’s lives. On that basis, this principle exists to ensure that regulators take fairness into account in the context of AI use in their sectors.
  • Define legal persons’ responsibility for AI governance—This principle confirms that the accountability and legal liability for outcomes produced by AI must rest with an identified or identifiable legal person, natural or corporate.
  • Clarify routes to redress or contestability—This seeks to ensure that, subject to context and proportionality, the use of AI does not remove the ability to contest an outcome.

The government’s approach aligns with the regulatory principles set out in the Better Regulation Framework by emphasizing proportionality. It also seeks to demonstrate the government’s pro-innovation regulatory position post-Brexit, while aligning with the government’s Plan for Digital Regulation. The Paper also states that the government will support cooperation on key issues facing AI through various bodies such as the Organization for Economic Cooperation and Development, Council of Europe, standards bodies such as the International Organization for Standardization and International Electrotechnical Commission and the Global Partnership on AI.

The Practicalities of This Approach

The government has decided not to adopt a fixed and universally applicable definition of AI, and has instead focused on setting out its core capabilities and characteristics, allowing regulators to set out more detail in their guidance. The government believes that this approach is better for the U.K., as it allows the various uses of AI to be captured in full. However, the government acknowledges that the lack of granularity in this approach could hinder innovation.

The government’s approach therefore seeks a middle ground: it neither defines AI in detail nor leaves AI entirely undefined for the regulators to decide. While the latter approach would offer more flexibility, it would come with the risk that there would be no consistent view of what AI is and how it is regulated. In addition, if the term were left undefined and shaped by case law, its meaning could soon vary on a sector-by-sector basis, increasing uncertainty.

Interestingly, there is already a cross-sector definition of artificial intelligence in existing U.K. legislation, which can be found in Schedule 3 of The National Security and Investment Act 2021 (Notifiable Acquisition) (Specification of Qualifying Entities) Regulations 2021. However, it appears that, for the regulation of AI technology itself, the government is adopting a different approach.

The government also accepts that it is still in the early stages of considering how best to put its principles-based approach into practice. At this stage the proposal is to put the principles on a nonstatutory footing so that they can be monitored and adjusted, and remain agile in the face of rapid technological change. The Paper suggests that regulators will be responsible for “identifying, assessing, prioritizing and contextualizing the specific risks addressed by the principles,” while the government could issue guidance to support them in this effort. The idea is not to impose mandatory obligations but instead to provide a steer for regulators to satisfy the principles in the context of their sectors. As part of this, the Paper confirms that the government may need to review the powers and remits of some regulators, as there may be a disparity in their ability to enforce and set rules.

While giving individual regulators scope to interpret the principles, the government is also aware of the need for a level of regulatory coordination so that AI is regulated coherently across different sectors. To this end, the Paper refers to seeking ways to support collaboration between the various regulators in order to ensure a more streamlined approach, with the overall aim of avoiding multiple sets of guidance from multiple regulators on the same principle. The Paper also highlights the important role of standards, referencing the AI Standards Hub and its role in creating tools to bring various stakeholders from across the U.K. together.

While the current intention is that the principles are to be on a nonstatutory footing, the Paper makes clear that this will be kept under review and that legislation cannot be ruled out, whether relating to the principles or to better ensure regulatory coordination.

A possible example of how this may work in practice can be found in steps already taken by certain regulators to address AI in their sectors. The ICO has issued multiple sets of guidance, covering topics such as explaining decisions made with AI and AI auditing frameworks, as well as more general guidance on AI and data protection. The Equality and Human Rights Commission has also promised guidance on the interplay between the Equality Act and the use of AI and other new technologies, having highlighted it as a strategic priority in its 2022-2025 Strategic Plan. Other regulators, such as the Health and Safety Executive and the MHRA, are also working on AI-related initiatives.

Furthermore, some of the key regulators, including the ICO, CMA, FCA and the Office of Communications, are already working together through the Digital Regulation Cooperation Forum. The FCA and the Bank of England are also collaborating through the Artificial Intelligence Public-Private Forum, which focuses on AI innovation in financial services in both the public and private sectors.

Despite the coordination referenced above, the Alan Turing Institute’s recently published report on Common Regulatory Capacity highlighted the need for greater regulatory coordination on the use of AI. Key issues cited include the expanding scope, scale and complexity of AI, which requires ever more specialist expertise; a lack of readiness to regulate; and the need for common capacity and support for regulating AI.

It is clear from the “next steps” section of the Paper that many key questions remain unanswered. These include more detail on the proposed framework and whether it will actually work in practice, how the government will put its approach into practice (and what changes it may need to make to do so), and how best to monitor the framework to ensure that it remains relevant, delivers its aims and mitigates risks. All of this and more will need to be answered in the White Paper, which the government has confirmed will now be published in late 2022, meaning that it could finally arrive more than six months behind schedule.

The Paper re-emphasizes growth and innovation as the two cornerstones of AI regulation in the U.K. This continues to stand in contrast to the draft EU AI Act, which takes a more prescriptive, risk-based approach.

Comment and Next Steps

Overall, the Paper is a somewhat useful document outlining the direction of travel of AI regulation in the U.K. While it is certainly not a substitute for the delayed White Paper, it does provide businesses with some further information, albeit limited and high-level, on the overall regulatory scope and approach that the government will be taking in this area. It also comes as the government publishes the Data Protection and Digital Information Bill, which seeks to reduce burdens on businesses and includes measures on the responsible use of AI while maintaining the U.K.’s high data protection standards.

The key principles proposed by the government should be welcomed by business. However, the government’s proposed approach risks creating a lack of certainty for business, as there could be differing guidance from regulators and differing approaches to enforcement.

The other important factor to consider is the cross-border nature of technology deployment. While suppliers and developers of AI in the U.K. may welcome a light touch, we expect many will seek to deploy their solutions in the EU market. Accordingly, if the draft EU AI Act becomes the gold standard for AI compliance (much as the General Data Protection Regulation has for data privacy), in practice U.K. AI industry players may well decide to ensure compliance with the EU AI Act in any case.

The government has initiated a call for views and evidence on the Paper, open until Sept. 26, to gather feedback on its proposed approach, after which we will see if the White Paper provides the promised clarity and detail.

Dr. Sam De Silva and Barbara Zapisetskaya are attorneys with CMS in London. Jake Sargent is a trainee solicitor with CMS. © 2022 CMS. All rights reserved. Reposted with permission of Lexology.
