Unleashing AI's Potential: A New Approach to Regulating Artificial Intelligence in the UK
by Patrick Adams, Consultant
30/03/2023


The UK Government is seeking to embrace the potential of artificial intelligence (AI) by proposing a new approach to regulating AI. This approach aims to build public trust in cutting-edge technologies, make it easier for businesses to innovate, grow, and create jobs, and further solidify the UK's position as a leader in AI. Rather than imposing a single rulebook, the Government aims to strike a balance that manages risk while allowing for innovation and growth.

The need for regulation, and a flexible approach, seems evident. As of January 2022, approximately 15% of all businesses, or 432,000 companies, had adopted at least one AI technology. A further 2% of businesses were piloting AI and 10% planned future adoption, equating to 62,000 and 292,000 businesses respectively. As a general trend, AI adoption correlates with business size: 68% of large companies, 34% of medium-sized companies, and 15% of small companies have implemented at least one AI technology.

The AI Regulation White Paper sets forth a new approach to AI governance that aims to avoid heavy-handed legislation, instead empowering existing regulators to develop tailored, context-specific approaches to AI in their respective sectors. The white paper outlines five principles to guide the use of AI in the UK:

· Safety, security, and robustness
· Transparency and explainability
· Fairness
· Accountability and governance
· Contestability and redress

The UK Government plans to implement AI regulatory principles through existing regulators, leveraging their domain-specific expertise. These regulators will use existing laws rather than being given new powers. During the initial implementation period, the Government will collaborate with regulators to evaluate the non-statutory framework's effectiveness. In addition, a statutory duty on regulators may be introduced, requiring them to consider the principles while allowing flexibility in applying them.

Balancing Risk and Innovation

The Government has identified central support functions to balance risk and innovation in AI regulation. These functions include monitoring and evaluating the framework's effectiveness, assessing AI-related risks, conducting horizon scanning and gap analysis, supporting testbeds and sandbox initiatives, providing education and awareness, and promoting interoperability with international frameworks.

To strengthen the UK's AI capabilities, the Government is establishing a Foundation Model Taskforce. After publishing the AI regulatory framework, the Government plans to engage with stakeholders, issue cross-sectoral principles to regulators, and create an AI Regulation Roadmap. Short to medium-term efforts will focus on developing partnerships, providing guidance, and launching a £2 million regulatory sandbox trial or testbed. Long-term objectives include delivering the first iteration of all central functions, collaborating with key regulators, and publishing monitoring and evaluation reports.

What’s next

Over the next 12 months, regulators will issue practical guidance and provide resources, such as risk assessment templates, to help organisations implement these principles in their sectors. When parliamentary time allows, legislation could be introduced to ensure consistent consideration of the principles across regulators.

The international political economy of AI regulation

Though the UK wants to identify and implement an innovative approach, it is far from alone in looking at how to regulate AI. International competition in the AI landscape is intensifying, with countries like the US and China investing heavily in research, development, and implementation. As nations strive to keep pace with AI superpowers, a robust AI strategy becomes crucial for maintaining competitiveness and capitalising on the transformative potential of artificial intelligence. Notably, the UK's strategy has been launched against a backdrop of Big Tech companies cutting staff from AI teams, raising concerns about potential abuses as the technology becomes more widespread. Critics of the cuts argue that these teams are vital to ensuring proper consideration of ethical boundaries, and that the cuts may leave algorithms exposed to advertising imperatives, negatively affecting vulnerable individuals and democracy.

The EU and UK have taken distinct approaches to AI. The EU focuses on regulation and data protection, prioritising safety and privacy, though arguably at the cost of innovation and competitiveness in the AI market. In contrast, the UK is investing more in AI research and development, fostering growth in the sector and positioning itself as a key player in the global AI landscape. These different strategies reflect the EU's cautious approach and the UK's ambition to capitalise on market niches and drive technological advancements, marking further divergence of UK policy from the EU since Brexit.

Conclusion

The UK's commitment to developing AI capabilities and maintaining a competitive edge in the international landscape is evident. Through this AI White Paper, as well as related reforms such as the Data Protection and Digital Information Bill and a move away from the EU's 'one-size-fits-all' approach, the UK aspires to enhance public and business confidence in AI technologies, unlocking their transformative potential for society and the economy. However, concerns have already been raised that the UK's approach is over-reliant on systems ill-equipped to deal with the complexities of AI technology, despite the UK being home to twice as many companies providing AI products and services as any other European country. Though the UK Government is bringing innovation to the heart of its policy agenda, it will need to balance this ambition against future risks that could undermine the growth potential of its forward thinking.

